Publications
A frequentist approach to computer model calibration
We propose a new and identifiable parameterization of the computer model calibration problem and provide a general frequentist solution.
On the Dynamics of Gradient Descent for Autoencoders
We provide a series of results for unsupervised learning with autoencoders. Specifically, we study shallow two-layer autoencoder architectures with shared weights. We focus on three generative …
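Not the paper's analysis, just a minimal numpy sketch of the architecture it studies: a shallow, weight-tied autoencoder updated by one step of plain gradient descent on the squared reconstruction loss (all sizes and the learning rate are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)

# Shallow two-layer autoencoder with shared (tied) weights:
# encode h = relu(W @ x + b), decode x_hat = W.T @ h.
d, k = 20, 8                 # input dim, hidden dim (illustrative)
W = 0.1 * rng.standard_normal((k, d))
b = np.zeros(k)
x = rng.standard_normal(d)

def loss(W, b, x):
    h = np.maximum(W @ x + b, 0.0)
    return 0.5 * np.sum((W.T @ h - x) ** 2)

# Manual gradients; W appears in both the encoder and the decoder,
# so its gradient has two terms.
h_pre = W @ x + b
h = np.maximum(h_pre, 0.0)
residual = W.T @ h - x                     # x_hat - x
grad_h = W @ residual                      # backprop through decoder W.T
grad_pre = grad_h * (h_pre > 0)            # relu derivative
grad_W = np.outer(grad_pre, x) + np.outer(h, residual)
grad_b = grad_pre

lr = 1e-3
before = loss(W, b, x)
W -= lr * grad_W
b -= lr * grad_b
after = loss(W, b, x)
print(before, after)
```

One small step should reduce the reconstruction loss, which is the kind of descent dynamics the paper analyzes.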
Fiber Direction Estimation, Smoothing and Tracking in Diffusion MRI
This paper proposes a new method to identify and estimate multiple diffusion directions within a voxel through a new and identifiable parametrization of the widely used multi-tensor model, which greatly improves direction estimation in regions with crossing fibers.
Robust Estimation for Generalized Additive Models
This article studies M-type estimators for fitting robust generalized additive models in the presence of anomalous data. A new theoretical construct is developed to connect the costly M-type …
A Full Bayesian Approach for Boolean Genetic Network Inference
We propose a full Bayesian approach to infer Boolean genetic networks from observed data.
Nonparametric Cepstrum Estimation via Optimal Risk Smoothing
This paper proposes a new cepstrum estimation procedure that is capable of producing smoother and improved cepstrum estimates without the use of any parametric modeling.
Locally linear embedding with additive noise
We present a modification of locally linear embedding with additive noise, which has been seen to perform better in the presence of noise distortion by exploiting the relationship between local linearity and reconstruction potential.
Matrix Completion with Noisy Entries and Outliers
This paper considers the problem of matrix completion when the observed entries are noisy and contain outliers.
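A minimal sketch of the observation model this setting implies, not the estimator proposed in the paper: a low-rank matrix is observed only on a random subset of entries, with small noise everywhere and large, sparse outliers (all dimensions, rates, and noise levels are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)

# Low-rank ground truth L of rank r.
n1, n2, r = 50, 40, 3
L = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

mask = rng.random((n1, n2)) < 0.5          # Omega: entries actually observed
noise = 0.01 * rng.standard_normal((n1, n2))

# Sparse gross outliers on ~5% of entries.
outliers = np.zeros((n1, n2))
idx = rng.random((n1, n2)) < 0.05
outliers[idx] = 10.0 * rng.standard_normal(idx.sum())

# What a robust completion method gets to see.
Y = np.where(mask, L + noise + outliers, np.nan)
print(np.linalg.matrix_rank(L), np.isnan(Y).mean())
```

Plain nuclear-norm completion treats every observed entry equally, so the few gross outliers can dominate the fit; a robust variant must separate them from the dense small-noise component.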
A Provable Approach for Double-Sparse Coding
In this paper, we consider the double-sparsity model introduced by Rubinstein, Zibulevsky, and Elad, where the dictionary itself is the product of a fixed, known basis and a data-adaptive sparse component.
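A minimal sketch of the double-sparsity model itself, under illustrative sizes: the effective dictionary is the product of a fixed, known basis and a column-sparse, data-adaptive component, and signals are sparse combinations of its atoms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Double-sparsity model: effective dictionary D = Phi @ A, where Phi is
# a fixed, known basis and each column of A has only k nonzeros.
n, m, k = 16, 32, 3          # signal dim, number of atoms, nonzeros per atom

Phi = np.linalg.qr(rng.standard_normal((n, n)))[0]  # fixed orthonormal basis

A = np.zeros((n, m))
for j in range(m):
    support = rng.choice(n, size=k, replace=False)
    A[support, j] = rng.standard_normal(k)

D = Phi @ A                  # effective dictionary

# A sparse code x then generates a signal y = D @ x.
x = np.zeros(m)
x[rng.choice(m, size=2, replace=False)] = 1.0
y = D @ x
print(D.shape)
```

Learning A (rather than a dense dictionary) is what makes the problem "double-sparse": both the code x and the dictionary's representation over Phi are sparse.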
Autoencoders Learn Generative Linear Models
We provide a series of results for unsupervised learning with autoencoders.