Graphical Models, Exponential Families, and Variational Inference
TLDR
The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.
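For context, the monograph's central object is the variational representation of the log-partition function $A(\theta)$ of an exponential family; a standard statement (in my notation, not quoted from the abstract) is
$$A(\theta) = \sup_{\mu \in \mathcal{M}} \left\{ \langle \theta, \mu \rangle - A^*(\mu) \right\},$$
where $\mathcal{M}$ is the set of realizable mean parameters and $A^*$ is the conjugate dual of $A$. Mean-field and Bethe-type approximations arise from restricting or relaxing $\mathcal{M}$ and approximating $A^*$.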
Network Coding for Distributed Storage Systems
TLDR
This paper shows how to optimally generate MDS fragments directly from existing fragments in the system, and introduces a new scheme called regenerating codes, which use slightly larger fragments than MDS but have lower overall bandwidth use.
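As a rough sketch of the storage-repair tradeoff the paper studies (my paraphrase of the commonly quoted special case where a newcomer contacts all $d = n-1$ surviving nodes): for a file of size $M$ stored across $n$ nodes so that any $k$ suffice to reconstruct it, the minimum-storage point has per-node storage $\alpha = M/k$ and repair bandwidth
$$\gamma = \frac{M}{k} \cdot \frac{n-1}{n-k},$$
which is strictly less than the naive cost $M$ of rebuilding the entire file to regenerate a single fragment.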
Image denoising using scale mixtures of Gaussians in the wavelet domain
TLDR
The performance of this method for removing noise from digital images substantially surpasses that of previously published methods, both visually and in terms of mean squared error.
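The estimator's general shape, as I understand it: each neighborhood of wavelet coefficients is modeled as a Gaussian scale mixture $x = \sqrt{z}\,u$, with $u$ Gaussian and $z$ a hidden scalar multiplier, and the denoised coefficient is the Bayes least-squares estimate
$$\hat{x} = E[x \mid y] = \int p(z \mid y)\, E[x \mid y, z]\, dz,$$
where each conditional expectation $E[x \mid y, z]$ is a linear (Wiener-type) estimate, since $x$ is Gaussian once $z$ is fixed.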
A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers
TLDR
A unified framework for establishing consistency and convergence rates for regularized M-estimators under high-dimensional scaling is provided; a single main theorem is stated and shown to recover several existing results as well as to yield several new ones.
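The flavor of that main theorem, hedged and in my own notation: if the regularizer $\mathcal{R}$ is decomposable, the loss $\mathcal{L}_n$ satisfies restricted strong convexity with curvature $\kappa$, and the regularization weight obeys $\lambda_n \ge 2\,\mathcal{R}^*(\nabla \mathcal{L}_n(\theta^*))$, then
$$\|\hat{\theta}_{\lambda_n} - \theta^*\| \lesssim \frac{\lambda_n}{\kappa}\, \Psi(\overline{\mathcal{M}}),$$
where $\Psi(\overline{\mathcal{M}})$ measures the compatibility of the regularizer with the model subspace. Specializing $\mathcal{R}$ recovers, for example, Lasso and nuclear-norm error rates.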
Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$ -Constrained Quadratic Programming (Lasso)
  • M. Wainwright
  • Mathematics, Computer Science
  • IEEE Transactions on Information Theory
  • 1 May 2009
TLDR
This work analyzes the behavior of $\ell_1$-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern of a vector $\beta^*$ from observations contaminated by noise, and establishes precise conditions on the problem dimension $p$, the number $k$ of nonzero elements in $\beta^*$, and the number of observations $n$.
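The headline scaling for standard Gaussian designs, as usually quoted (my paraphrase): in terms of the rescaled sample size $\theta = n / (2k \log(p-k))$, support recovery succeeds with probability tending to one when $\theta > 1$ and fails when $\theta < 1$, so
$$n \approx 2k \log(p-k)$$
marks the sharp threshold.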
High-Dimensional Statistics: A Non-Asymptotic Viewpoint
TLDR
This book provides a self-contained introduction to the area of high-dimensional statistics, aimed at the first-year graduate level, and includes chapters focused on core methodology and theory, including tail bounds, concentration inequalities, uniform laws and empirical processes, and random matrices.
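As a taste of the tail-bound material in the early chapters (a standard fact, not the book's exact statement): if $X_1, \dots, X_n$ are independent, $\sigma$-sub-Gaussian, with common mean $\mu$, then
$$P\left( \left| \frac{1}{n}\sum_{i=1}^{n} X_i - \mu \right| \ge t \right) \le 2 \exp\left( -\frac{n t^2}{2\sigma^2} \right).$$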
High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence
Given i.i.d. observations of a random vector $X \in \mathbb{R}^p$, we study the problem of estimating both its covariance matrix $\Sigma^*$ and its inverse covariance or concentration matrix $\Theta^* = (\Sigma^*)^{-1}$.
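The estimator in question is, to my reading, the $\ell_1$-penalized log-determinant program (often called the graphical lasso):
$$\hat{\Theta} \in \arg\min_{\Theta \succ 0} \left\{ \operatorname{tr}(\hat{\Sigma}\Theta) - \log\det\Theta + \lambda_n \sum_{i \ne j} |\Theta_{ij}| \right\},$$
where $\hat{\Sigma}$ is the sample covariance and the off-diagonal $\ell_1$ penalty promotes sparsity in the estimated concentration matrix.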
Statistical Learning with Sparsity: The Lasso and Generalizations
TLDR
Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data and extract useful and reproducible patterns from big datasets.
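The book's prototypical estimator is the lasso; in its Lagrangian form (standard notation, not quoted from the book):
$$\hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{2n}\|y - X\beta\|_2^2 + \lambda \|\beta\|_1 \right\}.$$
The $\ell_1$ penalty sets many coordinates of $\hat{\beta}$ exactly to zero, and the generalizations of the title swap out the penalty or the loss (group lasso, graphical lasso, matrix completion, and so on).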
Using linear programming to decode binary linear codes
TLDR
The definition of a pseudocodeword unifies other such notions known for iterative algorithms, including "stopping sets," "irreducible closed walks," "trellis cycles," "deviation sets," and "graph covers," and yields a fractional distance that is a lower bound on the classical distance.
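A sketch of the relaxation, under my reading of the paper: given log-likelihood ratios $\gamma_i = \log\frac{\Pr(y_i \mid x_i = 0)}{\Pr(y_i \mid x_i = 1)}$, the LP decoder solves
$$\min_{f \in \mathcal{P}} \sum_i \gamma_i f_i, \qquad \mathcal{P} = \bigcap_j \operatorname{conv}(C_j),$$
where $C_j$ is the set of local codewords of check $j$. The fractional vertices of $\mathcal{P}$ are the pseudocodewords that cause decoding failures.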
Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
TLDR
This work develops and analyzes distributed algorithms based on dual subgradient averaging, provides sharp bounds on their convergence rates as a function of the network size and topology, and shows that the number of iterations required by the algorithm scales inversely in the spectral gap of the network.
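The updates, paraphrased in my notation: each node $i$ mixes its neighbors' dual variables through a doubly stochastic matrix $P$ and adds its local subgradient $g_i(t)$,
$$z_i(t+1) = \sum_{j} P_{ij}\, z_j(t) + g_i(t), \qquad x_i(t+1) = \arg\min_{x \in \mathcal{X}} \left\{ \langle z_i(t+1), x \rangle + \frac{1}{\alpha(t)}\, \psi(x) \right\},$$
with $\psi$ a strongly convex proximal function and $\alpha(t)$ a decreasing step size. The analysis ties the iteration count to the spectral gap $1 - \sigma_2(P)$ of the mixing matrix.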