We prove that a single condition, which we call the Irrepresentable Condition, is almost necessary and sufficient for Lasso to select the true model both in the classical fixed p setting and in the large p setting.

We provide a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling, and show how it can be used to re-derive several existing results.

Given i.i.d. observations of a random vector X ∈ ℝ^p, we study the problem of estimating both its covariance matrix Σ∗ and its inverse covariance or concentration matrix Θ∗ = (Σ∗)^{−1}. We…
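The setup above can be illustrated with a small numpy sketch (not the paper's penalized estimator): draw samples from a Gaussian with a known sparse concentration matrix Θ∗, form the sample covariance, and take its plain inverse as a plug-in estimate of Θ∗. This naive inverse only makes sense when n is much larger than p, which is exactly the regime the paper's regularized estimator is designed to escape. The matrices below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse concentration matrix Theta* (tridiagonal, diagonally dominant)
theta = np.array([[2.0, 0.6, 0.0],
                  [0.6, 2.0, 0.6],
                  [0.0, 0.6, 2.0]])
sigma = np.linalg.inv(theta)               # Sigma* = (Theta*)^{-1}

# i.i.d. observations of X in R^p, here with n >> p
X = rng.multivariate_normal(np.zeros(3), sigma, size=5000)

S = np.cov(X, rowvar=False)                # sample covariance
theta_hat = np.linalg.inv(S)               # plug-in concentration estimate (n >> p only)
```

With p comparable to n, S becomes ill-conditioned or singular and this inverse breaks down, motivating the penalized estimators studied in the abstract.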

Networks or graphs can easily represent a diverse set of data sources that are characterized by interacting units or actors. Social networks, representing people who communicate with each other, are…

The Lasso [28] is an attractive technique for regularization and variable selection for high-dimensional data, where the number of predictor variables p is potentially much larger than the number of…
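As a concrete reference point for the Lasso's variable-selection behavior, here is a minimal coordinate-descent sketch of the ℓ1-penalized least-squares problem (1/2n)‖y − Xβ‖² + λ‖β‖₁. This is a generic textbook solver, not the specific algorithm of the cited paper; the function names are ours.

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: the proximal map of the l1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Cyclic coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1.
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n      # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b
```

Large λ drives coefficients exactly to zero, which is the selection effect the abstract refers to: whether the surviving support matches the true model is what conditions like the Irrepresentable Condition govern.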

Bagging is one of the most effective computationally intensive procedures for improving unstable estimators or classifiers, and is especially useful for high-dimensional data problems. Here we formalize…
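The bagging procedure itself is simple to state: fit the base learner on bootstrap resamples of the data and average the predictions. A minimal sketch, with an ordinary least-squares line as a stand-in base learner (both function names are ours, chosen for illustration):

```python
import numpy as np

def line_fit_predict(X, y, x_new):
    # Base learner: fit a least-squares line to (X, y), predict at x_new.
    slope, intercept = np.polyfit(X, y, 1)
    return slope * x_new + intercept

def bagged_predict(X, y, x_new, base_fit_predict, B=50, seed=0):
    # Bootstrap aggregation: fit the base learner on B resamples drawn
    # with replacement, then average the B predictions.
    rng = np.random.default_rng(seed)
    n = len(y)
    preds = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)   # bootstrap indices
        preds.append(base_fit_predict(X[idx], y[idx], x_new))
    return float(np.mean(preds))
```

In practice the gains come from unstable base learners such as decision trees, where resampling changes the fitted model substantially; a linear fit is used here only to keep the sketch short.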

We develop a general framework for proving rigorous guarantees on the performance of the EM algorithm and a variant known as gradient EM when applied to a finite set of samples.
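For readers who want the object of study in front of them, here is a minimal EM iteration for a toy model often used in this literature: a balanced two-component N(μ, 1) mixture with only the means unknown. This is an illustrative sketch of sample-based EM, not the paper's analysis; the parameter values are made up.

```python
import numpy as np

def em_two_gaussians(x, mu0, mu1, n_iter=100):
    # EM for a 50/50 mixture of N(mu0, 1) and N(mu1, 1), updating only the means.
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point
        l0 = np.exp(-0.5 * (x - mu0) ** 2)
        l1 = np.exp(-0.5 * (x - mu1) ** 2)
        r = l1 / (l0 + l1)
        # M-step: responsibility-weighted sample means
        mu0 = np.sum((1 - r) * x) / np.sum(1 - r)
        mu1 = np.sum(r * x) / np.sum(r)
    return mu0, mu1

# Toy data: well-separated components at -3 and +3
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 1000), rng.normal(3, 1, 1000)])
m0, m1 = em_two_gaussians(x, -1.0, 1.0)
```

The finite-sample question the abstract addresses is how close iterates like (m0, m1) get to the true parameters, and from which initializations, given only n samples rather than the population likelihood.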