A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers

Abstract

High-dimensional statistical inference deals with models in which the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless p/n → 0, a line of recent work has studied models with various types of low-dimensional structure, including sparse vectors, sparse and structured matrices, low-rank matrices, and combinations thereof. In such settings, a general approach to estimation is to solve a regularized optimization problem, which combines a loss function measuring how well the model fits the data with some regularization function that encourages the assumed structure. This paper provides a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive some existing results, and also to obtain a number of new results on consistency and convergence rates, in both ℓ2-error and related norms. Our analysis also identifies two key properties of loss and regularization functions, referred to as restricted strong convexity and decomposability, that ensure corresponding regularized M-estimators have fast convergence rates, and which are optimal in many well-studied cases.
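As a concrete illustration of the regularized M-estimation framework the abstract describes, the sketch below implements the lasso: squared-error loss paired with the ℓ1 norm, the canonical decomposable regularizer for sparse vectors. This is a minimal NumPy example solved by proximal gradient descent (ISTA), not the paper's own algorithm; all function names, the step-size choice, and the problem dimensions are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iters=500):
    """Minimize (1/2n) ||y - X beta||_2^2 + lam * ||beta||_1 via ISTA.

    Illustrative sketch: a regularized M-estimator combining a loss
    (squared error) with a decomposable regularizer (the l1 norm).
    """
    n, p = X.shape
    # Step size = 1 / Lipschitz constant of the gradient of the loss.
    step = n / (np.linalg.norm(X, 2) ** 2)
    beta = np.zeros(p)
    for _ in range(n_iters):
        grad = X.T @ (X @ beta - y) / n          # gradient of the loss
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# High-dimensional regime: p > n, but the true parameter is sparse.
rng = np.random.default_rng(0)
n, p, s = 100, 200, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_hat = lasso_ista(X, y, lam=0.1)
print(np.linalg.norm(beta_hat - beta_true))  # l2 estimation error
```

Despite p > n, the ℓ2 error stays small here because the true vector is sparse, matching the kind of rate guarantee the paper's main theorem delivers when restricted strong convexity and decomposability hold.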

DOI: 10.1214/12-STS400


Cite this paper

@inproceedings{Negahban2009AUF,
  title     = {A unified framework for high-dimensional analysis of \$M\$-estimators with decomposable regularizers},
  author    = {Sahand Negahban and Pradeep Ravikumar and Martin J. Wainwright and Bin Yu},
  booktitle = {NIPS},
  year      = {2009}
}