# High dimensional regression and matrix estimation without tuning parameters

```bibtex
@article{Chatterjee2015HighDR,
  title   = {High dimensional regression and matrix estimation without tuning parameters},
  author  = {Sourav Chatterjee},
  journal = {arXiv: Statistics Theory},
  year    = {2015}
}
```

A general theory for Gaussian mean estimation that automatically adapts to unknown sparsity under arbitrary norms is proposed. The theory is applied to produce adaptively minimax rate-optimal estimators in high dimensional regression and matrix estimation that involve no tuning parameters.

## 5 Citations

### On cross-validated Lasso in high dimensions

- Computer Science, Mathematics (The Annals of Statistics)
- 2021

This paper derives non-asymptotic error bounds for the Lasso estimator when the penalty parameter is chosen by $K$-fold cross-validation, thereby justifying the widespread practice of using cross-validation to choose the penalty parameter.
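As an illustration only (not the paper's construction), the selection rule can be sketched in a few lines of NumPy: a proximal-gradient (ISTA) Lasso solver plus a $K$-fold loop that picks the penalty with the smallest held-out squared error. The solver, candidate grid, and fold scheme here are illustrative assumptions.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by proximal gradient (ISTA)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - grad / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return b

def cv_lasso(X, y, lams, k=5, seed=0):
    """Pick the penalty with the smallest K-fold prediction error."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), k)
    errs = []
    for lam in lams:
        err = 0.0
        for f in folds:
            train = np.ones(n, dtype=bool)
            train[f] = False
            b = lasso_ista(X[train], y[train], lam)
            err += np.sum((y[f] - X[f] @ b) ** 2)
        errs.append(err / n)
    return lams[int(np.argmin(errs))], errs

# Synthetic sparse regression: 3 active coefficients out of 20.
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.5 * rng.standard_normal(80)
lam_hat, errs = cv_lasso(X, y, [0.01, 0.05, 0.1, 0.5, 1.0])
```

The cited paper's contribution is the error analysis of this data-driven choice, not the procedure itself, which is standard practice.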

### On cross-validated Lasso

- Computer Science, Mathematics
- 2016

In the model with Gaussian noise and under fairly general assumptions on the candidate set of values of $\lambda$…

### High-Dimensional Regression Under Correlated Design: An Extensive Simulation Study

- Computer Science (Contributions to Statistics)
- 2019

The relative performance of several such methods for parameter estimation and variable selection under a correlated design matrix is investigated through the analysis of real and synthetic data sets.

### Minimal penalties and the slope heuristics: a survey

- Computer Science
- 2019

The theoretical results obtained for minimal-penalty algorithms are reviewed, with a self-contained proof in the simplest framework, precise proof ideas for further generalizations, and a few new results.

### New Risk Bounds for 2D Total Variation Denoising

- Computer Science, Mathematics (IEEE Transactions on Information Theory)
- 2021

This paper rigorously shows that, when the truth is piecewise constant with few pieces, the ideally tuned TVD estimator attains a faster rate than its worst-case guarantee.

## 113 References

### HIGH-DIMENSIONAL GENERALIZED LINEAR MODELS AND THE LASSO

- Computer Science, Mathematics
- 2008

A nonasymptotic oracle inequality is proved for the empirical risk minimizer with the Lasso penalty in high-dimensional generalized linear models with Lipschitz loss functions; the penalty is based on the coefficients in the linear predictor, after normalization with the empirical norm.

### Sparse Estimation by Exponential Weighting

- Computer Science
- 2012

An efficient implementation of the sparsity pattern aggregation principle is described that compares favorably to state-of-the-art procedures on basic numerical examples and yields sparsity oracle inequalities in several popular frameworks, including ordinary sparsity, fused sparsity, and group sparsity.

### Asymptotics for lasso-type estimators

- Mathematics
- 2000

We consider the asymptotic behavior of regression estimators that minimize the residual sum of squares plus a penalty proportional to $\sum_j |\beta_j|^\gamma$ for some $\gamma > 0$. These estimators include the Lasso as a…
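The penalized criterion in question can be written down directly. The following sketch (my notation, not the paper's) evaluates this "bridge" objective, which reduces to the Lasso at $\gamma = 1$ and to ridge regression at $\gamma = 2$.

```python
import numpy as np

def bridge_objective(beta, X, y, lam, gamma):
    """Residual sum of squares plus lam * sum_j |beta_j|^gamma.
    gamma = 1 recovers the Lasso objective; gamma = 2 recovers ridge."""
    resid = y - X @ beta
    return resid @ resid + lam * np.sum(np.abs(beta) ** gamma)

# Tiny check: with X = I and y = (1, 0), the exact fit beta = (1, 0) has zero
# residual and pays only the penalty, while beta = 0 pays only the residual.
obj_fit = bridge_objective(np.array([1.0, 0.0]), np.eye(2), np.array([1.0, 0.0]), 1.0, 1.0)
obj_zero = bridge_objective(np.zeros(2), np.eye(2), np.array([1.0, 0.0]), 1.0, 1.0)
```

For $\gamma < 1$ the penalty is non-convex, which is precisely the regime where the asymptotic analysis in the cited paper becomes delicate.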

### Estimation of (near) low-rank matrices with noise and high-dimensional scaling

- Computer Science (ICML)
- 2010

Simulations show excellent agreement with the high-dimensional scaling of the error predicted by the theory, and illustrate its consequences for a number of specific learning models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections.

### Matrix estimation by Universal Singular Value Thresholding

- Computer Science
- 2015

This paper introduces a simple estimation procedure, called Universal Singular Value Thresholding (USVT), that works for any matrix that has "a little bit of structure" and achieves the minimax error rate up to a constant factor.
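A minimal sketch of the hard-thresholding step at the core of USVT, under the assumptions of a fully observed matrix and unit-variance noise; the threshold constant below (eta times the square root of the larger dimension) is an illustrative choice, and the paper's actual threshold for bounded entries and missing data differs in its constants and scaling.

```python
import numpy as np

def usvt(Y, eta=2.02):
    """Keep only singular values of Y above eta * sqrt(max dimension),
    discard the rest (hard thresholding), and reassemble the matrix."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    keep = s > eta * np.sqrt(max(Y.shape))
    return (U[:, keep] * s[keep]) @ Vt[keep]

# Rank-one signal plus noise: thresholding should recover most of the signal.
rng = np.random.default_rng(0)
u = np.ones(50) / np.sqrt(50)
M = 60.0 * np.outer(u, u)                 # one large singular value (60)
Y = M + rng.standard_normal((50, 50))     # unit-variance Gaussian noise
err_usvt = np.linalg.norm(usvt(Y) - M)
err_raw = np.linalg.norm(Y - M)
```

The point of the threshold is that the singular values of a pure-noise matrix concentrate below roughly twice the square root of the dimension, so anything above it is attributed to signal.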

### Estimation of high-dimensional low-rank matrices

- Mathematics, Computer Science
- 2010

This work investigates penalized least squares estimators with a Schatten-$p$ quasi-norm penalty term and derives bounds for the $k$th entropy numbers of the quasi-convex Schatten class embeddings $S_p^M \to S_2^M$, $p < 1$, which are of independent interest.

### Sparsity oracle inequalities for the Lasso

- Mathematics, Computer Science
- 2007

It is shown that the penalized least squares estimator satisfies sparsity oracle inequalities, i.e., bounds in terms of the number of non-zero components of the oracle vector, in the nonparametric regression setting with random design.

### MODEL SELECTION FOR NONPARAMETRIC REGRESSION

- Computer Science, Mathematics
- 1997

A model complexity penalty term in AIC is incorporated to handle selection bias, and the resulting estimators are shown to achieve a trade-off among approximation error, estimation error, and model complexity without prior knowledge of the true regression function.

### Concentration inequalities and model selection

- Mathematics
- 2007

Chapters: Exponential and Information Inequalities; Gaussian Processes; Gaussian Model Selection; Concentration Inequalities; Maximal Inequalities; Density Estimation via Model Selection; Statistical…

### Unbiased Risk Estimates for Singular Value Thresholding and Spectral Estimators

- Mathematics (IEEE Transactions on Signal Processing)
- 2013

An unbiased risk estimate formula is given for singular value thresholding (SVT), a popular estimation strategy that applies a soft-thresholding rule to the singular values of the noisy observations, and its utility is demonstrated for SVT-based denoising of real clinical cardiac MRI series data.
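The SVT estimator itself is short to state; below is a minimal NumPy sketch of the soft-thresholding rule only (the paper's unbiased risk estimate formula is not reproduced here), with the threshold left as a free input.

```python
import numpy as np

def svt(Y, tau):
    """Apply soft thresholding to the singular values of Y:
    each singular value s is replaced by max(s - tau, 0)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# On a diagonal matrix the effect is explicit:
# singular values (5, 1) soft-thresholded at 2 become (3, 0).
out = svt(np.diag([5.0, 1.0]), 2.0)
```

Soft thresholding of singular values is the proximal operator of the nuclear norm, which is why SVT appears as the basic building block in nuclear-norm-penalized matrix estimation.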