SIMULTANEOUS ANALYSIS OF LASSO AND DANTZIG SELECTOR

@article{Bickel2008SIMULTANEOUSAO,
  title={SIMULTANEOUS ANALYSIS OF LASSO AND DANTZIG SELECTOR},
  author={Peter J. Bickel and Yaacov Ritov and Alexandre B. Tsybakov},
  journal={Annals of Statistics},
  year={2009},
  volume={37},
  pages={1705-1732}
}
We show that, under a sparsity scenario, the Lasso estimator and the Dantzig selector exhibit similar behavior. For both methods, we derive, in parallel, oracle inequalities for the prediction risk in the general nonparametric regression model, as well as bounds on the ℓ_p estimation loss for 1 ≤ p ≤ 2 in the linear model when the number of variables can be much larger than the sample size.
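
Both estimators are convex programs and easy to compare numerically. The sketch below is an illustration, not the paper's code: it fits the Lasso via scikit-learn and the Dantzig selector as a linear program (splitting β into positive and negative parts), on simulated data; the penalty level lam is a hypothetical choice using the usual sqrt(log p / n) scaling.

import numpy as np
from scipy.optimize import linprog
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5                        # p >> n, s-sparse truth
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:s] = 1.0
y = X @ beta + 0.5 * rng.standard_normal(n)
lam = 0.5 * np.sqrt(2 * np.log(p) / n)       # illustrative penalty level

# Lasso: min (1/2n)||y - Xb||_2^2 + lam*||b||_1
b_lasso = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_

# Dantzig selector: min ||b||_1  s.t.  ||X^T(y - Xb)||_inf <= n*lam.
# Writing b = u - v with u >= 0, v >= 0 gives a standard-form LP.
G, Xty, t = X.T @ X, X.T @ y, n * lam
A_ub = np.block([[G, -G], [-G, G]])
b_ub = np.concatenate([Xty + t, t - Xty])
res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub, method="highs")
b_dantzig = res.x[:p] - res.x[p:]

print(np.linalg.norm(b_lasso - b_dantzig))   # the two fits nearly coincide

With this matched calibration the Lasso's stationarity condition implies it is feasible for the Dantzig program, which is one elementary way to see why the two estimators behave so similarly.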

ORACLE INEQUALITIES FOR LASSO AND DANTZIG SELECTOR IN HIGH-DIMENSIONAL LINEAR REGRESSION

Taking the restricted eigenvalue condition, the compatibility condition and the UDP condition as examples, this paper establishes oracle inequalities for the Lasso and the Dantzig selector in high-dimensional linear regression.

Rate Minimaxity of the Lasso and Dantzig Estimators

Under certain conditions on the design matrix and the penalty level, it is proved that the minimax convergence rates are attained by both the Lasso and the Dantzig estimators.

Simultaneous Lasso and Dantzig Selector in High Dimensional Nonparametric Regression

Assumptions equivalent to Assumption RE(s) and Assumption RE(s, m) are given, and more precise oracle inequalities for the prediction risk in the general nonparametric regression model, as well as bounds on the ℓ_p estimation loss in the linear model, are derived when the number of variables can be much larger than the sample size.
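
For orientation, "Assumption RE" refers to the restricted eigenvalue condition of Bickel, Ritov and Tsybakov: in their notation, RE(s, c_0) requires

\kappa(s, c_0) \;=\;
  \min_{\substack{J_0 \subseteq \{1,\dots,p\} \\ |J_0| \le s}}\;
  \min_{\substack{\Delta \neq 0 \\ \|\Delta_{J_0^c}\|_1 \le c_0 \|\Delta_{J_0}\|_1}}
  \frac{\|X\Delta\|_2}{\sqrt{n}\,\|\Delta_{J_0}\|_2} \;>\; 0.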

Some theoretical results on the Grouped Variables Lasso

It is proved that the Group Lasso estimator satisfies a sparsity inequality, i.e., a bound in terms of the number of non-zero components of the oracle regression vector, which is better, in some cases, than the one achieved by the Lasso and the Dantzig selector.
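
The group Lasso replaces the ℓ_1 penalty by a sum of Euclidean norms over predefined blocks, min_β (1/2n)∥y − Xβ∥² + λ Σ_g √|g| ∥β_g∥_2, which zeroes out whole groups of coefficients at once. The proximal-gradient sketch below is a generic implementation of that estimator; group sizes, λ and the iteration budget are illustrative assumptions, not settings from the paper.

import numpy as np

def group_lasso(X, y, groups, lam, n_iter=500):
    """groups: list of index arrays partitioning {0, ..., p-1}."""
    n, p = X.shape
    beta = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)    # 1/L for the smooth part
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n             # gradient of (1/2n)||y - Xb||^2
        z = beta - step * grad
        for g in groups:                            # block soft-thresholding
            t = step * lam * np.sqrt(len(g))        # weight w_g = sqrt(|g|)
            nrm = np.linalg.norm(z[g])
            z[g] = 0.0 if nrm <= t else (1 - t / nrm) * z[g]
        beta = z
    return beta

rng = np.random.default_rng(1)
n, p = 100, 60
groups = [np.arange(i, i + 4) for i in range(0, p, 4)]    # 15 groups of 4
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:8] = 1.0              # first 2 groups active
y = X @ beta_true + 0.3 * rng.standard_normal(n)
beta_hat = group_lasso(X, y, groups, lam=0.1)
print([i for i, g in enumerate(groups)                    # indices of selected groups
       if np.linalg.norm(beta_hat[g]) > 1e-8])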

On the asymptotic properties of the group lasso estimator for linear models

We establish estimation and model selection consistency, prediction and estimation bounds, and persistence for the group-lasso estimator and model selector proposed by Yuan and Lin (2006) for least squares problems.

Greedy Variance Estimation for the LASSO

This work proposes an efficient estimator for the noise variance in high dimensional linear regression that is faster than LASSO, only requiring p matrix–vector multiplications, and proves this estimator is consistent with a good rate of convergence.
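
The snippet above does not spell out the algorithm, so the sketch below is only a plausible greedy scheme in the same spirit, not the authors' procedure: orthogonal matching pursuit (one X^T r product per step) selects a small support, and the noise variance is read off the degrees-of-freedom-corrected residual.

import numpy as np

def greedy_variance(X, y, k):
    n, p = X.shape
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(X.T @ r)))    # one X^T r product per step
        if j not in support:
            support.append(j)
        B = X[:, support]
        coef, *_ = np.linalg.lstsq(B, y, rcond=None)
        r = y - B @ coef                       # residual after refitting
    return (r @ r) / (n - len(support))        # df-corrected variance estimate

rng = np.random.default_rng(2)
n, p, sigma = 200, 500, 0.5
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 2.0
y = X @ beta + sigma * rng.standard_normal(n)
print(greedy_variance(X, y, k=10), "vs true", sigma ** 2)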

An ℓ1-oracle inequality for the Lasso in multivariate finite mixture of multivariate Gaussian regression models

We consider a multivariate finite mixture of Gaussian regression models for high-dimensional data, where the number of covariates and the size of the response may be much larger than the sample size.

On the Asymptotic Properties of The Group Lasso Estimator in Least Squares Problems

We derive conditions guaranteeing estimation and model selection consistency, oracle properties and persistence for the group-lasso estimator and model selector proposed by Yuan and Lin (2006) for least squares problems.

Variable Selection with Exponential Weights and $l_0$-Penalization

In the context of a linear model with a sparse coefficient vector, exponential weights methods have been shown to achieve oracle inequalities for prediction. We show that such methods also succeed at variable selection.
...

References

Asymptotics for lasso-type estimators

We consider the asymptotic behavior of regression estimators that minimize the residual sum of squares plus a penalty proportional to Σ_j |β_j|^γ for some γ > 0. These estimators include the Lasso as a special case.
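
Concretely, the criterion studied there is the bridge-type penalized least squares problem

\hat\beta \;=\; \arg\min_{\beta}\;
  \sum_{i=1}^{n} \bigl(y_i - x_i^{\top}\beta\bigr)^2
  \;+\; \lambda_n \sum_{j=1}^{p} |\beta_j|^{\gamma},
  \qquad \gamma > 0,

with γ = 1 recovering the Lasso and γ = 2 ridge regression.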

Sparsity oracle inequalities for the Lasso

It is shown that the penalized least squares estimator satisfies sparsity oracle inequalities, i.e., bounds in terms of the number of non-zero components of the oracle vector, in nonparametric regression setting with random design.
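
A typical sparsity oracle inequality of this kind has the schematic form (constants and remainder terms vary across the papers listed here): writing M(β) = #{j : β_j ≠ 0},

\|\hat f - f\|^2 \;\le\; C_1 \inf_{\beta}
  \Bigl\{ \|f_\beta - f\|^2 + C_2\,\frac{M(\beta)\,\log p}{n} \Bigr\}.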

Discussion: The Dantzig selector: Statistical estimation when p is much larger than n

The conditions of this paper using the Dantzig selector and those of Bunea, Tsybakov and Wegkamp using the Lasso are presented together, since these authors emphasize different points and use different normalizations.

HIGH-DIMENSIONAL GENERALIZED LINEAR MODELS AND THE LASSO

A nonasymptotic oracle inequality is proved for the empirical risk minimizer with Lasso penalty for high-dimensional generalized linear models with Lipschitz loss functions, and the penalty is based on the coefficients in the linear predictor, after normalization with the empirical norm.

Sparse Density Estimation with l1 Penalties

It is shown that the penalized least squares estimator satisfies sparsity oracle inequalities, i.e., bounds in terms of the number of non-zero components of the oracle vector, even when the dimension of the model is (much) larger than the sample size.

The Dantzig selector: Statistical estimation when P is much larger than n

Is it possible to estimate β reliably based on the noisy data y?

LASSO-TYPE RECOVERY OF SPARSE REPRESENTATIONS FOR HIGH-DIMENSIONAL DATA

Even though the Lasso cannot recover the correct sparsity pattern, the estimator is still consistent in the ℓ_2-norm sense for fixed designs under conditions on (a) the number s_n of non-zero components of the vector β_n and (b) the minimal singular values of the design matrices that are induced by selecting of order s_n variables.

Aggregation and Sparsity Via l1 Penalized Least Squares

This paper shows that near optimal rates of aggregation and adaptation to unknown sparsity can be simultaneously achieved via l1 penalized least squares in a nonparametric regression setting.

On the LASSO and its Dual

Consideration of the primal and dual problems together leads to important new insights into the characteristics of the LASSO estimator and to an improved method for estimating its covariance matrix.

The sparsity and bias of the Lasso selection in high-dimensional linear regression

Meinshausen and Buhlmann [Ann. Statist. 34 (2006) 1436-1462] showed that, for neighborhood selection in Gaussian graphical models, under a neighborhood stability condition, the LASSO is consistent even when the number of variables is of greater order than the sample size.