# Adaptive post-Dantzig estimation and prediction for non-sparse "large $p$ and small $n$" models

@article{Lin2010AdaptivePE, title={Adaptive post-Dantzig estimation and prediction for non-sparse "large \$p\$ and small \$n\$" models}, author={Lu Lin and Lixing Zhu and Yujie Gai}, journal={arXiv: Methodology}, year={2010} }

For consistency (even oracle properties) of estimation and model prediction, almost all existing variable/feature selection methods depend critically on model sparsity. However, for "large $p$ and small $n$" models the sparsity assumption is hard to check and, in particular, when this assumption is violated, consistency of existing estimators is usually unattainable because the working models selected by methods such as the LASSO and the Dantzig selector are usually biased. To…

## One Citation

Inference for biased models: A quasi-instrumental variable approach

- Psychology, J. Multivar. Anal.
- 2016

## References

Showing 1–10 of 27 references

On Model Selection Consistency of Lasso

- Computer Science, J. Mach. Learn. Res.
- 2006

It is proved that a single condition, called the Irrepresentable Condition, is almost necessary and sufficient for the Lasso to select the true model, both in the classical fixed-$p$ setting and in the large-$p$ setting as the sample size $n$ gets large.
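The condition above can be checked numerically for a given design: with $C = X^\top X / n$, true support $S$, and sign vector $\operatorname{sign}(\beta_S)$, it requires $\lVert C_{S^c S} C_{SS}^{-1} \operatorname{sign}(\beta_S) \rVert_\infty \le 1$. A minimal NumPy sketch (illustrative only; the function name and interface are ours, not from the paper):

```python
import numpy as np

def irrepresentable_ok(X, support, signs, eta=0.0):
    """Check the (strong) Irrepresentable Condition
        || C[Sc,S] @ inv(C[S,S]) @ sign(beta_S) ||_inf <= 1 - eta,
    where C = X.T @ X / n.  Returns True if the condition holds."""
    n, p = X.shape
    C = X.T @ X / n
    S = np.asarray(support)
    Sc = np.setdiff1d(np.arange(p), S)
    v = C[np.ix_(Sc, S)] @ np.linalg.solve(C[np.ix_(S, S)], np.asarray(signs, float))
    return bool(np.max(np.abs(v)) <= 1 - eta)

# An orthogonal design satisfies the condition trivially.
ok_orth = irrepresentable_ok(np.eye(4), [0, 1], [1, -1])
# A redundant column strongly correlated with both true ones violates it.
X_bad = np.array([[1.0, 0.0, 0.8],
                  [0.0, 1.0, 0.8]])
ok_bad = irrepresentable_ok(X_bad, [0, 1], [1, 1])
```

In the second example the irrelevant column is nearly a sum of the two true columns, so the Lasso cannot consistently exclude it.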

Empirical likelihood for a varying coefficient partially linear model with diverging number of parameters

- Mathematics, J. Multivar. Anal.
- 2012

A generalized Dantzig selector with shrinkage tuning

- Economics
- 2009

The Dantzig selector performs variable selection and model fitting in linear regression. It uses an $L_1$ penalty to shrink the regression coefficients towards zero, in a similar fashion to the lasso.…

Nonconcave penalized likelihood with a diverging number of parameters

- Mathematics
- 2004

A class of variable selection procedures for parametric models via nonconcave penalized likelihood was proposed by Fan and Li to simultaneously estimate parameters and select important variables.…

DASSO: connections between the Dantzig selector and lasso

- Computer Science
- 2009

We propose a new algorithm, DASSO, for fitting the entire coefficient path of the Dantzig selector with a similar computational cost to the least angle regression algorithm that is used to…

Profile-kernel likelihood inference with diverging number of parameters

- Computer Science, Annals of Statistics
- 2008

A new algorithm, called the accelerated profile-kernel algorithm, for computing the profile-kernel estimator is proposed and investigated, and the Wilks phenomenon is demonstrated.

Sure independence screening for ultrahigh dimensional feature space

- Computer Science
- 2006

The concept of sure screening is introduced, and a sure screening method based on correlation learning, called sure independence screening, is proposed to reduce dimensionality from ultrahigh to a moderate scale below the sample size.
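The correlation-learning step described in this blurb can be sketched in a few lines of NumPy: rank predictors by absolute marginal correlation with the response and retain the top $d < n$. This is a toy illustration, not the paper's implementation; the function name `sis_keep` is ours:

```python
import numpy as np

def sis_keep(X, y, d):
    """Sure-independence-screening-style rule: keep the indices of the
    d columns of X with the largest |marginal correlation| with y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc)
    corr = np.abs(Xc.T @ yc) / np.where(denom == 0.0, 1.0, denom)
    return np.argsort(corr)[::-1][:d]   # indices of the d largest values

# Toy example: 3 strong predictors among p = 200, with only n = 100 samples.
rng = np.random.default_rng(0)
n, p = 100, 200
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 5.0
y = X @ beta + 0.1 * rng.standard_normal(n)
kept = sis_keep(X, y, d=10)   # the true variables 0, 1, 2 survive screening
```

A penalized method such as the LASSO or the Dantzig selector would then be run on the $d$ retained columns only.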

The Dantzig selector: Statistical estimation when $p$ is much larger than $n$

- Computer Science
- 2007

In many important statistical applications, the number of variables or parameters $p$ is much larger than the number of observations $n$. Suppose then that we have observations $y = X\beta + z$, where $\beta \in \mathbb{R}^p$ is a…
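The Dantzig selector of this paper solves $\min \lVert \beta \rVert_1$ subject to $\lVert X^\top (y - X\beta) \rVert_\infty \le \lambda$, which is a linear program after splitting $\beta = u - v$ with $u, v \ge 0$. A minimal SciPy sketch (an illustrative toy, not the authors' code; the function name, interface, and choice of $\lambda$ are ours):

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """min ||beta||_1  s.t.  ||X'(y - X beta)||_inf <= lam,
    solved as an LP with beta = u - v and u, v >= 0."""
    n, p = X.shape
    G = X.T @ X
    c = X.T @ y
    obj = np.ones(2 * p)                       # 1'u + 1'v = ||beta||_1
    A_ub = np.vstack([np.hstack([G, -G]),      #  G (u - v) <= c + lam
                      np.hstack([-G, G])])     # -G (u - v) <= lam - c
    b_ub = np.concatenate([c + lam, lam - c])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    if res.status != 0:
        raise RuntimeError(res.message)
    return res.x[:p] - res.x[p:]

# Toy "large p, small n" example: two strong signals among p = 80, n = 40.
rng = np.random.default_rng(1)
n, p = 40, 80
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[0], beta[1] = 3.0, -2.0
y = X @ beta + 0.1 * rng.standard_normal(n)
lam = 0.1 * np.sqrt(2 * n * np.log(p))   # sigma * sqrt(2 n log p) scaling
beta_hat = dantzig_selector(X, y, lam)
```

The constraint bounds the correlation of every predictor with the residual, which is what drives the estimation guarantees in the $p \gg n$ regime.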

Asymptotic properties of bridge estimators in sparse high-dimensional regression models

- Mathematics
- 2008

We study the asymptotic properties of bridge estimators in sparse, high-dimensional, linear regression models when the number of covariates may increase to infinity with the sample size. We are…

Marginal asymptotics for the “large $p$, small $n$” paradigm: With applications to microarray data

- Mathematics
- 2005

The "large $p$, small $n$" paradigm arises in microarray studies, image analysis, high throughput molecular screening, astronomy, and in many other high dimensional applications. False discovery rate…