Corpus ID: 88511768

Adaptive post-Dantzig estimation and prediction for non-sparse "large $p$ and small $n$" models

@article{Lin2010AdaptivePE,
  title={Adaptive post-Dantzig estimation and prediction for non-sparse "large \$p\$ and small \$n\$" models},
  author={Lu Lin and Lixing Zhu and Yujie Gai},
  journal={arXiv: Methodology},
  year={2010}
}
For consistency (even oracle properties) of estimation and model prediction, almost all existing methods of variable/feature selection depend critically on the sparsity of the model. However, for "large $p$ and small $n$" models the sparsity assumption is hard to check and, in particular, when this assumption is violated, consistency of existing estimators is usually unattainable because the working models selected by existing methods such as the LASSO and the Dantzig selector are typically biased. To… 
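For concreteness, the setting in question is the standard high-dimensional linear model (notation chosen here for illustration, not quoted from the paper): $y = X\beta + \varepsilon$ with response $y \in \mathbb{R}^n$, design matrix $X \in \mathbb{R}^{n \times p}$ and $p \gg n$. "Sparsity" means that the number of nonzero coefficients, $\|\beta\|_0 = \#\{j : \beta_j \neq 0\}$, is small relative to $n$; the paper's concern is what happens to post-selection (post-Dantzig) estimation and prediction when this count is not small.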
References

Showing 1-10 of 27 references
On Model Selection Consistency of Lasso
A single condition, called the Irrepresentable Condition, is shown to be almost necessary and sufficient for the Lasso to select the true model, both in the classical fixed-$p$ setting and in the large-$p$ setting as the sample size $n$ grows.
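In the notation of Zhao and Yu (partitioning the Gram matrix $C_n = n^{-1}X^{\top}X$ into blocks for the truly relevant covariates, indexed 1, and the irrelevant ones, indexed 2), the strong irrepresentable condition can be written, up to the notational choices made here, as $\big|C_{n,21}\,C_{n,11}^{-1}\,\mathrm{sign}(\beta_{(1)})\big| \le \mathbf{1} - \eta$ elementwise, for some constant $\eta > 0$.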
A generalized Dantzig selector with shrinkage tuning
The Dantzig selector performs variable selection and model fitting in linear regression. It uses an $L_1$ penalty to shrink the regression coefficients towards zero, in a similar fashion to the lasso.
Nonconcave penalized likelihood with a diverging number of parameters
A class of variable selection procedures for parametric models via nonconcave penalized likelihood was proposed by Fan and Li to simultaneously estimate parameters and select important variables.
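As a concrete instance of a nonconcave penalty, Fan and Li's SCAD penalty is usually specified through its derivative (stated here for background; $\lambda$ and $a$ are the usual tuning constants, with $a = 3.7$ a common default): $p'_{\lambda}(\theta) = \lambda\left\{ I(\theta \le \lambda) + \frac{(a\lambda - \theta)_{+}}{(a-1)\lambda}\, I(\theta > \lambda) \right\}$ for $\theta > 0$ and some $a > 2$.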
DASSO: connections between the Dantzig selector and lasso
We propose a new algorithm, DASSO, for fitting the entire coefficient path of the Dantzig selector with a computational cost similar to that of the least angle regression algorithm used to compute the lasso solution path.
Profile-kernel likelihood inference with diverging number of parameters
A new algorithm, the accelerated profile-kernel algorithm, is proposed and investigated for computing the profile-kernel estimator, and the Wilks phenomenon is demonstrated.
Sure independence screening for ultrahigh dimensional feature space
The concept of sure screening is introduced, and a sure screening method based on correlation learning, called sure independence screening, is proposed to reduce dimensionality from high to a moderate scale below the sample size.
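A minimal sketch of correlation-based screening in this spirit is given below; the function name, the default cutoff $d = \lfloor n/\log n \rfloor$, and the simulated example are illustrative assumptions, not the authors' code.

import numpy as np

def sis_screen(X, y, d=None):
    # Sure-independence-screening style filter: rank features by absolute
    # marginal correlation with the response and keep the top d of them.
    n, p = X.shape
    if d is None:
        d = int(n / np.log(n))                    # default cutoff below the sample size
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize columns
    yc = (y - y.mean()) / y.std()
    omega = np.abs(Xc.T @ yc) / n                 # componentwise marginal correlations
    return np.sort(np.argsort(omega)[::-1][:d])   # indices of the d largest correlations

# Illustrative use: n = 100 observations, p = 2000 features, 2 of them active.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2000))
y = X[:, 0] - 2.0 * X[:, 1] + rng.standard_normal(100)
kept = sis_screen(X, y)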
The Dantzig selector: Statistical estimation when P is much larger than n
In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations $y = X\beta + z$, where $\beta \in \mathbb{R}^p$ is a parameter vector of interest and $z$ is a stochastic error term.
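For reference, with the columns of $X$ normalized, the Dantzig selector is the solution of the linear program (constants paraphrased here; Candès and Tao take the threshold proportional to $\sigma\sqrt{2\log p}$): $\hat{\beta} = \arg\min_{b \in \mathbb{R}^p} \|b\|_1$ subject to $\|X^{\top}(y - Xb)\|_{\infty} \le \lambda_p \sigma$.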
Asymptotic properties of bridge estimators in sparse high-dimensional regression models
We study the asymptotic properties of bridge estimators in sparse, high-dimensional, linear regression models when the number of covariates may increase to infinity with the sample size. …
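For context, the bridge estimator referred to here minimizes a least-squares criterion with an $\ell_\gamma$-type penalty, $\sum_{i}(y_i - x_i^{\top}b)^2 + \lambda_n \sum_{j}|b_j|^{\gamma}$, where the sparse-selection case of interest is $0 < \gamma < 1$ (this formulation is standard background, not quoted from the reference).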
Marginal asymptotics for the “large $p$, small $n$” paradigm: With applications to microarray data
The "large p, small n" paradigm arises in microarray studies, image analysis, high throughput molecular screening, astronomy, and in many other high dimensional applications. False discovery rate