Estimation and inference for high-dimensional non-sparse models
@article{Lin2011EstimationAI,
  title   = {Estimation and inference for high-dimensional non-sparse models},
  author  = {Lu Lin and Lixing Zhu and Yujie Gai},
  journal = {arXiv: Methodology},
  year    = {2011}
}
For variable selection to succeed, a sparse model structure has become a basic assumption of all existing methods. However, this assumption is questionable, as it rarely holds in practice, and none of the existing methods can provide consistent estimation and accurate model prediction in non-sparse scenarios. In this paper, we propose semiparametric re-modeling and inference when the linear regression model under study is possibly non-sparse. After an initial working model is selected…
References
Showing 1–10 of 32 references
Empirical likelihood for a varying coefficient partially linear model with diverging number of parameters
- Mathematics · J. Multivar. Anal.
- 2012
On Model Selection Consistency of Lasso
- Computer Science · J. Mach. Learn. Res.
- 2006
It is proved that a single condition, which is called the Irrepresentable Condition, is almost necessary and sufficient for Lasso to select the true model both in the classical fixed p setting and in the large p setting as the sample size n gets large.
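The strong Irrepresentable Condition summarized above can be checked numerically for a given design and active set: it requires the sup-norm of C21 C11⁻¹ sign(β1) to stay below 1, where C = XᵀX/n is partitioned by active/inactive covariates. A minimal sketch (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def irrepresentable_gap(X, support_signs):
    """Return 1 - max|C21 C11^{-1} s| for the given active-set signs s.

    A positive return value means the strong Irrepresentable Condition
    holds, with eta equal to the returned gap; a non-positive value
    means Lasso sign consistency can fail for this design."""
    n, p = X.shape
    C = X.T @ X / n                       # empirical covariance C = X'X / n
    active = np.flatnonzero(support_signs)
    inactive = np.setdiff1d(np.arange(p), active)
    C11 = C[np.ix_(active, active)]       # active-by-active block
    C21 = C[np.ix_(inactive, active)]     # inactive-by-active block
    s = np.sign(support_signs[active])
    return 1.0 - np.max(np.abs(C21 @ np.linalg.solve(C11, s)))
```

For an orthogonal design, C21 is the zero matrix and the gap is 1; a classic failure case is an inactive covariate that is strongly correlated with the sum of two active ones, which drives the gap negative.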
Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Mathematics, Computer Science
- 2001
In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.
The sparsity and bias of the Lasso selection in high-dimensional linear regression
- Mathematics
- 2008
Meinshausen and Buhlmann [Ann. Statist. 34 (2006) 1436-1462] showed that, for neighborhood selection in Gaussian graphical models, under a neighborhood stability condition, the LASSO is consistent,…
Profile-Kernel Likelihood Inference with Diverging Number of Parameters
- Computer Science · Annals of Statistics
- 2008
A new algorithm, called the accelerated profile-kernel algorithm, for computing the profile-kernel estimator is proposed and investigated, and Wilks' phenomenon is demonstrated.
Asymptotic inference for high-dimensional data
- Mathematics
- 2010
In this paper, we study inference for high-dimensional data characterized by small sample sizes relative to the dimension of the data. In particular, we provide an infinite-dimensional framework to…
Nonconcave penalized likelihood with a diverging number of parameters
- Mathematics
- 2004
A class of variable selection procedures for parametric models via nonconcave penalized likelihood was proposed by Fan and Li to simultaneously estimate parameters and select important variables.…
Asymptotic properties of bridge estimators in sparse high-dimensional regression models
- Mathematics
- 2008
We study the asymptotic properties of bridge estimators in sparse, high-dimensional, linear regression models when the number of covariates may increase to infinity with the sample size. We are…
The Adaptive Lasso and Its Oracle Properties
- Computer Science
- 2006
A new version of the lasso is proposed, called the adaptive lasso, where adaptive weights are used for penalizing different coefficients in the ℓ1 penalty, and the nonnegative garotte is shown to be consistent for variable selection.
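The two-step construction summarized above can be sketched in code: fit an unpenalized pilot estimate, form weights w_j = 1/|β̂_j|^γ, then solve a weighted ℓ1 problem. The sketch below uses a plain coordinate-descent lasso in numpy; the function names, the small stabilizing constant, and the default tuning values are illustrative assumptions, not from the paper.

```python
import numpy as np

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0, n_iter=200):
    """Adaptive lasso sketch: minimize (1/2n)||y - Xb||^2 + alpha * sum w_j |b_j|,
    with data-driven weights w_j = 1/|pilot_j|^gamma from an OLS pilot fit."""
    n, p = X.shape
    pilot, *_ = np.linalg.lstsq(X, y, rcond=None)   # unpenalized pilot estimate
    w = 1.0 / (np.abs(pilot) ** gamma + 1e-8)       # adaptive weights (1e-8 avoids /0)
    beta = np.zeros(p)
    r = y - X @ beta                                # current residual
    c = (X ** 2).sum(axis=0) / n                    # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):                          # cyclic coordinate descent
            r += X[:, j] * beta[j]                  # remove coordinate j from fit
            rho = X[:, j] @ r / n
            # soft-threshold at the weighted penalty level alpha * w_j
            beta[j] = np.sign(rho) * max(abs(rho) - alpha * w[j], 0.0) / c[j]
            r -= X[:, j] * beta[j]                  # restore coordinate j
    return beta
```

Because the pilot estimates of truly zero coefficients are small, their weights are large and those coordinates are thresholded exactly to zero, while strong coefficients receive light penalties; this differential penalization is what the summary above credits for the oracle property.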
Semilinear High-Dimensional Model for Normalization of Microarray Data
- Mathematics
- 2005
Normalization of microarray data is essential for removing experimental biases and revealing meaningful biological results. Motivated by a problem of normalizing microarray data, a semilinear…