Corpus ID: 88511777

Estimation and inference for high-dimensional non-sparse models

@article{Lin2011EstimationAI,
  title={Estimation and inference for high-dimensional non-sparse models},
  author={Lu Lin and Lixing Zhu and Yujie Gai},
  journal={arXiv: Methodology},
  year={2011}
}
To perform variable selection successfully, a sparse model structure has become a basic assumption of all existing methods. This assumption is questionable, however: it rarely holds in practice, and no existing method can guarantee consistent estimation or accurate model prediction in non-sparse scenarios. In this paper, we propose semiparametric re-modeling and inference for the case where the linear regression model under study is possibly non-sparse. After an initial working model is selected… 
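The abstract only sketches the procedure, but the two-stage idea it describes (a sparse working model first, with the remaining covariates re-modeled semiparametrically) can be illustrated. Below is a minimal Python sketch under assumed choices: cross-validated Lasso for the working model, then Nadaraya-Watson smoothing of the residuals against a single linear index of the unselected covariates. This illustrates the general idea only; it is not the authors' estimator.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)

# Simulated non-sparse design: a few strong signals plus many small,
# nonzero coefficients, so no sparse submodel is exactly correct.
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.concatenate([[3.0, -2.0, 1.5], 0.1 * rng.standard_normal(p - 3)])
y = X @ beta + rng.standard_normal(n)

# Stage 1: an initial sparse working model via cross-validated Lasso.
selected = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
rest = np.setdiff1d(np.arange(p), selected)

ols = LinearRegression().fit(X[:, selected], y)
resid = y - ols.predict(X[:, selected])

# Stage 2 (assumed form): absorb the unselected covariates through a single
# linear index and smooth the stage-1 residuals against it.
gamma = np.linalg.lstsq(X[:, rest], resid, rcond=None)[0]
index = X[:, rest] @ (gamma / np.linalg.norm(gamma))

def nw(t0, t, r, h=0.5):
    """Nadaraya-Watson estimate of E[r | index = t0], Gaussian kernel."""
    w = np.exp(-0.5 * ((t0 - t) / h) ** 2)
    return w @ r / w.sum()

fitted = ols.predict(X[:, selected]) + np.array([nw(t, index, resid) for t in index])
```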


References

Showing 1–10 of 32 references
On Model Selection Consistency of Lasso
TLDR
It is proved that a single condition, which is called the Irrepresentable Condition, is almost necessary and sufficient for Lasso to select the true model both in the classical fixed p setting and in the large p setting as the sample size n gets large.
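The condition is concrete enough to check numerically for a given design matrix. The following sketch (a hypothetical helper, not code from the paper) tests the strong form, which requires |C21 C11^{-1} sign(beta_1)| <= 1 - eta elementwise, where C = X'X/n is partitioned by the true support:

```python
import numpy as np

def irrepresentable_holds(X, support, sign_beta, eta=0.0):
    """Check the strong Irrepresentable Condition of Zhao & Yu:
    |C_{S^c,S} C_{S,S}^{-1} sign(beta_S)| <= 1 - eta elementwise,
    with C = X'X/n partitioned by the true support S."""
    n, p = X.shape
    C = X.T @ X / n
    S = np.asarray(support)
    Sc = np.setdiff1d(np.arange(p), S)
    lhs = np.abs(C[np.ix_(Sc, S)] @ np.linalg.solve(C[np.ix_(S, S)], sign_beta))
    return bool(np.all(lhs <= 1 - eta))

# Example: an inactive covariate strongly correlated with the active ones
# breaks the condition, so the Lasso cannot recover the true support.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))
X[:, 2] = 0.9 * X[:, 0] + 0.9 * X[:, 1] + 0.1 * rng.standard_normal(500)
print(irrepresentable_holds(X, support=[0, 1], sign_beta=np.array([1.0, 1.0])))  # False
```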
Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
TLDR
In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.
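The penalty behind this approach is SCAD. As a reference point, a minimal NumPy implementation of the SCAD penalty from Fan and Li, with their suggested a = 3.7, is sketched below:

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty, elementwise on |t|: linear up to lam, a quadratic taper
    on (lam, a*lam], then constant, so large coefficients are left nearly
    unpenalized (the source of the oracle property)."""
    t = np.abs(np.asarray(t, dtype=float))
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    out = np.empty_like(t)
    out[small] = lam * t[small]
    out[mid] = (2 * a * lam * t[mid] - t[mid] ** 2 - lam ** 2) / (2 * (a - 1))
    out[~(small | mid)] = lam ** 2 * (a + 1) / 2  # flat beyond a*lam
    return out
```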
The sparsity and bias of the Lasso selection in high-dimensional linear regression
Meinshausen and Bühlmann [Ann. Statist. 34 (2006) 1436–1462] showed that, for neighborhood selection in Gaussian graphical models, under a neighborhood stability condition, the LASSO is consistent…
Profile-kernel likelihood inference with diverging number of parameters
TLDR
A new algorithm, called the accelerated profile-kernel algorithm, for computing the profile-kernel estimator is proposed and investigated, and the Wilks phenomenon is demonstrated.
Asymptotic inference for high-dimensional data
In this paper, we study inference for high-dimensional data characterized by small sample sizes relative to the dimension of the data. In particular, we provide an infinite-dimensional framework to…
Nonconcave penalized likelihood with a diverging number of parameters
A class of variable selection procedures for parametric models via nonconcave penalized likelihood was proposed by Fan and Li to simultaneously estimate parameters and select important variables.
Asymptotic properties of bridge estimators in sparse high-dimensional regression models
We study the asymptotic properties of bridge estimators in sparse, high-dimensional, linear regression models when the number of covariates may increase to infinity with the sample size. We are…
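The bridge penalty itself is easy to state: lam * sum_j |beta_j|^q, with 0 < q < 1 in the regime studied here. A small illustrative helper (hypothetical, for intuition only):

```python
import numpy as np

def bridge_penalty(beta, lam, q):
    """Bridge penalty lam * sum(|beta_j|^q). q = 1 recovers the lasso and
    q = 2 ridge; 0 < q < 1 gives the nonconvex penalties that can yield
    sparse, oracle-like estimators."""
    return lam * np.sum(np.abs(beta) ** q)

beta = np.array([0.0, 0.1, 1.0, 3.0])
for q in (0.5, 1.0, 2.0):
    print(q, bridge_penalty(beta, lam=1.0, q=q))
```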
The Adaptive Lasso and Its Oracle Properties
TLDR
A new version of the lasso is proposed, called the adaptive lasso, where adaptive weights are used for penalizing different coefficients in the ℓ1 penalty, and the nonnegative garrote is shown to be consistent for variable selection.
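A convenient way to compute the adaptive lasso is to fold the weights into the design matrix, since the weighted ℓ1 problem reduces to a plain lasso after rescaling the columns. A minimal sketch, assuming n > p so that OLS supplies the initial weights:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def adaptive_lasso(X, y, gamma=1.0, eps=1e-8):
    """Adaptive lasso by reparameterization: with weights
    w_j = 1 / |beta_ols_j|^gamma, solving the weighted l1 problem equals
    running a plain lasso on the rescaled design X_j / w_j."""
    ols = LinearRegression().fit(X, y)          # requires n > p in practice
    w = 1.0 / (np.abs(ols.coef_) ** gamma + eps)
    lasso = LassoCV(cv=5).fit(X / w, y)         # broadcasts over columns
    return lasso.coef_ / w                      # map back to original scale
```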
Semilinear High-Dimensional Model for Normalization of Microarray Data
Normalization of microarray data is essential for removing experimental biases and revealing meaningful biological results. Motivated by a problem of normalizing microarray data, a semilinear…