Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
@article{Fan2001VariableSV, title={Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties}, author={Jianqing Fan and Runze Li}, journal={Journal of the American Statistical Association}, year={2001}, volume={96}, pages={1348--1360} }
Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive and ignore stochastic errors in the variable selection process. In this article, penalized likelihood approaches are proposed to handle these kinds of problems. The proposed methods select variables and estimate coefficients simultaneously. Hence they enable us to construct confidence…
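The penalty family at the center of the paper is the smoothly clipped absolute deviation (SCAD). As a minimal sketch (not the authors' code; the default a = 3.7 follows their Bayesian argument, and the thresholding rule below is the closed-form minimizer for a single least-squares coordinate):

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty p_lam(|theta|) of Fan & Li (2001); requires a > 2."""
    t = np.abs(theta)
    return np.where(
        t <= lam,
        lam * t,                                            # L1-like near zero
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),  # quadratic clip
            (a + 1) * lam**2 / 2,                           # constant: no bias
        ),
    )

def scad_threshold(z, lam, a=3.7):
    """Closed-form minimizer of 0.5*(z - theta)^2 + p_lam(|theta|)."""
    az = np.abs(z)
    soft = np.sign(z) * np.maximum(az - lam, 0.0)           # |z| <= 2*lam
    mid = ((a - 1) * z - np.sign(z) * a * lam) / (a - 2)    # 2*lam < |z| <= a*lam
    return np.where(az <= 2 * lam, soft, np.where(az <= a * lam, mid, z))
```

Unlike soft thresholding, this rule leaves large coefficients (|z| > aλ) untouched, which is what removes the shrinkage bias and drives the oracle property.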
7,149 Citations
Variable Selection using MM Algorithms.
- Computer Science · Annals of Statistics
- 2005
This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions and proves that when these MM algorithms converge, they must converge to a desirable point.
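A hedged sketch of the idea for penalized least squares: each MM step minimizes a perturbed quadratic majorizer of the SCAD-penalized objective, in the spirit of Hunter and Li's algorithm. The ε perturbation size, iteration count, and n-scaling convention here are illustrative assumptions, not prescriptions from the paper:

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    """Derivative p'_lam(t) of the SCAD penalty for t >= 0."""
    return lam * ((t <= lam) + np.maximum(a * lam - t, 0.0)
                  / ((a - 1) * lam) * (t > lam))

def mm_penalized_ls(X, y, lam, n_iter=100, eps=1e-6):
    """MM iterations for SCAD-penalized least squares: each step solves the
    normal equations of a quadratic majorizer of the penalty, perturbed by
    eps so that coefficients currently at zero can re-enter."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start at the OLS fit
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        w = scad_deriv(np.abs(beta), lam) / (np.abs(beta) + eps)
        beta = np.linalg.solve(XtX + n * np.diag(w), Xty)
    return beta
```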
Discussion: One-step sparse estimates in nonconcave penalized likelihood models
- Computer Science
- 2008
Nonconcave penalized likelihood methods are still commonly viewed as computationally limited and poorly understood, especially when the number of variables exceeds the number of data points; the discussion relates to Fan and Li's work through continuity, computational strategies, selection consistency, and oracle efficiency.
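The one-step proposal under discussion (Zou and Li, 2008) replaces the fully iterated fit with a single weighted-lasso solve after linearizing the penalty at an initial consistent estimate. A minimal sketch, assuming an OLS initial fit, the SCAD derivative as the weight function, and scikit-learn's (1/2n) loss scaling; the column-rescaling trick for the weighted L1 penalty and the w_floor guard are implementation conveniences, not part of the original paper:

```python
import numpy as np
from sklearn.linear_model import Lasso

def one_step_lla(X, y, lam, a=3.7, w_floor=1e-8):
    """One-step LLA sketch: linearize the SCAD penalty at an initial
    root-n-consistent fit, then solve one weighted-lasso problem."""
    n, p = X.shape
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]        # initial OLS fit
    t = np.abs(beta0)
    w = lam * ((t <= lam) + np.maximum(a * lam - t, 0.0)
               / ((a - 1) * lam) * (t > lam))           # SCAD derivative weights
    w = np.maximum(w, w_floor)                          # avoid division by zero
    # fold the weights into the design: penalizing |gamma_j| on X_j / w_j
    # is the same as penalizing w_j * |beta_j| on X_j
    fit = Lasso(alpha=1.0, fit_intercept=False).fit(X / w, y)
    return fit.coef_ / w
```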
PENALIZED VARIABLE SELECTION PROCEDURE FOR COX MODELS WITH SEMIPARAMETRIC RELATIVE RISK.
- Mathematics · Annals of Statistics
- 2010
A penalized partial likelihood procedure is proposed to simultaneously estimate the parameters and select variables for both the parametric and the nonparametric parts of Cox models with semiparametric relative risk; the resulting estimator of the parametric part is shown to possess the oracle property, and the estimator of the nonparametric part achieves the optimal rate of convergence.
Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.
- Mathematics · Statistica Sinica
- 2011
This paper proposes a model selection procedure for nonparametric models, explores the conditions under which the new method enjoys the nonparametric oracle property, and demonstrates that the new approach substantially outperforms existing methods in the finite sample setting.
Robust Variable Selection With Exponential Squared Loss
- Mathematics, Computer Science · Journal of the American Statistical Association
- 2013
This article proposes a class of penalized robust regression estimators based on exponential squared loss that can achieve the highest asymptotic breakdown point of 1/2 and shows that their influence functions are bounded with respect to the outliers in either the response or the covariate domain.
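The loss in question is φ_γ(r) = 1 − exp(−r²/γ), which behaves like r²/γ for small residuals but is bounded, so large outliers exert bounded influence; smaller γ buys more robustness at some cost in efficiency. A toy sketch (the L1 penalty and Nelder-Mead solver stand in for the paper's nonconvex penalty and tailored algorithm, and are only workable for small problems):

```python
import numpy as np
from scipy.optimize import minimize

def exp_squared_loss(r, gamma=1.0):
    """Exponential squared loss 1 - exp(-r^2 / gamma): quadratic near zero,
    bounded for large residuals."""
    return 1.0 - np.exp(-r**2 / gamma)

def fit_exp_squared(X, y, lam, gamma=1.0):
    """Toy penalized robust fit: bounded loss plus an L1 penalty."""
    def obj(beta):
        r = y - X @ beta
        return exp_squared_loss(r, gamma).sum() + lam * np.abs(beta).sum()
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS warm start
    return minimize(obj, beta0, method="Nelder-Mead").x
```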
Nonconcave penalized likelihood with a diverging number of parameters
- Mathematics
- 2004
A class of variable selection procedures for parametric models via nonconcave penalized likelihood was proposed by Fan and Li to simultaneously estimate parameters and select important variables.…
Variable Selection and Empirical Likelihood Based Inference for Measurement Error Data
- Mathematics
- 2006
Using nonconvex penalized least squares, we propose a class of variable selection procedures for linear models and partially linear models when the covariates are measured with additive error. The…
Penalized robust estimators in logistic regression with applications to sparse models.
- Computer Science, Mathematics
- 2019
A family of penalized weighted $M$-type estimators for the logistic regression parameter that are stable against atypical data is introduced, along with the so-called Sign penalization.
Automatic model selection for partially linear models
- Mathematics · Journal of Multivariate Analysis
- 2009
Tuning parameter selection in penalized generalized linear models for discrete data
- Mathematics
- 2014
In recent years, we have seen an increased interest in the penalized likelihood methodology, which can be efficiently used for shrinkage and selection purposes. This strategy can also result in…
References
Showing 1-10 of 40 references
Regularization of Wavelet Approximations
- Mathematics
- 2001
In this paper, we introduce nonlinear regularized wavelet estimators for estimating nonparametric regression functions when sampling points are not uniformly spaced. The approach can apply readily to…
Regression Shrinkage and Selection via the Lasso
- Computer Science
- 1996
A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
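In Lagrangian form the lasso solves min_b (1/(2n))||y − Xb||² + λ||b||₁, and each coordinate update is a soft-thresholding step. A minimal cyclic coordinate-descent sketch, assuming centered responses and standardized columns (Tibshirani's original paper formulated the constrained problem as a quadratic program instead):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/(2n))||y - Xb||^2 + lam*||b||_1.
    Each coordinate minimizer is a soft-thresholded univariate regression."""
    n, p = X.shape
    b = np.zeros(p)
    r = y.copy()                        # residual y - X @ b
    col_sq = (X**2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]         # partial residual excluding feature j
            z = X[:, j] @ r / n
            b[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]
    return b
```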
Polynomial splines and their tensor products in extended linear modeling: 1994 Wald memorial lecture
- Mathematics
- 1997
Analysis of variance type models are considered for a regression function or for the logarithm of a probability function, conditional probability function, density function, conditional density…
Wavelets in statistics: A review
- Mathematics
- 1997
The field of nonparametric function estimation has broadened its appeal in recent years with an array of new tools for statistical analysis. In particular, theoretical and applied research on the…
The lasso method for variable selection in the Cox model.
- Computer Science · Statistics in Medicine
- 1997
Simulations indicate that the lasso can be more accurate than stepwise selection in this setting, reducing estimation variance while providing an interpretable final model for Cox's proportional hazards regression.
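For a quick modern reproduction of the idea, the lifelines library fits an elastic-net-penalized Cox partial likelihood; setting l1_ratio=1.0 gives a pure L1 penalty in the spirit of the lasso-Cox proposal. Note this is lifelines' penalized fitter, not Tibshirani's original algorithm, and the penalizer value below is arbitrary rather than tuned:

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()  # recidivism data shipped with lifelines, for illustration
# l1_ratio=1.0 makes the elastic-net penalty pure L1 (lasso-style shrinkage)
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()  # small coefficients are shrunk toward zero
```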
Ideal spatial adaptation by wavelet shrinkage
- Mathematics
- 1994
With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable knot spline,…
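The oracle's risk is matched, up to a logarithmic factor, by thresholding empirical wavelet coefficients at the universal level λ = σ√(2 log n). A minimal sketch of the soft-thresholding step (applied to coefficients already in the wavelet domain; the transform itself is omitted):

```python
import numpy as np

def universal_soft_threshold(coeffs, sigma):
    """Soft-threshold empirical wavelet coefficients at the universal level
    lam = sigma * sqrt(2 log n), mimicking the oracle's keep-or-kill choice
    to within a log factor (Donoho & Johnstone, 1994)."""
    n = coeffs.size
    lam = sigma * np.sqrt(2.0 * np.log(n))
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)
```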
Penalized Regressions: The Bridge versus the Lasso
- Computer Science
- 1998
It is shown that bridge regression performs well compared to the lasso and ridge regression, as demonstrated through an analysis of prostate cancer data.
Heuristics of instability and stabilization in model selection
- Mathematics
- 1996
In model selection, usually a best predictor is chosen from a collection {μ̂(·, s)} of predictors, where μ̂(·, s) is the minimum least-squares predictor in a collection U_s of predictors. Here s is a…
Minimax risk over ℓ_p-balls for ℓ_q-error
- Mathematics
- 1994
Consider estimating the mean vector θ from data N_n(θ, σ²I) with ℓ_q norm loss, q ≥ 1, when θ is known to lie in an n-dimensional ℓ_p ball, p ∈ (0, ∞). For large n, the ratio of minimax linear risk to minimax…
Smoothing noisy data with spline functions
- Mathematics
- 1978
Smoothing splines are well known to provide nice curves which smooth discrete, noisy data. We obtain a practical, effective method for estimating the optimum amount of smoothing from the data…
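For any linear smoother ŷ = S(λ)y, the paper's criterion picks the smoothing level by minimizing the generalized cross-validation score GCV(λ) = n·||(I − S(λ))y||² / tr(I − S(λ))², which avoids explicit leave-one-out refits. A minimal sketch of the score; constructing the smoother matrix S from a spline basis is assumed to be done elsewhere:

```python
import numpy as np

def gcv_score(y, S):
    """Generalized cross-validation score for a linear smoother y_hat = S @ y:
    GCV = n * ||(I - S) y||^2 / tr(I - S)^2 (Craven & Wahba, 1978)."""
    n = y.size
    resid = y - S @ y
    return n * (resid @ resid) / np.trace(np.eye(n) - S) ** 2

# usage: choose the smoothing level by minimizing GCV over a grid of
# candidate smoother matrices, e.g. S_grid = {lam: S(lam)}:
# best_lam = min(S_grid, key=lambda lam: gcv_score(y, S_grid[lam]))
```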