Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

@article{Fan2001VariableSV,
  title={Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties},
  author={Jianqing Fan and Runze Li},
  journal={Journal of the American Statistical Association},
  year={2001},
  volume={96},
  pages={1348--1360}
}
  • Jianqing Fan, Runze Li
  • Published 1 December 2001
  • Mathematics, Computer Science
  • Journal of the American Statistical Association
Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive and ignore stochastic errors in the variable selection process. In this article, penalized likelihood approaches are proposed to handle these kinds of problems. The proposed methods select variables and estimate coefficients simultaneously. Hence they enable us to construct confidence… 
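For concreteness, the penalty family at the heart of the article is SCAD (smoothly clipped absolute deviation), which Fan and Li define through its derivative, with p_\lambda(0) = 0 and a = 3.7 the value suggested in the paper:

  p'_\lambda(\theta) = \lambda \left\{ I(\theta \le \lambda) + \frac{(a\lambda - \theta)_+}{(a-1)\lambda}\, I(\theta > \lambda) \right\}, \qquad \theta > 0, \; a > 2.

The singularity at the origin is what produces exact zeros (sparse solutions), while the flat tail leaves large coefficients nearly unpenalized.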
Variable Selection using MM Algorithms.
TLDR
This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions and proves that when these MM algorithms converge, they must converge to a desirable point.
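To make the MM idea concrete, here is a minimal sketch in Python (assumptions: a least-squares model with n > p and the perturbed local-quadratic majorization; the names scad_deriv and mm_scad_ls are illustrative, not from the article). Each iteration majorizes the nonconcave penalty by a quadratic at the current iterate, so the update is a ridge-type linear solve.

import numpy as np

def scad_deriv(theta, lam, a=3.7):
    # Derivative p'_lam(theta) of the SCAD penalty, evaluated at |theta|.
    theta = np.abs(theta)
    return lam * ((theta <= lam)
                  + np.maximum(a * lam - theta, 0.0) / ((a - 1) * lam) * (theta > lam))

def mm_scad_ls(X, y, lam, eps=1e-6, n_iter=100, tol=1e-8):
    # MM iterations for (1/2)||y - X beta||^2 + n * sum_j p_lam(|beta_j|):
    # beta_{k+1} solves (X'X + n D_k) beta = X'y, where
    # D_k = diag(p'_lam(|beta_k|) / (|beta_k| + eps)); eps is the
    # perturbation that keeps the majorizer finite at zero coefficients.
    n, _ = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from the OLS fit
    for _ in range(n_iter):
        d = scad_deriv(beta, lam) / (np.abs(beta) + eps)
        beta_new = np.linalg.solve(X.T @ X + n * np.diag(d), X.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    beta[np.abs(beta) < 1e-4] = 0.0  # zero out numerically dead coefficients
    return beta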
Discussion: One-step sparse estimates in nonconcave penalized likelihood models
TLDR
Nonconcave penalized likelihood methods are still commonly viewed as computationally limited and poorly understood, especially when the number of variables exceeds the number of data points; the discussion relates them to Fan and Li's work through continuity, computational strategies, selection consistency, and oracle efficiency.
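A schematic of the one-step idea in Python (a sketch under the same linear-model assumption as above, not Zou and Li's code; one_step_scad is an illustrative name): linearize the SCAD penalty at an initial OLS fit, then solve the resulting weighted-l1 problem in a single lasso pass by absorbing the weights into the design columns.

import numpy as np
from sklearn.linear_model import Lasso

def one_step_scad(X, y, lam, a=3.7):
    n, _ = X.shape
    b0 = np.abs(np.linalg.lstsq(X, y, rcond=None)[0])  # initial OLS fit
    # SCAD derivative at the initial fit gives per-coefficient l1 weights.
    w = lam * ((b0 <= lam)
               + np.maximum(a * lam - b0, 0.0) / ((a - 1) * lam) * (b0 > lam))
    w = np.maximum(w, 1e-8)  # floor keeps the column rescaling finite
    # sklearn minimizes (1/2n)||y - Zb||^2 + alpha*||b||_1; with Z = X/w and
    # b = w*beta, alpha = 1 matches (1/2)||y - X beta||^2 + n * sum_j w_j|beta_j|.
    fit = Lasso(alpha=1.0, fit_intercept=False).fit(X / w, y)
    return fit.coef_ / w  # undo the rescaling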
PENALIZED VARIABLE SELECTION PROCEDURE FOR COX MODELS WITH SEMIPARAMETRIC RELATIVE RISK.
TLDR
A penalized partial likelihood procedure is proposed to simultaneously estimate the parameters and select variables for both the parametric and the nonparametric parts of Cox models with semiparametric relative risk; it is shown that the resulting estimator of the parametric part possesses the oracle property, and that the estimator achieves the optimal rate of convergence.
Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.
TLDR
This paper proposes a model selection procedure for nonparametric models, and explores the conditions under which the new method enjoys the aforementioned properties, and demonstrates that the new approach substantially outperforms other existing methods in the finite sample setting.
Robust Variable Selection With Exponential Squared Loss
TLDR
This article proposes a class of penalized robust regression estimators based on exponential squared loss that can achieve the highest asymptotic breakdown point of 1/2 and shows that their influence functions are bounded with respect to the outliers in either the response or the covariate domain.
Nonconcave penalized likelihood with a diverging number of parameters
A class of variable selection procedures for parametric models via nonconcave penalized likelihood was proposed by Fan and Li to simultaneously estimate parameters and select important variables.
Variable Selection and Empirical Likelihood based Inference for Measurement Error Data
Using nonconvex penalized least squares, we propose a class of variable selection procedures for linear models and partially linear models when the covariates are measured with additive error. …
Penalized robust estimators in logistic regression with applications to sparse models.
TLDR
A family of penalized weighted $M$-type estimators for the logistic regression parameter that are stable against atypical data is introduced, together with the so-called Sign penalization.
Automatic model selection for partially linear models
Tuning parameter selection in penalized generalized linear models for discrete data
In recent years, we have seen an increased interest in the penalized likelihood methodology, which can be efficiently used for shrinkage and selection purposes. This strategy can also result in …
...
...

References

Showing 1-10 of 40 references
Regularization of Wavelet Approximations
In this paper, we introduce nonlinear regularized wavelet estimators for estimating nonparametric regression functions when sampling points are not uniformly spaced. The approach can apply readily to …
Regression Shrinkage and Selection via the Lasso
TLDR
A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
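A minimal usage sketch (synthetic data; scikit-learn's Lasso solves the Lagrangian form of this constrained problem, with alpha playing the role of the multiplier on the l1 term):

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta_true = np.array([3.0, 1.5, 0, 0, 2.0, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.standard_normal(100)

fit = Lasso(alpha=0.1).fit(X, y)
print(fit.coef_)  # several coefficients are shrunk exactly to zero

Larger alpha corresponds to a tighter l1 constraint and hence a sparser fit.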
Polynomial splines and their tensor products in extended linear modeling: 1994 Wald memorial lecture
Analysis of variance type models are considered for a regression function or for the logarithm of a probability function, conditional probability function, density function, conditional density …
Wavelets in statistics: A review
The field of nonparametric function estimation has broadened its appeal in recent years with an array of new tools for statistical analysis. In particular, theoretical and applied research on the …
The lasso method for variable selection in the Cox model.
TLDR
Simulations indicate that the lasso can be more accurate than stepwise selection in this setting and reduce the estimation variance while providing an interpretable final model in Cox's proportional hazards model.
Ideal spatial adaptation by wavelet shrinkage
SUMMARY: With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable knot spline, …
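The two shrinkage rules studied in this line of work are easy to state; a minimal sketch in Python (the universal threshold λ = σ√(2 log n) is one classical choice for lam):

import numpy as np

def soft_threshold(z, lam):
    # Soft-thresholding rule: sgn(z) * (|z| - lam)_+ (shrinks and selects).
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def hard_threshold(z, lam):
    # Hard-thresholding rule: z * I(|z| > lam) (keeps survivors unshrunk).
    return z * (np.abs(z) > lam)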
Penalized Regressions: The Bridge versus the Lasso
TLDR
It is shown that bridge regression performs well compared to the lasso and ridge regression, as demonstrated through an analysis of prostate cancer data.
Heuristics of instability and stabilization in model selection
In model selection, usually a best predictor is chosen from a collection {μ(·, s)} of predictors, where μ(·, s) is the minimum least-squares predictor in a collection U_s of predictors. Here s is a …
Minimax risk over ℓp-balls for ℓq-error
Summary: Consider estimating the mean vector θ from data N_n(θ, σ²I) with ℓq norm loss, q ≥ 1, when θ is known to lie in an n-dimensional ℓp ball, p ∈ (0, ∞). For large n, the ratio of minimax linear risk to minimax …
Smoothing noisy data with spline functions
Summary: Smoothing splines are well known to provide nice curves which smooth discrete, noisy data. We obtain a practical, effective method for estimating the optimum amount of smoothing from the data.
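A minimal usage sketch (scipy's UnivariateSpline; the smoothing level s is set by hand here, since scipy does not expose the paper's generalized cross-validation criterion directly):

import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)

spline = UnivariateSpline(x, y, s=0.5)  # s trades fidelity against roughness
y_smooth = spline(x)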
...
...