On the non‐negative garrotte estimator

@article{Yuan2007OnTN,
  title={On the non‐negative garrotte estimator},
  author={Ming Yuan and Yi Lin},
  journal={Journal of the Royal Statistical Society: Series B (Statistical Methodology)},
  year={2007},
  volume={69}
}
  • M. Yuan, Yi Lin
  • Published 1 April 2007
  • Mathematics
  • Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Summary.  We study the non-negative garrotte estimator from three different aspects: consistency, computation and flexibility. We argue that the non-negative garrotte is a general procedure that can be used with initial estimators other than the least squares estimator of its original formulation. In particular, we consider using the lasso, the elastic net and ridge regression, along with ordinary least squares, as the initial estimate in the non-negative garrotte. We prove that the…
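The procedure itself is easy to sketch: given any initial estimate, rescale each column of X by the corresponding initial coefficient and fit non-negative shrinkage factors. Below is a minimal Python sketch under the penalized formulation of the garrotte; the function name, the default tuning parameter and the use of scipy's L-BFGS-B solver are illustrative choices, not taken from the paper.

    # Non-negative garrotte with a pluggable initial estimator (sketch).
    # Solves min_d ||y - Z d||^2 + lam * sum(d), d_j >= 0, with
    # Z_j = x_j * beta_init_j, then returns beta_j = d_j * beta_init_j.
    import numpy as np
    from scipy.optimize import minimize

    def nn_garrotte(X, y, beta_init, lam=1.0):
        Z = X * beta_init                      # scale column j by beta_init_j
        p = X.shape[1]
        obj = lambda d: np.sum((y - Z @ d) ** 2) + lam * d.sum()
        grad = lambda d: -2 * Z.T @ (y - Z @ d) + lam
        res = minimize(obj, np.ones(p), jac=grad,
                       bounds=[(0, None)] * p, method="L-BFGS-B")
        return res.x * beta_init               # d_j == 0 drops variable j

    # beta_init may come from OLS, ridge, the lasso or the elastic net, e.g.
    # beta_init = np.linalg.lstsq(X, y, rcond=None)[0]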
Variable selection in additive models by non-negative garrote
TLDR
Breiman’s non-negative garrote method is adapted to perform variable selection in non-parametric additive models; the resulting procedure provides accurate predictions and is effective at identifying the variables generating the model.
Consistency and robustness properties of the S-nonnegative garrote estimator
ABSTRACT This paper concerns a robust variable selection method in multiple linear regression: the robust S-nonnegative garrote variable selection method. In this paper the consistency of the method,
The Adaptive Gril Estimator with a Diverging Number of Parameters
TLDR
The grouped selection property of the AdaCnet method (one type of AdaGril) in the equal-correlation case is highlighted, and it is shown that the AdaGril estimator achieves a sparsity inequality, i.e., a bound in terms of the number of non-zero components of the “true” regression coefficient.
Balancing stability and bias reduction in variable selection with the Mnet estimator
Summary. We propose a new penalized approach for variable selection using a combination of minimax concave and ridge penalties. The proposed method is designed to deal with p ≫ n problems with highly
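For orientation, the Mnet criterion pairs the minimax concave penalty (MCP) with a ridge term; one standard way to write it, stated here from general knowledge of the MCP rather than quoted from this page, is

    \min_\beta \frac{1}{2n}\|y - X\beta\|_2^2
        + \sum_{j=1}^{p} \rho(|\beta_j|;\lambda_1,\gamma)
        + \frac{\lambda_2}{2}\|\beta\|_2^2,
    \qquad
    \rho(t;\lambda_1,\gamma) = \lambda_1 \int_0^{t} \Big(1 - \frac{x}{\gamma\lambda_1}\Big)_{+}\,dx,

where the ridge term stabilizes the fit under correlated predictors and the MCP term reduces the bias of the selected coefficients.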
Uniformly valid confidence sets based on the Lasso
TLDR
In a linear regression model of fixed dimension, exact formulas are derived for computing the minimal coverage probability over the entire parameter space for a large class of shapes of the confidence sets, both in finite samples with Gaussian errors and asymptotically when the Lasso estimator is tuned to perform conservative model selection; this enables the construction of valid confidence regions based on the Lasso estimator in these settings.
An ordinary differential equation-based solution path algorithm
  • Yichao Wu
  • Computer Science, Mathematics
    Journal of nonparametric statistics
  • 2011
TLDR
This work proposes an extension of the LAR to generalised linear models and the quasi-likelihood model, showing that the corresponding solution path is given piecewise by solutions of ordinary differential equation (ODE) systems.
Estimation Consistency of the Group Lasso and its Applications
TLDR
The main theorem shows that the group Lasso achieves estimation consistency under a mild condition and an asymptotic upper bound on the number of selected variables can be obtained.
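The group Lasso criterion in question, in its common formulation for groups G_1, …, G_K of sizes p_1, …, p_K, is

    \hat{\beta} = \arg\min_\beta \Big\| y - \sum_{k=1}^{K} X_k \beta_k \Big\|_2^2
        + \lambda \sum_{k=1}^{K} \sqrt{p_k}\,\|\beta_k\|_2,

so that the ℓ2 norm of each coefficient block is penalized and whole groups of variables are set to zero together.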
Some Notes on the Nonnegative Garrote
TLDR
The main result is that, compared with other penalized least-squares methods, the NG has a natural selection of penalty function according to an estimator of prediction risk, indicating that to select tuning parameters, it may be unnecessary to optimize a model selection criterion repeatedly.
Model-Consistent Sparse Estimation through the Bootstrap
  • F. Bach
  • Computer Science, Mathematics
    ArXiv
  • 2009
TLDR
This paper first presents a detailed asymptotic analysis of model consistency of the Lasso in low-dimensional settings, and then shows that if the Lasso is run on several bootstrapped replications of a given sample, intersecting the supports of the bootstrap Lasso estimates leads to consistent model selection.
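The support-intersection idea is mechanical enough to sketch in a few lines. The following assumes scikit-learn's Lasso; the fixed alpha, the number of replications and the function name are illustrative choices, not taken from the paper.

    # Bootstrap-and-intersect model selection with the Lasso (sketch).
    import numpy as np
    from sklearn.linear_model import Lasso

    def bootstrap_lasso_support(X, y, alpha=0.1, n_boot=50, seed=0):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        support = np.ones(p, dtype=bool)          # start with all variables
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)      # bootstrap resample
            coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
            support &= coef != 0                  # keep common support only
        return np.flatnonzero(support)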
Resampling-Based Variable Selection with Lasso for p >> n and Partially Linear Models
  • Mihaela A. Mares, Yike Guo
  • Computer Science
    2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA)
  • 2015
TLDR
It is demonstrated theoretically that the Lasso method is likely to select false positives because a small proportion of samples happen to explain some variation in the response variable, and a novel consistent variable selection algorithm based on this property is proposed.
...

References

On the LASSO and its dual
TLDR
Consideration of the primal and dual problems together leads to important new insights into the characteristics of the LASSO estimator and to an improved method for estimating its covariance matrix.
Better subset regression using the nonnegative garrote
A new method, called the nonnegative (nn) garrote, is proposed for doing subset regression. It both shrinks and zeroes coefficients. In tests on real and simulated data, it produces lower prediction
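In Breiman's formulation, with the ordinary least squares estimates \hat{\beta}_j as the starting point, the garrote solves

    \min_{c} \sum_{i=1}^{n} \Big( y_i - \sum_{j=1}^{p} c_j \hat{\beta}_j x_{ij} \Big)^2
    \quad \text{subject to} \quad c_j \ge 0, \quad \sum_{j=1}^{p} c_j \le s,

and the final coefficients are \tilde{\beta}_j = c_j \hat{\beta}_j; shrinking c_j toward zero shrinks coefficients, and c_j = 0 removes variable j entirely.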
Model selection and estimation in regression with grouped variables
Summary.  We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor
Regression Shrinkage and Selection via the Lasso
TLDR
A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
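The constrained form described in this summary is

    \hat{\beta}^{\text{lasso}} = \arg\min_\beta \sum_{i=1}^{n} \Big( y_i - \sum_{j=1}^{p} x_{ij}\beta_j \Big)^2
    \quad \text{subject to} \quad \sum_{j=1}^{p} |\beta_j| \le t,

where the bound t controls the amount of shrinkage and, for small enough t, sets some coefficients exactly to zero.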
The Adaptive Lasso and Its Oracle Properties
TLDR
A new version of the lasso is proposed, called the adaptive lasso, where adaptive weights are used for penalizing different coefficients in the ℓ1 penalty, and the nonnegative garotte is shown to be consistent for variable selection.
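The adaptive lasso criterion, in its standard form, is

    \hat{\beta}^{\text{alasso}} = \arg\min_\beta \|y - X\beta\|_2^2 + \lambda \sum_{j=1}^{p} \hat{w}_j |\beta_j|,
    \qquad \hat{w}_j = 1/|\hat{\beta}_j|^{\gamma},\ \gamma > 0,

with \hat{\beta} a root-n-consistent initial estimate, so that large initial coefficients are penalized less heavily than small ones.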
Regularization and variable selection via the elastic net
TLDR
It is shown that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation, and an algorithm called LARS‐EN is proposed for computing elastic net regularization paths efficiently, much like algorithm LARS does for the lasso.
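The (naive) elastic net criterion combines the two penalties as

    \hat{\beta}^{\text{enet}} = \arg\min_\beta \|y - X\beta\|_2^2 + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2,

where the ℓ1 term produces sparsity and the ℓ2 term encourages grouped selection of correlated predictors.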
Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
TLDR
In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.
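The SCAD penalty used there is usually specified through its derivative (stated from general knowledge of the method, with the commonly recommended a = 3.7):

    p'_\lambda(t) = \lambda \Big\{ I(t \le \lambda) + \frac{(a\lambda - t)_+}{(a-1)\lambda}\, I(t > \lambda) \Big\}, \qquad t > 0,\ a > 2,

which applies lasso-like shrinkage near zero but leaves large coefficients nearly unpenalized.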
High-dimensional graphs and variable selection with the Lasso
TLDR
It is shown that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs; neighborhood selection estimates the conditional independence restrictions separately for each node and is hence equivalent to variable selection for Gaussian linear models.
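Neighborhood selection reduces graph estimation to one Lasso regression per node. A minimal sketch, assuming scikit-learn's Lasso, with the alpha value and the symmetrizing "OR" rule as illustrative choices:

    # Nodewise Lasso for a sparse Gaussian graphical model (sketch).
    import numpy as np
    from sklearn.linear_model import Lasso

    def neighborhood_selection(X, alpha=0.1):
        n, p = X.shape
        adj = np.zeros((p, p), dtype=bool)
        for j in range(p):
            others = np.delete(np.arange(p), j)
            coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
            adj[j, others] = coef != 0    # estimated neighborhood of node j
        return adj | adj.T                # symmetrize with the "OR" rule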
Heuristics of instability and stabilization in model selection
In model selection, usually a best predictor is chosen from a collection {μ(·, s)} of predictors, where μ(·, s) is the minimum least-squares predictor in a collection U_s of predictors. Here s is a
Calibration and empirical Bayes variable selection
For the problem of variable selection for the normal linear model, selection criteria such as AIC, C_p, BIC and RIC have fixed dimensionality penalties. Such criteria are shown to correspond to
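Such fixed-penalty criteria all take the form (stated here from general knowledge, not quoted from this page)

    \text{minimize } \mathrm{RSS}_k/\hat{\sigma}^2 + \lambda\,k \quad \text{over candidate models of size } k,

with \lambda = 2 for AIC and C_p, \lambda = \log n for BIC, and \lambda = 2\log p for RIC.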
...