Corpus ID: 235390403

Sign Consistency of the Generalized Elastic Net Estimator

@inproceedings{Zhu2021SignCO,
  title={Sign Consistency of the Generalized Elastic Net Estimator},
  author={Wencan Zhu and Eric Houngla Adjakossa and C{\'e}line L{\'e}vy-Leduc and Nils Tern{\`e}s},
  year={2021}
}
In this paper, we propose a novel variable selection approach in the framework of high-dimensional linear models where the columns of the design matrix are highly correlated. It consists in rewriting the initial high-dimensional linear model to remove the correlation between the columns of the design matrix and in applying a generalized Elastic Net criterion, since it can be seen as an extension of the generalized Lasso. The properties of our approach, called gEN (generalized Elastic Net), are…
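The abstract does not reproduce the criterion itself, so the following is a minimal sketch of the kind of objective such an estimator minimizes, assuming the gEN criterion augments the generalized Lasso penalty with a ridge-type term (D, λ1 and λ2 are illustrative placeholders here, not the paper's exact notation):

\hat{\beta} = \operatorname{argmin}_{\beta \in \mathbb{R}^p} \ \tfrac{1}{2}\lVert y - X\beta \rVert_2^2 + \lambda_1 \lVert D\beta \rVert_1 + \lambda_2 \lVert \beta \rVert_2^2

Setting λ2 = 0 recovers the generalized Lasso, which is consistent with the abstract's description of gEN as an extension of it.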

References

Showing 1-10 of 33 references
On Model Selection Consistency of the Elastic Net When p >> n
We study the model selection property of the Elastic Net. In the classical settings, when p (the number of predictors) and q (the number of predictors with non-zero coefficients in the true linear model)…
Preconditioning the Lasso for sign consistency
Results are provided showing that FX can satisfy the irrepresentable condition even when X fails to satisfy it, and a class of preconditioners is proposed to balance these costs and benefits.
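As a hedged illustration of the preconditioning idea (the paper's specific choice of preconditioner F is not reproduced here), the transformed problem solves the lasso on (Fy, FX) instead of (y, X):

\hat{\beta} = \operatorname{argmin}_{\beta} \ \tfrac{1}{2}\lVert Fy - FX\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1

so the design matrix relevant to the irrepresentable condition becomes FX.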
Regularization and variable selection via the elastic net
It is shown that the elastic net often outperforms the lasso while enjoying a similar sparsity of representation, and an algorithm called LARS-EN is proposed for computing elastic net regularization paths efficiently, much like the LARS algorithm does for the lasso.
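For reference, the elastic net criterion in its naive form combines the lasso and ridge penalties:

\hat{\beta} = \operatorname{argmin}_{\beta} \ \lVert y - X\beta \rVert_2^2 + \lambda_1 \lVert \beta \rVert_1 + \lambda_2 \lVert \beta \rVert_2^2

with λ1 controlling sparsity and λ2 stabilizing the estimate when predictors are correlated.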
Regression Shrinkage and Selection via the Lasso
A new method for estimation in linear models, called the lasso, is proposed; it minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
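In equation form, this constrained formulation of the lasso reads:

\hat{\beta} = \operatorname{argmin}_{\beta} \ \lVert y - X\beta \rVert_2^2 \quad \text{subject to} \quad \sum_{j=1}^{p} \lvert \beta_j \rvert \le t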
Statistical Learning with Sparsity: The Lasso and Generalizations
This book presents methods that exploit sparsity to help recover the underlying signal in a set of data and extract useful and reproducible patterns from big datasets.
High dimensional ordinary least squares projection for screening variables
It is shown that HOLP has the sure screening property and gives consistent variable selection without the strong correlation assumption, and that it has low computational complexity.
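As a hedged sketch of the screening procedure this reference studies (assuming the p > n form of the HOLP estimator, beta_hat = X^T (X X^T)^{-1} y, with predictors then ranked by |beta_hat_j|), a minimal Python version; the function name and toy data are illustrative:

import numpy as np

def holp_screen(X, y, k):
    # HOLP estimator (assumed p > n form): beta_hat = X^T (X X^T)^{-1} y.
    # Solve the n x n system instead of forming the inverse explicitly.
    z = np.linalg.solve(X @ X.T, y)
    beta_hat = X.T @ z
    # Keep the k predictors with the largest coefficients in absolute value.
    return np.argsort(np.abs(beta_hat))[::-1][:k]

# Toy usage: n = 50 observations, p = 200 predictors, 5 true signals.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
y = X[:, :5] @ np.ones(5) + 0.1 * rng.standard_normal(50)
print(holp_screen(X, y, k=10))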
On the Nonnegative Garrote Estimator
We study the nonnegative garrote estimator from three different aspects: computation, consistency and flexibility. We show that the nonnegative garrote estimate has a piecewise linear solution path.
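For context, the nonnegative garrote shrinks an initial estimate, typically ordinary least squares, by nonnegative factors c_j:

\hat{c} = \operatorname{argmin}_{c} \ \Big\lVert y - \sum_{j=1}^{p} c_j \hat{\beta}_j^{\mathrm{OLS}} x_j \Big\rVert_2^2 \quad \text{subject to} \quad c_j \ge 0, \ \sum_{j=1}^{p} c_j \le t

and the garrote estimate is \hat{\beta}_j = \hat{c}_j \hat{\beta}_j^{\mathrm{OLS}}.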
Variable Selection for Highly Correlated Predictors
A new Semi-standard PArtial Covariance (SPAC) is proposed that reduces correlation effects from other predictors while incorporating the magnitude of coefficients, and that enjoys strong sign consistency in both finite-dimensional and high-dimensional settings under regularity conditions.
Adaptive estimation of a quadratic functional by model selection
We consider the problem of estimating ‖s‖² when s belongs to some separable Hilbert space H and one observes the Gaussian process Y(t) = ⟨s, t⟩ + σL(t), for all t ∈ H, where L is some Gaussian…
Supplement: Proofs and Technical Details for "The Solution Path of the Generalized Lasso"
In this document we give supplementary details to the paper "The Solution Path of the Generalized Lasso". We use the prefix "GL" when referring to equations, sections, etc. in the original paper…
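Since the abstract above presents gEN as an extension of it, it helps to state the generalized Lasso criterion, in which a fixed penalty matrix D replaces the identity in the usual ℓ1 penalty:

\hat{\beta} = \operatorname{argmin}_{\beta \in \mathbb{R}^p} \ \tfrac{1}{2}\lVert y - X\beta \rVert_2^2 + \lambda \lVert D\beta \rVert_1

Particular choices of D recover the lasso (D = I), the fused lasso, and trend filtering as special cases.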
...