Corpus ID: 247618921

Pattern recovery by SLOPE

@inproceedings{Bogdan2022PatternRB,
  title={Pattern recovery by SLOPE},
  author={Małgorzata Bogdan and Xavier Dupuis and Piotr Graczyk and Bartosz Kołodziejek and Tomasz Skalski and Patrick J. C. Tardivel and Maciej Wilczyński},
  year={2022}
}
LASSO and SLOPE are two popular methods for dimensionality reduction in high-dimensional regression. LASSO can eliminate redundant predictors by setting the corresponding regression coefficients to zero, while SLOPE can additionally identify clusters of variables with the same absolute values of regression coefficients. It is well known that the LASSO Irrepresentability Condition is necessary and sufficient for the proper estimation of the sign of sufficiently large regression coefficients. In this…
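To make the comparison concrete, here is an illustrative sketch of my own, not material from the paper. LASSO solves min_b (1/2)||y - Xb||_2^2 + \lambda \sum_j |b_j|, while SLOPE replaces the single tuning parameter by a non-increasing sequence \lambda_1 \ge \dots \ge \lambda_p \ge 0 and penalizes the sorted absolute coefficients: min_b (1/2)||y - Xb||_2^2 + \sum_i \lambda_i |b|_{(i)}. The Python sketch below fits SLOPE by proximal gradient descent; the prox of the sorted-L1 penalty reduces to a non-increasing isotonic regression, solved here by pool-adjacent-violators. Function names are hypothetical.

import numpy as np

def prox_sorted_l1(v, lam):
    """Prox of b -> sum_i lam[i] * |b|_(i); lam is a non-increasing array."""
    signs = np.sign(v)
    u = np.abs(v)
    order = np.argsort(-u)                 # positions of |v| in decreasing order
    z = u[order] - lam                     # project z onto {x_1 >= ... >= x_p >= 0}
    vals, sizes = [], []
    for zi in z:                           # pool-adjacent-violators, non-increasing
        vals.append(zi)
        sizes.append(1)
        while len(vals) > 1 and vals[-2] <= vals[-1]:
            v_last, s_last = vals.pop(), sizes.pop()
            vals[-1] = (vals[-1] * sizes[-1] + v_last * s_last) / (sizes[-1] + s_last)
            sizes[-1] += s_last
    x_sorted = np.concatenate([np.full(s, max(val, 0.0)) for val, s in zip(vals, sizes)])
    x = np.empty_like(u)
    x[order] = x_sorted                    # undo the sorting
    return signs * x

def slope(X, y, lam, n_iter=1000):
    """Minimize (1/2)||y - Xb||^2 + sum_i lam[i] * |b|_(i) by ISTA."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        b = prox_sorted_l1(b - grad / L, lam / L)
    return b

Coefficients that LASSO would merely shrink can end up exactly tied in absolute value under SLOPE, which is the clustering behaviour the abstract describes.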

Citations

Sparse Graphical Modelling via the Sorted L1-Norm

TLDR
Two new graphical model approaches, Gslope and Tslope, are proposed, which provide sparse estimates of the precision matrix by penalizing it with the sorted L1-norm, relying on Gaussian and t-Student data, respectively, and empirically control the False Discovery Rate for block-diagonal covariance matrices.
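By analogy with the graphical lasso, a plausible form of such an estimator is (an assumption inferred from the title; the paper's exact formulation may differ):

\[ \hat{\Theta} = \arg\min_{\Theta \succ 0} \; \operatorname{tr}(S\Theta) - \log\det\Theta + \sum_i \lambda_i\, |\theta|_{(i)}, \]

where S is the sample covariance matrix and the sorted-L1 penalty is applied to the off-diagonal entries of the precision matrix \Theta.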

References


Pattern recovery and signal denoising by SLOPE when the design matrix is orthogonal

TLDR
This article introduces the SLOPE pattern, i.e., the set of relations between the true regression coefficients that SLOPE can identify, and presents new results on the strong consistency of SLOPE estimators and on the strong consistency of pattern recovery by SLOPE when the design matrix is orthogonal.
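To illustrate the pattern notion (my reading of the definition; the helper below is hypothetical, not code from the paper): the SLOPE pattern replaces each coefficient by its signed rank among the distinct nonzero absolute values, so it records signs, zeros, and ties in absolute value.

import numpy as np

def slope_pattern(b):
    """Signed rank of |b_i| among the distinct nonzero magnitudes; zeros stay 0."""
    b = np.asarray(b, dtype=float)
    levels = np.unique(np.abs(b[b != 0]))   # distinct nonzero magnitudes, ascending
    rank = {v: k + 1 for k, v in enumerate(levels)}
    return np.array([int(np.sign(x)) * rank[abs(x)] if x != 0 else 0 for x in b])

slope_pattern([4, 0, -1.5, 1.5, -4])        # -> array([ 2,  0, -1,  1, -2])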

Group SLOPE – Adaptive Selection of Groups of Predictors

TLDR
It is proved that the resulting procedure adapts to unknown sparsity and is asymptotically minimax with respect to the estimation of the proportions of variance of the response variable explained by regressors from different groups.

On the Asymptotic Properties of SLOPE

TLDR
New asymptotic results are provided on the properties of SLOPE when the elements of the design matrix are i.i.d. random variables from the Gaussian distribution.

Regression Shrinkage and Selection via the Lasso

TLDR
A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
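For reference, the original constrained formulation (standard, from Tibshirani, 1996):

\[ \hat{\beta}^{\mathrm{lasso}} = \arg\min_{\beta} \sum_{i=1}^{n} \Big( y_i - \sum_{j} x_{ij}\beta_j \Big)^{2} \quad \text{subject to} \quad \sum_{j} |\beta_j| \le t, \]

where a small enough bound t forces some coefficients to be exactly zero, which is the selection mechanism the summary refers to.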

SLOPE – Adaptive Variable Selection via Convex Optimization

TLDR
SLOPE, short for Sorted L-One Penalized Estimation, is the solution to a convex program whose decreasing sequence of penalty weights is derived from the Benjamini-Hochberg procedure (λBH), and appears to have appreciable inferential properties under more general designs X while having substantial power, as demonstrated in a series of experiments running on both simulated and real data.
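The λBH sequence referred to above is built from standard normal quantiles (stated from memory of Bogdan et al.'s construction; verify against the paper):

\[ \lambda_{\mathrm{BH}}(i) = \Phi^{-1}\!\big(1 - i\,q/(2p)\big), \qquad i = 1, \dots, p, \]

where q is the target False Discovery Rate level, so the largest coefficients in absolute value are matched with the largest penalty weights.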

Simple expressions of the LASSO and SLOPE estimators in low-dimension

TLDR
It is shown that, even if the design is not orthogonal and even if the residuals are correlated, the LASSO and SLOPE estimators have, up to a transformation, a simple expression based on the Best Linear Unbiased Estimator (BLUE).

High-dimensional graphs and variable selection with the Lasso

TLDR
It is shown that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs; estimating the neighborhood of each node is equivalent to variable selection for Gaussian linear models.
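A minimal sketch of the neighborhood-selection idea (my illustration; the function name and the fixed penalty level are assumptions, not the authors' code):

import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_selection(Z, alpha=0.1):
    """Lasso-regress each variable on all the others; connect j and k when
    either regression assigns the other a nonzero coefficient (the "OR" rule)."""
    n, p = Z.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        coef = Lasso(alpha=alpha).fit(Z[:, others], Z[:, j]).coef_
        adj[j, others] = coef != 0
    return adj | adj.T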

The Geometry of Uniqueness, Sparsity and Clustering in Penalized Estimation

TLDR
The notion of a SLOPE model is defined to describe both the sparsity and clustering properties of this method, and a geometric characterization of accessible SLOPE models is provided.

The Lasso Problem and Uniqueness

TLDR
The LARS algorithm is extended to cover the non-unique case, so that this path algorithm works for any predictor matrix; a simple method, based on linear programming, is also derived for computing the component-wise uncertainty in lasso solutions of any given problem instance.

On Model Selection Consistency of Lasso

TLDR
It is proved that a single condition, which is called the Irrepresentable Condition, is almost necessary and sufficient for Lasso to select the true model both in the classical fixed p setting and in the large p setting as the sample size n gets large.
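In Zhao and Yu's notation, with C = XᵀX/n partitioned according to the support S of the true coefficient vector β, the Strong Irrepresentable Condition requires, for some η > 0,

\[ \big| C_{S^c S}\, C_{SS}^{-1}\, \operatorname{sign}(\beta_S) \big| \le (1 - \eta)\,\mathbf{1} \]

elementwise; informally, the irrelevant predictors must not be too strongly correlated with the relevant ones.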
...