# Pattern recovery by SLOPE

@inproceedings{Bogdan2022PatternRB, title={Pattern recovery by SLOPE}, author={Małgorzata Bogdan and Xavier Dupuis and Piotr Graczyk and Bartosz Kołodziejek and Tomasz Skalski and Patrick J. C. Tardivel and Maciej Wilczyński}, year={2022} }

LASSO and SLOPE are two popular methods for dimensionality reduction in high-dimensional regression. LASSO can eliminate redundant predictors by setting the corresponding regression coefficients to zero, while SLOPE can additionally identify clusters of variables with the same absolute values of regression coefficients. It is well known that the LASSO Irrepresentability Condition is necessary and sufficient for the proper estimation of the sign of sufficiently large regression coefficients. In this…
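The clustering behavior described above can be made concrete through the proximal operator of the sorted L$_1$ norm that defines SLOPE: coefficients with (nearly) equal magnitudes are fused to a common absolute value, while small ones are set exactly to zero. A minimal NumPy sketch of that operator, using the standard stack-based pool-adjacent-violators scheme (function and variable names are ours, not from the paper):

```python
import numpy as np

def prox_sorted_l1(b, lam):
    """Prox of the sorted L1 norm J(b) = sum_i lam_i |b|_(i),
    where lam is non-negative and non-increasing.
    Steps: sort |b| decreasingly, subtract lam, project onto the
    non-increasing cone (pool adjacent violators), clip at zero,
    then restore the original signs and ordering."""
    sign = np.sign(b)
    order = np.argsort(-np.abs(b))          # indices of |b| in decreasing order
    z = np.abs(b)[order] - lam              # shifted magnitudes
    blocks = []                             # stack of [value, length] blocks
    for v in z:
        blocks.append([v, 1.0])
        # merge while the sequence of block values fails to be non-increasing
        while len(blocks) > 1 and blocks[-2][0] < blocks[-1][0]:
            v2, l2 = blocks.pop()
            v1, l1 = blocks[-1]
            blocks[-1] = [(v1 * l1 + v2 * l2) / (l1 + l2), l1 + l2]
    x = np.concatenate([np.full(int(l), max(v, 0.0)) for v, l in blocks])
    out = np.empty_like(x)
    out[order] = x                          # undo the sort
    return sign * out

b = np.array([4.0, 1.5, -4.0, 0.5])
lam = np.array([2.0, 1.5, 1.0, 0.5])
print(prox_sorted_l1(b, lam))  # the two coefficients with |b| = 4 are fused to +/-2.25
```

Note how the output exhibits exactly the SLOPE pattern the abstract describes: a cluster of coefficients sharing one absolute value, and a coefficient set exactly to zero.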

## One Citation

### Sparse Graphical Modelling via the Sorted L$_1$-Norm

- Computer Science
- 2022

Two new graphical model approaches, Gslope and Tslope, are proposed. They provide sparse estimates of the precision matrix by penalizing its sorted L$_1$-norm, rely on Gaussian and t-Student data, respectively, and empirically control the False Discovery Rate for block diagonal covariance matrices.

## References

*Showing 1–10 of 53 references.*

### Pattern recovery and signal denoising by SLOPE when the design matrix is orthogonal

- Computer Science
- 2022

This article introduces the SLOPE pattern, i.e., the set of relations between the true regression coefficients which can be identified by SLOPE, and presents new results on the strong consistency of SLOPE estimators and on the strong consistency of pattern recovery by SLOPE when the design matrix is orthogonal.

### Group SLOPE – Adaptive Selection of Groups of Predictors

- Mathematics
- Journal of the American Statistical Association
- 2019

It is proved that the resulting procedure adapts to unknown sparsity and is asymptotically minimax with respect to the estimation of the proportions of variance of the response variable explained by regressors from different groups.

### On the Asymptotic Properties of SLOPE

- Computer Science, Mathematics
- Sankhya A
- 2020

New asymptotic results on the properties of SLOPE when the elements of the design matrix are iid random variables from the Gaussian distribution are provided.

### Regression Shrinkage and Selection via the Lasso

- Computer Science
- 1996

A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
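For intuition, the lasso's shrink-and-select behavior has a closed form in the special case of an orthonormal design: each least-squares coefficient is soft-thresholded, which is what sets small coefficients exactly to zero. A small illustrative sketch of that special case (our own example, not the paper's general algorithm):

```python
import numpy as np

def soft_threshold(b, t):
    """Closed-form lasso solution for an orthonormal design:
    each OLS coefficient b_j is shrunk toward zero by t and
    set exactly to zero when |b_j| <= t."""
    return np.sign(b) * np.maximum(np.abs(b) - t, 0.0)

# A coefficient inside the threshold band is eliminated entirely.
print(soft_threshold(np.array([3.0, -0.5, 1.2]), 1.0))
```

The hard zero produced by the kink of the absolute value at the origin is precisely the variable-elimination property contrasted with SLOPE's additional clustering in the abstract above.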

### SLOPE – Adaptive Variable Selection via Convex Optimization

- Mathematics, Computer Science
- The Annals of Applied Statistics
- 2015

SLOPE, short for Sorted L-One Penalized Estimation, is the solution to a convex program penalizing the sorted L$_1$ norm of the coefficients with the Benjamini–Hochberg-inspired weight sequence λBH; it appears to have appreciable inferential properties under more general designs X while having substantial power, as demonstrated in a series of experiments on both simulated and real data.
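The λBH sequence mentioned above is, in the paper's Gaussian setting, built from standard normal quantiles, λ_i = Φ⁻¹(1 − iq/(2p)) for i = 1, …, p and a target FDR level q. A short sketch using only the Python standard library (parameter names are our own):

```python
from statistics import NormalDist

def lambda_bh(p, q=0.1):
    """Benjamini-Hochberg-inspired SLOPE weights:
    lambda_i = Phi^{-1}(1 - i*q/(2p)), i = 1..p.
    The sequence is positive and strictly decreasing, so the
    largest coefficient receives the heaviest penalty."""
    z = NormalDist()  # standard normal, gives the quantile function inv_cdf
    return [z.inv_cdf(1 - q * i / (2 * p)) for i in range(1, p + 1)]

print(lambda_bh(5))  # 5 decreasing weights for q = 0.1
```

Because the weights decrease with rank, SLOPE penalizes large coefficients more than small ones, which is the mechanism behind both its FDR control and its clustering behavior.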

### Simple expressions of the LASSO and SLOPE estimators in low-dimension

- Mathematics, Computer Science
- 2020

It is shown that, even if the design is not orthogonal and even if the residuals are correlated, the LASSO and SLOPE estimators have, up to a transformation, a simple expression based on the Best Linear Unbiased Estimator (BLUE).

### High-dimensional graphs and variable selection with the Lasso

- Computer Science
- 2006

It is shown that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs and is hence equivalent to variable selection for Gaussian linear models.

### The Geometry of Uniqueness, Sparsity and Clustering in Penalized Estimation

- Mathematics, Computer Science
- 2020

The notion of a SLOPE model is defined to describe both sparsity and clustering properties of this method and also provide a geometric characterization of accessible SLOPE models.

### The Lasso Problem and Uniqueness

- Computer Science, Mathematics
- 2012

The LARS algorithm is extended to cover the non-unique case, so that this path algorithm works for any predictor matrix and a simple method is derived for computing the component-wise uncertainty in lasso solutions of any given problem instance, based on linear programming.

### On Model Selection Consistency of Lasso

- Computer Science
- J. Mach. Learn. Res.
- 2006

It is proved that a single condition, which is called the Irrepresentable Condition, is almost necessary and sufficient for Lasso to select the true model both in the classical fixed p setting and in the large p setting as the sample size n gets large.