SLOPE-Adaptive Variable Selection via Convex Optimization

@article{Bogdan2015SLOPEADAPTIVEVS,
  title={SLOPE-Adaptive Variable Selection via Convex Optimization},
  author={Ma{\l}gorzata Maria Bogdan and Ewout van den Berg and C. Sabatti and Weijie J. Su and Emmanuel J. Cand{\`e}s},
  journal={The Annals of Applied Statistics},
  year={2015},
  volume={9},
  number={3},
  pages={1103--1140}
}
Abstract

We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to

    \min_{b \in \mathbb{R}^p} \; \tfrac{1}{2}\|y - Xb\|_2^2 + \lambda_1 |b|_{(1)} + \lambda_2 |b|_{(2)} + \cdots + \lambda_p |b|_{(p)},

where λ_1 ≥ λ_2 ≥ … ≥ λ_p ≥ 0 and |b|_{(1)} ≥ |b|_{(2)} ≥ … ≥ |b|_{(p)} are the decreasing absolute values of the entries of b. This is a convex program and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical ℓ1…



Citations

Publications citing this paper (a selection of 98):

  • Sparse portfolio selection via the sorted ℓ1-norm
  • Approximate selective inference via maximum likelihood
  • Does SLOPE outperform bridge regression?
  • Parameter estimation with the ordered ℓ2 regularization via an alternating direction method of multipliers
  • Asymptotics and optimal designs of SLOPE for sparse linear regression (Hong Hu and Yue M. Lu, 2019 IEEE International Symposium on Information Theory)
  • Extracting compact knowledge from massive data

Citation statistics:

  • 28 highly influenced citations
  • An average of 24 citations per year from 2017 through 2019
  • A 43% increase in citations per year in 2019 over 2018
