Sparse Regression: Scalable algorithms and empirical performance

@article{Bertsimas2019SparseRS,
  title={Sparse Regression: Scalable algorithms and empirical performance},
  author={Dimitris Bertsimas and Jean Pauphilet and Bart Van Parys},
  journal={arXiv: Methodology},
  year={2019}
}
  • Computer Science, Mathematics
  • In this paper, we review state-of-the-art methods for feature selection in statistics with an application-oriented eye. Indeed, sparsity is a valuable property, and the profusion of research on the topic might have provided little guidance to practitioners. We demonstrate empirically how noise and correlation impact both the accuracy (the number of correct features selected) and the false detection (the number of incorrect features selected) for five methods: the cardinality-constrained…
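The two metrics named in the abstract can be made concrete with a small sketch (not from the paper; the data-generation settings, the Lasso-style estimator via proximal gradient descent, and the selection threshold are all illustrative assumptions): fit a sparse estimator, then count how many selected features are in the true support (accuracy) versus outside it (false detection).

```python
import numpy as np

# Illustrative sketch: synthetic sparse regression, L1-penalized fit via
# ISTA (proximal gradient), then the two support-recovery metrics from the
# abstract. All settings (n, p, k, lam, threshold) are assumptions.
rng = np.random.default_rng(0)
n, p, k = 100, 50, 5                       # samples, features, true sparsity
X = rng.standard_normal((n, p))
support_true = rng.choice(p, size=k, replace=False)
beta_true = np.zeros(p)
beta_true[support_true] = 2.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize 0.5*||y - Xb||^2 + lam*||b||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - X.T @ (X @ b - y) / L      # gradient step on the smooth part
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return b

beta_hat = lasso_ista(X, y, lam=5.0)
selected = set(np.flatnonzero(np.abs(beta_hat) > 1e-6))

accuracy = len(selected & set(support_true))         # correct features selected
false_detection = len(selected - set(support_true))  # incorrect features selected
print(accuracy, false_detection)
```

In this low-noise, uncorrelated-design regime the L1 estimator recovers the support well; the paper's empirical point is precisely that higher noise and feature correlation degrade both metrics, and differently across methods.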