Optimization Methods for ℓ1-Regularization

@inproceedings{Schmidt2009OptimizationMF,
  title={Optimization Methods for $\ell_1$-Regularization},
  author={Mark Schmidt},
  year={2009}
}
In this paper we review and compare state-of-the-art optimization techniques for solving the problem of minimizing a twice-differentiable loss function subject to ℓ1-regularization. The first part of this work outlines a variety of approaches available for this type of problem, highlighting some of their strengths and weaknesses. In the second part, we present numerical results comparing 14 optimization strategies under various scenarios.
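
For concreteness, the problem class the paper considers takes the form

    minimize over w in R^d:   f(w) + λ‖w‖₁

where f is a twice-differentiable loss and λ ≥ 0 controls the strength of the ℓ1 penalty. As a minimal sketch of one family of methods in this area, the code below implements iterative soft-thresholding (ISTA), a proximal-gradient strategy for this problem; the function names, data, and parameter values are illustrative assumptions, not the paper's own code or its recommended method.

import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1: shrink each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(grad_f, w0, lam, step, n_iters=500):
    # Proximal-gradient iteration for min_w f(w) + lam * ||w||_1.
    # grad_f: gradient of the smooth loss f at w.
    # step:   fixed step size, e.g. 1/L when grad_f is L-Lipschitz.
    w = w0.copy()
    for _ in range(n_iters):
        w = soft_threshold(w - step * grad_f(w), step * lam)
    return w

# Hypothetical usage: sparse least squares, f(w) = 0.5 * ||Xw - y||^2.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
w_true = rng.standard_normal(20) * (rng.random(20) < 0.2)  # sparse ground truth
y = X @ w_true
step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of grad f
w_hat = ista(lambda w: X.T @ (X @ w - y), np.zeros(20), lam=0.1, step=step)

The soft-thresholding step is what drives coordinates of w_hat exactly to zero, which is the sparsity-inducing behavior that distinguishes ℓ1-regularized solutions from those of smooth penalties such as ℓ2.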

