Margins, Shrinkage, and Boosting

@inproceedings{Telgarsky2013MarginsSA,
  title={Margins, Shrinkage, and Boosting},
  author={Matus Telgarsky},
  booktitle={ICML},
  year={2013}
}

Abstract

This manuscript shows that AdaBoost and its immediate variants can produce approximate maximum margin classifiers simply by scaling step size choices with a fixed small constant. In this way, when the unscaled step size is an optimal choice, these results provide guarantees for Friedman's empirically successful "shrinkage" procedure for gradient boosting (Friedman, 2000). Guarantees are also provided for a variety of other step sizes, affirming the intuition that increasingly regularized line searches provide improved margin guarantees.
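
The mechanism the abstract describes is small enough to sketch in code: take the usual AdaBoost line-search step α_t = ½ ln((1 − ε_t)/ε_t) and multiply it by a fixed constant ν in (0, 1]. The following is a minimal illustrative sketch, not the paper's code; the decision-stump base learner, the synthetic dataset, ν = 0.1, and all function names are assumptions made for the example.

# Minimal sketch of AdaBoost with shrinkage, assuming labels in {-1, +1}.
# Everything here (names, nu=0.1, the stump learner, the toy data) is an
# illustrative assumption; only the step formula and the exponential
# reweighting follow the standard AdaBoost recipe.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

def adaboost_shrinkage(X, y, n_rounds=200, nu=0.1):
    n = len(y)
    w = np.full(n, 1.0 / n)                    # example weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        eps = np.clip(w @ (pred != y), 1e-12, 1 - 1e-12)  # weighted error
        alpha = nu * 0.5 * np.log((1.0 - eps) / eps)      # shrunken line-search step
        w = w * np.exp(-alpha * y * pred)                 # exponential reweighting
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, np.array(alphas)

def normalized_margins(stumps, alphas, X, y):
    # Margin of each example: y * f(x) / ||alpha||_1, which lies in [-1, 1].
    f = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return y * f / np.abs(alphas).sum()

X, y = make_classification(n_samples=200, random_state=0)
y = 2 * y - 1                                  # map {0, 1} labels to {-1, +1}
stumps, alphas = adaboost_shrinkage(X, y, nu=0.1)
print("minimum normalized margin:", normalized_margins(stumps, alphas, X, y).min())

In practice the same constant ν is exposed as the learning_rate parameter of scikit-learn's AdaBoostClassifier and GradientBoostingClassifier (scikit-learn appears in the references below); setting it well below 1 corresponds to the shrinkage regime the paper analyzes.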

References

Publications referenced by this paper.

The dynamics of AdaBoost: cyclic behavior and convergence of margins

Cynthia Rudin, Ingrid Daubechies, and Robert E. Schapire. Journal of Machine Learning Research, 2004.

Soft Margins for AdaBoost

Gunnar Rätsch, Takashi Onoda, and Klaus-Robert Müller. Machine Learning, 2001.

A Primal-Dual Convergence Analysis of Boosting

Matus Telgarsky. Journal of Machine Learning Research, 2012.

Boosting: Foundations and Algorithms

Robert E. Schapire and Yoav Freund. MIT Press, 2012.

Scikit-learn: Machine Learning in Python

Fabian Pedregosa et al. Journal of Machine Learning Research, 2011.
