Experiments with a New Boosting Algorithm

Abstract

In an earlier paper, we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a “pseudo-loss”, which is a method for forcing a learning algorithm for multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman’s “bagging” method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.
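
For readers unfamiliar with the method, below is a minimal sketch of the boosting loop in the binary (two-label) case, roughly corresponding to AdaBoost.M1. The names weak_learner and predict, and the {-1, +1} label encoding, are illustrative assumptions rather than notation from the paper; the multiclass pseudo-loss variant (AdaBoost.M2) is not shown.

    # Minimal binary AdaBoost sketch. Assumes labels y in {-1, +1} and a
    # weak_learner(X, y, w) that returns a classifier (with .predict) fit
    # to the weighted sample -- hypothetical interface, not the paper's.
    import numpy as np

    def adaboost(X, y, weak_learner, n_rounds=50):
        n = len(y)
        w = np.full(n, 1.0 / n)            # start with a uniform distribution
        hypotheses, alphas = [], []
        for _ in range(n_rounds):
            h = weak_learner(X, y, w)      # fit a weak classifier under weights w
            pred = h.predict(X)
            eps = np.sum(w * (pred != y))  # weighted training error
            if eps >= 0.5:                 # weak-learning assumption violated; stop
                break
            alpha = 0.5 * np.log((1 - eps) / max(eps, 1e-12))
            w *= np.exp(-alpha * y * pred) # up-weight misclassified examples
            w /= w.sum()                   # renormalize to a distribution
            hypotheses.append(h)
            alphas.append(alpha)

        def final(Xq):                     # weighted-majority vote over all rounds
            votes = sum(a * h.predict(Xq) for a, h in zip(alphas, hypotheses))
            return np.sign(votes)

        return final

The key design point, and the contrast with bagging, is that the sampling distribution w is updated adaptively: boosting concentrates weight on the examples the previous classifiers got wrong, whereas bagging resamples uniformly at random in every round.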

Cite this paper

@inproceedings{Freund1996ExperimentsWA,
  title     = {Experiments with a New Boosting Algorithm},
  author    = {Yoav Freund and Robert E. Schapire},
  booktitle = {ICML},
  year      = {1996}
}