Corpus ID: 1836349

Experiments with a New Boosting Algorithm

@inproceedings{Freund1996ExperimentsWA,
  title={Experiments with a New Boosting Algorithm},
  author={Yoav Freund and Robert E. Schapire},
  booktitle={ICML},
  year={1996}
}
In an earlier paper, we introduced a new "boosting" algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. [...] In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.
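To make the boosting loop described in the abstract concrete, the following is a minimal sketch of AdaBoost in Python, assuming binary labels in {-1, +1} and a decision-stump weak learner. The function names (fit_stump, adaboost_fit, adaboost_predict) and the choice of stumps are illustrative assumptions, not the paper's pseudocode; the paper's experiments use base learners such as C4.5 and nearest-neighbor classifiers.

```python
import numpy as np

def fit_stump(X, y, w):
    """Pick the decision stump (feature, threshold, polarity) with lowest weighted error."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] <= thr, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost_fit(X, y, n_rounds=50):
    """AdaBoost with decision stumps; y must contain labels in {-1, +1}."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)                      # start with uniform example weights
    ensemble = []
    for _ in range(n_rounds):
        j, thr, pol, err = fit_stump(X, y, w)
        if err >= 0.5:                           # weak learner no better than chance: stop
            break
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1.0 - err) / err)  # confidence weight of this weak hypothesis
        pred = pol * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)           # up-weight examples this stump misclassified
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def adaboost_predict(ensemble, X):
    """Final classifier: sign of the confidence-weighted majority vote of the stumps."""
    X = np.asarray(X, dtype=float)
    score = np.zeros(X.shape[0])
    for alpha, j, thr, pol in ensemble:
        score += alpha * pol * np.where(X[:, j] <= thr, 1, -1)
    return np.sign(score)
```

The weighted-majority vote in adaboost_predict is the same combination rule that several of the citing papers below modify or regularize.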
A New Boosting Algorithm Using Input-Dependent Regularizer
TLDR
Empirical studies on eight different UCI data sets and one text categorization data set show that WeightBoost almost always achieves a considerably better classification accuracy than AdaBoost, and experiments on data with artificially controlled noise indicate that WeightBoost is more robust to noise than AdaBoost.
An efficient modified boosting method for solving classification problems
Based on the AdaBoost algorithm, a modified boosting method is proposed in this paper for solving classification problems. This method predicts the class label of an example as the weighted majority …
Improved Boosting Algorithms Using Confidence-rated Predictions
We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a …
Boosting Neural Networks
TLDR
It is suggested that random resampling of the training data is not the main explanation of the success of the improvements brought by AdaBoost, and training methods based on sampling the training set and weighting the cost function are compared.
Stopping Criterion for Boosting-Based Data Reduction Techniques: from Binary to Multiclass Problem
TLDR
The aim of the present paper is to relax the class constraint and extend the contribution to multiclass problems, showing the benefits that the boosting-derived weighting rule brings to weighted nearest-neighbor classifiers.
Supervised projection approach for boosting classifiers
TLDR
A new approach to boosting for constructing ensembles of classifiers, based on using the distribution given by the weighting scheme of boosting to build a non-linear supervised projection of the original variables, instead of using the weights of the instances to train the next classifier.
Quadratic boosting
TLDR
The quadratic boosting algorithm converges under the condition that the given base learner minimizes the empirical error and is shown to compare favorably with AdaBoost on large data sets at the cost of training speed.
An Empirical Boosting Scheme for ROC-Based Genetic Programming Classifiers
TLDR
A geometrical interpretation of the ROC curve is proposed to attribute an error measure to every training case, and boosted Genetic Programming performance is compared with published results on ROC-based Evolution Strategies and Support Vector Machines.
Training Methods for Adaptive Boosting of Neural Networks
TLDR
This paper uses AdaBoost to improve the performance of neural networks and compares training methods based on sampling the training set and weighting the cost function.

References

Showing 1-10 of 34 references
A decision-theoretic generalization of on-line learning and an application to boosting
TLDR
The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems.
Improving Performance in Neural Networks Using a Boosting Algorithm
TLDR
The effect of boosting is reported on four databases consisting of 12,000 digits from segmented ZIP codes from the United States Postal Service and the following from the National Institute of Standards and Technology (NIST).
Applying the Weak Learning Framework to Understand and Improve C4.5
TLDR
This paper performs experiments suggested by the formal results for AdaBoost and C4.5 within the weak learning framework, and argues through experimental results that the theory must be understood in terms of a measure of a boosting algorithm's behavior called its advantage sequence.
On the boosting ability of top-down decision tree learning algorithms
TLDR
This work analyzes the performance of top-down algorithms for decision tree learning and proves that some popular and empirically successful heuristics that are based on first principles meet the criteria of an independently motivated theoretical model.
Boosting and Other Ensemble Methods
TLDR
A surprising result is shown for the original boosting algorithm: namely, that as the training set size increases, the training error decreases until it asymptotes to the test error rate.
Bias, Variance, and Arcing Classifiers
TLDR
This work explores two arcing algorithms, compares them to each other and to bagging, and tries to understand how arcing works and why it is more successful than bagging at variance reduction.
Boosting Decision Trees
TLDR
A constructive, incremental learning system for regression problems that models data by means of locally linear experts that do not compete for data during learning, and derives asymptotic results for this method.
C4.5: Programs for Machine Learning
TLDR
A complete guide to the C4.5 system as implemented in C for the UNIX environment, which starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting.
Boosting Performance in Neural Networks
TLDR
The boosting algorithm is used to construct an ensemble of neural networks that significantly improves performance (compared to a single network) on optical character recognition (OCR) problems, in some cases dramatically.