Corpus ID: 6016661

Regularizing AdaBoost

@inproceedings{Rtsch1998RegularizingA,
  title={Regularizing AdaBoost},
  author={G. R{\"a}tsch and T. Onoda and K. M{\"u}ller},
  booktitle={NIPS},
  year={1998}
}
Boosting methods maximize a hard classification margin and are known as powerful techniques that do not exhibit overfitting in low-noise cases. On noisy data, however, boosting will still try to enforce a hard margin and thereby give too much weight to outliers, which leads to the dilemma of non-smooth fits and overfitting. Therefore we propose three algorithms to allow for soft margin classification by introducing regularization with slack variables into the boosting concept: (1) AdaBoost_Reg and …
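A rough sketch of the soft-margin idea referred to above, in the spirit of a linear-programming formulation: the hard margin is relaxed with per-example slack variables. The notation (margin ρ, slack variables ξ_i, regularization constant C, base hypotheses h_t combined with convex weights α_t) is shorthand introduced here and not spelled out in the truncated abstract.

\[
\max_{\rho,\,\alpha,\,\xi}\; \rho - C\sum_{i=1}^{N}\xi_i
\quad\text{subject to}\quad
y_i\sum_{t=1}^{T}\alpha_t h_t(x_i) \ge \rho - \xi_i,\;\;
\xi_i \ge 0,\;\;
\alpha_t \ge 0,\;\;
\sum_{t=1}^{T}\alpha_t = 1.
\]

Taking C to infinity forces every ξ_i to zero and recovers the hard-margin formulation; a finite C lets a few noisy examples fall short of the margin instead of dominating the ensemble.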
Citations

Soft Margins for AdaBoost
It is found that AdaBoost asymptotically achieves a hard margin distribution, i.e. the algorithm concentrates its resources on a few hard-to-learn patterns that are, interestingly, very similar to support vectors.
Boosting the Margin Distribution
A boosting strategy is considered that optimises the generalisation bound obtained recently by Shawe-Taylor and Cristianini in terms of the two-norm of the slack variables, and it achieves significant improvements over AdaBoost.
A New Boosting Algorithm Using Input-Dependent Regularizer
Empirical studies on eight different UCI data sets and one text categorization data set show that WeightBoost almost always achieves considerably better classification accuracy than AdaBoost, and experiments on data with artificially controlled noise indicate that WeightBoost is more robust to noise than AdaBoost.
Smoothed Emphasis for Boosting Ensembles
A simple modification is introduced which uses the neighborhood concept to reduce these drawbacks, and experimental results confirm the potential of the proposed scheme.
Multi-Class Learning by Smoothed Boosting
This paper proposes a new boosting algorithm, named MSmoothBoost, which introduces a smoothing mechanism into the boosting procedure to explicitly address the overfitting problem of AdaBoost.OC.
Edited AdaBoost by weighted kNN
An edited AdaBoost by weighted kNN (EAdaBoost) is designed in which AdaBoost and kNN naturally complement each other, and the new boosting algorithm almost always achieves considerably better classification accuracy than AdaBoost.
Robust Boosting via Convex Optimization: Theory and Applications
It is shown that boosting can be used to solve large-scale constrained optimization problems whose solutions are well characterizable, and convergence guarantees are derived for a quite general family of boosting algorithms.
A smoothed boosting algorithm using probabilistic output codes
A new boosting algorithm is proposed that improves the AdaBoost.OC algorithm for multi-class learning and introduces a probabilistic coding scheme to generate binary codes for multiple classes such that training errors can be efficiently reduced.
On Boosting Improvement: Error Reduction and Convergence Speed-Up
This article proposes a slight modification of the weight update rule of the AdaBoost algorithm and shows that, by exploiting an adaptive measure of local entropy computed from a neighborhood graph built on the examples, it is possible to identify not only the outliers but also the examples located in the Bayesian error region.
Neighborhood Guided Smoothed Emphasis for Real Adaboost Ensembles
The neighborhood concept is used to design a simple modification of the emphasis mechanism that is able to deal with imbalanced or asymmetric problems and can also be combined with other emphasis control mechanisms.

References

Showing 1–10 of 23 references.
Boosting in the Limit: Maximizing the Margin of Learned Ensembles
The crucial question as to why boosting works so well in practice, and how to further improve upon it, remains mostly open, and it is concluded that no simple version of the minimum-margin story can be complete.
Boosting the margin: A new explanation for the effectiveness of voting methods
It is shown that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error.
Improved Boosting Algorithms Using Confidence-rated Predictions
We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a …
An asymptotic analysis of AdaBoost in the binary classification case
The paper shows asymptotic experimental results for the binary classification case and discusses the relation between model complexity and noise in the training data, as well as how to improve AdaBoost-type algorithms in practice.
AdaBoosting Neural Networks: Application to on-line Character Recognition
AdaBoost is used to improve the performance of a strong learning algorithm, a neural-network-based on-line character recognition system, and it is shown that it can be used to automatically learn a great variety of writing styles even when the amount of training data for each style varies considerably.
Arcing Classifiers
Recent work has shown that combining multiple versions of unstable classifiers such as trees or neural nets results in reduced test set error. One of the more effective is bagging (Breiman [1996a]) …
Bagging predictors
Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy.
Boosting First-Order Learning
Early experimental results from applying boosting to FFOIL, a first-order system that constructs definitions of functional relations, suggest that boosting will also prove beneficial for first-order induction.
Prediction Games and Arcing Algorithms
L. Breiman, Neural Computation, 1999
The theory behind the success of adaptive reweighting and combining algorithms (arcing) such as AdaBoost in reducing generalization error has not been well understood, and an explanation of why AdaBoost works in terms of its ability to produce generally high margins is offered.
Learning algorithms for classification: A comparison on handwritten digit recognition
This paper compares the performance of several classifier algorithms on a standard database of handwritten digits. We consider not only raw accuracy, but also training time, recognition time, and …