• Corpus ID: 2685539

Using output codes to boost multiclass learning problems

@inproceedings{Schapire1997UsingOC,
  title={Using output codes to boost multiclass learning problems},
  author={Robert E. Schapire},
  booktitle={ICML},
  year={1997}
}
  • R. Schapire
  • Published in ICML, 8 July 1997
  • Computer Science
This paper describes a new technique for solving multiclass learning problems by combining Freund and Schapire's boosting algorithm with the main ideas of Dietterich and Bakiri's method of error-correcting output codes (ECOC). Boosting is a general method of improving the accuracy of a given base or "weak" learning algorithm. ECOC is a robust method of solving multiclass learning problems by reducing to a sequence of two-class problems. We show that our new hybrid method has advantages of… 
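The reduction sketched in the abstract can be made concrete with a short example. The code below is not Schapire's AdaBoost.OC; it is a minimal sketch, under assumed choices (a random code matrix rather than a designed error-correcting code, scikit-learn's AdaBoostClassifier as the boosted binary learner, and the digits dataset), of how an output-code matrix turns one multiclass problem into a sequence of two-class problems and how predictions are decoded by Hamming distance.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_classes = len(np.unique(y))   # 10 digit classes
n_bits = 15                     # code length (assumed; a designed ECOC would have better row/column separation)

# Random binary code matrix: one row (codeword) per class, one column per binary subproblem.
code = rng.integers(0, 2, size=(n_classes, n_bits))
for b in range(n_bits):
    # Redraw any degenerate column that assigns every class the same bit.
    while code[:, b].min() == code[:, b].max():
        code[:, b] = rng.integers(0, 2, size=n_classes)

# One boosted binary classifier per column: each example is relabeled with the
# b-th bit of its class's codeword, reducing the task to a two-class problem.
learners = []
for b in range(n_bits):
    clf = AdaBoostClassifier(n_estimators=50, random_state=0)
    clf.fit(X_tr, code[y_tr, b])
    learners.append(clf)

# Decode: predict every bit, then choose the class whose codeword is nearest in Hamming distance.
bits = np.column_stack([clf.predict(X_te) for clf in learners])   # shape (n_test, n_bits)
hamming = (bits[:, None, :] != code[None, :, :]).sum(axis=2)      # shape (n_test, n_classes)
y_pred = hamming.argmin(axis=1)
print("test accuracy:", (y_pred == y_te).mean())

The main difference from this simple pipeline is that AdaBoost.OC interleaves the two ideas: roughly, a new binary partition of the labels is chosen on each boosting round and all rounds share one example-weighting scheme, rather than running an independent boosting process per fixed code column.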
Multiclass learning, boosting, and error-correcting codes
TLDR
Introduces AdaBoost.ECC, which, by using a different weighting of the votes of the weak hypotheses, is able to improve on the performance of AdaBoost.OC, and is arguably a more direct reduction of multiclass learning to binary learning problems than previous multiclass boosting algorithms.
Multi-Class Learning by Smoothed Boosting
TLDR
This paper proposes a new boosting algorithm, named “MSmoothBoost”, which introduces a smoothing mechanism into the boosting procedure to explicitly address the overfitting problem with AdaBoost.OC.
Multiclass boosting with repartitioning
TLDR
This paper proposes a new multiclass boosting algorithm that modifies the coding matrix according to the learning ability of the base learner; experiments show that the algorithm is very efficient in optimizing the multiclass margin cost and that it outperforms existing multiclass algorithms such as AdaBoost.
A smoothed boosting algorithm using probabilistic output codes
TLDR
A new boosting algorithm is proposed that improves the AdaBoost.OC algorithm for multi-class learning and introduces a probabilistic coding scheme to generate binary codes for multiple classes such that training errors can be efficiently reduced.
Improved Boosting Algorithms Using Confidence-rated Predictions
We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a…
Boosting Multiclass Learning with Repeating Codes
A long-standing goal of machine learning is to build a system which can detect a large number of classes with accuracy and efficiency. Some relationships between classes would become a scale-free
On the Consistency of Output Code Based Learning Algorithms for Multiclass Learning Problems
TLDR
This is the first work that comprehensively studies consistency properties of output code based methods for multiclass learning, and derives general conditions on the binary surrogate loss under which the one-vs-all and all-pairs code matrices yield consistent algorithms with respect to the multiclass 0-1 loss.
Minimal classification method with error-correcting codes for multiclass recognition
In this work, we develop an efficient technique to transform a multiclass recognition problem into a minimal binary classification problem using the Minimal Classification Method (MCM). The MCM
Multi-class AdaBoost
TLDR
A new algorithm is proposed that naturally extends the original AdaBoost algorithm to the multiclass case without reducing it to multiple two-class problems; it is extremely easy to implement and is highly competitive with the best currently available multi-class classification methods.

References

SHOWING 1-10 OF 27 REFERENCES
Solving Multiclass Learning Problems via Error-Correcting Output Codes
TLDR
It is demonstrated that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
Experiments with a New Boosting Algorithm
TLDR
This paper describes experiments carried out to assess how well AdaBoost with and without pseudo-loss, performs on real learning problems and compared boosting to Breiman's "bagging" method when used to aggregate various classifiers.
A decision-theoretic generalization of on-line learning and an application to boosting
TLDR
The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems.
The strength of weak learnability
TLDR
In this paper, it is shown that the two notions of learnability, weak and strong, are equivalent, and a method is described for converting a weak learning algorithm into one that achieves arbitrarily high accuracy.
Boosting Decision Trees
TLDR
A constructive, incremental learning system for regression problems that models data by means of locally linear experts that do not compete for data during learning, with asymptotic results derived for this method.
Bias, Variance, and Arcing Classifiers
TLDR
This work explores two arcing algorithms, compares them to each other and to bagging, and tries to understand how arcing works; arcing is more successful than bagging in variance reduction.
Bagging, Boosting, and C4.5
TLDR
Results of applying Breiman's bagging and Freund and Schapire's boosting to a system that learns decision trees, tested on a representative collection of datasets, show that boosting yields the greater benefit.
Boosting the margin: A new explanation for the effectiveness of voting methods
TLDR
It is shown that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error.
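A brief note on the quantity involved (the standard definition used in this line of work, stated here for context rather than taken from the entry above): for a binary voting classifier with base hypotheses h_t(x) in {-1,+1} and nonnegative weights a_t summing to one, the margin of a labeled example (x, y), y in {-1,+1}, is

\[
\operatorname{margin}(x, y) \;=\; y \sum_t a_t\, h_t(x) \;\in\; [-1, 1],
\]

which is positive exactly when the weighted vote classifies the example correctly. The analysis bounds the test error roughly by the fraction of training examples with margin at most a threshold \theta, plus a complexity term that depends on \theta and the base-hypothesis class but not on the number of boosting rounds.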
C4.5: Programs for Machine Learning
TLDR
A complete guide to the C4.5 system as implemented in C for the UNIX environment, which starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting.