# Solving Multiclass Learning Problems via Error-Correcting Output Codes

@article{Dietterich1995SolvingML,
  title   = {Solving Multiclass Learning Problems via Error-Correcting Output Codes},
  author  = {Thomas G. Dietterich and Ghulum Bakiri},
  journal = {ArXiv},
  volume  = {cs.AI/9501101},
  year    = {1995}
}

Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying collections of training examples of the form (x_i, f(x_i)). Existing approaches to multiclass learning problems include direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, application of binary concept learning algorithms to learn individual binary…
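The ECOC scheme the abstract refers to can be sketched briefly: each of the k classes is assigned a binary codeword, one binary classifier is trained per codeword bit, and a new example is assigned the class whose codeword is nearest in Hamming distance to the vector of classifier outputs. A minimal illustration (the 4-class code matrix and class names below are placeholders, not the paper's experimental setup):

```python
# Minimal sketch of error-correcting output code (ECOC) decoding.
# The 4-class, 5-bit code matrix is illustrative only; its minimum
# inter-codeword Hamming distance is 3, so one flipped bit is correctable.

def hamming(a, b):
    """Number of bit positions where two codewords differ."""
    return sum(x != y for x, y in zip(a, b))

# Each row is the codeword for one class; each column defines one
# binary learning problem (class -> bit).
CODE_MATRIX = {
    "c0": (0, 0, 0, 1, 1),
    "c1": (0, 1, 1, 0, 0),
    "c2": (1, 0, 1, 0, 1),
    "c3": (1, 1, 0, 1, 0),
}

def decode(bits):
    """Return the class whose codeword is nearest in Hamming distance
    to the vector of binary-classifier outputs."""
    return min(CODE_MATRIX, key=lambda c: hamming(CODE_MATRIX[c], bits))
```

Even if one of the five binary classifiers errs (say the output is `(1, 0, 1, 0, 0)` instead of class c2's codeword `(1, 0, 1, 0, 1)`), decoding still recovers c2, which is the error-correcting property the title refers to.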


#### 2,840 Citations

On the Consistency of Output Code Based Learning Algorithms for Multiclass Learning Problems

- Mathematics, Computer Science
- COLT
- 2014

This is the first work that comprehensively studies consistency properties of output code based methods for multiclass learning, and derives general conditions on the binary surrogate loss under which the one-vs-all and all-pairs code matrices yield consistent algorithms with respect to the multiclass 0-1 loss.
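The one-vs-all and all-pairs code matrices named in this entry have simple constructions. A hedged sketch, using the common {+1, -1, 0} encoding in which 0 marks a class that a given binary problem ignores (conventions vary across papers):

```python
# Illustrative constructions of two standard code matrices for k classes.
# Rows index classes, columns index binary problems.

def one_vs_all(k):
    """k binary problems: column j separates class j (+1) from the rest (-1)."""
    return [[1 if i == j else -1 for j in range(k)] for i in range(k)]

def all_pairs(k):
    """k*(k-1)/2 binary problems: one column per unordered pair of classes;
    classes not in the pair get 0 (their examples are unused)."""
    cols = [(a, b) for a in range(k) for b in range(a + 1, k)]
    return [[1 if i == a else (-1 if i == b else 0) for (a, b) in cols]
            for i in range(k)]
```

For k = 3, `one_vs_all` yields a 3x3 sign matrix with +1 on the diagonal, while `all_pairs` yields three columns, one per class pair.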

Using output codes to boost multiclass learning problems

- Computer Science
- ICML
- 1997

This paper describes a new technique for multiclass learning problems that combines Freund and Schapire's boosting algorithm with the main ideas of Dietterich and Bakiri's method of error-correcting output codes (ECOC), and shows that the new hybrid method has advantages of both.

Stochastic Organization of Output Codes in Multiclass Learning Problems

- Mathematics, Computer Science
- Neural Computation
- 2001

This work presents a novel algorithm that applies a maximum-likelihood objective function in conjunction with the expectation-maximization (EM) algorithm, and shows the potential gain of the optimized output codes over OPC or ECOC methods.

Evolutionary Design of Code-Matrices for Multiclass Problems

- Computer Science
- Soft Computing for Knowledge Discovery and Data Mining
- 2008

This chapter presents a survey on techniques for multiclass problems code-matrix design, and shows how evolutionary techniques can be employed to solve this problem.

Active learning with error-correcting output codes

- Computer Science
- Neurocomputing
- 2019

A novel multiclass active learning algorithm, called active learning with error-correcting output codes (ECOCAL), is proposed to select the most informative instances; it outperforms several state-of-the-art active learning methods on both binary and multiclass datasets.

Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers

- Mathematics, Computer Science
- J. Mach. Learn. Res.
- 2000

A general method for combining the classifiers generated on the binary problems is proposed, and a general empirical multiclass loss bound is proved given the empirical loss of the individual binary learning algorithms.

Multiclass boosting with repartitioning

- Computer Science
- ICML
- 2006

This paper proposes a new multiclass boosting algorithm that modifies the coding matrix according to the learning ability of the base learner, and shows experimentally that this algorithm is very efficient in optimizing the multiclass margin cost, and outperforms existing multiclass algorithms such as AdaBoost.

Multiclass learning, boosting, and error-correcting codes

- Mathematics, Computer Science
- COLT '99
- 1999

ECC, which improves on the performance of ADABOOST.OC by weighting the votes of the weak hypotheses differently, is arguably a more direct reduction of multiclass learning to binary learning problems than previous multiclass boosting algorithms.

Adaptive Error-Correcting Output Codes

- Computer Science
- IJCAI
- 2013

This paper reformulates the ECOC model from the perspective of multi-task learning, where the binary classifiers are learned in a common subspace of the data, and presents a kernel extension of the proposed model.

Subclass Problem-Dependent Design for Error-Correcting Output Codes

- Computer Science, Medicine
- IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2008

A novel strategy to model multiclass classification problems using subclass information in the ECOC framework is presented and it is shown that the proposed splitting procedure yields a better performance when the class overlap or the distribution of the training objects conceal the decision boundaries for the base classifier.

#### References

Showing 1-10 of 55 references

Using Decision Trees to Improve Case-Based Learning

- Computer Science
- ICML
- 1993

Results clearly indicate that decision trees can be used to improve the performance of CBL systems and do so without reliance on potentially expensive expert knowledge.

Function Modeling Experiments

- Computer Science
- 1963

The results of an experimental investigation of the capabilities and limitations of trainable machines for use in function modeling are presented; for more difficult applications, machine performance was sufficiently good to make the speed advantages of a trainable machine a significant consideration.

Why Error Correcting Output Coding Works

- 1994

Previous research has shown that a technique called error-correcting output coding (ECOC) can dramatically improve the classification accuracy of supervised learning algorithms that learn to classify…

An improved boosting algorithm and its implications on learning complexity

- Mathematics, Computer Science
- COLT '92
- 1992

The main result is an improvement of the boosting-by-majority algorithm, showing that the majority rule is the optimal rule for combining general weak learners, and an extension of the boosting algorithm to concept classes that give multi-valued and real-valued labels.

Training Stochastic Model Recognition Algorithms as Networks can Lead to Maximum Mutual Information Estimation of Parameters

- Computer Science
- NIPS
- 1989

It is shown that once the output layer of a multilayer perceptron is modified to provide mathematically correct probability distributions, and the usual squared error criterion is replaced with a probability-based score, the result is equivalent to Maximum Mutual Information training.

Connectionist Learning Procedures

- Computer Science, Mathematics
- Artif. Intell.
- 1989

These relatively simple, gradient-descent learning procedures work well for small tasks and the new challenge is to find ways of improving their convergence rate and their generalization abilities so that they can be applied to larger, more realistic tasks.

When Networks Disagree: Ensemble Methods for Hybrid Neural Networks

- Mathematics
- 1992

This paper presents a general theoretical framework for ensemble methods of constructing significantly improved regression estimates. Given a population of regression estimators, the…

Neural Network Classifiers Estimate Bayesian a posteriori Probabilities

- Computer Science, Medicine
- Neural Computation
- 1991

Results of Monte Carlo simulations performed using multilayer perceptron (MLP) networks trained with backpropagation, radial basis function (RBF) networks, and high-order polynomial networks graphically demonstrate that network outputs provide good estimates of Bayesian probabilities.

Backpropagation Applied to Handwritten Zip Code Recognition

- Computer Science
- Neural Computation
- 1989

This paper demonstrates how constraints from the task domain can be integrated into a backpropagation network through the architecture of the network, successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service.

Converting English text to speech: a machine learning approach

- Computer Science
- 1991

A set of machine learning methods for automatically constructing letter-to-sound rules by analyzing a dictionary of words and their pronunciations are presented, showing that error-correcting output codes provide a domain-independent, algorithm-independent approach to multiclass learning problems.