Solving Multiclass Learning Problems via Error-Correcting Output Codes

@article{Dietterich1994SolvingML,
  title={Solving Multiclass Learning Problems via Error-Correcting Output Codes},
  author={Thomas G. Dietterich and Ghulum Bakiri},
  journal={J. Artif. Intell. Res.},
  year={1994},
  volume={2},
  pages={263-286}
}
Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying collections of training examples of the form (x_i, f(x_i)). Existing approaches to multiclass learning problems include direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, application of binary concept learning algorithms to learn individual binary…
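As a concrete illustration of the ECOC approach the abstract sketches: assign each class a binary code word (row of a code matrix), train one binary classifier per column, and decode a test point to the class whose code word is nearest in Hamming distance. The 7-column code below is the exhaustive code for k = 4 classes in the spirit of the paper's construction, but the synthetic data and the choice of scikit-learn's LogisticRegression as the binary learner are illustrative assumptions, not the paper's experimental setup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Exhaustive 7-bit code for k = 4 classes: the columns enumerate every
    # non-trivial binary partition of the classes. Minimum row distance is 4,
    # so any single-bit error is still decoded to the correct class.
    CODE = np.array([
        [0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 1, 1, 1, 1],
        [0, 1, 1, 0, 0, 1, 1],
        [1, 0, 1, 0, 1, 0, 1],
    ])

    def train_ecoc(X, y, code):
        # One binary learner per column; column j relabels example i as code[y_i, j].
        return [LogisticRegression().fit(X, code[y, j]) for j in range(code.shape[1])]

    def predict_ecoc(X, code, learners):
        # Collect the predicted bits, then decode each example to the class
        # whose code word is nearest in Hamming distance.
        bits = np.column_stack([clf.predict(X) for clf in learners])
        dists = (bits[:, None, :] != code[None, :, :]).sum(axis=2)
        return dists.argmin(axis=1)

    # Toy usage on synthetic data with class-dependent means.
    rng = np.random.default_rng(0)
    y = np.arange(200) % 4
    X = rng.normal(size=(200, 5)) + y[:, None]
    learners = train_ecoc(X, y, CODE)
    print(predict_ecoc(X[:8], CODE, learners))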

On the Consistency of Output Code Based Learning Algorithms for Multiclass Learning Problems

This is the first work that comprehensively studies consistency properties of output code based methods for multiclass learning, and derives general conditions on the binary surrogate loss under which the one-vs-all and all-pairs code matrices yield consistent algorithms with respect to the multiclass 0-1 loss.
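For concreteness, the two code matrices named above can be written out for k = 4 classes (a standard construction, not specific to the cited paper). Rows index classes, columns index binary tasks, and a 0 entry in the all-pairs matrix marks a class that the corresponding binary task ignores:

    M_{\mathrm{one\text{-}vs\text{-}all}} =
    \begin{pmatrix}
      +1 & -1 & -1 & -1 \\
      -1 & +1 & -1 & -1 \\
      -1 & -1 & +1 & -1 \\
      -1 & -1 & -1 & +1
    \end{pmatrix},
    \qquad
    M_{\mathrm{all\text{-}pairs}} =
    \begin{pmatrix}
      +1 & +1 & +1 &  0 &  0 &  0 \\
      -1 &  0 &  0 & +1 & +1 &  0 \\
       0 & -1 &  0 & -1 &  0 & +1 \\
       0 &  0 & -1 &  0 & -1 & -1
    \end{pmatrix}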

Using output codes to boost multiclass learning problems

This paper describes a new technique for multiclass learning problems by combining Freund and Schapire's boosting algorithm with the main ideas of Dietterich and Bakiri's method of error-correcting output codes (ECOC), and shows that the new hybrid method has advantages of both.

Evolutionary Design of Code-Matrices for Multiclass Problems

This chapter presents a survey of code-matrix design techniques for multiclass problems, and shows how evolutionary techniques can be employed to solve this design problem.
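The optimization target in code-matrix design is easy to state: among binary matrices with k rows, prefer those whose rows are far apart (good error correction) and whose columns are non-constant (each binary task is informative). As a hedged illustration of an evolutionary search over such matrices — the fitness function and mutation scheme here are simplifications for exposition, not the chapter's algorithm — consider:

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(code):
        # Minimum pairwise Hamming distance between class code words;
        # constant columns are useless binary tasks, so penalize them.
        k = code.shape[0]
        d = min((code[i] != code[j]).sum() for i in range(k) for j in range(i + 1, k))
        constant_cols = np.sum(code.min(axis=0) == code.max(axis=0))
        return d - constant_cols

    def evolve(k=6, L=15, pop=40, gens=200, mut=0.05):
        # Simple (mu + lambda)-style loop: flip random bits, keep the best half.
        population = rng.integers(0, 2, size=(pop, k, L))
        for _ in range(gens):
            flips = rng.random(population.shape) < mut
            children = population ^ flips
            everyone = np.concatenate([population, children])
            scores = np.array([fitness(c) for c in everyone])
            population = everyone[np.argsort(scores)[-pop:]]
        return population[-1]

    best = evolve()
    print("min row distance:", fitness(best))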

Stochastic Organization of Output Codes in Multiclass Learning Problems

This work presents a novel algorithm that applies a maximum-likelihood objective function in conjunction with the expectation-maximization (EM) algorithm, and shows the potential gain of the optimized output codes over OPC or ECOC methods.

Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers

A general method for combining the classifiers generated on the binary problems is proposed, and a general empirical multiclass loss bound is proved given the empirical loss of the individual binary learning algorithms.
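The combining method at the heart of this unification is loss-based decoding: with a code matrix over {-1, 0, +1} and real-valued binary predictions (margins), each class is scored by the total surrogate loss its row would incur, rather than by Hamming distance on thresholded outputs. A minimal sketch, assuming exponential loss and precomputed margins:

    import numpy as np

    def loss_based_decode(margins, code, loss=lambda z: np.exp(-z)):
        # margins: (n, L) real-valued outputs f_s(x) of the L binary learners.
        # code:    (k, L) matrix over {-1, 0, +1}; 0 means "class ignored by task s".
        # Score class r by sum_s loss(code[r, s] * f_s(x)); lower is better.
        scores = loss(margins[:, None, :] * code[None, :, :]).sum(axis=2)
        return scores.argmin(axis=1)

Hamming decoding is recovered as the special case loss(z) = (1 - sign(z)) / 2, so a single decoder of this form covers one-vs-all, all-pairs, and ECOC matrices.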

Multiclass boosting with repartitioning

This paper proposes a new multiclass boosting algorithm that modifies the coding matrix according to the learning ability of the base learner, and shows experimentally that this algorithm is very efficient in optimizing the multiclass margin cost, and outperforms existing multiclass algorithms such as AdaBoost.

Multiclass learning, boosting, and error-correcting codes

This paper presents ADABOOST.ECC, which, by using a different weighting of the votes of the weak hypotheses, is able to improve on the performance of ADABOOST.OC, and is arguably a more direct reduction of multiclass learning to binary learning problems than previous multiclass boosting algorithms.

Adaptive Error-Correcting Output Codes

This paper reformulates ECOC models from the perspective of multi-task learning, where the binary classifiers are learned in a common subspace of the data, and presents a kernel extension of the proposed model.
...

References

Showing 1-10 of 33 references

Using Decision Trees to Improve Case-Based Learning

Function Modeling Experiments

This work reports an experimental investigation of the capabilities and limitations of trainable machines for use in function modeling, finding that for the more difficult applications the machines performed well enough to make the speed advantage of a trainable machine a significant consideration.

Why Error Correcting Output Coding Works

An empirical investigation of why the ECOC technique works, particularly when employed with decision-tree learning methods, concludes that an important factor in the method's success is the nearly random behavior of decision-tree algorithms near the root of the tree when applied to learn the decision boundaries.

An improved boosting algorithm and its implications on learning complexity

The main result is an improvement of the boosting-by-majority algorithm; the paper shows that the majority rule is the optimal rule for combining general weak learners, and extends the boosting algorithm to concept classes that give multi-valued and real-valued labels.

Training Stochastic Model Recognition Algorithms as Networks can Lead to Maximum Mutual Information Estimation of Parameters

It is shown that once the output layer of a multilayer perceptron is modified to provide mathematically correct probability distributions, and the usual squared error criterion is replaced with a probability-based score, the result is equivalent to Maximum Mutual Information training.
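In symbols (notation assumed here, not taken verbatim from the paper): normalize the output layer with a softmax so the outputs form a proper class distribution, and replace squared error with the log-probability score; maximizing that score is then maximization of the log posterior,

    y_c(x) = \frac{e^{a_c(x)}}{\sum_{c'} e^{a_{c'}(x)}},
    \qquad
    \max_\theta \sum_i \log y_{c_i}(x_i)
    \;=\;
    \max_\theta \sum_i \log P_\theta(c_i \mid x_i)

which, for fixed class priors P(c), coincides with the maximum mutual information criterion used in stochastic model recognition.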

Connectionist Learning Procedures

When Networks Disagree: Ensemble Methods for Hybrid Neural Networks

Experimental results show that the ensemble method dramatically improves neural network performance on difficult real-world optical character recognition tasks.

Neural Network Classifiers Estimate Bayesian a posteriori Probabilities

Results of Monte Carlo simulations performed using multilayer perceptron (MLP) networks trained with backpropagation, radial basis function (RBF) networks, and high-order polynomial networks graphically demonstrate that network outputs provide good estimates of Bayesian probabilities.
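The standard fact behind this result: with 0/1 target encoding, the function minimizing the expected squared error E[(f_c(x) - 1{y = c})^2] for output c is the conditional expectation of its target, which is exactly the Bayesian posterior (notation assumed here):

    f_c^{*}(x)
    \;=\;
    \mathbb{E}\big[\mathbf{1}\{y = c\} \mid x\big]
    \;=\;
    P(y = c \mid x)

So a sufficiently flexible network trained to low squared error approximates these posteriors, which is what the simulations demonstrate empirically.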

Classification and regression trees

An introduction to classification and regression trees is given by reviewing some widely available algorithms and comparing their capabilities, strengths, and weaknesses in two examples.

Backpropagation Applied to Handwritten Zip Code Recognition

This paper demonstrates how constraints from the task domain can be integrated into a backpropagation network through the architecture of the network, successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service.