Corpus ID: 1791179

A Sequential Model for Multi-Class Classification

Yair Even-Zohar, Dan Roth

Many classification problems require decisions among a large number of competing classes. These tasks, however, are not handled well by general-purpose learning methods and are usually addressed in an ad hoc fashion. We suggest a general approach -- a sequential learning model that utilizes classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidate set. Some theoretical and computational…
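The sequential restriction idea from the abstract can be sketched as a cascade of scoring stages. The stage scorers and the `keep_ratio` pruning rule below are illustrative assumptions, not the paper's exact formulation:

```python
def sequential_restrict(x, candidates, stages, keep_ratio=0.5):
    """Run a cascade of class-scoring functions over a shrinking candidate set.

    Each stage keeps only the top-scoring fraction of the remaining classes,
    so later (possibly more expensive) classifiers face fewer alternatives
    while the true class stays in the set with high probability.
    """
    for score in stages:
        if len(candidates) <= 1:
            break  # nothing left to restrict
        ranked = sorted(candidates, key=lambda c: score(x, c), reverse=True)
        keep = max(1, int(len(ranked) * keep_ratio))
        candidates = ranked[:keep]
    return candidates


# Toy usage: each stage's scorer prefers classes numerically close to the input.
closeness = lambda x, c: -abs(c - x)
print(sequential_restrict(3, list(range(10)), [closeness] * 3))  # → [3]
```

Halving the candidate set at each stage means the number of classes the final, most discriminating classifier must separate shrinks geometrically with the number of stages.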


Sequential Automatic Search of a Subset of Classifiers in Multiclass Learning

A method called Sequential Automatic Search of a Subset of Classifiers is proposed; it utilizes classifiers sequentially, restricting the number of competing classes while maintaining the presence of the true class in the candidate set.

Sequential Dynamic Classification for Large Scale Multiclass Problems

A novel ensemble learning approach, where classifiers are dynamically chosen among a pre-trained set of classifiers and are iteratively combined in order to achieve an efficient trade-off between inference complexity and classification accuracy.

Algorithms and analysis for multi-category classification

This dissertation develops a learning framework that provides a unified view of complex-output problems, and introduces a new algorithm that uses coresets to find a provably approximate maximum-margin linear separating hyperplane.

Learning Question Classifiers

A hierarchical classifier is learned that is guided by a layered semantic hierarchy of answer types, and eventually classifies questions into fine-grained classes.

Constraint Classification for Multiclass Classification and Ranking

A meta-algorithm for learning in this framework via a single linear classifier in high dimension is presented, along with empirical and theoretical evidence that constraint classification improves over existing methods of multiclass classification.

Learning question classifiers: the role of semantic information

It is shown that, in the context of question classification, augmenting the input of the classifier with appropriate semantic category information results in significant improvements to classification accuracy.

Revision Learning and its Application to Part-of-Speech Tagging

This paper uses a high-capacity model to revise the output of a low-cost model, applies this method to English part-of-speech tagging and Japanese morphological analysis, and shows that the method performs well.

The Role of Semantic Information in Learning Question Classifiers

It is shown that, in the context of question classification, augmenting the input of the classifier with appropriate semantic category information results in significant improvements to classification accuracy.

Multiclass pattern classification using neural networks

  • G. Ou, Y. Murphey, L. Feldkamp
  • Computer Science
    Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004.
  • 2004
This work discusses major approaches used in neural networks for classifying multiple classes, using either a system of multiple neural networks or a single neural network, and discusses various learning algorithms, including one-against-all, one-against-one, and p-against-q.
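The one-against-all decision rule mentioned above reduces to picking the most confident binary scorer; the class labels and hand-written scorers below are invented for illustration, standing in for trained binary networks:

```python
def one_vs_all_predict(x, scorers):
    """Pick the class whose 'this class vs. the rest' scorer is most confident.

    scorers maps each class label to a real-valued scoring function; in the
    neural-network setting, each would be one trained binary network.
    """
    return max(scorers, key=lambda label: scorers[label](x))


# Toy usage: each scorer reads one feature as its class's confidence.
scorers = {"cat": lambda x: x[0], "dog": lambda x: x[1], "bird": lambda x: x[2]}
print(one_vs_all_predict((0.2, 0.9, 0.1), scorers))  # → dog
```

One-against-one differs only in the reduction: it trains a classifier per pair of classes and aggregates their votes instead of comparing K one-vs-rest scores.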

Multi-class Prediction Using Stochastic Logic Programs

This paper demonstrates that PILP approaches (e.g., SLPs) have advantages for solving multi-class prediction problems with the help of learned probabilities, and shows that SLPs outperform ILP plus a majority-class predictor in both predictive accuracy and result interpretability.



The Use of Classifiers in Sequential Inference

A Markovian approach is developed that extends standard HMMs to allow the use of a rich observation structure and of general classifiers to model state-observation dependencies, and constraint satisfaction formalisms are extended to this setting as well.

Part of Speech Tagging Using a Network of Linear Separators

An architecture and an on-line learning algorithm are presented that utilize a mistake-driven algorithm for multi-class prediction (selecting the part of speech of a word), and it is shown that the algorithm performs comparably to the best known algorithms for POS tagging.

Classification by Pairwise Coupling

A strategy for polychotomous classification that involves estimating class probabilities for each pair of classes, and then coupling the estimates together is discussed, similar to the Bradley-Terry method for paired comparisons.
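The coupling step can be sketched with an iterative rescaling in the style of Hastie and Tibshirani; this minimal NumPy version assumes equal pairwise sample sizes and is a sketch, not a faithful reimplementation of the paper:

```python
import numpy as np


def couple_pairwise(R, iters=200):
    """Recover class probabilities p from pairwise estimates R[i, j] ≈ P(i | i or j).

    Iteratively rescales p so that the implied pairwise probabilities
    mu[i, j] = p[i] / (p[i] + p[j]) match the observed R, then renormalizes.
    """
    K = R.shape[0]
    p = np.full(K, 1.0 / K)
    off = ~np.eye(K, dtype=bool)  # ignore the meaningless diagonal entries
    for _ in range(iters):
        mu = p[:, None] / (p[:, None] + p[None, :])
        p = p * np.where(off, R, 0).sum(axis=1) / np.where(off, mu, 0).sum(axis=1)
        p /= p.sum()
    return p


# Consistent toy input: R built directly from true probabilities (0.5, 0.3, 0.2),
# so the coupling should recover them.
true_p = np.array([0.5, 0.3, 0.2])
R = true_p[:, None] / (true_p[:, None] + true_p[None, :])
print(np.round(couple_pairwise(R), 3))
```

When the pairwise estimates come from real classifiers they are generally inconsistent, and the iteration instead converges to the distribution whose implied pairwise probabilities best match them.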

Learning to Resolve Natural Language Ambiguities: A Unified Approach

  • D. Roth
  • Computer Science
  • 1998
An extensive experimental comparison of the approach with other methods on several well studied lexical disambiguation tasks such as context-sensitive spelling correction, prepositional phrase attachment and part of speech tagging shows that it outperforms other methods tried for these tasks or performs comparably to the best.

Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers

A general method for combining the classifiers generated on the binary problems is proposed, and a general empirical multiclass loss bound is proved given the empirical loss of the individual binary learning algorithms.

Learning in Natural Language

A coherent view of when and why learning approaches work in this context may help to develop better learning methods and an understanding of the role of learning in natural language inferences.

A Bayesian Hybrid Method for Context-sensitive Spelling Correction

This paper takes Yarowsky's work as a starting point, applying decision lists to the problem of context-sensitive spelling correction, and finds that further improvements can be obtained by taking into account not just the single strongest piece of evidence but all the available evidence.

Solving Multiclass Learning Problems via Error-Correcting Output Codes

It is demonstrated that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
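The decoding side of error-correcting output codes fits in a few lines; the 4-bit codebook below is a made-up example, not one of the paper's designed codes:

```python
def ecoc_decode(bits, codebook):
    """Return the class whose codeword has the smallest Hamming distance
    to the vector of binary classifier outputs, so that a few erroneous
    bits can still be corrected."""
    return min(
        codebook,
        key=lambda label: sum(b != c for b, c in zip(bits, codebook[label])),
    )


# Toy codebook; in practice, each bit position is one trained binary classifier.
codebook = {"a": [0, 0, 0, 0], "b": [1, 1, 0, 0], "c": [1, 0, 1, 1]}
print(ecoc_decode([1, 0, 1, 1], codebook))  # → c
```

The error-correcting power comes from the codebook design: the larger the minimum Hamming distance between codewords, the more individual binary classifiers can be wrong before the decoded class changes.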

Training Products of Experts by Minimizing Contrastive Divergence

A product of experts (PoE) is an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary, but fitting a PoE to data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule.

Machine learning

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.