Corpus ID: 2724983

Constraint Classification for Multiclass Classification and Ranking

  • Sariel Har-Peled, Dan Roth, Dav Zimak
The constraint classification framework captures many flavors of multiclass classification, including winner-take-all multiclass classification, multilabel classification, and ranking. We present a meta-algorithm for learning in this framework that learns via a single linear classifier in high dimension. We discuss distribution-independent as well as margin-based generalization bounds and present empirical and theoretical evidence showing that constraint classification benefits over existing…
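The reduction the abstract describes can be illustrated with a minimal sketch of the Kesler-style construction, assuming a plain perceptron as the binary learner in the expanded space; the function names and toy setup below are illustrative, not taken from the paper.

```python
import numpy as np

def kesler_expand(x, y, k):
    """Embed each constraint 'class y beats class j' as a vector in R^{k*d}.

    The expanded example places +x in block y and -x in block j; a single
    linear separator w in R^{k*d} with w . v > 0 for every such v encodes
    the whole winner-take-all multiclass rule."""
    d = len(x)
    out = []
    for j in range(k):
        if j == y:
            continue
        v = np.zeros(k * d)
        v[y * d:(y + 1) * d] = x
        v[j * d:(j + 1) * d] = -x
        out.append(v)
    return out

def train_constraint_perceptron(X, Y, k, epochs=20):
    """Binary perceptron on the expanded constraints; one weight block per class."""
    d = X.shape[1]
    w = np.zeros(k * d)
    for _ in range(epochs):
        for x, y in zip(X, Y):
            for v in kesler_expand(x, y, k):
                if w @ v <= 0:      # constraint violated: perceptron update
                    w += v
    return w.reshape(k, d)          # row c = weight vector of class c

def predict(W, x):
    return int(np.argmax(W @ x))
```

On linearly separable data the usual perceptron convergence argument applies directly in the expanded space, which is the point of the reduction.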


Better multiclass classification via a margin-optimized single binary problem

Decision tree and instance-based learning for label ranking

New methods for label ranking are introduced that complement and improve upon existing approaches; they extend two methods used extensively for classification and regression, namely instance-based learning and decision tree induction.

A Simple Instance-Based Approach to Multilabel Classification Using the Mallows Model

This paper proposes a new instance-based approach to multilabel classification, which is based on calibrated label ranking, a recently proposed framework that unifies multilabel classification and label ranking.

Algorithms and analysis for multi-category classification

This dissertation develops a learning framework that provides a unified view of complex-output problems, and introduces a new algorithm for learning maximum-margin classifiers that uses coresets to find a provably approximate solution to the maximum-margin linear separating hyperplane.

A preference model for structured supervised learning tasks

  • F. Aiolli
  • Computer Science
    Fifth IEEE International Conference on Data Mining (ICDM'05)
  • 2005
The preference model introduced in this paper gives a natural framework and a principled solution for a broad class of supervised learning problems with structured predictions, such as predicting…

Label Ranking Algorithms: A Survey

This paper surveys the algorithms used in label ranking, a complex prediction task where the goal is to map instances to a total order over a finite set of predefined labels.

Supervised Learning as Preference Optimization: A General Framework and its Applications

The general preference learning model (GPLM), which is based on a principled large-margin approach, gives a flexible way to codify cost functions for all the above problems as sets of linear preferences.

A Preference Optimization Based Unifying Framework for Supervised Learning Problems

This chapter proposes a general preference learning model (GPLM), which gives an easy way to translate any supervised learning problem and the associated cost functions into sets of preferences to learn from.

Alternative Decomposition Techniques for Label Ranking

This paper discusses and proposes alternative reduction techniques that decompose the original problem into binary classification related to pairs of labels and that can take into account label correlation during the learning process.


Constraint Classification: A New Approach to Multiclass Classification

This paper provides the first optimal, distribution-independent bounds for many multiclass learning algorithms, including winner-take-all (WTA), and presents a learning algorithm that learns via a single linear classifier in high dimension.

A Sequential Model for Multi-Class Classification

A sequential learning model is suggested that utilizes classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidates set.

Classification by Pairwise Coupling

A strategy for polychotomous classification is discussed that involves estimating class probabilities for each pair of classes and then coupling the estimates together, similar to the Bradley-Terry method for paired comparisons.
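The coupling step can be sketched roughly as follows, assuming the Hastie-Tibshirani iterative procedure with equal pair counts; `couple_pairwise` and the setup are illustrative names, not this paper's code.

```python
import numpy as np

def couple_pairwise(R, iters=300):
    """Turn pairwise estimates r_ij ~ P(class i | i or j) into a single
    class-probability vector p via iterative scaling (equal pair counts
    assumed, a simplification of Hastie & Tibshirani's coupling).

    R is a k x k matrix with R[i, j] + R[j, i] = 1 off the diagonal."""
    k = R.shape[0]
    p = np.full(k, 1.0 / k)
    for _ in range(iters):
        for i in range(k):
            num = sum(R[i, j] for j in range(k) if j != i)
            den = sum(p[i] / (p[i] + p[j]) for j in range(k) if j != i)
            p[i] *= num / den   # rescale p_i until model pairs match R
        p /= p.sum()
    return p
```

When the pairwise estimates are exactly consistent with a Bradley-Terry model, the true class probabilities are a fixed point of this update; with noisy estimates it returns the best coupled fit.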

Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers

A general method for combining the classifiers generated on the binary problems is proposed, and a general empirical multiclass loss bound is proved given the empirical loss of the individual binary learning algorithms.

Solving Multiclass Learning Problems via Error-Correcting Output Codes

It is demonstrated that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
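The decoding side of this method can be sketched as follows: each class is assigned a binary codeword, one binary learner is trained per bit, and a test point is assigned to the class whose codeword is nearest in Hamming distance. The 4-class, 7-bit codebook below is an illustrative example, not one from the paper.

```python
import numpy as np

def ecoc_decode(bits, codebook):
    """Map a vector of binary-classifier outputs to the class whose
    codeword is nearest in Hamming distance.

    codebook: k x L matrix of {0, 1} codewords, one row per class.
    bits:     length-L vector of {0, 1} bit predictions."""
    dists = np.sum(codebook != np.asarray(bits), axis=1)
    return int(np.argmin(dists))

# Illustrative 4-class code with 7 bits; every pair of rows differs in
# 4 positions, so any single-bit error is corrected at decode time.
codebook = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])
```

The error-correcting property is exactly the point of the paper's construction: individual binary learners may be wrong on some bits without changing the decoded class.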

Ultraconservative Online Algorithms for Multiclass Problems

This paper studies online classification algorithms for multiclass problems in the mistake-bound model and introduces the notion of ultraconservativeness, along with a family of additive ultraconservative algorithms in which each algorithm updates its prototypes by finding a feasible solution to a set of linear constraints that depend on the instantaneous similarity scores.
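A single update of the "uniform" member of such a family can be sketched as follows, assuming linear prototypes and dot-product similarity scores; the function name and setup are illustrative, not the paper's own code.

```python
import numpy as np

def ultraconservative_step(M, x, y):
    """One uniform ultraconservative update: only prototypes whose
    similarity score ties or beats the true class are touched.

    M: k x d matrix of class prototypes; x: example; y: true label."""
    scores = M @ x
    errors = [r for r in range(len(M)) if r != y and scores[r] >= scores[y]]
    if errors:                       # mistake (or tie): redistribute x
        M[y] += x                    # promote the true class
        for r in errors:
            M[r] -= x / len(errors)  # demote only the offending classes
    return M
```

The "ultraconservative" property is visible in the error set: prototypes of classes that already score below the true class are left untouched.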

Support vector machines for multi-class pattern recognition

A formulation of the SVM is proposed that enables a multi-class pattern recognition problem to be solved in a single optimisation and a similar generalization of linear programming machines is proposed.

On the Learnability and Design of Output Codes for Multiclass Problems

This paper discusses for the first time the problem of designing output codes for multiclass problems, and gives a time- and space-efficient algorithm for solving the quadratic program.

Mistake-Driven Learning in Text Categorization

This work studies three mistake-driven learning algorithms for text categorization, a typical task of this nature, and presents an algorithm, a variation of Littlestone's Winnow, that performs significantly better than any other algorithm tested on this task using a similar feature set.

Characterizations of Learnability for Classes of {0, ..., n}-Valued Functions

A general scheme for extending the VC-dimension to the case n > 1 is presented, which defines a wide variety of notions of dimension in which all these variants of the VC-dimension, previously introduced in the context of learning, appear as special cases.