Pairwise Preference Learning and Ranking

@inproceedings{Frnkranz2003PairwisePL,
  title={Pairwise Preference Learning and Ranking},
  author={Johannes F{\"u}rnkranz and Eyke H{\"u}llermeier},
  booktitle={ECML},
  year={2003}
}
We consider supervised learning of a ranking function, which is a mapping from instances to total orders over a set of labels (options). The training information consists of examples with partial (and possibly inconsistent) information about their associated rankings. From these, we induce a ranking function by reducing the original problem to a number of binary classification problems, one for each pair of labels. The main objective of this work is to investigate the trade-off between the… 
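
The decomposition described in the abstract is easy to sketch in code. The following is a minimal illustration, assuming scikit-learn-style binary base learners; the class name PairwiseLabelRanker and its interface are hypothetical, not the authors' implementation. Each label pair (i, j) gets its own binary problem, built only from training examples whose partial preference information mentions that pair, and a ranking is predicted by soft voting over the pairwise models.

# Minimal sketch of pairwise label ranking (hypothetical interface).
from itertools import combinations

import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression


class PairwiseLabelRanker:
    """One binary classifier per label pair; rank labels by soft voting."""

    def __init__(self, n_labels, base_learner=None):
        self.n_labels = n_labels
        self.base_learner = base_learner or LogisticRegression(max_iter=1000)
        self.models = {}

    def fit(self, X, preferences):
        # preferences[k] is a set of (i, j) pairs meaning "label i is
        # preferred to label j" for example X[k]; the information may be
        # partial and need not mention every pair.
        for i, j in combinations(range(self.n_labels), 2):
            rows, targets = [], []
            for k, prefs in enumerate(preferences):
                if (i, j) in prefs:
                    rows.append(k); targets.append(1)
                elif (j, i) in prefs:
                    rows.append(k); targets.append(0)
            if len(set(targets)) == 2:  # need both outcomes to train
                self.models[(i, j)] = clone(self.base_learner).fit(
                    X[rows], np.array(targets))
        return self

    def predict_ranking(self, x):
        # Collect soft votes from every trained pairwise model and
        # return the labels ordered from most to least preferred.
        votes = np.zeros(self.n_labels)
        for (i, j), model in self.models.items():
            p = model.predict_proba(x.reshape(1, -1))[0, 1]
            votes[i] += p
            votes[j] += 1.0 - p
        return np.argsort(-votes)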
Learning Label Preferences: Ranking Error Versus Position Error
A key advantage of such a decomposition, namely the fact that the learner can be adapted to different loss functions by using different ranking procedures on the same underlying order relations, is elaborated on.
Label ranking by learning pairwise preferences
Ranking by pairwise comparison: a note on risk minimization
A potential application of the ranking by pairwise comparison method in (qualitative) fuzzy classification is outlined, and some extensions necessary in this context are identified.
On Position Error and Label Ranking through Iterated Choice
This paper elaborates on a key advantage of such a decomposition, namely the fact that the learner can be adapted to different loss functions by using different ranking procedures on the same underlying order relations.
On Loss Functions in Label Ranking and Risk Minimization by Pairwise Learning
A ranking procedure called ranking through iterated choice, together with an efficient pairwise implementation thereof, is proposed, and empirical evidence is offered in favor of its superior performance as a risk minimizer for the position error.
Learning Preference Models from Data: On the Problem of Label Ranking and Its Variants
This paper elaborates on a key advantage of such an approach, namely the fact that the learner can be adapted to different loss functions by using different ranking procedures on the same underlying order relations.
A Reduction of Label Ranking to Multiclass Classification
This paper presents a framework for label ranking using a decomposition into a set of multiclass problems, and discusses theoretical properties of the proposed method in terms of accuracy, error correction, and computational complexity.
Preference Learning and Ranking by Pairwise Comparison
This chapter provides an overview of recent work on preference learning and ranking via pairwise classification and explains how to approach different preference learning problems within the framework of LPC.
Comparison of Ranking Procedures in Pairwise Preference Learning
A method for learning valued preference structures is presented, using a natural extension of so-called pairwise classification; the learned structures can be used to induce a ranking, that is, a linear ordering of a given set of alternatives.
Preference Learning Using the Choquet Integral: The Case of Multipartite Ranking
We propose a novel method for preference learning or, more specifically, learning to rank, where the task is to learn a ranking model that takes a subset of alternatives as input and produces a
…

References

Learning to Order Things
An on-line algorithm for learning preference functions that is based on Freund and Schapire's "Hedge" algorithm is considered, and it is shown that the problem of finding the ordering that agrees best with a learned preference function is NP-complete.
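
The NP-completeness noted above is usually sidestepped with an approximation. The sketch below is a generic greedy heuristic, written here for illustration rather than as that paper's own algorithm: it repeatedly emits the item whose net preference over the remaining items is largest.

# Greedy ordering from a learned preference function (illustrative only).
def greedy_order(items, pref):
    # pref(a, b) in [0, 1] estimates the degree to which a should precede b.
    remaining = set(items)
    ordering = []
    while remaining:
        best = max(remaining,
                   key=lambda a: sum(pref(a, b) - pref(b, a)
                                     for b in remaining if b != a))
        ordering.append(best)
        remaining.remove(best)
    return ordering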
Ranking Learning Algorithms: Using IBL and Meta-Learning on Accuracy and Time Results
A meta-learning method that uses a k-Nearest Neighbor algorithm to identify the datasets that are most similar to the one at hand and leads to significantly better rankings than the baseline ranking method.
Connectionist Learning of Expert Preferences by Comparison Training
A new training paradigm, called the "comparison paradigm," is introduced for tasks in which a network must learn to choose a preferred pattern from a set of n alternatives, based on examples of human
Empirical Analysis of Predictive Algorithms for Collaborative Filtering
Several algorithms designed for collaborative filtering or recommender systems are described, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods, to compare the predictive accuracy of the various methods in a set of representative problem domains.
A Family of Additive Online Algorithms for Category Ranking
A new family of topic-ranking algorithms for multi-labeled documents is described; these algorithms achieve state-of-the-art results and outperform topic-ranking adaptations of Rocchio's algorithm and of the Perceptron algorithm.
Similarity of personal preferences: Theoretical foundations and empirical analysis
Combining Pairwise Classifiers with Stacking
This paper tries to generalize the voting procedure by replacing it with a trainable classifier, i.e., the use of a meta-level classifier that is trained to arbiter among the conflicting predictions of the binary classifiers.
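
The idea of arbitrating among conflicting pairwise predictions with a trained meta-classifier can be sketched briefly. The helper below is hypothetical and assumes scikit-learn-style models: the outputs of already-fitted pairwise classifiers become meta-features for a second-level learner.

# Illustrative stacking over pairwise classifiers (hypothetical helper).
import numpy as np
from sklearn.linear_model import LogisticRegression


def fit_meta_classifier(pairwise_models, X_meta, y_meta):
    # Each column of the meta-feature matrix is one pairwise model's
    # predicted probability; the arbiter learns to resolve conflicts.
    Z = np.column_stack([m.predict_proba(X_meta)[:, 1]
                         for m in pairwise_models])
    return LogisticRegression(max_iter=1000).fit(Z, y_meta)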
Utility Elicitation as a Classification Problem
This work attempts to identify the new user's utility function based on classification relative to a database of previously collected utility functions by identifying clusters of utility functions that minimize an appropriate distance measure.
Preference Elicitation via Theory Refinement
We present an approach to elicitation of user preference models in which assumptions can be used to guide but not constrain the elicitation process. We demonstrate that when domain knowledge is
Round Robin Classification
An empirical evaluation of round robin classification, implemented as a wrapper around the Ripper rule learning algorithm, on 20 multi-class datasets from the UCI database repository shows that the technique is very likely to improve Ripper's classification accuracy without having a high risk of decreasing it.
…