# Learning to Order Things

```bibtex
@article{Cohen1997LearningTO,
  title   = {Learning to Order Things},
  author  = {William W. Cohen and Robert E. Schapire and Yoram Singer},
  journal = {ArXiv},
  year    = {1997},
  volume  = {abs/1105.5464}
}
```

There are many applications in which it is desirable to order rather than classify instances. Here we consider a two-stage approach: first, an on-line algorithm based on Freund and Schapire's "Hedge" algorithm learns a preference function; in the second stage, new instances are ordered so as to maximize agreement with the learned preference function. We show that the problem of finding the ordering that agrees best with a learned preference function is NP-complete.
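Because the best-agreement ordering problem is NP-complete, orderings are found heuristically in practice. Below is a minimal sketch (not the paper's exact algorithm) of a greedy heuristic: repeatedly emit the instance with the largest net preference weight over the instances that remain. The `pref` callable is a hypothetical stand-in for a learned preference function.

```python
# Greedy ordering against a learned preference function.
# pref(u, v) in [0, 1] is the (assumed) learned degree to which
# u should be ranked ahead of v.

def greedy_order(items, pref):
    """Order `items` greedily by net preference weight."""
    remaining = set(items)
    ordering = []
    while remaining:
        # potential(v) = sum over remaining u of pref(v, u) - pref(u, v);
        # pick the instance the preference function most wants first.
        best = max(
            remaining,
            key=lambda v: sum(pref(v, u) - pref(u, v)
                              for u in remaining if u != v),
        )
        ordering.append(best)
        remaining.remove(best)
    return ordering

# Toy preference: smaller numbers should come first.
order = greedy_order([3, 1, 2], lambda u, v: 1.0 if u < v else 0.0)
```

Each greedy step takes time linear in the number of remaining instances, so the whole procedure is O(n^2) preference evaluations, a practical trade against the intractable exact problem.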

## 990 Citations

### Predicting Partial Orders: Ranking with Abstention

- Computer Science · ECML/PKDD
- 2010

A general approach to ranking with partial abstention is proposed, along with evaluation metrics for measuring the correctness and completeness of predictions; the approach is able to achieve a reasonable trade-off between these two criteria.

### Learning to rank order - a distance-based approach

- Computer Science · SGAI Conf.
- 2008

A distance-based approach to ordering is presented, in which the ordering of alternatives is predicted on the basis of their distances to a query; it is shown that a trained distance leads in general to a higher degree of agreement than an untrained distance.

### Pairwise Preference Learning and Ranking

- Computer Science · ECML
- 2003

The main objective of this work is to investigate the trade-off between the quality of the induced ranking function and the computational complexity of the algorithm, both depending on the amount of preference information given for each example.

### Learning Label Preferences: Ranking Error Versus Position Error

- Computer Science · IDA
- 2005

A key advantage of such a decomposition, namely the fact that the learner can be adapted to different loss functions by using different ranking procedures on the same underlying order relations, is elaborated on.

### On Position Error and Label Ranking through Iterated Choice

- Computer Science · LWA
- 2005

This paper elaborates on a key advantage of such a decomposition, namely the fact that the learner can be adapted to different loss functions by using different ranking procedures on the same underlying order relations.

### Learning From Ordered Sets and Applications in Collaborative Ranking

- Computer Science · ACML
- 2012

Here, a probabilistic log-linear model is constructed over sets of ordered subsets and shown to be competitive against state-of-the-art methods on large-scale collaborative filtering tasks.

### Learning Preference Models from Data: On the Problem of Label Ranking and Its Variants

- Computer Science
- 2005

This paper elaborates on a key advantage of such an approach, namely the fact that the learner can be adapted to different loss functions by using different ranking procedures on the same underlying order relations.

### A practical divide-and-conquer approach for preference-based learning to rank

- Computer Science · 2015 Conference on Technologies and Applications of Artificial Intelligence (TAAI)
- 2015

This work proposes a practical algorithm to speed up the ranking step while maintaining ranking accuracy, which employs a divide-and-conquer strategy that mimics merge-sort, and its time complexity is relatively low when compared to other preference-based LTR algorithms.

### Label ranking by learning pairwise preferences

- Computer Science · Artif. Intell.
- 2008

### Learning to Rank based on Analogical Reasoning

- Computer Science · AAAI
- 2018

This paper proposes a new approach to object ranking based on principles of analogical reasoning and applies this pattern as a main building block and combines it with ideas and techniques from instance-based learning and rank aggregation.

## References

Showing 1-10 of 62 references.

### An Efficient Boosting Algorithm for Combining Preferences

- Computer Science · J. Mach. Learn. Res.
- 1998

This work describes and analyzes an efficient algorithm called RankBoost for combining preferences based on the boosting approach to machine learning, and gives theoretical results describing the algorithm's behavior both on the training data and on new test data not seen during training.

### Learning Quickly When Irrelevant Attributes Abound: A New Linear-Threshold Algorithm

- Computer Science · 28th Annual Symposium on Foundations of Computer Science (sfcs 1987)
- 1987

This work presents one such algorithm that learns disjunctive Boolean functions, along with variants for learning other classes of Boolean functions.

### A decision-theoretic generalization of on-line learning and an application to boosting

- Computer Science · EuroCOLT
- 1995

The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and it is shown that the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems.
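The multiplicative weight-update rule at the heart of this paper (the Hedge algorithm, which the ordering method above builds on) can be sketched in a few lines. The loss sequence and parameter names below are illustrative, not from the paper.

```python
# Minimal sketch of the Hedge algorithm's multiplicative weight update.

def hedge(losses, beta=0.5):
    """Run Hedge over a sequence of rounds.

    losses[t][i] in [0, 1] is the loss of expert i at round t;
    beta in (0, 1) controls how aggressively erring experts are
    down-weighted. Returns the final normalized weight vector.
    """
    n = len(losses[0])
    weights = [1.0] * n
    for round_losses in losses:
        # Each expert's weight is multiplied by beta ** loss, so
        # higher loss means a stronger penalty.
        weights = [w * beta ** l for w, l in zip(weights, round_losses)]
    total = sum(weights)
    return [w / total for w in weights]

# Expert 0 is always right (loss 0), expert 1 always wrong (loss 1):
# after a few rounds the weight mass concentrates on expert 0.
final = hedge([[0.0, 1.0]] * 5)
```

The standard guarantee is that the algorithm's cumulative loss is not much worse than that of the single best expert in hindsight, which is what makes Hedge usable for combining many weak preference functions.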

### Approximation Algorithms for NP-Hard Problems

- Computer Science · SIGA
- 1997

This book reviews the design techniques for approximation algorithms and the developments in this area since its inception about three decades ago, as well as the "closeness" to optimum that is achievable in polynomial time.

### The weighted majority algorithm

- Computer Science · 30th Annual Symposium on Foundations of Computer Science
- 1989

A simple and effective method, based on weighted voting, is introduced for constructing a compound algorithm in a situation in which a learner faces a sequence of trials, and the goal of the learner is to make few mistakes.
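The weighted-voting scheme described above can be sketched as follows; the two-function interface is a simplified assumption, not the paper's exact formulation. The compound algorithm predicts by weighted vote and then multiplicatively penalizes every expert that erred.

```python
# Minimal sketch of weighted-majority voting with multiplicative penalties.

def weighted_majority_predict(predictions, weights):
    """Predict 0 or 1 by weighted vote over binary expert predictions."""
    vote_for_one = sum(w for p, w in zip(predictions, weights) if p == 1)
    vote_for_zero = sum(w for p, w in zip(predictions, weights) if p == 0)
    return 1 if vote_for_one >= vote_for_zero else 0

def weighted_majority_update(predictions, weights, outcome, beta=0.5):
    """Multiply by beta the weight of every expert that predicted wrongly."""
    return [w * (beta if p != outcome else 1.0)
            for p, w in zip(predictions, weights)]

# One trial: three experts, equal initial weights.
weights = [1.0, 1.0, 1.0]
preds = [1, 1, 0]
guess = weighted_majority_predict(preds, weights)          # weighted vote
weights = weighted_majority_update(preds, weights, outcome=0)
```

Because every mistaken expert loses a constant fraction of its weight, the compound learner's mistake count is bounded by a constant times the best expert's mistakes plus a log term in the number of experts.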

### Using the Future to Sort Out the Present: Rankprop and Multitask Learning for Medical Risk Evaluation

- Medicine · NIPS
- 1995

Two methods are presented that together improve the accuracy of backprop nets on a pneumonia risk assessment problem by 10-50%.

### Fab: content-based, collaborative recommendation

- Computer Science · CACM
- 1997

It is explained how a hybrid system can incorporate the advantages of both methods while inheriting the disadvantages of neither, and how the particular design of the Fab architecture brings two additional benefits.

### Recommender systems

- Computer Science · CACM
- 1997

This special section includes descriptions of five recommender systems, in which people provide recommendations as inputs that the system then aggregates and directs to appropriate recipients, and which may combine evaluations with content analysis.

### A Machine Learning Architecture for Optimizing Web Search Engines

- Computer Science
- 1999

A wide range of heuristics for adjusting document rankings based on the special HTML structure of Web documents are described, including a novel one inspired by reinforcement learning techniques for propagating rewards through a graph which can be used to improve a search engine's rankings.