Learning to Order Things

@article{Cohen1997LearningTO,
  title={Learning to Order Things},
  author={William W. Cohen and Robert E. Schapire and Yoram Singer},
  journal={Journal of Artificial Intelligence Research},
  volume={10},
  pages={243--270},
  year={1999}
}
There are many applications in which it is desirable to order rather than classify instances. [...] Here we consider an on-line algorithm for learning preference functions that is based on Freund and Schapire's "Hedge" algorithm. In the second stage, new instances are ordered so as to maximize agreement with the learned preference function. We show that the problem of finding the ordering that agrees best with a learned preference function is NP-complete.
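The two-stage scheme can be illustrated in miniature. This is a minimal sketch, not the paper's implementation: `hedge_update` is a generic Hedge-style multiplicative update over hypothetical pairwise "expert" predicates, and `greedy_order` is a greedy ordering of the kind used as a stand-in for the NP-complete exact search.

```python
def hedge_update(weights, losses, beta=0.9):
    """One Hedge step: scale each expert's weight by beta**loss, renormalize."""
    new = [w * (beta ** l) for w, l in zip(weights, losses)]
    z = sum(new)
    return [w / z for w in new]

def pref(u, v, experts, weights):
    """Combined preference PREF(u, v): weighted vote that u should precede v."""
    return sum(w for e, w in zip(experts, weights) if e(u, v))

def greedy_order(items, experts, weights):
    """Greedy approximation to the agreement-maximizing ordering:
    repeatedly emit the remaining item with the highest net
    'potential' (outgoing minus incoming preference weight)."""
    remaining = list(items)
    order = []
    while remaining:
        def potential(u):
            return sum(pref(u, v, experts, weights) - pref(v, u, experts, weights)
                       for v in remaining if v != u)
        best = max(remaining, key=potential)
        order.append(best)
        remaining.remove(best)
    return order
```

With two toy experts, `lambda u, v: u < v` and `lambda u, v: u > v`, a few Hedge updates shift weight to whichever expert agrees with the observed preferences, and the greedy pass then emits items in the learned order.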
Predicting Partial Orders: Ranking with Abstention
A general approach to ranking with partial abstention is proposed, along with evaluation metrics for measuring the correctness and completeness of predictions; the approach achieves a reasonable trade-off between these two criteria.
Learning to rank order - a distance-based approach
A distance-based approach to ordering, in which the ordering of alternatives is predicted on the basis of their distances to a query; the trained distance generally leads to a higher degree of agreement than an untrained distance.
Learning Label Preferences: Ranking Error Versus Position Error
Elaborates on a key advantage of such a decomposition, namely that the learner can be adapted to different loss functions by using different ranking procedures on the same underlying order relations.
Pairwise Preference Learning and Ranking
The main objective of this work is to investigate the trade-off between the quality of the induced ranking function and the computational complexity of the algorithm, both of which depend on the amount of preference information given for each example.
On Position Error and Label Ranking through Iterated Choice
This paper elaborates on a key advantage of such a decomposition, namely that the learner can be adapted to different loss functions by using different ranking procedures on the same underlying order relations.
Learning From Ordered Sets and Applications in Collaborative Ranking
A probabilistic log-linear model is constructed over ordered subsets of items and shown to be competitive against state-of-the-art methods on large-scale collaborative filtering tasks.
Learning Preference Models from Data: On the Problem of Label Ranking and Its Variants
The term "preference learning" refers to the application of machine learning methods for inducing preference models from empirical data. In the recent literature, corresponding problems appear in [...]
A practical divide-and-conquer approach for preference-based learning to rank
  • Han Yang, Hsuan-Tien Lin
  • Mathematics, Computer Science
  • 2015 Conference on Technologies and Applications of Artificial Intelligence (TAAI)
  • 2015
This work proposes a practical algorithm that speeds up the ranking step while maintaining ranking accuracy; it employs a divide-and-conquer strategy that mimics merge-sort, and its time complexity is relatively low compared to other preference-based LTR algorithms.
Label ranking by learning pairwise preferences
This work shows that a simple (weighted) voting strategy minimizes risk with respect to the well-known Spearman rank correlation, and compares RPC (ranking by pairwise comparison) to existing label ranking methods, which are based on scoring individual labels instead of comparing pairs of labels.
Preference Learning
The editors first offer a thorough introduction, including a systematic categorization according to learning task and learning technique, along with a unified notation. The first half of the book is organized into parts on applications of preference learning in multiattribute domains, information retrieval, and recommender systems.

References

Showing 1-10 of 64 references.
The Weighted Majority Algorithm
A simple and effective method based on weighted voting, called the Weighted Majority Algorithm, is introduced for constructing a compound algorithm that is robust in the presence of errors in the data.
Learning Quickly When Irrelevant Attributes Abound: A New Linear-Threshold Algorithm
  • N. Littlestone
  • Mathematics
  • 28th Annual Symposium on Foundations of Computer Science (sfcs 1987)
  • 1987
Valiant (1984) and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss incremental learning of these functions. We consider a setting in [...]
A decision-theoretic generalization of on-line learning and an application to boosting
The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting; the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases but applicable to a considerably more general class of learning problems.
Approximation Algorithms for NP-Hard Problems
This book reviews the design techniques for approximation algorithms, the developments in this area since its inception about three decades ago, and the "closeness" to optimum that is achievable in polynomial time.
Learning a Preference Predicate
It is observed that a numeric evaluation function often provides only an indirect mechanism for evaluating the relative preference of states; a more general formulation for evaluating preferences is suggested that is not limited to the comparison of numeric estimates.
Using the Future to Sort Out the Present: Rankprop and Multitask Learning for Medical Risk Evaluation
Two methods are presented that together improve the accuracy of backprop nets on a pneumonia risk assessment problem by 10-50%.
Fab: content-based, collaborative recommendation
It is explained how a hybrid system can incorporate the advantages of both methods while inheriting the disadvantages of neither, and how the particular design of the Fab architecture brings two additional benefits.
Recommender systems
This special section includes descriptions of five recommender systems, in which people provide recommendations as inputs, which the system then aggregates and directs to appropriate recipients; some of the systems also combine evaluations with content analysis.
A Machine Learning Architecture for Optimizing Web Search Engines
A wide range of heuristics for adjusting document rankings based on the special HTML structure of Web documents is described, including a novel one inspired by reinforcement learning techniques for propagating rewards through a graph, which can be used to improve a search engine's rankings.
Measuring retrieval effectiveness based on user preference of documents
A new measure of system performance is suggested, based on the distance between the user ranking and the system ranking; it uses only the relative order of documents and therefore conforms to the valid use of an ordinal scale for measuring relevance.