Pranking with Ranking

@inproceedings{Crammer2001PrankingWR,
  title={Pranking with Ranking},
  author={Koby Crammer and Yoram Singer},
  booktitle={NIPS},
  year={2001}
}
We discuss the problem of ranking instances. In our framework each instance is associated with a rank or a rating, which is an integer from 1 to k. Our goal is to find a rank-prediction rule that assigns each instance a rank which is as close as possible to the instance's true rank. We describe a simple and efficient online algorithm, analyze its performance in the mistake bound model, and prove its correctness. We describe two sets of experiments, with synthetic data and with the EachMovie… 
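The online algorithm the abstract refers to (PRank) maintains a single weight vector together with k-1 ordered thresholds that carve the score line into k rank intervals; on a mistake, both the weights and the violated thresholds are shifted. A minimal sketch of this perceptron-style scheme, using plain Python lists (the function names and toy representation are illustrative, not the paper's code):

```python
def prank_predict(w, b, x):
    """Predicted rank = smallest r (1-based) whose threshold b_r exceeds the score.

    w: weight vector, b: k-1 non-decreasing thresholds, x: feature vector.
    """
    score = sum(wi * xi for wi, xi in zip(w, x))
    for r, br in enumerate(b):
        if score - br < 0:
            return r + 1
    return len(b) + 1  # score cleared every threshold: top rank k

def prank_update(w, b, x, y):
    """One PRank step for instance x with true rank y (1..k).

    For each threshold r, the target side y_r is +1 if y > r, else -1.
    Thresholds on the wrong side of the score contribute tau_r = y_r;
    w moves by (sum of taus) * x and each violated b_r moves by -tau_r.
    """
    score = sum(wi * xi for wi, xi in zip(w, x))
    taus = []
    for r, br in enumerate(b):
        y_r = 1 if y > r + 1 else -1              # desired side of threshold r
        taus.append(y_r if (score - br) * y_r <= 0 else 0)
    total = sum(taus)
    w = [wi + total * xi for wi, xi in zip(w, x)]
    b = [br - tau for br, tau in zip(b, taus)]
    return w, b
```

For example, with k = 3 ranks a single update on a positive instance of rank 3 pushes both thresholds down and the weight up, after which the instance is ranked correctly.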
Online Ranking by Projecting
TLDR
The goal is to find a rank-prediction rule that assigns each instance a rank that is as close as possible to the instance's true rank.
Proceedings of the NIPS 2005 Workshop on Learning to Rank
In label-ranking, the goal is to learn a mapping from instances to rankings (total orders) over a fixed set of labels. Hitherto existing approaches to label-ranking implicitly operate on an…
Optimization of Ranking Measures
Web page ranking requires the optimization of sophisticated performance measures. Current approaches only minimize measures indirectly related to performance scores. We present a new approach which…
Direct Learning to Rank and Rerank
TLDR
It is proved that a relaxed version of the "exact" problem has the same optimal solution, and an empirical analysis is provided that shows the feasibility of "exact" reranking algorithms based on mathematical programming.
Direct optimization of ranking measures for learning to rank models
TLDR
A novel learning algorithm is presented, DirectRank, which directly and exactly optimizes ranking measures without resorting to any upper bounds or approximations, and a probabilistic framework for document-query pairs is constructed to maximize the likelihood of the objective permutation of top-$\tau$ documents.
Generalization Bounds for k-Partite Ranking
We study generalization properties of ranking algorithms in the setting of the k-partite ranking problem. In the k-partite ranking problem, one is given examples of instances labeled with one of k…
Magnitude-preserving ranking algorithms
TLDR
This paper describes and analyzes several algorithms for ranking when one wishes not just to predict pairwise ordering accurately but also to preserve the magnitude of the preferences (the difference between ratings), extending previously known stability results to non-bipartite, magnitude-preserving ranking algorithms.
Ranking with decision tree
TLDR
A new splitting rule is presented that introduces a metric, i.e., an impurity measure, to construct decision trees for ranking tasks, which outperforms both perceptron-based ranking and the classification tree algorithms in terms of both accuracy and speed.
Direct Optimization of Ranking Measures
TLDR
Key to the approach is that during training the ranking problem can be viewed as a linear assignment problem, which can be solved by the Hungarian Marriage algorithm; at test time a sort operation suffices, as the algorithm assigns a relevance score to every document-query pair.
A fast algorithm for learning large scale preference relations
TLDR
Experiments on public benchmarks for ordinal regression and collaborative filtering show that the proposed algorithm is as accurate as the best available methods in terms of ranking accuracy, when trained on the same data, and is several orders of magnitude faster.

References

Showing 1–10 of 15 references
Learning to Order Things
TLDR
An on-line algorithm for learning preference functions that is based on Freund and Schapire's "Hedge" algorithm is considered, and it is shown that the problem of finding the ordering that agrees best with a learned preference function is NP-complete.
An Efficient Boosting Algorithm for Combining Preferences
TLDR
This work describes and analyze an efficient algorithm called RankBoost for combining preferences based on the boosting approach to machine learning, and gives theoretical results describing the algorithm's behavior both on the training data, and on new test data not seen during training.
Ultraconservative Online Algorithms for Multiclass Problems
TLDR
This paper studies online classification algorithms for multiclass problems in the mistake bound model and introduces the notion of ultraconservativeness, along with a family of additive ultraconservative algorithms in which each algorithm updates its prototypes by finding a feasible solution for a set of linear constraints that depend on the instantaneous similarity-scores.
Advances in Large Margin Classifiers
TLDR
This book provides an overview of recent developments in large margin classifiers, examines connections with other methods, and identifies strengths and weaknesses of the method, as well as directions for future research.
Large Margin Classification Using the Perceptron Algorithm
TLDR
A new algorithm for linear classification which combines Rosenblatt's perceptron algorithm with Helmbold and Warmuth's leave-one-out method is introduced, which is much simpler to implement, and much more efficient in terms of computation time.
Statistical learning theory
TLDR
Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
Mathematical models in the social sciences
TLDR
Republication of this book provides social science and mathematics students with a text that is the analogue of mathematical methods textbooks used in the study of the physical sciences and engineering.
Large margin rank boundaries for ordinal regression
… Schapire, and Yoram Singer. Learning to Order Things. Journal of Artificial Intelligence Research, 1999.