McRank: Learning to Rank Using Multiple Classification and Gradient Boosting


Abstract We cast the ranking problem as (1) multiple classification (“Mc”) and (2) multiple ordinal classification, both of which lead to computationally tractable learning algorithms for relevance ranking in Web search. We consider the DCG criterion (discounted cumulative gain), a standard quality measure in information retrieval. Our approach is motivated by the fact that perfect classifications result in perfect DCG scores and that the DCG errors are bounded by classification errors. We propose using the Expected Relevance to convert class probabilities into ranking scores. The class probabilities are learned using a gradient boosting tree algorithm. Evaluations on large-scale datasets show that our approach can improve on LambdaRank [5] and the regression-based ranker [6] in terms of the (normalized) DCG scores. An efficient implementation of the boosting tree algorithm is also presented.
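The two quantities the abstract leans on — the DCG criterion and the Expected Relevance scoring rule — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the common gain/discount form of DCG, (2^r − 1)/log2(1 + rank), and assumes the identity mapping T(k) = k when converting per-document class probabilities p_{i,k} into a scalar score S_i = Σ_k T(k)·p_{i,k}; the paper may use different choices.

```python
import math

def dcg(relevances, k=None):
    # DCG with the common gain 2^r - 1 and log2(1 + rank) discount
    # (assumed form; the paper's exact definition may differ).
    if k is not None:
        relevances = relevances[:k]
    return sum((2**r - 1) / math.log2(i + 2) for i, r in enumerate(relevances))

def expected_relevance(class_probs, values=None):
    # Convert per-document class probabilities p_{i,k} into a scalar
    # ranking score S_i = sum_k T(k) * p_{i,k}.
    # Here T(k) = k by default (an assumption for illustration).
    num_classes = len(class_probs[0])
    if values is None:
        values = list(range(num_classes))
    return [sum(v * p for v, p in zip(values, probs)) for probs in class_probs]

# Example: three documents, three relevance classes (0, 1, 2).
probs = [[0.1, 0.2, 0.7],
         [0.6, 0.3, 0.1],
         [0.2, 0.5, 0.3]]
scores = expected_relevance(probs)
# Rank documents by descending expected relevance.
order = sorted(range(len(scores)), key=lambda i: -scores[i])
```

In this sketch the final ranking is obtained purely by sorting on the expected-relevance scores; the class probabilities themselves would come from the gradient boosting tree classifier the abstract describes.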


Cite this paper

@inproceedings{Li2007McRankLT,
  title     = {McRank: Learning to Rank Using Multiple Classification and Gradient Boosting},
  author    = {Ping Li and Christopher J. C. Burges and Qiang Wu},
  booktitle = {NIPS},
  year      = {2007}
}