Learning to rank with (a lot of) word features

Abstract

In this article we present Supervised Semantic Indexing (SSI), which defines a class of nonlinear (quadratic) models that are discriminatively trained to directly map from the word content in a query-document or document-document pair to a ranking score. Like Latent Semantic Indexing (LSI), our models take account of correlations between words (synonymy, polysemy). However, unlike LSI, our models are trained from a supervised signal directly on the ranking task of interest, which we argue is the reason for our superior results. As the query and target texts are modeled separately, our approach is easily generalized to different retrieval tasks, such as cross-language retrieval or online advertising placement. Dealing with models on all pairs of word features is computationally challenging. We propose several improvements to our basic model for addressing this issue, including low-rank (but diagonal-preserving) representations, correlated feature hashing, and sparsification. We provide an empirical study of all these methods on retrieval tasks based on Wikipedia documents as well as an Internet advertisement task. We obtain state-of-the-art performance while providing realistically scalable methods.
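To make the kind of model described above concrete, the following is a minimal numpy sketch of a bilinear (quadratic-in-word-features) ranking score with a low-rank, diagonal-preserving weight matrix. The function name, variable names, and dimensions are illustrative assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def ssi_score(q, d, U, V):
    """Score a (query, document) pair of bag-of-words vectors.

    Sketch of a score of the form f(q, d) = q^T W d with the low-rank,
    diagonal-preserving factorization W = U^T V + I: the U^T V term models
    correlations between pairs of words (synonymy, polysemy), while the
    identity term keeps the classic exact-word-match score q . d.
    """
    low_rank = (U @ q) @ (V @ d)   # q^T U^T V d, computed in the latent space
    diag = q @ d                   # q^T I d = word-overlap term
    return low_rank + diag

# Toy usage with random sparse TF-IDF-like vectors (illustrative sizes only)
rng = np.random.default_rng(0)
n_words, n_latent = 10_000, 50
U = rng.normal(scale=0.01, size=(n_latent, n_words))
V = rng.normal(scale=0.01, size=(n_latent, n_words))
q = rng.random(n_words) * (rng.random(n_words) < 0.001)   # sparse query
d = rng.random(n_words) * (rng.random(n_words) < 0.01)    # sparse document
print(ssi_score(q, d, U, V))
```

In this parameterization the number of parameters grows linearly rather than quadratically in the vocabulary size, which is one way such models can be kept tractable on large word-feature spaces; the hashing and sparsification mentioned in the abstract address the same scalability issue.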

DOI: 10.1007/s10791-009-9117-9


Cite this paper

@article{Bai2009LearningTR,
  title   = {Learning to rank with (a lot of) word features},
  author  = {Bing Bai and Jason Weston and David Grangier and Ronan Collobert and Kunihiko Sadamasa and Yanjun Qi and Olivier Chapelle and Kilian Q. Weinberger},
  journal = {Information Retrieval},
  year    = {2009},
  volume  = {13},
  pages   = {291--314}
}