Large scale training methods for linear RankRLS

Abstract

RankRLS is a recently proposed state-of-the-art method for learning ranking functions by minimizing a pairwise ranking error. The method can be trained by solving a system of linear equations. In this work, we investigate the use of conjugate gradient and regularization by iteration for training linear RankRLS on very large, high-dimensional, but sparse data sets. Such data are typically encountered, for example, in applications built on natural language data. We show that even though RankRLS training optimizes a pairwise loss function, the computational complexity of the proposed methods, when learning from data with utility scores, is O(tms), where t is the required number of iterations, m the number of training examples, and s the average number of non-zero features per example. In addition, the complexity of learning from pairwise preferences is O(tms + tl), where l is the number of observed preferences in the training set. The experiments further confirm that restricting the number of conjugate gradient iterations has a regularizing effect and that the number of iterations yielding optimal results is, in practice, a small constant. Thus, regularization by iteration, while providing performance similar to the better-known Tikhonov regularization, yields a tremendous reduction in the computational cost of training and parameter selection.
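The sketch below illustrates how the O(tms) complexity for learning from utility scores can be achieved: for a single global ranking, the pairwise squared loss equals 2(Xw - y)^T L (Xw - y) with the Laplacian L = mI - 11^T, so applying L reduces to a centering operation and each conjugate gradient iteration needs only two sparse matrix-vector products. This is a minimal illustration under those assumptions, not the paper's implementation; the function name train_rankrls and the fixed iteration count are hypothetical, and early stopping of SciPy's cg routine stands in for regularization by iteration.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

def train_rankrls(X, y, iterations=10):
    """Minimal sketch (hypothetical name): fit w minimizing the pairwise
    squared ranking loss sum_{i,j} ((y_i - y_j) - (w.x_i - w.x_j))^2,
    regularized by stopping conjugate gradient after a fixed number of
    iterations.

    X : scipy.sparse matrix of shape (m, n), sparse feature vectors
    y : ndarray of shape (m,), utility scores
    """
    m, n = X.shape

    # With L = m*I - 1*1^T, L*v = m*v - sum(v) is a centering operation
    # computable in O(m), so pairs are never formed explicitly.
    def laplacian(v):
        return m * v - v.sum()

    # Each matvec with X^T L X costs O(m*s) for s non-zero features
    # per example, giving O(t*m*s) total cost for t iterations.
    def matvec(w):
        return X.T @ laplacian(X @ w)

    A = LinearOperator((n, n), matvec=matvec, dtype=np.float64)
    b = X.T @ laplacian(y)

    # Capping maxiter implements regularization by iteration; cg returns
    # the current approximate solution after `iterations` steps.
    w, _ = cg(A, b, maxiter=iterations)
    return w

if __name__ == "__main__":
    # Illustrative usage with random sparse data.
    X = sp.random(1000, 500, density=0.01, format="csr")
    y = np.random.rand(1000)
    w = train_rankrls(X, y, iterations=10)
    scores = X @ w  # rank examples by predicted utility

The choice of a LinearOperator avoids ever materializing the n-by-n matrix X^T L X, which is what keeps both memory and per-iteration time linear in the number of non-zero entries of X.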


Cite this paper

@inproceedings{Airola2010LargeST,
  title  = {Large scale training methods for linear RankRLS},
  author = {Antti Airola and Tapio Pahikkala and Tapio Salakoski},
  year   = {2010}
}