Support Vector Regression Machines
This work compares support vector regression (SVR) with bagging, a committee regression technique based on regression trees, and with ridge regression performed in feature space. SVR is expected to have an advantage in high-dimensional settings because its optimization does not depend on the dimensionality of the input space.
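As an illustration of the technique this entry describes, the sketch below fits an epsilon-insensitive SVR with an RBF kernel using scikit-learn on a hypothetical toy problem (the data and hyperparameters are assumptions, not from the paper); points whose residual falls inside the epsilon tube do not become support vectors.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical toy 1-D regression problem: a noisy sine curve.
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 5, 80)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=80)

# RBF-kernel SVR: the epsilon-insensitive tube ignores small residuals,
# so only points on or outside the tube end up as support vectors.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
pred = model.predict(X)
print(len(model.support_), "support vectors out of", len(X))
```

Because the kernel operates on inner products in feature space, the optimization cost is governed by the number of training points rather than the input dimensionality, which is the property the summary highlights.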
Learning to rank using gradient descent
RankNet is introduced, an implementation of these ideas that uses a neural network to model the underlying ranking function; test results on toy data and on data from a commercial internet search engine are presented.
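The core of the approach is a pairwise cross-entropy loss on score differences. The function below is a minimal sketch of that loss in the form commonly written for RankNet (with the scaling factor sigma set to 1); the variable names are my own, not the paper's.

```python
import numpy as np

def ranknet_loss(s_i, s_j, S_ij):
    """Pairwise RankNet-style cross-entropy loss.

    s_i, s_j: model scores for documents i and j.
    S_ij: +1 if i should rank above j, -1 if below, 0 if tied.
    The modeled probability that i beats j is sigmoid(s_i - s_j).
    """
    diff = s_i - s_j
    # Standard simplification of -Pbar*log(P) - (1-Pbar)*log(1-P)
    # with target Pbar = (1 + S_ij) / 2:
    return 0.5 * (1 - S_ij) * diff + np.log1p(np.exp(-diff))
```

Correctly ordered pairs with a large score margin incur near-zero loss, while misordered pairs are penalized roughly linearly in the margin, which gives a smooth surrogate that gradient descent can optimize.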
Advances in kernel methods: support vector learning
An introduction to support vector learning and a roadmap. Part 1, Theory: three remarks on the support vector method of function estimation (Vladimir Vapnik); generalization performance of support vector …
From RankNet to LambdaRank to LambdaMART: An Overview
RankNet, LambdaRank, and LambdaMART have proven to be very successful algorithms for solving real-world ranking problems; because the details are spread across several papers and reports, this report gives a self-contained, detailed, and complete description of them.
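The step from RankNet to LambdaRank in this line of work is to scale each pairwise gradient by the change the swap would cause in the target IR metric. The sketch below computes a LambdaRank-style lambda for one document pair, scaling the RankNet sigmoid term by |DeltaNDCG|; it is an illustrative reconstruction under my own sign and naming conventions, not code from the report.

```python
import numpy as np

def dcg(rels):
    """Discounted cumulative gain of a relevance list in rank order."""
    return np.sum((2.0 ** rels - 1) / np.log2(np.arange(2, len(rels) + 2)))

def lambda_ij(scores, rels, i, j):
    """LambdaRank-style gradient for a pair where rels[i] > rels[j]:
    the RankNet sigmoid term scaled by |DeltaNDCG| of swapping i and j."""
    order = np.argsort(-scores)              # current ranking by score
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(scores))
    # |DeltaNDCG|: change in NDCG if documents i and j exchange positions.
    gain_i, gain_j = 2.0 ** rels[i] - 1, 2.0 ** rels[j] - 1
    disc_i = 1.0 / np.log2(ranks[i] + 2)
    disc_j = 1.0 / np.log2(ranks[j] + 2)
    ideal = dcg(np.sort(rels)[::-1])
    delta_ndcg = abs((gain_i - gain_j) * (disc_i - disc_j)) / ideal
    # Sigmoid term from the RankNet pairwise loss.
    rho = 1.0 / (1.0 + np.exp(scores[i] - scores[j]))
    return -rho * delta_ndcg
```

Summing these lambdas per document yields the per-document gradients; LambdaMART then fits gradient-boosted trees to those lambdas instead of backpropagating through a neural network.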
A Tutorial on Support Vector Machines for Pattern Recognition
  • C. Burges
  • Mathematics, Computer Science
  • Data Mining and Knowledge Discovery
  • 1 June 1998
Several arguments supporting the observed high accuracy of SVMs are reviewed, and numerous examples and proofs of most of the key theorems are given.
Input space versus feature space in kernel-based methods
The geometry of feature space is reviewed, and the connection between feature space and input space is discussed by dealing with the question of how one can, given some vector in feature space, find a preimage in input space.
MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text
MCTest is presented, a freely available set of stories and associated questions intended for research on the machine comprehension of text; it requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension.
Adapting boosting for information retrieval measures
This work presents a new ranking algorithm that combines the strengths of two previous methods, boosted tree classification and LambdaRank; it shows how to find the optimal linear combination of any two rankers and uses this method to solve the line-search problem exactly during boosting.
Learning to Rank with Nonsmooth Cost Functions
A class of simple, flexible algorithms called LambdaRank is proposed; it avoids these difficulties by working with implicit cost functions, is illustrated using neural network models, and can be extended to any nonsmooth, multivariate cost function.
Comparing support vector machines with Gaussian kernels to radial basis function classifiers
The results show that on the United States Postal Service database of handwritten digits, the SV machine achieves the highest recognition accuracy, followed by the hybrid system; the SV approach is thus not only theoretically well founded but also superior in a practical application.