
- Christopher J. C. Burges
- Data Mining and Knowledge Discovery
- 1998

The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe…
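The linear soft-margin SVM described in the tutorial can be illustrated with a short sketch. This is my own illustration, not code from the paper: a linear SVM trained by subgradient descent on the hinge loss, where the toy data, learning rate, and penalty `C` are arbitrary choices.

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=1000):
    """Linear soft-margin SVM via hinge-loss subgradient descent.

    X: (n, d) inputs; y: (n,) labels in {-1, +1}.
    Minimizes 0.5*||w||^2 + C * sum(max(0, 1 - y*(w.x + b))).
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                       # points violating the margin
        grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Linearly separable toy data (separable case from the tutorial's setup)
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = train_linear_svm(X, y)
preds = np.sign(X @ w + b)
```

Note this gradient-based sketch stands in for the quadratic-programming dual formulation the tutorial actually derives; it is only meant to make the hinge-loss objective concrete.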

- Christopher J. C. Burges, Tal Shaked, +4 authors Gregory N. Hullender
- ICML
- 2005

We investigate using gradient descent methods for learning ranking functions; we propose a simple probabilistic cost function, and we introduce RankNet, an implementation of these ideas using a neural network to model the underlying ranking function. We present test results on toy data and on data from a commercial internet search engine.
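The probabilistic cost at the heart of RankNet can be sketched in a few lines. Assuming the paper's pairwise setup (target probability 1 that item i ranks above item j), with a `sigma` scale parameter of my own naming:

```python
import numpy as np

def ranknet_pair_cost(s_i, s_j, sigma=1.0):
    """Pairwise cross-entropy cost when item i should rank above item j
    (target probability 1): C = log(1 + exp(-sigma * (s_i - s_j))).
    The modeled probability is the logistic of the score difference."""
    return np.log1p(np.exp(-sigma * (s_i - s_j)))

c_good = ranknet_pair_cost(3.0, 1.0)   # preferred item scored higher
c_bad = ranknet_pair_cost(1.0, 3.0)    # pair scored in the wrong order
```

The cost is smooth in the scores, which is what makes gradient descent on a neural scoring function possible.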

A new regression technique based on Vapnik's concept of support vectors is introduced. We compare support vector regression (SVR) with a committee regression technique (bagging) based on regression trees and ridge regression done in feature space. On the basis of these experiments, it is expected that SVR will have advantages in high dimensionality space…
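The loss underlying Vapnik's support vector regression is the epsilon-insensitive loss, which can be written down directly (a minimal sketch; the `eps` value is an arbitrary choice):

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Vapnik's epsilon-insensitive loss: errors inside the eps-tube
    around the target cost nothing; larger errors grow linearly."""
    return np.maximum(0.0, np.abs(y_true - y_pred) - eps)
```

Ignoring small errors is what yields the sparse support-vector solution, in contrast to the squared loss used by ridge regression.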

- Bernhard Schölkopf, Sebastian Mika, +4 authors Alexander J. Smola
- IEEE Trans. Neural Networks
- 1999

This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods.…
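One concrete piece of that feature-space geometry can be checked numerically: for a Gaussian kernel, k(x, x) = 1 for every x, so the image of input space under the feature map lies on the unit sphere, and squared feature-space distances follow from kernel evaluations alone. A small sketch (my own illustration; the `gamma` value and points are arbitrary):

```python
import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel k(x, z) = exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

x = np.array([3.0, -1.0])
z = np.array([0.0, 2.0])

norm_sq = rbf_kernel(x, x)  # ||Phi(x)||^2 = k(x, x) = 1 for all x
# Squared feature-space distance, computed purely from kernel values:
dist_sq = rbf_kernel(x, x) + rbf_kernel(z, z) - 2 * rbf_kernel(x, z)
```

Since dist_sq = 2 - 2*k(x, z), no two Gaussian-kernel feature images are ever more than sqrt(2) apart on the sphere, however far apart the inputs are.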

LambdaMART is the boosted tree version of LambdaRank, which is based on RankNet. RankNet, LambdaRank, and LambdaMART have proven to be very successful algorithms for solving real world ranking problems: for example an ensemble of LambdaMART rankers won Track 1 of the 2010 Yahoo! Learning To Rank Challenge. The details of these algorithms are spread across…

- Christopher J. C. Burges, Robert Ragno, Quoc V. Le
- NIPS
- 2006

The quality measures used in information retrieval are particularly difficult to optimize directly, since they depend on the model scores only through the sorted order of the documents returned for a given query. Thus, the derivatives of the cost with respect to the model parameters are either zero, or are undefined. In this paper, we propose a class of…
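The way around the zero-or-undefined derivatives is to specify the gradients ("lambdas") directly on the scores, one contribution per mis-ordered pair. A minimal sketch of that idea, using plain pairwise-logistic lambdas without the retrieval-measure weighting developed in the paper and its successors (function and parameter names are my own):

```python
import numpy as np

def lambda_gradients(scores, relevance, sigma=1.0):
    """For each pair (i, j) with relevance[i] > relevance[j], a pairwise
    'lambda' pushes the preferred document's score up and the other's
    down; per-document gradients are sums of these pairwise forces."""
    n = len(scores)
    lam = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if relevance[i] > relevance[j]:
                rho = 1.0 / (1.0 + np.exp(sigma * (scores[i] - scores[j])))
                lam[i] += sigma * rho      # push score of i up
                lam[j] -= sigma * rho      # push score of j down
    return lam

scores = np.zeros(3)                 # model currently ties all documents
relevance = np.array([2, 1, 0])      # document 0 is the most relevant
lam = lambda_gradients(scores, relevance)
```

Because the lambdas act only on score differences, the forces cancel in aggregate: the most relevant document is pushed up exactly as hard as the least relevant one is pushed down.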

- Christopher J. C. Burges
- ICML
- 1996

A Support Vector Machine (SVM) is a universal learning machine whose decision surface is parameterized by a set of support vectors, and by a set of corresponding weights. An SVM is also characterized by a kernel function. Choice of the kernel determines whether the resulting SVM is a polynomial classifier, a two-layer neural network, a radial basis function…
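The kernels behind those three classifier families are standard and easy to write down (a sketch with hypothetical degree/`gamma`/slope values; swapping the kernel swaps the implicit hypothesis class of the SVM):

```python
import numpy as np

def poly_kernel(x, z, degree=2, c=1.0):
    """Polynomial kernel -> polynomial classifier."""
    return (np.dot(x, z) + c) ** degree

def gauss_kernel(x, z, gamma=0.5):
    """Gaussian kernel -> radial basis function classifier."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def tanh_kernel(x, z, a=0.5, b=-1.0):
    """Sigmoid kernel -> two-layer neural network (for suitable a, b)."""
    return np.tanh(a * np.dot(x, z) + b)
```

In every case the decision surface keeps the same form, a weighted sum of kernel evaluations against the support vectors; only the kernel changes.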


We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited domain datasets, or on solving a more restricted goal (e.g., open-domain relation…

- Qiang Wu, Christopher J. C. Burges, Krysta Marie Svore, Jianfeng Gao
- Information Retrieval
- 2009

We present a new ranking algorithm that combines the strengths of two previous methods: boosted tree classification, and LambdaRank, which has been shown to be empirically optimal for a widely used information retrieval measure. Our algorithm is based on boosted regression trees, although the ideas apply to any weak learners, and it is significantly faster… (More)