- Carl E. Rasmussen, Christopher K. I. Williams
- Adaptive computation and machine learning
- 2009

Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received growing attention in the machine learning community over the past decade. The book provides a long-needed, systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is…

- Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John M. Winn, Andrew Zisserman
- International Journal of Computer Vision
- 2009

The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset have become accepted…

- Christopher M. Bishop, Markus Svensén, Christopher K. I. Williams
- Neural Computation
- 1998

Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis, which is based on a linear transformation between the latent space and the data space. In this article, we introduce a form of nonlinear latent variable model…
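The generative mapping such a nonlinear latent variable model uses can be sketched minimally: latent points on a low-dimensional grid are pushed through fixed nonlinear basis functions and then linearly combined into data space. The grid size, basis centres, and widths below are all illustrative choices, not values from the paper.

```python
import numpy as np

def rbf_features(z, centers, width=0.5):
    # Nonlinear basis: one Gaussian bump per centre, evaluated at latent points z.
    return np.exp(-0.5 * ((z[:, None] - centers[None, :]) / width) ** 2)

rng = np.random.default_rng(0)
z = np.linspace(-1.0, 1.0, 50)         # 1-D latent grid
centers = np.linspace(-1.0, 1.0, 5)    # basis centres (hypothetical choice)
Phi = rbf_features(z, centers)         # (50, 5) nonlinear design matrix

# Linear map from basis activations into a 2-D data space:
# y(z) = Phi(z) @ W traces a curved 1-D manifold in the data space.
W = rng.normal(size=(5, 2))
Y = Phi @ W

print(Y.shape)  # (50, 2)
```

Because the mapping is nonlinear in z but linear in W, fitting W (as the full model does, with added noise and an EM procedure) stays tractable while the embedded manifold can curve.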

- Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John M. Winn, Andrew Zisserman
- International Journal of Computer Vision
- 2014

The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this…

In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on input-dependent features and a “free-form” covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for…
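In the standard multi-task GP formulation, a shared input covariance and a free-form task covariance combine as a Kronecker product over all (task, input) pairs. A minimal sketch, assuming a squared-exponential input kernel and arbitrary example sizes (both are illustrative, not the paper's experimental setup):

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    # Shared input covariance k_x(x, x') (squared-exponential, illustrative).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))        # 6 inputs with 2-D features
Kx = rbf_kernel(X)                 # (6, 6) input covariance

# "Free-form" task covariance: any symmetric PSD matrix, here built
# from a random factor so positive semi-definiteness holds by construction.
L = rng.normal(size=(3, 3))
Kf = L @ L.T                       # (3, 3) covariance over 3 tasks

# Joint covariance over all (task, input) pairs.
K = np.kron(Kf, Kx)                # (18, 18)
print(K.shape)
```

The Kronecker structure is what lets inter-task dependencies be learned without a separate kernel per task pair.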

- Christopher K. I. Williams, David Barber
- IEEE Trans. Pattern Anal. Mach. Intell.
- 1998

We consider the problem of assigning an input vector to one of m classes by predicting P(c|x) for c = 1, …, m. For a two-class problem, the probability of class one given x is estimated by σ(y(x)), where σ(y) = 1/(1 + e^(−y)). A Gaussian process prior is placed on y(x), and is combined with the training data to obtain predictions for new x points. We provide a…
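The construction in the abstract can be sketched directly: draw a latent function y(x) from a GP prior and squash it through the logistic function to get class probabilities. This only illustrates the prior, not the paper's posterior inference (which requires approximation); kernel and grid choices below are assumptions.

```python
import numpy as np

def sigmoid(y):
    # sigma(y) = 1 / (1 + exp(-y)), the logistic link.
    return 1.0 / (1.0 + np.exp(-y))

def rbf_kernel(x, lengthscale=1.0):
    return np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lengthscale**2)

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 20)
K = rbf_kernel(x) + 1e-6 * np.eye(20)          # GP prior covariance on y(x)
y = rng.multivariate_normal(np.zeros(20), K)    # one draw of the latent function
p = sigmoid(y)                                  # class-one probabilities P(c=1|x)

print(p.shape)
```

The GP prior keeps y(x) smooth, so nearby inputs receive similar class probabilities.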

The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions. In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations.…
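The "exact matrix operations" here are the standard GP regression predictive equations: mean k*ᵀ(K + σ²I)⁻¹y and covariance k** − k*ᵀ(K + σ²I)⁻¹k*. A sketch under assumed kernel and toy data:

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale**2)

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 15)
y = np.sin(X) + 0.1 * rng.normal(size=15)    # noisy toy training targets
Xs = np.array([0.0, 1.5])                    # test inputs
noise = 0.1**2

K = rbf_kernel(X, X) + noise * np.eye(15)    # train covariance + noise
Ks = rbf_kernel(Xs, X)                       # test-train covariance
Kss = rbf_kernel(Xs, Xs)                     # test covariance

# Exact predictive mean and covariance, purely via linear algebra.
alpha = np.linalg.solve(K, y)
mean = Ks @ alpha
cov = Kss - Ks @ np.linalg.solve(K, Ks.T)

print(mean, np.diag(cov))
```

No iterative optimisation over weights is needed: for fixed hyperparameters, one linear solve gives the full predictive distribution.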

The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression and show how, by a change of viewpoint, one can see this method as a Gaussian process predictor based on priors over functions rather than on priors over parameters. This leads into a more general discussion of Gaussian processes…
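The change of viewpoint can be checked numerically: Bayesian linear regression with a Gaussian weight prior of variance σ_w² makes exactly the same predictions as a GP with the linear kernel k(x, x′) = σ_w² x·x′. The data and parameter values below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))            # training inputs
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=10)
Xs = rng.normal(size=(4, 3))            # test inputs
sw2, noise = 1.0, 0.01                  # weight-prior variance, noise variance

# Weight-space view: Gaussian posterior over w, predict with its mean.
A = X.T @ X / noise + np.eye(3) / sw2
w_mean = np.linalg.solve(A, X.T @ y / noise)
pred_w = Xs @ w_mean

# Function-space view: GP with linear kernel k(x, x') = sw2 * x . x'.
K = sw2 * X @ X.T + noise * np.eye(10)
Ks = sw2 * Xs @ X.T
pred_f = Ks @ np.linalg.solve(K, y)

print(np.allclose(pred_w, pred_f))  # True: the two views agree
```

The function-space view then generalises immediately: swap the linear kernel for any positive-definite kernel and the same equations give nonlinear GP regression.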

- Matthias W. Seeger, Christopher K. I. Williams, Neil D. Lawrence
- AISTATS
- 2003

We present a method for the sparse greedy approximation of Bayesian Gaussian process regression, featuring a novel heuristic for very fast forward selection. Our method is essentially as fast as an equivalent one which selects the “support” patterns at random, yet it can outperform random selection on hard curve fitting tasks. More importantly, it leads to…
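The forward-selection loop can be sketched with a toy stand-in criterion: repeatedly add the training point the current sparse predictor fits worst. This largest-residual rule is an illustrative substitute, not the paper's fast information-gain heuristic; kernel, data, and set size are all assumptions.

```python
import numpy as np

def rbf_kernel(a, b):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

def greedy_select(X, y, n_select, noise=0.01):
    # Toy forward selection of a "support" set: at each step, add the
    # point with the largest residual under the current sparse fit.
    active = []
    resid = y.copy()
    for _ in range(n_select):
        i = int(np.argmax(np.abs(resid)))
        active.append(i)
        Xa = X[active]
        Ka = rbf_kernel(Xa, Xa) + noise * np.eye(len(active))
        alpha = np.linalg.solve(Ka, y[active])
        resid = y - rbf_kernel(X, Xa) @ alpha   # refit, update all residuals
        resid[active] = 0.0                     # never reselect a chosen point
    return active

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 40)
y = np.sin(X) + 0.05 * rng.normal(size=40)
chosen = greedy_select(X, y, n_select=5)
print(chosen)
```

Each step refits only on the small active set, which is what makes sparse greedy schemes cheap compared with full GP regression on all n points.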