#### Publications

- Carl E. Rasmussen, Christopher K. I. Williams
- Adaptive computation and machine learning
- 2009

Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received growing attention in the machine learning community over the past decade. The book provides a long-needed, systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is…

- Joaquin Quiñonero Candela, Carl E. Rasmussen
- Journal of Machine Learning Research
- 2005

We provide a new unifying view, including all existing proper probabilistic sparse approximations for Gaussian process regression. Our approach relies on expressing the effective prior which the methods are using. This allows new insights to be gained, and highlights the relationship between existing methods. It also allows for a clear theoretically…

- Carl E. Rasmussen
- NIPS
- 1999

In a Bayesian mixture model it is not necessary a priori to limit the number of components to be finite. In this paper an infinite Gaussian mixture model is presented which neatly sidesteps the difficult problem of finding the “right” number of mixture components. Inference in the model is done using an efficient parameter-free Markov chain that relies…
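The central idea — that the number of mixture components need not be fixed in advance — can be illustrated with the Chinese restaurant process, the partition prior implied by a Dirichlet process. This is a minimal sketch of that prior, not the paper's MCMC sampler; `crp_partition` and its parameters are illustrative names:

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from a Chinese restaurant process.

    Item i joins an existing cluster with probability proportional to the
    cluster's size, or starts a new cluster with probability proportional
    to the concentration alpha. The number of clusters is therefore not
    fixed a priori; it grows with the data.
    """
    rng = random.Random(seed)
    counts = []      # counts[k] = number of items already in cluster k
    assignment = []  # cluster index for each item
    for i in range(n):
        # total unnormalised mass: i items at existing tables + alpha for a new one
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1
                assignment.append(k)
                break
        else:
            counts.append(1)               # open a new cluster
            assignment.append(len(counts) - 1)
    return assignment
```

In a full infinite mixture model, each sampled cluster would additionally carry Gaussian component parameters, and the assignments would be resampled by MCMC rather than drawn once from the prior.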

This thesis develops two Bayesian learning methods relying on Gaussian processes and a rigorous statistical approach for evaluating such methods. In these experimental designs the sources of uncertainty in the estimated generalisation performances due to both variation in training and test sets are accounted for. The framework allows for estimation of…

We show that it is possible to extend hidden Markov models to have a countably infinite number of hidden states. By using the theory of Dirichlet processes we can implicitly integrate out the infinitely many transition parameters, leaving only three hyperparameters which can be learned from data. These three hyperparameters define a hierarchical Dirichlet…

Two features distinguish the Bayesian approach to learning models from data. First, beliefs derived from background knowledge are used to select a prior probability distribution for the model parameters. Second, predictions of future observations are made by integrating the model's predictions with respect to the posterior parameter distribution obtained by…

The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions. In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations…
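The "exactly using matrix operations" claim is concrete: for a Gaussian likelihood, the GP predictive mean and variance follow from a Cholesky factorisation and triangular solves. A minimal sketch for 1-D inputs, assuming a squared-exponential kernel (the kernel choice and `gp_predict` signature are illustrative, not taken from the paper):

```python
import numpy as np

def gp_predict(X, y, Xstar, lengthscale=1.0, signal_var=1.0, noise_var=0.1):
    """Exact GP regression predictions via matrix operations.

    Uses a squared-exponential kernel and the standard Cholesky-based
    computation: mean = k(X*,X) K^{-1} y, with predictive variance
    reduced by the information in the training set.
    """
    def k(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

    K = k(X, X) + noise_var * np.eye(len(X))   # noisy training covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    Ks = k(X, Xstar)
    mean = Ks.T @ alpha                        # predictive mean
    v = np.linalg.solve(L, Ks)
    var = signal_var - np.sum(v * v, axis=0)   # predictive variance (noise-free)
    return mean, var
```

The O(n^3) cost of the Cholesky factorisation is exactly what the sparse-approximation papers listed above aim to reduce.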

- Marc Peter Deisenroth, Carl E. Rasmussen
- ICML
- 2011

In this paper, we introduce PILCO, a practical, data-efficient model-based policy search method. PILCO reduces model bias, one of the key problems of model-based reinforcement learning, in a principled way. By learning a probabilistic dynamics model and explicitly incorporating model uncertainty into long-term planning, PILCO can cope with very little data…

- Miguel Lázaro-Gredilla, Joaquin Quiñonero Candela, Carl E. Rasmussen, Aníbal R. Figueiras-Vidal
- Journal of Machine Learning Research
- 2010

We present a new sparse Gaussian Process (GP) model for regression. The key novel idea is to sparsify the spectral representation of the GP. This leads to a simple, practical algorithm for regression tasks. We compare the achievable trade-offs between predictive accuracy and computational requirements, and show that these are typically superior to existing…
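"Sparsifying the spectral representation" means replacing the kernel's continuous spectrum with a finite set of frequencies, which turns GP regression into Bayesian linear regression on trigonometric features. A hedged sketch of that idea (frequencies, ridge constant, and `sparse_spectrum_features` are illustrative choices, not the paper's exact algorithm):

```python
import numpy as np

def sparse_spectrum_features(x, freqs, lengthscale=1.0):
    """Trigonometric basis from a finite set of spectral points: each
    frequency contributes one cosine and one sine feature."""
    z = np.outer(x, freqs) / lengthscale
    m = len(freqs)
    return np.hstack([np.cos(z), np.sin(z)]) / np.sqrt(m)

rng = np.random.default_rng(0)
# for the squared-exponential kernel the spectral density is Gaussian,
# so Gaussian-sampled frequencies are a natural starting point
freqs = rng.standard_normal(20)

X = np.linspace(0.0, 5.0, 40)
y = np.sin(X)
Phi = sparse_spectrum_features(X, freqs)

# regularised linear regression in feature space: cost scales with the
# number of features m, not cubically with the number of data points n
A = Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1])
w = np.linalg.solve(A, Phi.T @ y)
pred = Phi @ w
```

The paper additionally optimises the spectral points as hyperparameters rather than fixing them to random draws.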

- Malte Kuss, Carl E. Rasmussen
- Journal of Machine Learning Research
- 2005

Gaussian process priors can be used to define flexible, probabilistic classification models. Unfortunately exact Bayesian inference is analytically intractable and various approximation techniques have been proposed. In this work we review and compare Laplace’s method and Expectation Propagation for approximate Bayesian inference in the binary Gaussian…
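Laplace's method, one of the two approximations compared here, replaces the non-Gaussian posterior over latent function values with a Gaussian centred at its mode, which is found by Newton iteration. A hedged sketch for a logistic likelihood, using the numerically stable form built around B = I + √W K √W; the function name and toy setup are illustrative:

```python
import numpy as np

def laplace_mode(K, y, n_iter=25):
    """Newton iterations for the posterior mode f_hat in binary GP
    classification with a logistic likelihood; labels y are in {-1, +1}."""
    n = len(y)
    f = np.zeros(n)
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-f))   # sigmoid of latent values
        grad = (y + 1) / 2 - pi         # gradient of log p(y | f)
        W = pi * (1 - pi)               # negative Hessian of log p(y | f)
        sW = np.sqrt(W)
        b = W * f + grad
        B = np.eye(n) + sW[:, None] * K * sW[None, :]
        L = np.linalg.cholesky(B)
        # Newton step written to avoid inverting K directly
        a = b - sW * np.linalg.solve(L.T, np.linalg.solve(L, sW * (K @ b)))
        f = K @ a
    return f
```

At convergence the mode satisfies the stationarity condition f_hat = K ∇ log p(y | f_hat); Expectation Propagation instead matches the posterior's moments site by site, which the paper finds to be the more accurate of the two.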