Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence…
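The bandit strategy summarized above can be sketched compactly: maintain a GP posterior over the payoff function and always evaluate the point maximizing an upper confidence bound μ + β^{1/2}σ. The objective `f`, the RBF kernel, and the β schedule below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def f(x):                          # hypothetical noisy objective (assumed)
    return np.sin(3.0 * x)

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel between 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-2):
    """Exact GP posterior mean/stddev at test points Xs given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    Ks = rbf(X, Xs)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mu = Ks.T @ alpha
    sd = np.sqrt(np.maximum(1.0 - np.sum(v * v, axis=0), 1e-12))
    return mu, sd

rng = np.random.default_rng(0)
grid = np.linspace(-1.0, 1.0, 201)          # finite set of "arms"
X = np.array([-0.9, 0.8])                   # two seed observations
y = f(X) + 1e-2 * rng.standard_normal(2)

for t in range(10):
    mu, sd = gp_posterior(X, y, grid)
    beta = 2.0 * np.log(len(grid) * (t + 1) ** 2)      # growing confidence width
    x_next = grid[np.argmax(mu + np.sqrt(beta) * sd)]  # UCB rule
    X = np.append(X, x_next)
    y = np.append(y, f(x_next) + 1e-2 * rng.standard_normal())
```

After a handful of rounds the sampled points concentrate near the maximizer of the (here smooth, one-dimensional) objective.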

- Neil D. Lawrence, Matthias W. Seeger, Ralf Herbrich
- NIPS
- 2002

We present a framework for sparse Gaussian process (GP) methods which uses forward selection with criteria based on information-theoretic principles, previously suggested for active learning. Our goal is not only to learn d-sparse predictors (which can be evaluated in O(d) rather than O(n), d ≪ n, n the number of training points), but also to perform training…
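Forward selection of the kind described here can be sketched greedily: repeatedly add the candidate whose inclusion removes the most posterior variance, a simple proxy for an entropy-reduction score. This is a stand-in for the paper's information-theoretic criterion, with an assumed RBF kernel and toy inputs:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def posterior_var(X, active, noise=0.1):
    """Predictive variance at every point after conditioning on the active set."""
    Ka = rbf(X[active], X[active]) + noise * np.eye(len(active))
    Ks = rbf(X[active], X)
    return 1.0 - np.sum(Ks * np.linalg.solve(Ka, Ks), axis=0)

def greedy_select(X, d):
    """Pick d active points; each step takes the highest-variance candidate."""
    active = [0]                      # start from an arbitrary point
    for _ in range(d - 1):
        var = posterior_var(X, active)
        var[active] = -np.inf         # never re-select an active point
        active.append(int(np.argmax(var)))
    return active

X = np.linspace(0.0, 10.0, 50)
active = greedy_select(X, 6)
```

Because each step conditions on the points already chosen, the selected set spreads out over the input range rather than clustering.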

- Matthias W. Seeger, Christopher K. I. Williams, Neil D. Lawrence
- AISTATS
- 2003

We present a method for the sparse greedy approximation of Bayesian Gaussian process regression, featuring a novel heuristic for very fast forward selection. Our method is essentially as fast as an equivalent one which selects the “support” patterns at random, yet it can outperform random selection on hard curve fitting tasks. More importantly, it leads to…

- Matthias W. Seeger
- Journal of Machine Learning Research
- 2008

The linear model with sparsity-favouring prior on the coefficients has important applications in many different domains. In machine learning, most methods to date search for maximum a posteriori sparse solutions and neglect to represent posterior uncertainties. In this paper, we address problems of Bayesian optimal design (or experiment planning), for which…
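The optimal-design setting alluded to here is exactly where posterior uncertainty matters beyond a MAP point: each new measurement is chosen where the current posterior is most uncertain. The sketch below uses a plain Gaussian prior on the coefficients instead of the paper's sparsity-favouring prior, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cand, p = 100, 5
candidates = rng.standard_normal((n_cand, p))   # candidate design vectors
sigma2, tau2 = 0.1, 1.0                         # noise and prior variances

A_inv = np.eye(p) / tau2      # posterior precision; grows with each measurement
mask = np.ones(n_cand, dtype=bool)              # candidates still available
chosen = []
for _ in range(10):
    A = np.linalg.inv(A_inv)  # posterior covariance of the coefficients
    # predictive variance x^T A x for every candidate
    pred_var = np.einsum('ij,jk,ik->i', candidates, A, candidates)
    pred_var[~mask] = -np.inf
    j = int(np.argmax(pred_var))
    chosen.append(j)
    mask[j] = False
    A_inv += np.outer(candidates[j], candidates[j]) / sigma2
```

Each rank-one precision update shrinks the posterior covariance, so the total uncertainty (trace of the covariance) falls monotonically as measurements are added.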

- Niranjan Srinivas, Andreas Krause, Sham M. Kakade, Matthias W. Seeger
- IEEE Transactions on Information Theory
- 2012

Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space. We resolve the important open problem of deriving regret bounds for this setting,…

- Matthias W. Seeger
- Int. J. Neural Syst.
- 2004

Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countable or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and very many deep theoretical analyses of various properties are available. This paper gives an introduction to Gaussian processes on a…
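As a concrete companion to this introduction: for the squared-exponential kernel, the GP posterior at test inputs follows in closed form from the joint Gaussian over training and test values. A minimal sketch on made-up 1-D data (all values here are illustrative):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential covariance: k(x, x') = exp(-(x - x')^2 / (2 ls^2))."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

X = np.array([0.0, 1.0, 2.5])          # training inputs
y = np.array([0.0, 1.0, 0.5])          # training targets
Xs = np.linspace(-1.0, 4.0, 101)       # test inputs

K = rbf(X, X) + 1e-6 * np.eye(len(X))  # tiny jitter for numerical stability
Ks = rbf(X, Xs)
mu = Ks.T @ np.linalg.solve(K, y)                        # posterior mean
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)  # posterior variance
```

With near-zero observation noise the posterior mean interpolates the data, the variance collapses at the training inputs, and it reverts toward the prior variance far from them.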

- Matthias W. Seeger
- NIPS
- 1999

We present a variational Bayesian method for model selection over families of kernel classifiers such as Support Vector Machines or Gaussian processes. The algorithm needs no user interaction and is able to adapt a large number of kernel parameters to given data without having to sacrifice training cases for validation. This opens the possibility to use…
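Adapting kernel parameters without a validation set, as described here, works because the marginal likelihood (evidence) trades data fit against model complexity. A toy sketch, using a grid search over an RBF lengthscale rather than the paper's variational scheme (all constants assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
X = np.sort(rng.uniform(0.0, 5.0, n))

def rbf(a, b, ls):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# draw targets from a GP with a known lengthscale, so the evidence can find it
true_ls, noise = 0.8, 1e-2
y = np.linalg.cholesky(rbf(X, X, true_ls) + noise * np.eye(n)) @ rng.standard_normal(n)

def log_evidence(ls):
    """log p(y | X, ls): Gaussian marginal likelihood of a zero-mean GP."""
    L = np.linalg.cholesky(rbf(X, X, ls) + noise * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)

grid = [0.1, 0.2, 0.4, 0.8, 1.6, 3.2]
best_ls = max(grid, key=log_evidence)
```

Too-small lengthscales pay through the fit term (nearby correlated targets are modelled as independent), too-large ones cannot explain the data's variation at all, so the evidence peaks near the generating scale.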

- Yirong Shen, Andrew Y. Ng, Matthias W. Seeger
- NIPS
- 2005

The computation required for Gaussian process regression with n training examples is about O(n³) during training and O(n) for each prediction. This makes Gaussian process regression too slow for large datasets. In this paper, we present a fast approximation method, based on kd-trees, that significantly reduces both the prediction and the training times of…
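The speed-up described can be illustrated by restricting each prediction to the k training points nearest the query; the brute-force neighbour search below is a stand-in for the kd-tree lookup used in the paper, and all constants are assumed:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
X = np.sort(rng.uniform(0.0, 10.0, n))
y = np.sin(X) + 0.05 * rng.standard_normal(n)   # noisy samples of sin(x)

def rbf(a, b, ls=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def local_gp_predict(xq, k=50, jitter=1e-2):
    """GP mean at xq from the k nearest training points: O(k^3), not O(n^3)."""
    idx = np.argpartition(np.abs(X - xq), k)[:k]   # kd-tree stand-in
    Xk, yk = X[idx], y[idx]
    K = rbf(Xk, Xk) + jitter * np.eye(k)
    ks = rbf(Xk, np.array([xq]))[:, 0]
    return float(ks @ np.linalg.solve(K, yk))

pred = local_gp_predict(4.0)
```

With dense 1-D data the local fit tracks the underlying function closely while each prediction touches only k ≪ n points.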