Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
TLDR
This work analyzes GP-UCB, an intuitive upper-confidence-based algorithm, and bounds its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design and obtaining explicit sublinear regret bounds for many commonly used covariance functions.
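As a quick illustration of the selection rule, here is a minimal GP-UCB sketch on a discrete candidate set; the toy objective, kernel lengthscale, and the β_t schedule (one of the choices analyzed in the paper, with δ = 0.1) are illustrative assumptions, not the authors' code.

```python
# Minimal GP-UCB sketch (illustrative, not the authors' code).
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between two 1-d point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Standard GP regression: posterior mean and variance at test points Xs.
    L = np.linalg.cholesky(rbf(X, X) + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    v = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, np.maximum(1.0 - np.sum(v**2, axis=0), 0.0)  # k(x,x)=1 for RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.5 * np.cos(7 * x)       # toy unknown objective
cands = np.linspace(0, 1, 200)                          # finite decision set D
X, y = np.array([0.5]), np.array([f(0.5)])              # one seed observation

for t in range(1, 21):
    # beta_t = 2 log(|D| t^2 pi^2 / (6 delta)) with delta = 0.1.
    beta = 2 * np.log(len(cands) * t**2 * np.pi**2 / 0.6)
    mu, var = gp_posterior(X, y, cands)
    x_next = cands[np.argmax(mu + np.sqrt(beta * var))]  # upper-confidence rule
    X = np.append(X, x_next)
    y = np.append(y, f(x_next) + 0.01 * rng.standard_normal())
```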
Using the Nyström Method to Speed Up Kernel Machines
TLDR
It is shown that an approximation to the eigendecomposition of the Gram matrix can be computed by the Nyström method (which is used for the numerical solution of eigenproblems), and that the computational complexity of a predictor using this approximation is O(m²n).
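A minimal sketch of the Nyström approximation described here, assuming an RBF kernel and random landmark selection: the m × m block is eigendecomposed exactly and extended to approximate eigenpairs of the full n × n Gram matrix.

```python
# Nyström sketch: approximate the leading eigenpairs of an n x n Gram matrix
# from m randomly chosen landmark points (illustrative data and kernel).
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 50
X = rng.standard_normal((n, 3))

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

idx = rng.choice(n, m, replace=False)          # landmark subset
Kmm = rbf(X[idx], X[idx])                      # m x m block, O(m^2) kernel evals
Knm = rbf(X, X[idx])                           # n x m block, O(nm) kernel evals

lam_m, U_m = np.linalg.eigh(Kmm)               # exact eigendecomposition of small block
lam_m, U_m = lam_m[::-1], U_m[:, ::-1]         # sort descending
keep = lam_m > 1e-10

# Nyström extension: approximate eigenvalues/eigenvectors of the full Gram matrix.
lam_n = (n / m) * lam_m[keep]
U_n = np.sqrt(m / n) * Knm @ U_m[:, keep] / lam_m[keep]

K_approx = U_n @ np.diag(lam_n) @ U_n.T        # rank-m approximation of K
```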
Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting
TLDR
This work analyzes an intuitive Gaussian process upper confidence bound algorithm and bounds its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design and obtaining explicit sublinear regret bounds for many commonly used covariance functions.
Fast Sparse Gaussian Process Methods: The Informative Vector Machine
TLDR
A framework for sparse Gaussian process (GP) methods is presented that uses forward selection with criteria based on information-theoretic principles; it allows for Bayesian model selection and is less complex to implement.
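The following is a simplified sketch of information-theoretic forward selection in the spirit of the IVM, restricted to the regression case (the paper handles classification via ADF/EP); kernel, data, and active-set size are illustrative assumptions.

```python
# Greedy forward selection by information gain (regression case, illustrative).
import numpy as np

def rbf(a, b, ls=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(0)
n, noise, d = 500, 0.1, 20
X = rng.uniform(0, 5, n)
y = np.sin(X) + np.sqrt(noise) * rng.standard_normal(n)

K = rbf(X, X)
mu = np.zeros(n)                 # posterior mean at all points
var = np.diag(K).copy()          # posterior marginal variances
M = np.zeros((0, n))             # tracks posterior covariance: cov = K - M.T @ M
active = []

for _ in range(d):
    # Entropy reduction from including site i is 0.5*log(1 + var_i/noise),
    # so the greedy information-gain rule picks the most uncertain candidate.
    score = var.copy()
    score[active] = -np.inf
    i = int(np.argmax(score))
    active.append(i)
    c = K[i] - M.T @ M[:, i]      # posterior covariance between i and every point
    l = var[i] + noise
    mu += c * (y[i] - mu[i]) / l  # rank-one Bayesian update on observing y_i
    var -= c**2 / l
    M = np.vstack([M, c / np.sqrt(l)])
```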
Fast Forward Selection to Speed Up Sparse Gaussian Process Regression
TLDR
A method for sparse greedy approximation of Bayesian Gaussian process regression is presented, featuring a novel heuristic for very fast forward selection. The heuristic leads to a sufficiently stable approximation of the log marginal likelihood of the training data, which can be optimised to adjust a large number of hyperparameters automatically.
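To illustrate the marginal-likelihood part of the recipe, here is a minimal sketch of optimising GP hyperparameters by maximising the log marginal likelihood; it is shown for an exact GP on a toy dataset rather than the paper's sparse greedy approximation.

```python
# Hyperparameter fitting by maximising the GP log marginal likelihood (exact GP, toy data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, 80)
y = np.sin(X) + 0.1 * rng.standard_normal(80)

def neg_log_marginal(theta):
    ls, noise = np.exp(theta)   # log-parametrization keeps both positive
    K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ls**2) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # -log p(y) = 0.5 y^T K^{-1} y + 0.5 log|K| + (n/2) log 2pi
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(X) * np.log(2 * np.pi)

res = minimize(neg_log_marginal, x0=np.log([1.0, 1.0]), method="L-BFGS-B")
ls_opt, noise_opt = np.exp(res.x)
```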
Semiparametric latent factor models
We propose a semiparametric model for regression and classification problems involving multiple response variables. The model makes use of a set of Gaussian processes to model the relationship to the …
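A small sketch of the latent factor construction: each output is a linear mixing of a few independent latent GPs. The kernels and mixing matrix below are arbitrary choices, and only prior sampling is shown.

```python
# Semiparametric latent factor construction: P outputs = Phi @ (Q latent GPs).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
Q, P = 2, 4                                   # latent processes, outputs

def rbf(ls):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-0.5 * d2 / ls**2) + 1e-6 * np.eye(len(x))

# Draw Q independent latent GP sample paths with different lengthscales.
U = np.stack([np.linalg.cholesky(rbf(ls)) @ rng.standard_normal(len(x))
              for ls in (0.05, 0.3)])         # shape (Q, len(x))

Phi = rng.standard_normal((P, Q))             # mixing weights shared across inputs
F = Phi @ U                                   # P correlated output processes
```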
Bayesian Inference and Optimal Design in the Sparse Linear Model
TLDR
This work shows how to obtain a good approximation to Bayesian analysis efficiently, using the Expectation Propagation method, and addresses the problems of optimal design and hyperparameter estimation.
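A minimal sketch of the sequential optimal-design idea: measure next where the information gain ½ log(1 + xᵀΣx/σ²) under the current Gaussian posterior is largest. A conjugate Gaussian prior stands in here for the paper's EP approximation of the sparse (Laplace-prior) posterior.

```python
# Sequential optimal design for a linear model under a Gaussian posterior (illustrative).
import numpy as np

rng = np.random.default_rng(0)
d, sigma2 = 10, 0.1
w_true = np.zeros(d); w_true[:2] = [2.0, -1.5]        # sparse ground truth
cands = rng.standard_normal((200, d))                 # candidate design points

mu = np.zeros(d)                                      # posterior mean
Sigma = np.eye(d)                                     # posterior covariance

for _ in range(15):
    # Information gain of measuring x: 0.5*log(1 + x^T Sigma x / sigma^2).
    gains = 0.5 * np.log1p(np.einsum("nd,de,ne->n", cands, Sigma, cands) / sigma2)
    x = cands[np.argmax(gains)]                       # most informative measurement
    y = x @ w_true + np.sqrt(sigma2) * rng.standard_normal()
    # Rank-one Gaussian posterior update (conjugate linear-Gaussian case).
    Sx = Sigma @ x
    denom = sigma2 + x @ Sx
    mu = mu + Sx * (y - x @ mu) / denom
    Sigma = Sigma - np.outer(Sx, Sx) / denom
```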
PAC-Bayesian Generalisation Error Bounds for Gaussian Process Classification
  • M. Seeger, J. Mach. Learn. Res., 1 March 2003
TLDR
By applying the PAC-Bayesian theorem of McAllester (1999a), this paper proves distribution-free generalisation error bounds for a wide range of approximate Bayesian GP classification techniques, giving a strong learning-theoretical justification for the use of these techniques.
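A small sketch of how a bound of this type is evaluated numerically: invert the binary KL divergence to turn the empirical risk, KL(Q‖P), n, and δ into an upper bound on the true risk. The kl-form constant ln((n+1)/δ) follows Seeger's version of the PAC-Bayesian theorem; the numbers are illustrative.

```python
# Numerically evaluating a kl-form PAC-Bayes bound by inverting the binary KL.
import numpy as np

def kl_binary(q, p):
    eps = 1e-12
    q, p = np.clip(q, eps, 1 - eps), np.clip(p, eps, 1 - eps)
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

def pac_bayes_bound(emp_risk, kl_qp, n, delta=0.05):
    # With prob >= 1 - delta: kl(emp_risk || true_risk) <= (KL(Q||P) + ln((n+1)/delta)) / n.
    rhs = (kl_qp + np.log((n + 1) / delta)) / n
    lo, hi = emp_risk, 1.0        # bisection: largest p with kl(emp_risk || p) <= rhs
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if kl_binary(emp_risk, mid) <= rhs:
            lo = mid
        else:
            hi = mid
    return lo

print(pac_bayes_bound(emp_risk=0.05, kl_qp=20.0, n=5000))   # ~0.08 true-risk bound
```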
Deep State Space Models for Time Series Forecasting
TLDR
A novel approach to probabilistic time series forecasting that combines state space models with deep learning, parametrizing a per-time-series linear state space model with a jointly learned recurrent neural network; the approach compares favorably to the state of the art.
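A schematic sketch of the parametrization: a network maps each series' covariates to the parameters of a linear-Gaussian state space model, and the likelihood is computed by Kalman filtering. The "network" below is an untrained stand-in; in the paper an RNN is learned jointly across series by backpropagating through this filter.

```python
# Covariates -> SSM parameters -> Kalman-filter likelihood (schematic, untrained).
import numpy as np

rng = np.random.default_rng(0)
T = 50
covariates = rng.standard_normal((T, 3))          # per-time-step features for one series
y = np.cumsum(rng.standard_normal(T)) * 0.3       # toy observed series

W = rng.standard_normal((4, 3)) * 0.1             # stand-in for learned RNN weights

def ssm_params(x):
    # Map covariates to (transition a, emission b, state noise q, obs noise r).
    a, b, lq, lr = W @ x
    return np.tanh(a), b, np.exp(lq) * 0.1, np.exp(lr) * 0.1

m, P, loglik = 0.0, 1.0, 0.0                      # scalar latent state for brevity
for t in range(T):
    a, b, q, r = ssm_params(covariates[t])
    m, P = a * m, a * P * a + q                   # predict step
    v = y[t] - b * m                              # innovation
    S = b * P * b + r
    loglik += -0.5 * (np.log(2 * np.pi * S) + v**2 / S)
    k = P * b / S                                 # Kalman gain
    m, P = m + k * v, (1 - k * b) * P             # update step
```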
Model Learning with Local Gaussian Process Regression
TLDR
A local approximation to standard GPR, called local GPR (LGP), is proposed for real-time online model learning; it combines the strengths of both regression methods, i.e., the high accuracy of GPR and the fast speed of locally weighted projection regression (LWPR).
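A minimal sketch of the local-GP idea, assuming simple nearest-center partitioning and distance-based weighting (stand-ins for the paper's clustering and weighting scheme): fit an independent GP per local region and combine nearby local predictions at query time.

```python
# Local GP regression: one small GP per cluster instead of one O(n^3) global GP.
import numpy as np

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(0)
n, k, noise = 2000, 10, 0.01
X = rng.uniform(0, 5, n)
y = np.sin(2 * X) + 0.1 * rng.standard_normal(n)

centers = np.linspace(X.min(), X.max(), k)
assign = np.argmin(np.abs(X[:, None] - centers[None, :]), axis=1)

# Each local model costs O((n/k)^3) to fit rather than O(n^3) for a global GP.
models = []
for c in range(k):
    Xc, yc = X[assign == c], y[assign == c]
    alpha = np.linalg.solve(rbf(Xc, Xc) + noise * np.eye(len(Xc)), yc)
    models.append((Xc, alpha))

def predict(xq):
    # Distance-weighted combination of the local models' predictions.
    preds = np.array([rbf(np.atleast_1d(xq), Xc)[0] @ a for Xc, a in models])
    w = np.exp(-np.abs(xq - centers) / 0.5)
    return (w @ preds) / w.sum()
```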
...