Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
TLDR
We analyze GP-UCB, an intuitive upper-confidence based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design.
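The upper-confidence rule this summary describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an RBF kernel, a 1-D grid of candidate points, and a fixed exploration constant `beta`; all function names are illustrative.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # RBF kernel matrix between 1-D point sets a and b.
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-3):
    # Standard GP regression posterior mean and variance at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = 1.0 - np.sum((Ks @ Kinv) * Ks, axis=1)  # k(x,x) = 1 for RBF
    return mu, np.maximum(var, 0.0)

def gp_ucb(f, grid, T=20, beta=2.0):
    # At each round, query the point maximizing mu + sqrt(beta) * sigma:
    # the mean term exploits, the variance term explores.
    X, y = [grid[len(grid) // 2]], [f(grid[len(grid) // 2])]
    for _ in range(T):
        mu, var = gp_posterior(np.array(X), np.array(y), grid)
        x_next = grid[np.argmax(mu + np.sqrt(beta * var))]
        X.append(x_next)
        y.append(f(x_next))
    return X[int(np.argmax(y))]

grid = np.linspace(0.0, 1.0, 200)
best = gp_ucb(lambda x: -(x - 0.3)**2, grid)
print(best)  # converges near the maximizer 0.3
```

The paper's regret bound concerns how the chosen `beta` schedule controls the gap between these queries and the true optimum, via the maximal information gain of the kernel.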
Using the Nyström Method to Speed Up Kernel Machines
TLDR
A major problem for kernel-based predictors (such as Support Vector Machines and Gaussian processes) is that the amount of computation required to find the solution scales as O(n³), where n is the number of training examples.
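The Nyström idea behind this paper can be sketched in a few lines: approximate the full n×n kernel matrix from a small set of m landmark columns, cutting the cubic cost to O(nm²). This is a hedged illustration with assumed names (uniform landmark sampling, RBF kernel), not the paper's exact scheme.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # RBF kernel matrix between rows of A and rows of B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def nystrom_approx(X, m, gamma=1.0, seed=0):
    # Nystrom approximation K ~= K_nm @ pinv(K_mm) @ K_nm.T
    # built from m uniformly sampled landmark points.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    K_nm = rbf_kernel(X, X[idx], gamma)   # n x m block
    K_mm = K_nm[idx]                      # m x m landmark block
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

X = np.random.default_rng(1).normal(size=(200, 3))
K = rbf_kernel(X, X)
K_hat = nystrom_approx(X, m=50)
print(np.linalg.norm(K - K_hat) / np.linalg.norm(K))  # small relative error
```

Only the m landmark columns are ever factorized, which is why the method speeds up kernel machines when m « n.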
Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting
TLDR
We analyze an intuitive Gaussian process upper confidence bound algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design.
Fast Sparse Gaussian Process Methods: The Informative Vector Machine
TLDR
We present a framework for sparse Gaussian process (GP) methods which uses forward selection with criteria based on information-theoretic principles, previously suggested for active learning.
Fast Forward Selection to Speed Up Sparse Gaussian Process Regression
TLDR
We present a method for the sparse greedy approximation of Bayesian Gaussian process regression, featuring a novel heuristic for very fast forward selection.
Semiparametric latent factor models
TLDR
We propose a semiparametric model for regression and classification problems involving multiple response variables.
Bayesian Inference and Optimal Design in the Sparse Linear Model
TLDR
The sparse linear model has seen many successful applications in Statistics, Machine Learning, and Computational Biology, such as identification of gene regulatory networks from microarray expression data.
Model Learning with Local Gaussian Process Regression
TLDR
We propose a local approximation to standard GPR, called local GPR (LGP), for real-time online model learning, combining the strengths of both regression methods, i.e., the high accuracy of GPR and the fast speed of LWPR.
Deep State Space Models for Time Series Forecasting
TLDR
We present a novel approach to probabilistic time series forecasting that combines state space models with deep learning.
Bayesian Gaussian process models: PAC-Bayesian generalisation error bounds and sparse approximations
  • M. Seeger
  • Computer Science, Mathematics
  • 1 July 2003
TLDR
We provide a distribution-free finite sample bound on the difference between generalisation and empirical (training) error for Gaussian process (GP) models.