Corpus ID: 394337

Sparse Gaussian Processes using Pseudo-inputs

@inproceedings{Snelson2005SparseGP,
  title={Sparse Gaussian Processes using Pseudo-inputs},
  author={Edward Snelson and Zoubin Ghahramani},
  booktitle={NIPS},
  year={2005}
}
We present a new Gaussian process (GP) regression model whose covariance is parameterized by the locations of M pseudo-input points, which we learn by a gradient-based optimization. We take M ≪ N, where N is the number of real data points, and hence obtain a sparse regression method which has O(M²N) training cost and O(M²) prediction cost per test case. We also find hyperparameters of the covariance function in the same joint optimization. The method can be viewed as a Bayesian regression…
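The cost claims in the abstract follow from the low-rank covariance structure induced by the pseudo-inputs. Below is a minimal NumPy sketch (not the authors' code) of FITC-style predictive equations of the kind that underlie this model, assuming an RBF kernel and already-chosen pseudo-input locations; every name here (rbf, spgp_predict, Xm, noise_var) is an illustrative assumption. In the paper, the pseudo-input locations and kernel hyperparameters would instead be learned jointly by gradient-based optimization of the marginal likelihood.

import numpy as np

def rbf(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential covariance between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

def spgp_predict(X, y, Xm, Xs, noise_var=0.1):
    """Sketch of FITC/SPGP-style prediction at test inputs Xs, given N
    training points (X, y) and M pseudo-inputs Xm with M << N."""
    Kmm = rbf(Xm, Xm) + 1e-6 * np.eye(len(Xm))      # M x M (jittered)
    Knm = rbf(X, Xm)                                 # N x M
    Ksm = rbf(Xs, Xm)                                # S x M
    Lmm = np.linalg.cholesky(Kmm)
    A = np.linalg.solve(Lmm, Knm.T)                  # M x N
    # FITC per-point correction: prior variance minus low-rank variance, plus noise
    kdiag = rbf(X, X).diagonal()                     # only the diagonal of K_NN is needed
    lam = kdiag - (A ** 2).sum(0) + noise_var        # length N
    A_scaled = A / np.sqrt(lam)
    B = np.eye(len(Xm)) + A_scaled @ A_scaled.T      # M x M system -> O(M^2 N) to form
    LB = np.linalg.cholesky(B)
    c = np.linalg.solve(LB, A_scaled @ (y / np.sqrt(lam)))
    # Predictive mean and (latent) variance, O(M^2) per test point
    As = np.linalg.solve(Lmm, Ksm.T)                 # M x S
    V = np.linalg.solve(LB, As)
    mean = V.T @ c
    var = rbf(Xs, Xs).diagonal() - (As ** 2).sum(0) + (V ** 2).sum(0)
    return mean, var

# Illustrative usage: N = 500 training points, M = 20 pseudo-inputs taken from the data
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
Xm = X[rng.choice(500, 20, replace=False)]
Xs = np.linspace(-3, 3, 50)[:, None]
mean, var = spgp_predict(X, y, Xm, Xs)

The only linear system solved involves the M x M matrix B, which is why training scales as O(M²N) and each test prediction as O(M²); adding noise_var to var would give the predictive variance of noisy observations rather than of the latent function.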

Citations

Input Dependent Sparse Gaussian Processes
TLDR
This work proposes to amortize the computation of the inducing point locations, as well as the parameters of the variational posterior approximation q, which makes the method scale to larger datasets and gives faster training and prediction times.
Sparse Gaussian Process Regression via L1 Penalization
TLDR
This paper proposes a novel sparse GP regression approach, GPLasso, that explicitly represents the trade-off between its approximation quality and the model sparsity, and achieves a significantly improved trade-off between prediction accuracy and computational cost.
Sparse-posterior Gaussian Processes for general likelihoods
TLDR
A new sparse GP framework is proposed that uses expectation propagation to directly approximate general GP likelihoods using a sparse and smooth basis; it outperforms previous GP classification methods on benchmark datasets both in minimizing divergence to the non-sparse GP solution and in misclassification rate.
Generic Inference in Latent Gaussian Process Models
TLDR
An automated variational method is developed for inference in models with Gaussian process (GP) priors and general likelihoods; it scales to large datasets by using an augmented prior via the inducing-variable approach underpinning most sparse GP approximations, along with parallel computation and stochastic optimization.
Sparse gaussian processes for large-scale machine learning
TLDR
This thesis presents several novel sparse GP models that compare favorably with SPGP, both in terms of predictive performance and error bar quality, and provides two broad classes of models: Marginalized Networks (MNs) and Inter-Domain GPs (IDGPs).
Local and global sparse Gaussian process approximations
TLDR
This paper develops a new sparse GP approximation which is a combination of both the global and local approaches, and shows that it can be derived as a natural extension of the framework developed by Quiñonero-Candela and Rasmussen for sparse GP approximations.
Variable Noise and Dimensionality Reduction for Sparse Gaussian processes
TLDR
The SPGP is extended by performing automatic dimensionality reduction: a projection of the input space to a low-dimensional space is learned in a supervised manner, alongside the pseudo-inputs, which now live in this reduced space.
Deep Gaussian Processes with Decoupled Inducing Inputs
TLDR
This work shows that the computational cost of deep Gaussian processes can be reduced with no loss in performance by using a separate, smaller set of pseudo points when calculating the layerwise variance while using a larger set of pseudo points when calculating the layerwise mean.
Sparse variational inference for generalized Gaussian process models
TLDR
A variational sparse solution for GPs under general likelihoods is developed by providing a new characterization of the gradients required for inference in terms of individual observation likelihood terms, and it is demonstrated experimentally that the fixed-point operator acts as a contraction in many cases and therefore leads to fast convergence.
...
...

References

SHOWING 1-10 OF 16 REFERENCES
Fast Sparse Gaussian Process Methods: The Informative Vector Machine
TLDR
A framework for sparse Gaussian process (GP) methods is presented which uses forward selection with criteria based on information-theoretic principles, allows for Bayesian model selection, and is simpler to implement.
Bayesian Gaussian process models : PAC-Bayesian generalisation error bounds and sparse approximations
TLDR
The tractability and usefulness of simple greedy forward selection with information-theoretic criteria previously used in active learning are demonstrated, and generic schemes for automatic model selection with many (hyper)parameters are developed.
Fast Forward Selection to Speed Up Sparse Gaussian Process Regression
TLDR
A method is presented for the sparse greedy approximation of Bayesian Gaussian process regression, featuring a novel heuristic for very fast forward selection that leads to a sufficiently stable approximation of the log marginal likelihood of the training data, which can be optimised to adjust a large number of hyperparameters automatically.
Gaussian processes: iterative sparse approximations
TLDR
This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior of Gaussian processes, and combines the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input and thus approximates a batch solution.
Sparse On-Line Gaussian Processes
TLDR
An approach for sparse representations of Gaussian process (GP) models (which are Bayesian types of kernel machines) is developed to overcome their limitations for large data sets, based on a combination of a Bayesian on-line algorithm and a sequential construction of a relevant subsample of the data that fully specifies the prediction of the GP model.
Sparse Greedy Gaussian Process Regression
TLDR
A simple sparse greedy technique is presented to approximate the maximum a posteriori estimate of Gaussian processes with much improved scaling behaviour in the sample size m, with applications to large-scale problems.
Evaluation of Gaussian processes and other methods for non-linear regression
TLDR
It is shown that a Bayesian approach to learning in multi-layer perceptron neural networks achieves better performance than the commonly used early stopping procedure, even for reasonably short amounts of computation time.
Discovering Hidden Features with Gaussian Processes Regression
TLDR
This work demonstrates the superiority of predictions using the general matrix over those based on a diagonal matrix on two test problems.
Sparse Bayesian Learning and the Relevance Vector Machine
TLDR
It is demonstrated that by exploiting a probabilistic Bayesian learning framework, the 'relevance vector machine' (RVM) can derive accurate prediction models which typically utilise dramatically fewer basis functions than a comparable SVM while offering a number of additional advantages.
Gaussian Processes for Regression
TLDR
This paper investigates the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations.
...
...