Inter-domain Deep Gaussian Processes
@inproceedings{Rudner2020InterdomainDG,
  title     = {Inter-domain Deep Gaussian Processes},
  author    = {Tim G. J. Rudner},
  booktitle = {ICML},
  year      = {2020}
}
Inter-domain Gaussian processes (GPs) allow for high flexibility and low computational cost when performing approximate inference in GP models. They are particularly suitable for modeling data exhibiting global structure but are limited to stationary covariance functions and thus fail to model non-stationary data effectively. We propose Inter-domain Deep Gaussian Processes, an extension of inter-domain shallow GPs that combines the advantages of inter-domain and deep Gaussian processes (DGPs…
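As background for the inter-domain construction referred to above, here is a minimal sketch in the notation of the general inter-domain GP literature; the transform g, feature locations z_m, and kernel k are generic symbols, not necessarily the paper's own:

```latex
% Inter-domain inducing variables: project f through a fixed
% integral transform g instead of evaluating f at pseudo-inputs.
u_m = \int f(x)\, g(x, z_m)\, \mathrm{d}x
% The covariances needed for sparse inference follow directly:
\operatorname{Cov}[u_m, f(x)] = \int g(x', z_m)\, k(x', x)\, \mathrm{d}x'
\operatorname{Cov}[u_m, u_{m'}] = \iint g(x, z_m)\, k(x, x')\, g(x', z_{m'})\, \mathrm{d}x\, \mathrm{d}x'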
7 Citations
Compositional uncertainty in deep Gaussian processes
- UAI, 2020
This work argues that the commonly used inference scheme is suboptimal, as it does not exploit the model's potential to discover compositional structure in the data, and examines alternative variational inference schemes that allow for dependencies across layers.
Deconditional Downscaling with Gaussian Processes
- NeurIPS, 2021
This work introduces the conditional mean process (CMP), a new class of Gaussian processes describing conditional means, and demonstrates its effectiveness on a synthetic and a real-world atmospheric field downscaling problem, showing substantial improvements over existing methods.
Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning
- Entropy, 2021
This work proposes the conditional deep Gaussian process (DGP), in which the intermediate GPs in the hierarchical composition are supported by hyperdata while the exposed GP remains zero-mean, and follows a moment-matching approach to approximate the marginal prior of the conditional DGP with a GP carrying an effective kernel.
Deep Neural Networks as Point Estimates for Deep Gaussian Processes
- NeurIPS, 2021
This work establishes an equivalence between the forward passes of neural networks and (deep) sparse Gaussian process models, based on interpreting activation functions as inter-domain inducing features through a rigorous analysis of the interplay between activation functions and kernels.
Priors in Bayesian Deep Learning: A Review
- International Statistical Review, 2022
This review presents an overview of priors proposed for (deep) Gaussian processes, variational autoencoders, and Bayesian neural networks, and outlines methods for learning priors for these models from data.
References
Showing 1-10 of 30 references.
Inter-domain Gaussian Processes for Sparse Inference using Inducing Features
- NIPS, 2009
A general inference framework for inter-domain Gaussian processes (GPs) is presented; it is shown how previously existing models fit into this framework, which is then used to develop two new sparse GP models.
Doubly Stochastic Variational Inference for Deep Gaussian Processes
- NIPS, 2017
This work presents a doubly stochastic variational inference algorithm, which does not force independence between layers in deep Gaussian processes, and provides strong empirical evidence that this inference scheme works well in practice for both classification and regression.
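As a sketch of the idea behind this inference scheme, the following toy code propagates Monte Carlo samples through DGP layers with the reparameterization trick, so no independence between layers is imposed; `layer_conditional` is a hypothetical placeholder for a sparse-GP conditional, not the paper's implementation:

```python
# Minimal sketch of the doubly stochastic forward pass through a DGP:
# one sample is propagated layer by layer via the reparameterization trick.
import numpy as np

rng = np.random.default_rng(0)

def layer_conditional(h, w):
    """Stand-in for q(f_l | f_{l-1}): returns a per-point mean and variance."""
    mean = np.tanh(h @ w)              # placeholder mean function
    var = 0.1 * np.ones_like(mean)     # placeholder predictive variance
    return mean, var

def dgp_forward_sample(x, layers):
    """Draw one joint sample by reparameterized sampling at every layer."""
    h = x
    for w in layers:
        mean, var = layer_conditional(h, w)
        h = mean + np.sqrt(var) * rng.standard_normal(mean.shape)
    return h

x = rng.standard_normal((4, 3))
layers = [rng.standard_normal((3, 3)) for _ in range(2)]
samples = np.stack([dgp_forward_sample(x, layers) for _ in range(8)])
print(samples.mean(axis=0))            # Monte Carlo predictive mean
```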
Deep Gaussian Processes with Importance-Weighted Variational Inference
- ICML, 2019
This work proposes a novel importance-weighted objective, which leverages analytic results and provides a mechanism to trade off computation for improved accuracy, and demonstrates that the importance-weighted objective works well in practice and consistently outperforms classical variational inference, especially for deeper models.
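For reference, this is the generic K-sample importance-weighted bound that such approaches adapt to DGPs, written here for an abstract latent variable h drawn from a variational distribution q; the paper's DGP-specific form differs in detail:

```latex
% K-sample importance-weighted lower bound on the log marginal likelihood:
\mathcal{L}_K
  = \mathbb{E}_{h_1, \dots, h_K \sim q}\!\left[
      \log \frac{1}{K} \sum_{k=1}^{K} \frac{p(y, h_k)}{q(h_k)}
    \right]
  \le \log p(y)
% L_1 recovers the standard ELBO, and L_K is nondecreasing in K.
```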
Random Feature Expansions for Deep Gaussian Processes
- ICML, 2017
This work develops a novel formulation of DGPs based on random feature expansions, trained using stochastic variational inference; it yields a practical learning framework that significantly advances the state of the art in inference for DGPs and enables accurate quantification of uncertainty.
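As a sketch of the underlying building block, the following code draws Rahimi-Recht-style random Fourier features whose inner products approximate an RBF kernel; names and constants are illustrative, not the paper's:

```python
# Random Fourier features for the RBF kernel
# k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2)).
import numpy as np

rng = np.random.default_rng(1)

def rff(x, num_features, lengthscale=1.0):
    """Map x (n, d) to features whose inner products approximate the kernel."""
    n, d = x.shape
    W = rng.standard_normal((d, num_features)) / lengthscale  # spectral samples
    b = rng.uniform(0.0, 2 * np.pi, num_features)             # random phases
    return np.sqrt(2.0 / num_features) * np.cos(x @ W + b)

x = rng.standard_normal((5, 2))
phi = rff(x, num_features=2000)
approx_K = phi @ phi.T     # approximates the exact RBF Gram matrix
```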
Inference in Deep Gaussian Processes using Stochastic Gradient Hamiltonian Monte Carlo
- NeurIPS, 2018
This work provides evidence for the non-Gaussian nature of the posterior and applies stochastic gradient Hamiltonian Monte Carlo (SGHMC) to generate samples, which results in significantly better predictions at a lower computational cost than its VI counterpart.
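Here is a minimal sketch of the generic SGHMC update this work applies, run on a toy Gaussian target with an exact rather than stochastic gradient; the constants are toy choices, not the paper's settings:

```python
# SGHMC on a toy standard-Gaussian target: momentum update with
# friction alpha and matched injected noise. In the DGP setting,
# U would be the negative log joint over layer outputs.
import numpy as np

rng = np.random.default_rng(2)

def grad_U(theta):
    """Gradient of U(theta) = 0.5 * ||theta||^2 (standard Gaussian)."""
    return theta

def sghmc(theta0, n_steps=20000, eps=1e-2, alpha=0.1):
    theta, v = theta0.copy(), np.zeros_like(theta0)
    samples = []
    for _ in range(n_steps):
        theta = theta + v
        noise = np.sqrt(2 * alpha * eps) * rng.standard_normal(theta.shape)
        v = v - eps * grad_U(theta) - alpha * v + noise
        samples.append(theta.copy())
    return np.asarray(samples)

samples = sghmc(np.zeros(2))
print(samples.mean(axis=0), samples.var(axis=0))  # roughly 0 and 1
```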
Deep Gaussian Processes
- AISTATS, 2013
Deep Gaussian process (GP) models are introduced, and model selection by the variational bound shows that a five-layer hierarchy is justified even when modelling a digit data set containing only 150 examples.
Variational Auto-encoded Deep Gaussian Processes
- ICLR, 2016
A new formulation of the variational lower bound is derived that allows most of the computation to be distributed, making it possible to handle datasets of the size found in mainstream deep learning tasks.
Deep Gaussian processes and variational propagation of uncertainty
- 2015
The results show that the developed variational methodologies improve practical applicability by enabling automatic capacity control in the models, even when data are scarce.
Deep Gaussian Processes for Regression using Approximate Expectation Propagation
- ICML, 2016
A new approximate Bayesian learning scheme is developed that enables DGPs to be applied to a range of medium- to large-scale regression problems for the first time, and is almost always better than state-of-the-art deterministic and sampling-based approximate inference methods for Bayesian neural networks.
Sparse Gaussian Processes using Pseudo-inputs
- NIPS, 2005
It is shown that this new Gaussian process (GP) regression model can match full GP performance with a small number M of pseudo-inputs, i.e. very sparse solutions, and that it significantly outperforms other approaches in this regime.
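As a sketch of why pseudo-inputs reduce cost, the following code computes a sparse predictive mean with M = 10 pseudo-inputs using the simple subset-of-regressors approximation, a simplification of the paper's SPGP/FITC model; all names and constants are illustrative:

```python
# Sparse GP regression via M pseudo-inputs Z: the linear algebra runs
# through an (M, M) system instead of the (N, N) full-GP system,
# giving O(N M^2) cost.
import numpy as np

def rbf(a, b, lengthscale=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Z = np.linspace(-3, 3, 10)[:, None]          # M = 10 pseudo-inputs
noise = 0.1 ** 2                             # observation noise variance

Kuu = rbf(Z, Z) + 1e-6 * np.eye(len(Z))      # jitter for stability
Kuf = rbf(Z, X)
A = Kuu + Kuf @ Kuf.T / noise                # (M, M) system, not (N, N)
Xs = np.linspace(-3, 3, 5)[:, None]          # test inputs
mean = rbf(Xs, Z) @ np.linalg.solve(A, Kuf @ y) / noise
print(mean)                                  # sparse predictive mean
```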