Corpus ID: 218862916

Beyond the Mean-Field: Structured Deep Gaussian Processes Improve the Predictive Uncertainties

@article{Lindinger2020BeyondTM,
  title={Beyond the Mean-Field: Structured Deep Gaussian Processes Improve the Predictive Uncertainties},
  author={Jakob Lindinger and David Reeb and C. Lippert and Barbara Rakitsch},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.11110}
}
Deep Gaussian Processes learn probabilistic data representations for supervised learning by cascading multiple Gaussian Processes. While this model family promises flexible predictive distributions, exact inference is not tractable. Approximate inference techniques trade off the ability to closely resemble the posterior distribution against speed of convergence and computational efficiency. We propose a novel Gaussian variational family that allows for retaining covariances between latent… 
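
As orientation, a sketch in standard sparse-DGP notation (not taken verbatim from the paper): a mean-field variational family factorizes over the inducing outputs U^1, ..., U^L of the L layers, whereas a structured family of the kind advertised in the title keeps a joint Gaussian whose covariance couples the layers,

\[
q_{\mathrm{MF}}\!\left(U^{1},\dots,U^{L}\right)=\prod_{l=1}^{L}\mathcal{N}\!\left(U^{l}\mid m^{l},S^{l}\right),
\qquad
q_{\mathrm{struct}}\!\left(U^{1},\dots,U^{L}\right)=\mathcal{N}\!\left(\begin{bmatrix}U^{1}\\ \vdots\\ U^{L}\end{bmatrix}\;\middle|\;m,\,S\right),
\]

where S is no longer block-diagonal across layers, so posterior correlations between the latent GP layers can be represented.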

Citations

Traversing Time with Multi-Resolution Gaussian Process State-Space Models

TLDR
A novel Gaussian process state-space architecture is proposed, composed of multiple components, each trained on a different resolution to model effects on different timescales, providing efficient inference for arbitrarily long sequences with complex dynamics.

A Deterministic Approximation to Neural SDEs

TLDR
This work reports the empirical finding that obtaining well-calibrated uncertainty estimates from NSDEs is computationally prohibitive, and develops a computationally affordable deterministic scheme that accurately approximates the transition kernel when the dynamics are governed by an NSDE.

Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes

TLDR
This work derives the optimal approximate posterior over the top-layer weights in a Bayesian neural network for regression, shows that it exhibits strong dependencies on the lower-layer weights, and extends this approach to deep Gaussian processes, unifying inference in the two model classes.

References

Showing 1-10 of 37 references

Deep Gaussian Processes with Importance-Weighted Variational Inference

TLDR
This work proposes a novel importance-weighted objective that leverages analytic results and provides a mechanism to trade off computation for improved accuracy, and demonstrates that the importance-weighted objective works well in practice and consistently outperforms classical variational inference, especially for deeper models.
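
For context, the generic K-sample importance-weighted bound underlying such objectives (shown here in its standard form, not in the paper's DGP-specific notation) is

\[
\mathcal{L}_{K}=\mathbb{E}_{z_{1},\dots,z_{K}\sim q}\!\left[\log\frac{1}{K}\sum_{k=1}^{K}\frac{p(y,z_{k})}{q(z_{k})}\right]\;\le\;\log p(y),
\]

which reduces to the classical ELBO for K = 1 and tightens as K grows; the paper combines such a bound with analytic results for the Gaussian process layers.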

Compositional uncertainty in deep Gaussian processes

TLDR
It is argued that such an inference scheme is suboptimal, as it does not exploit the model's potential to discover the compositional structure in the data, and alternative variational inference schemes allowing for dependencies across different layers are examined.

Inference in Deep Gaussian Processes using Stochastic Gradient Hamiltonian Monte Carlo

TLDR
This work provides evidence for the non-Gaussian nature of the posterior and applies the Stochastic Gradient Hamiltonian Monte Carlo method to generate samples, which results in significantly better predictions at a lower computational cost than its VI counterpart.
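
For reference, the stochastic gradient HMC transition used to draw such samples (in the standard form of Chen et al., 2014, with generic hyperparameter names) is

\[
\theta_{t+1}=\theta_{t}+v_{t},\qquad
v_{t+1}=v_{t}-\epsilon\,\nabla\tilde{U}(\theta_{t})-\alpha v_{t}+\mathcal{N}\!\left(0,\;2(\alpha-\hat{\beta})\epsilon\right),
\]

where \nabla\tilde{U} is a minibatch estimate of the gradient of the negative log posterior, \alpha a friction coefficient, and \hat{\beta} an estimate of the gradient-noise variance.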

Doubly Stochastic Variational Inference for Deep Gaussian Processes

TLDR
This work presents a doubly stochastic variational inference algorithm that does not force independence between layers in deep Gaussian processes, and provides strong empirical evidence that the inference scheme for DGPs works well in practice in both classification and regression.
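
A compact way to state the resulting bound (sketched here in the usual doubly stochastic notation rather than quoted): with inducing outputs U^l per layer, the evidence lower bound

\[
\mathcal{L}=\sum_{n=1}^{N}\mathbb{E}_{q\left(f_{n}^{L}\right)}\!\left[\log p\!\left(y_{n}\mid f_{n}^{L}\right)\right]-\sum_{l=1}^{L}\mathrm{KL}\!\left(q\!\left(U^{l}\right)\,\middle\|\,p\!\left(U^{l}\right)\right)
\]

is evaluated by Monte Carlo: a sample is drawn from each layer's Gaussian marginal and fed as input to the next layer, so the dependence of each layer on the outputs of the previous one is retained even though the q(U^l) factorize.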

Sparse Gaussian Processes using Pseudo-inputs

TLDR
It is shown that this new Gaussian process (GP) regression model can match full GP performance with a small number M of pseudo-inputs, i.e. very sparse solutions, and that it significantly outperforms other approaches in this regime.
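
As a reminder of the construction (standard pseudo-input notation, not copied from the paper): with M pseudo-inputs \bar{X} and pseudo-outputs \bar{f}, each latent function value is conditioned on the pseudo-outputs,

\[
p\!\left(f_{n}\mid\bar{f}\right)=\mathcal{N}\!\left(f_{n}\;\middle|\;k_{n}^{\top}K_{M}^{-1}\bar{f},\;K_{nn}-k_{n}^{\top}K_{M}^{-1}k_{n}\right),
\]

where K_M is the M x M kernel matrix of the pseudo-inputs and k_n the kernel vector between x_n and \bar{X}; training then costs O(N M^2) rather than O(N^3).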

Deep Gaussian Processes for Regression using Approximate Expectation Propagation

TLDR
A new approximate Bayesian learning scheme is developed that enables DGPs to be applied to a range of medium- to large-scale regression problems for the first time and is almost always better than state-of-the-art deterministic and sampling-based approximate inference methods for Bayesian neural networks.

Implicit Posterior Variational Inference for Deep Gaussian Processes

TLDR
This paper presents an implicit posterior variational inference (IPVI) framework for DGPs that can ideally recover an unbiased posterior belief while preserving time efficiency, and devises a best-response dynamics algorithm to search for a Nash equilibrium.

Learning Gaussian Processes by Minimizing PAC-Bayesian Generalization Bounds

TLDR
This work proposes a method to learn Gaussian Processes and their sparse approximations by directly optimizing a PAC-Bayesian bound on their generalization performance, instead of maximizing the marginal likelihood.
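
For orientation, one standard relaxed form of such a bound (for losses bounded in [0, 1]; shown as a generic illustration rather than the exact bound optimized in the paper) is

\[
R(Q)\;\le\;\hat{R}_{N}(Q)+\sqrt{\frac{\mathrm{KL}\!\left(Q\,\middle\|\,P\right)+\ln\frac{2\sqrt{N}}{\delta}}{2N}}\qquad\text{with probability at least }1-\delta,
\]

where P is the prior, Q the (sparse) GP posterior, and \hat{R}_N the empirical risk; the point is that this bound, rather than the marginal likelihood, becomes the training objective.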

Variational Dropout and the Local Reparameterization Trick

TLDR
Variational dropout is proposed as a generalization of Gaussian dropout with a more flexibly parameterized posterior, often leading to better generalization in stochastic gradient variational Bayes.
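
A minimal NumPy sketch of the local reparameterization trick the method builds on (shapes and variable names are illustrative only):

    # Local reparameterization trick: instead of sampling a weight matrix
    # W ~ N(M, V) and computing X @ W, sample the pre-activations directly
    # from the Gaussian they induce -- one sample per data point, giving
    # lower-variance gradient estimates.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(32, 10))           # mini-batch of inputs
    M = rng.normal(size=(10, 5))            # posterior means of the weights
    V = np.exp(rng.normal(size=(10, 5)))    # posterior variances of the weights

    act_mean = X @ M                        # mean of the pre-activations
    act_var = (X ** 2) @ V                  # variance of the pre-activations

    eps = rng.normal(size=act_mean.shape)
    activations = act_mean + np.sqrt(act_var) * eps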

Deep Gaussian Processes

TLDR
Deep Gaussian process (GP) models are introduced, and model selection via the variational bound shows that a five-layer hierarchy is justified even when modelling a digit data set containing only 150 examples.
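
As a compact reminder of the construction (generic notation, not quoted from the paper): a deep GP composes layer functions, each with its own GP prior,

\[
f(x)=f^{(L)}\!\left(f^{(L-1)}\!\left(\cdots f^{(1)}(x)\cdots\right)\right),\qquad f^{(l)}\sim\mathcal{GP}\!\left(0,\,k^{(l)}\right),
\]

so the outputs of one layer become the inputs of the next, and the variational bound mentioned above serves both for inference and for choosing the depth L.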