Corpus ID: 233210565

GPflux: A Library for Deep Gaussian Processes

Vincent Dutordoir, Hugh Salimbeni, Eric Hambro, John McLeod, Felix Leibfried, Artem Artemev, Mark van der Wilk, James Hensman, Marc Peter Deisenroth, S. T. John
We introduce GPflux, a Python library for Bayesian deep learning with a strong emphasis on deep Gaussian processes (DGPs). Implementing DGPs is a challenging endeavour due to the various mathematical subtleties that arise when dealing with multivariate Gaussian distributions and the complex bookkeeping of indices. To date, there are no actively maintained, open-source, and extensible libraries available that support research activities in this area. GPflux aims to fill this gap by providing a…
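As a conceptual illustration of what a DGP is (this is a plain numpy sketch, not the GPflux API), the following draws a sample from a two-layer DGP prior by feeding one GP sample through a second GP; the RBF kernel, lengthscale, and input grid are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    """Unit-variance squared-exponential kernel for 1-D inputs."""
    sqdist = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * sqdist / lengthscale**2)

def sample_gp(X, rng, jitter=1e-6):
    """Draw one function sample from a zero-mean GP evaluated at X."""
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))
    return np.linalg.cholesky(K) @ rng.standard_normal(len(X))

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 50)
f1 = sample_gp(X, rng)    # hidden-layer function evaluated at X
f2 = sample_gp(f1, rng)   # output layer evaluated at f1(X): a DGP sample
```

The composition f2(f1(X)) is where the "bookkeeping of indices" mentioned in the abstract starts to bite once layers become multi-output and inference is variational.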


Deep Gaussian Process Emulation using Stochastic Imputation

This work proposes an inference method for DGP-based computer-model emulation that transforms a DGP into a linked GP, an emulator developed for systems of linked computer models.

Trieste: Efficiently Exploring The Depths of Black-box Functions with TensorFlow

The Trieste library enables plug-and-play use of popular TensorFlow-based models within sequential decision-making loops, e.g. Gaussian processes from GPflow or GPflux, or neural networks from Keras.
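The kind of sequential decision-making loop Trieste orchestrates can be sketched in plain numpy (this is not the Trieste API; the grid search, kernel, lower-confidence-bound acquisition, and all constants are illustrative assumptions):

```python
import numpy as np

def rbf(A, B, ls=0.2):
    """Unit-variance squared-exponential kernel for 1-D inputs."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls**2)

def gp_posterior(Xg, X, y, noise=1e-4):
    """Exact GP posterior mean and marginal variance on a grid Xg."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xg, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.maximum(var, 0.0)

def bo_minimise(f, n_iters=10, beta=2.0):
    """Fit a GP surrogate, pick the next query by lower confidence bound,
    evaluate the black box, and repeat."""
    Xg = np.linspace(0, 1, 200)
    X = np.array([0.2, 0.8])                    # initial design
    y = np.array([f(x) for x in X])
    for _ in range(n_iters):
        mu, var = gp_posterior(Xg, X, y)
        x_next = Xg[np.argmin(mu - beta * np.sqrt(var))]
        X, y = np.append(X, x_next), np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```

Swapping the surrogate for a DGP is exactly the kind of substitution a plug-and-play library makes cheap.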

Gaussian Processes for One-class and Binary Classification of Crisis-related Tweets

The potential of deep kernel models for classifying crisis-related tweet texts, with special emphasis on cross-event applications, is investigated, offering a fast and flexible approach to interactive model training that requires neither off-topic training samples nor comprehensive expert knowledge.

Predicting plant Rubisco kinetics from RbcL sequence data using machine learning

Ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco) is responsible for the conversion of atmospheric CO2 to organic carbon during photosynthesis, and often acts as a rate-limiting step.

Vecchia-approximated Deep Gaussian Processes for Computer Experiments

This work aims to bridge the gap by expanding the capabilities of Bayesian DGP posterior inference through the incorporation of the Vecchia approximation, allowing linear computational scaling without compromising accuracy or UQ.
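The Vecchia idea itself is simple to state: approximate the joint Gaussian likelihood by a product of univariate conditionals, each conditioning only on a small set of previously ordered points. A minimal numpy sketch (our own illustration, with an RBF kernel and nearest-previous-m conditioning sets as assumptions):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls**2)

def vecchia_loglik(X, y, m=3, noise=1e-2):
    """Vecchia approximation to the GP log-likelihood: a product of
    univariate Gaussian conditionals, each conditioning only on the m
    previous points in the given ordering (cost linear in n for fixed m)."""
    n, total = len(y), 0.0
    for i in range(n):
        c = list(range(max(0, i - m), i))              # conditioning set
        kii = rbf(X[i:i+1], X[i:i+1])[0, 0] + noise
        if c:
            Kcc = rbf(X[c], X[c]) + noise * np.eye(len(c))
            Kic = rbf(X[i:i+1], X[c])
            mu = (Kic @ np.linalg.solve(Kcc, y[c]))[0]
            var = kii - (Kic @ np.linalg.solve(Kcc, Kic.T))[0, 0]
        else:
            mu, var = 0.0, kii
        total += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return total
```

With m ≥ n − 1 the factorisation is the exact probability chain rule; truncating the conditioning sets is what yields the linear scaling mentioned above.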

Deep Gaussian Processes for Calibration of Computer Models

This work proposes a novel calibration framework that is easy to implement in development environments featuring automatic differentiation and GPU-type hardware, and that yields a powerful alternative to the state of the art, as shown by experimental validation on various calibration problems.

Priors in Bayesian Deep Learning: A Review

An overview of different priors that have been proposed for (deep) Gaussian processes, variational autoencoders and Bayesian neural networks is presented and different methods of learning priors for these models from data are outlined.

Bayesian Quantile and Expectile Optimisation

New variational models for Bayesian quantile and expectile regression that are well-suited to heteroscedastic settings are proposed and shown to clearly outperform the state of the art.
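Although the paper's variational models are more involved, the losses that characterise quantiles and expectiles are compact. The sketch below shows the standard pinball (quantile) and asymmetric-squared (expectile) losses; the function names are our own:

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Asymmetric absolute loss whose minimiser is the tau-quantile."""
    u = y - pred
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

def expectile_loss(y, pred, tau):
    """Asymmetric squared loss whose minimiser is the tau-expectile."""
    u = y - pred
    w = np.where(u >= 0, tau, 1.0 - tau)
    return np.mean(w * u**2)
```

At tau = 0.5 these reduce to half the mean absolute error and half the mean squared error, recovering the median and the mean as special cases.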

Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning

This work proposes the conditional deep Gaussian process (DGP), in which the intermediate GPs in the hierarchical composition are supported by hyperdata while the exposed GP remains zero-mean, and follows the earlier moment-matching approach to approximate the marginal prior of the conditional DGP with a GP carrying an effective kernel.

The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective

This paper analyzes two existing classes of models, deep GPs and neural networks, focusing on how width affects performance metrics, and offers useful guidance for deep GP and neural network architectures.

References

TensorFlow Distributions

The TensorFlow Distributions library implements a vision of probability theory adapted to the modern deep-learning paradigm of end-to-end differentiable computation and enables modular construction of high-dimensional distributions and transformations not possible with previous libraries.

A Tutorial on Sparse Gaussian Processes and Variational Inference

This tutorial provides an accessible introduction for readers with no prior knowledge of either GPs or VI; pseudo-training examples are treated as optimisation arguments of the approximate posterior and are identified jointly with the hyperparameters of the generative model.
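The core sparse-GP prediction step the tutorial covers can be sketched directly: given a Gaussian approximate posterior q(u) = N(m, S) over function values at M inducing inputs Z, the predictive at new points follows by Gaussian conditioning. A numpy sketch (kernel choice and variable names are illustrative, not any library's API):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Unit-variance squared-exponential kernel for 1-D inputs."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls**2)

def svgp_predict(Xnew, Z, m, S, jitter=1e-8):
    """Predictive mean and marginal variance of a sparse GP, given a
    Gaussian approximate posterior q(u) = N(m, S) at inducing inputs Z."""
    Kmm = rbf(Z, Z) + jitter * np.eye(len(Z))
    Ksm = rbf(Xnew, Z)
    Kss = rbf(Xnew, Xnew)
    A = np.linalg.solve(Kmm, Ksm.T).T          # Ksm @ inv(Kmm)
    mean = A @ m
    cov = Kss - A @ (Kmm - S) @ A.T
    return mean, np.diag(cov)
```

Setting m = 0 and S = Kmm recovers the prior, a useful sanity check: the inducing points then carry no information.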

On the Expressiveness of Approximate Inference in Bayesian Neural Networks

It is found empirically that pathologies similar to those in the single-hidden-layer case can persist when performing variational inference in deeper networks, and a universality result is proved showing that there exist approximate posteriors in the above classes that provide flexible uncertainty estimates.

A Framework for Interdomain and Multioutput Gaussian Processes

This work presents a mathematical and software framework for scalable approximate inference in GPs, which combines interdomain approximations and multiple outputs, and provides a unified interface for many existing multioutput models, as well as more recent convolutional structures.

Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model

The MuZero algorithm is presented, which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics.

Deep Gaussian Processes with Importance-Weighted Variational Inference

This work proposes a novel importance-weighted objective that leverages analytic results and provides a mechanism to trade computation for improved accuracy, and demonstrates that the importance-weighted objective works well in practice and consistently outperforms classical variational inference, especially for deeper models.
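The mechanism behind an importance-weighted objective can be shown on a toy model whose marginal likelihood is known in closed form (our own toy construction, not the paper's model): averaging K importance weights inside the log tightens the bound as K grows.

```python
import numpy as np

def iw_bound(x, K, rng):
    """K-sample importance-weighted estimate of log p(x) for the toy model
    z ~ N(0, 1), x | z ~ N(z, 1), using the prior as the proposal, so the
    weights are w_k = p(x | z_k)."""
    z = rng.standard_normal(K)
    log_w = -0.5 * (x - z) ** 2 - 0.5 * np.log(2 * np.pi)  # log N(x; z, 1)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))  # stable log-mean-exp
```

As K grows the estimate approaches the true marginal log N(x; 0, 2); K = 1 with this proposal recovers a classical, looser variational-style bound.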

Bayesian Image Classification with Deep Convolutional Gaussian Processes

This work proposes a translation-insensitive convolutional kernel, which relaxes the translation-invariance constraint imposed by previous convolutional GPs, and shows how the marginal likelihood can be used to learn the degree of insensitivity.

Gaussian Process Conditional Density Estimation

This work proposes to extend the model's input with latent variables and use Gaussian processes to map this augmented input onto samples from the conditional distribution, and illustrates the effectiveness and wide-reaching applicability of the model on a variety of real-world problems.

Deep convolutional Gaussian processes

A principled Bayesian framework for detecting hierarchical combinations of local features for image classification is proposed, with greatly improved performance compared to current Gaussian process approaches on the MNIST and CIFAR-10 datasets.

Inference in Deep Gaussian Processes using Stochastic Gradient Hamiltonian Monte Carlo

This work provides evidence for the non-Gaussian nature of the posterior and applies the Stochastic Gradient Hamiltonian Monte Carlo method to generate samples, which results in significantly better predictions at a lower computational cost than its VI counterpart.
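The SGHMC update itself is short: a momentum step driven by the (stochastic) gradient, with friction and matched injected noise. A naive numpy sketch for a toy target, assuming the gradient-noise estimate is zero (all constants illustrative):

```python
import numpy as np

def sghmc_sample(grad_log_p, theta0, n_steps, lr=1e-2, alpha=0.1, rng=None):
    """Naive SGHMC: momentum v with friction alpha and injected noise
    N(0, 2*alpha*lr), which balances the friction when the gradient-noise
    estimate is taken to be zero."""
    rng = rng or np.random.default_rng(0)
    theta, v = theta0, 0.0
    samples = np.empty(n_steps)
    for t in range(n_steps):
        v = ((1 - alpha) * v + lr * grad_log_p(theta)
             + rng.normal(0.0, np.sqrt(2 * alpha * lr)))
        theta = theta + v
        samples[t] = theta
    return samples
```

With grad_log_p(theta) = -theta the chain targets a standard normal, and the empirical mean and variance of the (post-burn-in) samples should match it up to discretisation error.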