• Corpus ID: 226237260

Transforming Gaussian Processes With Normalizing Flows

@article{Maroas2021TransformingGP,
  title={Transforming Gaussian Processes With Normalizing Flows},
  author={Juan Maro{\~n}as and Oliver Hamelijnck and Jeremias Knoblauch and Theodoros Damoulas},
  journal={ArXiv},
  year={2021},
  volume={abs/2011.01596}
}
Gaussian Processes (GPs) can be used as flexible, non-parametric function priors. Inspired by the growing body of work on Normalizing Flows, we enlarge this class of priors through a parametric invertible transformation that can be made input-dependent. Doing so also allows us to encode interpretable prior knowledge (e.g., boundedness constraints). We derive a variational approximation to the resulting Bayesian inference problem, which is as fast as stochastic variational GP regression (Hensman… 
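To make the core idea concrete, below is a minimal, illustrative sketch (not the authors' code) of the prior construction described in the abstract: samples from a GP prior are pushed through a pointwise invertible map whose parameters may depend on the input, which yields a non-Gaussian prior and lets constraints such as positivity be encoded. The kernel settings, the softplus map, and the form of the input-dependent scale theta(x) are assumptions for illustration only; the paper's sparse variational inference over inducing points is not shown.

import numpy as np

def rbf_kernel(x, lengthscale=0.25, variance=1.0):
    # Squared-exponential kernel matrix k(x, x).
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp_prior(x, n_samples=3, jitter=1e-6):
    # Draw f ~ GP(0, k) at the inputs x via a Cholesky factor.
    K = rbf_kernel(x) + jitter * np.eye(len(x))
    return np.linalg.cholesky(K) @ np.random.randn(len(x), n_samples)

def input_dependent_flow(f, x):
    # Pointwise invertible map g = softplus(theta(x) * f).
    # theta(x) is a toy input-dependent scale (an assumption for this sketch);
    # softplus encodes a positivity (boundedness) constraint on the prior.
    theta = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)
    return np.log1p(np.exp(theta[:, None] * f))

x = np.linspace(0.0, 1.0, 200)
f = sample_gp_prior(x)            # Gaussian prior draws
g = input_dependent_flow(f, x)    # non-Gaussian, strictly positive prior draws
print(bool(g.min() > 0))          # True: the constraint holds by construction

In the paper itself the flow parameters and a sparse GP are learned jointly with stochastic variational inference; the sketch only illustrates how an input-dependent invertible transformation reshapes the GP prior.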
Statistical Deep Learning for Spatial and Spatio-Temporal Data
TLDR
An overview of traditional statistical and machine learning perspectives for modeling spatial and spatio-temporal data is presented, focusing on a variety of hybrid models that have recently been developed for latent process, data, and parameter specifications.
Efficient Transformed Gaussian Processes for Non-Stationary Dependent Multi-class Classification
TLDR
The results show that ETGPs, in general, outperform state-of-the-art methods for multi-class classification based on GPs, and have a lower computational cost (around one order of magnitude smaller) than SOTA methods.
TreeFlow: Going beyond Tree-based Gaussian Probabilistic Regression
TLDR
TreeFlow is introduced, the tree-based approach that combines the benefits of using tree ensembles with capabilities of modeling flexible probability distributions using normalizing flows and is capable of modeling complex distributions for the regression outputs.
Variational Elliptical Processes
TLDR
Elliptical processes, a family of non-parametric probabilistic models that subsumes Gaussian processes and Student's t processes, are presented; the family includes a range of new heavy-tailed behaviors while retaining computational tractability.
Sample-Efficient Optimisation with Probabilistic Transformer Surrogates
TLDR
This paper investigates the feasibility of employing state-of-the-art probabilistic transformers in Bayesian Optimisation and introduces a BO-tailored training prior supporting non-uniformly distributed points, as well as a novel approximate posterior regulariser that trades off accuracy against input sensitivity to reach favourable stationary points for improved predictive performance.
Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks
TLDR
It is experimentally shown that the key to good MC-approximated predictive distributions is the quality of the approximate posterior itself, and that the resulting posterior approximation is competitive with even the gold-standard full-batch Hamiltonian Monte Carlo.
F-EBM: Energy Based Learning of Functional Data
TLDR
This work presents a novel class of EBMs able to learn distributions of functions from functional samples evaluated at finitely many points, with the ability to utilize irregularly sampled training data and output predictions at any resolution.
AdaAnn: Adaptive Annealing Scheduler for Probability Density Approximation
TLDR
AdaAnn is introduced, an adaptive annealing scheduler that automatically adjusts the temperature increments based on the expected change in the Kullback-Leibler divergence between two distributions with a sufficiently close annealed temperature.
HEBO: An Empirical Study of Assumptions in Bayesian Optimisation
TLDR
The findings indicate that the majority of hyper-parameter tuning tasks exhibit heteroscedasticity and non-stationarity, that multi-objective acquisition ensembles with Pareto-front solutions improve queried configurations, and that robust acquisition maximisers afford empirical advantages relative to their non-robust counterparts.
Priors in Bayesian Deep Learning: A Review
TLDR
An overview of different priors that have been proposed for (deep) Gaussian processes, variational autoencoders, and Bayesian neural networks is presented and different methods of learning priors for these models from data are outlined.
...
...

References

SHOWING 1-10 OF 67 REFERENCES
Transport Gaussian Processes for Regression
TLDR
This work proposes a methodology to construct stochastic processes, which include GPs, warped GPs, Student-t processes and several others under a single unified approach, and provides formulas and algorithms for training and inference of the proposed models in the regression problem.
Compositional uncertainty in deep Gaussian processes
TLDR
It is argued that such an inference scheme is suboptimal, as it does not take advantage of the potential of the model to discover the compositional structure in the data, and alternative variational inference schemes allowing for dependencies across different layers are examined.
Inter-domain Gaussian Processes for Sparse Inference using Inducing Features
TLDR
A general inference framework for inter-domain Gaussian processes (GPs) is presented; it is shown how previously existing models fit into this framework, and the framework is used to develop two new sparse GP models.
Gaussian Process Conditional Density Estimation
TLDR
This work proposes to extend the model's input with latent variables and use Gaussian processes to map this augmented input onto samples from the conditional distribution, and illustrates the effectiveness and wide-reaching applicability of the model on a variety of real-world problems.
Neural Processes
TLDR
This work introduces a class of neural latent variable models which it calls Neural Processes (NPs), combining the best of both worlds: like GPs they are probabilistic, data-efficient and flexible, while remaining computationally efficient where GPs are intensive and thus limited in their applicability.
Compositionally-Warped Gaussian Processes
Posterior inference for sparse hierarchical non-stationary models
Doubly Stochastic Variational Inference for Deep Gaussian Processes
TLDR
This work presents a doubly stochastic variational inference algorithm for deep Gaussian processes (DGPs) that does not force independence between layers, and provides strong empirical evidence that the inference scheme works well in practice in both classification and regression.
Manifold Gaussian Processes for regression
TLDR
Manifold Gaussian Processes is a novel supervised method that jointly learns a transformation of the data into a feature space and a GP regression from the feature space to the observed space, allowing it to learn data representations that are useful for the overall regression task.
Transformation and Additivity in Gaussian Processes
TLDR
This work argues that a transformation of the response can be used to make the deterministic function approximately additive, so that it can then be easily estimated using an additive GP, and proposes an extension of the TAG process called the transformed approximately additive Gaussian (TAAG) process.
...
...