Corpus ID: 225067653

Scalable Gaussian Process Variational Autoencoders

@inproceedings{Jazbec2021ScalableGP,
  title={Scalable Gaussian Process Variational Autoencoders},
  author={Metod Jazbec and Vincent Fortuin and Michael Pearce and Stephan Mandt and Gunnar R{\"a}tsch},
  booktitle={AISTATS},
  year={2021}
}
Large, multi-dimensional spatio-temporal datasets are omnipresent in modern science and engineering. An effective framework for handling such data is that of Gaussian process deep generative models (GP-DGMs), which employ GP priors over the latent variables of DGMs. Existing approaches for performing inference in GP-DGMs do not support sparse GP approximations based on inducing points, which are essential for the computational efficiency of GPs, nor do they handle missing data -- a natural occurrence…
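As a rough illustration of the inducing-point idea the abstract refers to, the sketch below approximates the full kernel matrix over N timestamps through m << N inducing locations. This is a minimal NumPy example; the kernel choice, shapes, and jitter values are assumptions for illustration, not the paper's implementation.

import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel on scalar auxiliary inputs (e.g. timestamps).
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)   # N auxiliary inputs (time)
u = np.linspace(0.0, 10.0, 15)    # m << N inducing locations (fixed here for simplicity)

K_tu = rbf_kernel(t, u)                          # N x m cross-covariance
K_uu = rbf_kernel(u, u) + 1e-6 * np.eye(len(u))  # m x m inducing covariance
# Nystrom-style approximation K_tt ~ K_tu K_uu^{-1} K_ut: only the m x m block is inverted.
K_approx = K_tu @ np.linalg.solve(K_uu, K_tu.T)

# One latent channel of a GP-prior VAE: sample a trajectory from the approximate GP
# and hand it to the decoder network (not shown).
L = np.linalg.cholesky(K_approx + 1e-6 * np.eye(len(t)))
z = L @ rng.standard_normal(len(t))

Inverting only the m x m block brings the cost from O(N^3) down to O(N m^2), which is the scalability gap the paper addresses.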
Ancestral protein sequence reconstruction using a tree-structured Ornstein-Uhlenbeck variational autoencoder
TLDR
A deep generative model for representation learning of biological sequences that, unlike existing models, explicitly represents the evolutionary process and makes use of a tree-structured Ornstein-Uhlenbeck process as an informative prior for a variational autoencoder.
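For reference, the stationary Ornstein-Uhlenbeck process has the covariance below (a standard result; the tree-structured variant replaces the time gap with path distance on the phylogeny, and the paper's exact parameterization may differ):

k_{\mathrm{OU}}(s, t) = \frac{\sigma^2}{2\theta} \exp\big(-\theta \, d(s, t)\big)

where d(s, t) is |s - t| on the real line and, in the tree-structured case, the branch-length distance between leaves s and t.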
Meta-learning richer priors for VAEs
TLDR
This work proposes a novel flexible prior, namely the Pseudo-inputs prior, and shows that this MAML-VAE model learns richer latent representations, which is evaluated in terms of unsupervised few-shot classification as a downstream task.
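A pseudo-inputs prior of this kind can be sketched as a mixture of encoder posteriors evaluated at learned pseudo-inputs, in the spirit of the VampPrior. The PyTorch sketch below is a hypothetical minimal version; all class and parameter names are illustrative, not the paper's code.

import math
import torch

class TinyEncoder(torch.nn.Module):
    # Stand-in encoder mapping inputs to Gaussian posterior parameters (mu, logvar).
    def __init__(self, input_dim=32, latent_dim=8):
        super().__init__()
        self.net = torch.nn.Linear(input_dim, 2 * latent_dim)

    def forward(self, x):
        mu, logvar = self.net(x).chunk(2, dim=-1)
        return mu, logvar

class PseudoInputsPrior(torch.nn.Module):
    # Mixture prior over K learned pseudo-inputs pushed through the encoder.
    def __init__(self, encoder, n_pseudo=16, input_dim=32):
        super().__init__()
        self.encoder = encoder
        self.pseudo = torch.nn.Parameter(torch.randn(n_pseudo, input_dim))

    def log_prob(self, z):
        mu, logvar = self.encoder(self.pseudo)        # (K, D) each
        comp = torch.distributions.Normal(mu, (0.5 * logvar).exp())
        logp = comp.log_prob(z.unsqueeze(1)).sum(-1)  # (B, K) component log-densities
        return torch.logsumexp(logp, dim=1) - math.log(self.pseudo.shape[0])

prior = PseudoInputsPrior(TinyEncoder())
print(prior.log_prob(torch.randn(4, 8)).shape)  # torch.Size([4])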
Priors in Bayesian Deep Learning: A Review
TLDR
An overview of different priors that have been proposed for (deep) Gaussian processes, variational autoencoders, and Bayesian neural networks is presented and different methods of learning priors for these models from data are outlined.
Gaussian Process Encoders: VAEs with Reliable Latent-Space Uncertainty
TLDR
This work introduces a sparse Gaussian process encoder to variational autoencoders and demonstrates how the Gaussian process encoder generates reliable uncertainty estimates while maintaining good likelihood estimates on a range of anomaly detection problems.
MGP-AttTCN: An interpretable machine learning model for the prediction of sepsis
TLDR
This work proposes MGP-AttTCN, a joint multitask Gaussian process and attention-based deep learning model for the early, interpretable prediction of sepsis; the model outperforms the current state of the art, and evidence is presented that different labelling heuristics lead to discrepancies in task difficulty.
Multi-resolution deconvolution of spatial transcriptomics data reveals continuous patterns of inflammation
TLDR
Deconvolution of Spatial Transcriptomics profiles using Variational Inference (DestVI), a probabilistic method for multi-resolution analysis of spatial transcriptomics data that explicitly models continuous variation within cell types, is developed.
On Disentanglement in Gaussian Process Variational Autoencoders
TLDR
This work investigates the disentanglement properties of Gaussian process variational autoencoders, a recently introduced class of models that has been successful in different tasks on time series data; these models exploit the temporal structure of the data by modeling each latent channel with a GP prior and employing a structured variational distribution that can capture dependencies in time.
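Schematically, the per-channel GP prior described above takes the form (notation assumed for illustration):

p(Z \mid \mathbf{t}) = \prod_{c=1}^{C} \mathcal{N}\big(\mathbf{z}_c \mid \mathbf{0}, \, K_c(\mathbf{t}, \mathbf{t})\big)

where z_c collects the c-th latent channel across the timestamps t and K_c is that channel's kernel matrix; the structured variational distribution then places a correlated, rather than fully factorized, Gaussian over each channel.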
Factorized Gaussian Process Variational Autoencoders
TLDR
This work proposes a more scalable extension of Gaussian process variational autoencoders that leverages the independence of auxiliary features, which is present in many datasets, and factorizes the latent kernel across these features in different dimensions, leading to a significant speed-up.
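The factorization can be pictured as a product kernel over the auxiliary features, which turns the full covariance into a Kronecker product (shown schematically; the paper's exact kernel choices may differ):

k\big((t, x), (t', x')\big) = k_t(t, t') \, k_x(x, x') \quad \Longrightarrow \quad K = K_t \otimes K_x

so that K can be inverted factor by factor instead of as one large matrix, which is the source of the reported speed-up.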

References

SHOWING 1-10 OF 67 REFERENCES
Gaussian Process Prior Variational Autoencoders
TLDR
A new model is introduced, the Gaussian Process (GP) Prior Variational Autoencoder (GPPVAE), which aims to combine the power of VAEs with the ability to model correlations afforded by GP priors, and leverages structure in the covariance matrix to achieve efficient inference in this new class of models.
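One standard way to leverage structure in a covariance matrix, offered here as a plausible reading of the efficiency claim above rather than the paper's exact derivation, is a low-rank-plus-diagonal form handled with the Woodbury identity:

\big(V V^\top + \sigma^2 I\big)^{-1} = \sigma^{-2} \Big( I - V \big(\sigma^2 I_r + V^\top V\big)^{-1} V^\top \Big)

which costs O(n r^2) for an n x n covariance of rank r instead of the O(n^3) of a dense inverse.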
Gaussian Processes for Big Data
TLDR
Stochastic variational inference for Gaussian process models is introduced, and it is shown how GPs can be variationally decomposed to depend on a set of globally relevant inducing variables which factorize the model in the manner necessary to perform variational inference.
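In its standard form, the stochastic variational bound from this line of work is

\mathcal{L} = \sum_{i=1}^{N} \mathbb{E}_{q(f_i)}\big[\log p(y_i \mid f_i)\big] - \mathrm{KL}\big(q(\mathbf{u}) \,\|\, p(\mathbf{u})\big)

where u are the globally relevant inducing variables with variational posterior q(u) = N(m, S); because the data term is a sum over points, it can be estimated from minibatches.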
Variational Learning of Inducing Variables in Sparse Gaussian Processes
TLDR
A variational formulation for sparse GP approximations is presented that jointly infers the inducing inputs and the kernel hyperparameters by maximizing a lower bound on the true log marginal likelihood.
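The resulting collapsed lower bound on the log marginal likelihood, in its usual form, is

\log p(\mathbf{y}) \geq \log \mathcal{N}\big(\mathbf{y} \mid \mathbf{0}, \, Q_{nn} + \sigma^2 I\big) - \frac{1}{2\sigma^2} \operatorname{tr}\big(K_{nn} - Q_{nn}\big), \qquad Q_{nn} = K_{nm} K_{mm}^{-1} K_{mn}

where the trace term penalizes inducing inputs that summarize the data poorly, so they can be optimized jointly with the kernel hyperparameters.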
Visualizing Data using t-SNE
TLDR
A new technique called t-SNE is presented that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map; it is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
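A typical usage sketch with scikit-learn's implementation (the data here is a random placeholder; perplexity and the other settings are illustrative defaults):

import numpy as np
from sklearn.manifold import TSNE

# Placeholder features, e.g. VAE latent codes; any (n_samples, n_features) array works.
X = np.random.default_rng(0).normal(size=(500, 64))

# Embed into 2-D for plotting; perplexity is the main knob to tune.
emb = TSNE(n_components=2, perplexity=30.0, random_state=0).fit_transform(X)
print(emb.shape)  # (500, 2)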
Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data
TLDR
A new underlying probabilistic model for principal component analysis (PCA) is introduced; it is shown that if the prior's covariance function constrains the mappings to be linear, the model is equivalent to PCA, and the model is then extended with less restrictive covariance functions that allow non-linear mappings.
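In the usual notation, the GP-LVM marginal likelihood factorizes over the D observed dimensions:

p(Y \mid X) = \prod_{d=1}^{D} \mathcal{N}\big(\mathbf{y}_{:,d} \mid \mathbf{0}, \, K(X, X) + \sigma^2 I\big)

and with the linear kernel k(x, x') = x^\top x' the maximum-likelihood solution recovers probabilistic PCA, while nonlinear kernels give the non-linear mappings mentioned above.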
The Gaussian Process Autoregressive Regression Model (GPAR)
TLDR
GPAR is presented, a scalable multi-output GP model that captures nonlinear, possibly input-varying, dependencies between outputs in a simple and tractable way, outperforming existing GP models and achieving state-of-the-art performance on established benchmarks.
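GPAR's treatment of output dependencies follows an autoregressive factorization, schematically:

p\big(y_1, \dots, y_M \mid x\big) = \prod_{m=1}^{M} p\big(y_m \mid x, y_1, \dots, y_{m-1}\big)

with each conditional modeled by a separate GP that takes the earlier outputs as extra inputs.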
Scalable Gaussian Processes on Discrete Domains. arXiv preprint arXiv:1810.10368, 2018.