Corpus ID: 195791552

Unscented Gaussian Process Latent Variable Model: learning from uncertain inputs with intractable kernels

@article{Souza2019UnscentedGP,
  title={Unscented Gaussian Process Latent Variable Model: learning from uncertain inputs with intractable kernels},
  author={Daniel Augusto R. M. A. de Souza and C{\'e}sar Lincoln C. Mattos and Jo{\~a}o Paulo Pordeus Gomes},
  journal={ArXiv},
  year={2019},
  volume={abs/1907.01867}
}
The flexibility of the Gaussian Process (GP) framework has enabled its use in several data modeling scenarios. The setting where we have unavailable or uncertain inputs that generate possibly noisy observations is usually tackled by the well-known Gaussian Process Latent Variable Model (GPLVM). However, the standard variational approach to inference in the GPLVM involves expressions that are tractable for only a few kernel functions, which may hinder its general application. While other…
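As the title suggests, the workaround studied here is the unscented transform: instead of requiring the kernel expectation over an uncertain input to have a closed form, it is approximated with a small, deterministic set of sigma points. The following is a minimal sketch of that idea, not the paper's implementation; the helper names are illustrative and the RBF kernel only stands in for a kernel whose expectations would otherwise be intractable.

```python
import numpy as np

def rbf_kernel(x, z, lengthscale=1.0, variance=1.0):
    """Example kernel; any pointwise-evaluable kernel could be used here."""
    return variance * np.exp(-0.5 * np.sum((x - z) ** 2) / lengthscale ** 2)

def unscented_kernel_expectation(mu, Sigma, z, kernel, kappa=1.0):
    """Approximate E_{x ~ N(mu, Sigma)}[k(x, z)] with 2D + 1 sigma points.

    This replaces the closed-form kernel expectations that the standard
    variational GPLVM requires, so the kernel only needs to be evaluable
    pointwise (hypothetical helper, for illustration only).
    """
    D = mu.shape[0]
    # Matrix square root of the scaled covariance (lower Cholesky factor).
    L = np.linalg.cholesky((D + kappa) * Sigma)

    # Sigma points: the mean, plus/minus each column of L.
    points = [mu] + [mu + L[:, i] for i in range(D)] + [mu - L[:, i] for i in range(D)]
    weights = [kappa / (D + kappa)] + [1.0 / (2.0 * (D + kappa))] * (2 * D)

    # Weighted sum of kernel evaluations at the sigma points.
    return sum(w * kernel(x, z) for w, x in zip(weights, points))

# Toy usage: a 2-D uncertain input against a single inducing point.
mu = np.array([0.3, -1.2])
Sigma = np.diag([0.05, 0.10])
z = np.zeros(2)
print(unscented_kernel_expectation(mu, Sigma, z, rbf_kernel))
```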

References

Showing 1-10 of 46 references
Variational Inference for Latent Variables and Uncertain Inputs in Gaussian Processes
TLDR: A Bayesian method for training GP-LVMs by introducing a non-standard variational inference framework that makes it possible to approximately integrate out the latent variables and subsequently train a GP-LVM by maximising an analytic lower bound on the exact marginal likelihood.
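The analytic lower bound referenced here depends on kernel expectations under the variational distribution over the latent inputs (the so-called Psi statistics), which are exactly the expressions the abstract notes are tractable for only a few kernels. A sketch of the usual definitions, with notation assumed here (q(x_n) = N(m_n, S_n) and z_m denoting inducing inputs):

```latex
% Kernel expectations ("Psi statistics") under q(x_n); closed form
% exists only for a few kernels, e.g. RBF and linear.
\begin{align}
  \psi_0 &= \sum_{n=1}^{N} \mathbb{E}_{q(\mathbf{x}_n)}\!\left[ k(\mathbf{x}_n, \mathbf{x}_n) \right], \\
  (\Psi_1)_{nm} &= \mathbb{E}_{q(\mathbf{x}_n)}\!\left[ k(\mathbf{x}_n, \mathbf{z}_m) \right], \\
  (\Psi_2)_{mm'} &= \sum_{n=1}^{N} \mathbb{E}_{q(\mathbf{x}_n)}\!\left[ k(\mathbf{x}_n, \mathbf{z}_m)\, k(\mathbf{x}_n, \mathbf{z}_{m'}) \right].
\end{align}
```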
Semi-described and semi-supervised learning with Gaussian processes
TLDR: This paper develops variational methods for handling semi-described inputs in GPs and couples them with algorithms that allow imputing the missing values while treating the uncertainty in a principled, Bayesian manner.
Bayesian Gaussian Process Latent Variable Model
TLDR: A variational inference framework for training the Gaussian process latent variable model and thus performing Bayesian nonlinear dimensionality reduction; maximization of the variational lower bound provides a Bayesian training procedure that is robust to overfitting and can automatically select the dimensionality of the nonlinear latent space.
Doubly Stochastic Variational Inference for Deep Gaussian Processes
TLDR: This work presents a doubly stochastic variational inference algorithm, which does not force independence between layers in deep Gaussian processes (DGPs), and provides strong empirical evidence that the inference scheme works well in practice in both classification and regression.
Gaussian Process Prior Variational Autoencoders
TLDR: A new model, the Gaussian Process (GP) Prior Variational Autoencoder (GPPVAE), is introduced; it aims to combine the power of VAEs with the ability to model correlations afforded by GP priors, and leverages structure in the covariance matrix to achieve efficient inference in this new class of models.
Identification of Gaussian Process State Space Models
TLDR: A structured Gaussian variational posterior distribution over the latent states is imposed, parameterised by a recognition model in the form of a bi-directional recurrent neural network, which allows the use of arbitrary kernels within the GPSSM.
Manifold Gaussian Processes for regression
TLDR: Manifold Gaussian Processes is a novel supervised method that jointly learns a transformation of the data into a feature space and a GP regression from the feature space to the observed space, allowing it to learn data representations that are useful for the overall regression task.
Inference in Deep Gaussian Processes using Stochastic Gradient Hamiltonian Monte Carlo
TLDR: This work provides evidence for the non-Gaussian nature of the posterior and applies the Stochastic Gradient Hamiltonian Monte Carlo method to generate samples, resulting in significantly better predictions at a lower computational cost than its VI counterpart.
Deep Gaussian Processes
TLDR: Deep Gaussian process (GP) models are introduced, and model selection by the variational bound shows that a five-layer hierarchy is justified even when modelling a digit data set containing only 150 examples.
Learning GP-BayesFilters via Gaussian process latent variable models
TLDR: GPBF-Learn, a framework for training GP-BayesFilters without ground-truth states, is introduced; it extends Gaussian Process Latent Variable Models to the setting of dynamical robotics systems and shows how weak labels for the ground-truth states can be incorporated into the GPBF-Learn framework.