Corpus ID: 17072912

Variational Inference for Uncertainty on the Inputs of Gaussian Process Models

@article{Damianou2014VariationalIF,
  title={Variational Inference for Uncertainty on the Inputs of Gaussian Process Models},
  author={Andreas C. Damianou and Michalis K. Titsias and Neil D. Lawrence},
  journal={ArXiv},
  year={2014},
  volume={abs/1409.2287}
}
The Gaussian process latent variable model (GP-LVM) provides a flexible approach for non-linear dimensionality reduction that has been widely applied. However, the current approach for training GP-LVMs is based on maximum likelihood, where the latent projection variables are maximized over rather than integrated out. In this paper we present a Bayesian method for training GP-LVMs by introducing a non-standard variational inference framework that allows us to approximately integrate out the latent…
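
The variational framework sketched in this abstract is available in the authors' group's GPy library as the Bayesian GP-LVM. Below is a minimal training sketch, assuming GPy's documented BayesianGPLVM model class; argument and attribute names may differ across GPy versions, so treat it as illustrative rather than definitive.

# Minimal Bayesian GP-LVM sketch, assuming the GPy library
# (https://github.com/SheffieldML/GPy); API details may vary by version.
import numpy as np
import GPy

N, D, Q = 100, 12, 3       # observations, output dims, latent dims (toy sizes)
Y = np.random.randn(N, D)  # placeholder data; substitute real observations

# ARD kernel lets the variational bound switch off irrelevant latent dimensions.
kernel = GPy.kern.RBF(Q, ARD=True)

# The latent inputs X are approximately integrated out via variational
# inference, with a sparse approximation over num_inducing inducing points.
m = GPy.models.BayesianGPLVM(Y, input_dim=Q, kernel=kernel, num_inducing=20)
m.optimize(messages=False)  # maximizes the analytic variational lower bound

# Approximate Gaussian posterior over the latent inputs:
latent_mean = m.X.mean      # (N, Q) variational means
latent_var = m.X.variance   # (N, Q) variational variances

After optimization, the per-dimension ARD lengthscales indicate which latent dimensions the bound has effectively pruned.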
Dynamical Gaussian Process Latent Variable Model for Representation Learning from Longitudinal Data
TLDR: This work describes an effective approach to learning the parameters of the L-GPLVM from sparse observations, coupling the dynamical model with a multitask Gaussian process model that samples the missing observations at each step of the gradient-based optimization of the variational lower bound.
Mitigating the Effects of Non-Identifiability on Inference for Bayesian Neural Networks with Latent Variables
TLDR: A novel inference procedure is developed that explicitly mitigates the effects of likelihood non-identifiability during training, yields high-quality predictions and uncertainty estimates, and improves upon benchmark methods across a range of synthetic and real data sets.
On the use of bootstrap with variational inference: Theory, interpretation, and a two-sample test example
Variational inference is a general approach for approximating complex density functions, such as those arising in latent variable models, popular in machine learning. It has been applied to…
Learning Deep Bayesian Latent Variable Regression Models that Generalize: When Non-identifiability is a Problem
TLDR: This work develops a novel inference procedure that explicitly mitigates the effects of likelihood non-identifiability during training, yields high-quality predictions and uncertainty estimates, and improves upon benchmark methods across a range of synthetic and real datasets.
Variational Dependent Multi-output Gaussian Process Dynamical Systems
TLDR: The proposed model is better suited to modeling dynamical systems, thanks to its more realistic assumptions and fully Bayesian learning framework, and can be flexibly extended to handle regression problems.
The Dynamical Gaussian Process Latent Variable Model in the Longitudinal Scenario
TLDR: This study approaches inference for Gaussian process dynamical systems in the longitudinal scenario by augmenting the bound in the variational approximation to include systematic samples of the unseen observations, and demonstrates the usefulness of this approach on synthetic data as well as a human motion capture data set.
On Sparse Variational Methods and the Kullback-Leibler Divergence between Stochastic Processes
TLDR: This work gives a substantial generalization of the variational framework for learning inducing variables, including a new proof of the result for infinite index sets that allows inducing points that are not data points and likelihoods that depend on all function values.
A review on Gaussian Process Latent Variable Models
...

References

Showing 1–10 of 73 references
Distributed Variational Inference in Sparse Gaussian Process Regression and Latent Variable Models
TLDR: A novel re-parametrisation of variational inference for sparse GP regression and latent variable models that allows for an efficient distributed algorithm and shows that GPs perform better than many common models often used for big data.
Deep Gaussian Processes
TLDR: Deep Gaussian process (GP) models are introduced, and model selection by the variational bound shows that a five-layer hierarchy is justified even when modelling a digit data set containing only 150 examples.
Probabilistic Non-linear Principal Component Analysis with Gaussian Process Latent Variable Models
TLDR: A novel probabilistic interpretation of principal component analysis (PCA) that is based on a Gaussian process latent variable model (GP-LVM) and related to popular spectral techniques such as kernel PCA and multidimensional scaling.
Sparse Gaussian Processes using Pseudo-inputs
TLDR: It is shown that this new Gaussian process (GP) regression model can match full GP performance with small M, i.e. very sparse solutions, and that it significantly outperforms other approaches in this regime.
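For context, the pseudo-input construction in this reference (often called FITC or SPGP) replaces the full GP covariance with a low-rank-plus-diagonal surrogate. In standard notation (my paraphrase, with M pseudo-inputs and noise variance \sigma^2):
\[
p(\mathbf{y}) \;\approx\; \mathcal{N}\!\big(\mathbf{y} \,\big|\, \mathbf{0},\; \mathbf{Q}_{NN} + \operatorname{diag}(\mathbf{K}_{NN}-\mathbf{Q}_{NN}) + \sigma^2\mathbf{I}\big),
\qquad
\mathbf{Q}_{NN} = \mathbf{K}_{NM}\mathbf{K}_{MM}^{-1}\mathbf{K}_{MN},
\]
where \mathbf{K}_{NM} is the cross-covariance between the N training inputs and the M pseudo-inputs, so training costs O(NM^2) rather than O(N^3).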
Gaussian processes: iterative sparse approximations
TLDR: This thesis proposes a two-step solution for constructing a probabilistic approximation to the posterior of Gaussian processes, and combines the sparse approximation with an extension of the Bayesian online algorithm that allows multiple iterations for each input and thus approximates a batch solution.
Manifold Relevance Determination
TLDR: This paper presents a fully Bayesian latent variable model which exploits conditional nonlinear (in)dependence structures to learn an efficient latent representation, and introduces a relaxation of the discrete segmentation to allow for a "softly" shared latent space.
Gaussian Process Dynamical Models
TLDR: This paper marginalizes out the model parameters in closed form, using Gaussian process (GP) priors for both the dynamics and the observation mappings, resulting in a nonparametric model for dynamical systems that accounts for uncertainty in the model.
Gaussian Process Training with Input Noise
TLDR: This work presents a simple yet effective GP model for training on input points corrupted by i.i.d. Gaussian noise, compares it to other methods over a range of regression problems, and shows that it improves over current methods.
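The input-noise correction in this reference (the NIGP model) admits a compact first-order summary: i.i.d. Gaussian input noise with covariance \boldsymbol{\Sigma}_x is propagated through a local linearization of the posterior mean \bar f, inflating the effective output noise (my notation):
\[
\sigma^2_{\text{eff}}(\mathbf{x}) \;=\; \sigma^2_y \;+\; \nabla\bar f(\mathbf{x})^{\top}\,\boldsymbol{\Sigma}_x\,\nabla\bar f(\mathbf{x}),
\]
so steep regions of the function receive proportionally more noise, which is what distinguishes the model from a plain GP with a single homoscedastic noise term.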
Variational Learning of Inducing Variables in Sparse Gaussian Processes
TLDR: A variational formulation for sparse approximations that jointly infers the inducing inputs and the kernel hyperparameters by maximizing a lower bound on the true log marginal likelihood.
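The bound maximized in this reference, which the main paper's framework builds on, has a well-known collapsed form. With inducing inputs \mathbf{Z}, \mathbf{Q}_{NN} = \mathbf{K}_{NZ}\mathbf{K}_{ZZ}^{-1}\mathbf{K}_{ZN}, and Gaussian noise variance \sigma^2:
\[
\log p(\mathbf{y}) \;\ge\; \log \mathcal{N}\!\big(\mathbf{y} \,\big|\, \mathbf{0},\; \mathbf{Q}_{NN} + \sigma^2\mathbf{I}\big) \;-\; \frac{1}{2\sigma^2}\,\operatorname{tr}\!\big(\mathbf{K}_{NN} - \mathbf{Q}_{NN}\big).
\]
The trace term penalizes inducing inputs that explain the training function values poorly, which is what makes \mathbf{Z} a variational parameter rather than part of the model.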
Fast Variational Inference for Gaussian Process Models Through KL-Correction
TLDR: This work reviews a recently suggested variational approach for approximate inference in Gaussian process (GP) models and shows how convergence may be dramatically improved through the use of a positive correction term to the standard variational bound.
...