Corpus ID: 231632506

Mind the Gap when Conditioning Amortised Inference in Sequential Latent-Variable Models

@article{Bayer2021MindTG,
  title={Mind the Gap when Conditioning Amortised Inference in Sequential Latent-Variable Models},
  author={Justin Bayer and Maximilian Soelch and Atanas Mirchev and Baris Kayalibay and Patrick van der Smagt},
  journal={ArXiv},
  year={2021},
  volume={abs/2101.07046}
}
Amortised inference enables scalable learning of sequential latent-variable models (LVMs) with the evidence lower bound (ELBO). In this setting, variational posteriors are often only partially conditioned. While the true posteriors depend, e.g., on the entire sequence of observations, approximate posteriors are only informed by past observations. This mimics the Bayesian filter, a mixture of smoothing posteriors. Yet, we show that the ELBO objective forces partially-conditioned amortised…
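As a rough sketch of the conditioning gap described above (the notation $x_{1:T}$, $z_{1:T}$, $q_\phi$, $p_\theta$ is ours and is not taken verbatim from the paper), a filter-style amortised posterior factorises over time and conditions each factor only on past observations, whereas the true posterior of every latent state depends on the whole sequence:

% Sequential ELBO with a partially conditioned (filter-style) amortised posterior.
% Each factor of q sees only x_{1:t}; the true posterior p(z_t | z_{t-1}, x_{1:T})
% also depends on the future observations x_{t+1:T}.
\begin{aligned}
q_\phi(z_{1:T} \mid x_{1:T}) &= \prod_{t=1}^{T} q_\phi\!\left(z_t \mid z_{t-1}, x_{1:t}\right), \\
\log p_\theta(x_{1:T}) &\ge \mathbb{E}_{q_\phi}\!\left[\sum_{t=1}^{T} \log p_\theta\!\left(x_t \mid z_t\right) + \log p_\theta\!\left(z_t \mid z_{t-1}\right) - \log q_\phi\!\left(z_t \mid z_{t-1}, x_{1:t}\right)\right].
\end{aligned}

The slack in this bound is the KL divergence from $q_\phi(z_{1:T} \mid x_{1:T})$ to the smoothing posterior $p_\theta(z_{1:T} \mid x_{1:T})$, which in general cannot vanish when each factor of $q_\phi$ is denied access to the future observations $x_{t+1:T}$.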

Citations

Benchmarking Generative Latent Variable Models for Speech
TLDR
A speech benchmark of popular temporal LVMs is developed and it is found that the Clockwork VAE can outperform previous LVMs and reduce the gap to deterministic models by using a hierarchy of latent variables.
Cross Reconstruction Transformer for Self-Supervised Time Series Representation Learning
Unsupervised/self-supervised representation learning in time series is critical since labeled samples are usually scarce in real-world scenarios. Existing approaches mainly leverage the…
Iterative Bilinear Temporal-Spectral Fusion for Unsupervised Time-Series Representation Learning
TLDR
This paper proposes a unified framework, Bilinear Temporal-Spectral Fusion (BTSF), which first applies instance-level augmentation with a simple dropout on the entire time series to maximally capture long-term dependencies, and then devises an iterative bilinear temporal-spectral fusion to explicitly encode the affinities of abundant time-frequency pairs.
Linear Variational State Space Filtering
TLDR
L-VSSF is introduced: a new method for unsupervised learning, identification, and filtering of latent Markov state space models from raw pixels, instantiated explicitly with linear latent dynamics and Gaussian distribution parameterizations.
Latent Matters: Learning Deep State-Space Models
TLDR
The extended Kalman VAE (EKVAE) is introduced, which combines amortised variational inference with classic Bayesian filtering/smoothing to model dynamics more accurately than RNN-based DSSMs.

References

Showing 1-10 of 61 references
Black Box Variational Inference for State Space Models
TLDR
A structured Gaussian variational approximate posterior is proposed that carries the same intuition as the standard Kalman filter-smoother but permits us to use the same inference approach to approximate the posterior of much more general, nonlinear latent variable generative models.
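A plausible form of such a structured posterior (our sketch, not a formula quoted from the reference) is a joint Gaussian over the whole latent trajectory whose precision matrix is block tri-diagonal, mirroring the Markov structure the Kalman filter-smoother exploits:

% Structured Gaussian posterior over the full latent sequence; the block
% tri-diagonal precision keeps sampling and density evaluation O(T).
q_\phi(z_{1:T} \mid x_{1:T}) = \mathcal{N}\!\big(z_{1:T};\, \mu_\phi(x_{1:T}),\, \Sigma_\phi(x_{1:T})\big), \qquad \Sigma_\phi(x_{1:T})^{-1} \ \text{block tri-diagonal},

with the mean and precision blocks produced by neural networks of the observations, so that inference scales linearly in $T$ as in the classic filter-smoother recursions.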
Inference Suboptimality in Variational Autoencoders
TLDR
It is found that divergence from the true posterior is often due to imperfect recognition networks rather than the limited complexity of the approximating distribution, and that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference.
Variational Inference for Monte Carlo Objectives
TLDR
The first unbiased gradient estimator designed for importance-sampled objectives is developed; it is both simpler and more effective than the NVIL estimator proposed for the single-sample variational objective, and is competitive with currently used biased estimators.
Importance Weighted Autoencoders
TLDR
The importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE but which uses a strictly tighter log-likelihood lower bound derived from importance weighting, shows empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
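For orientation, the K-sample importance-weighted bound referred to here is conventionally written as follows (a standard statement of the bound, not text from this page):

% K-sample importance-weighted lower bound on the log-marginal likelihood.
\mathcal{L}_K(x) = \mathbb{E}_{z_1,\dots,z_K \sim q_\phi(z \mid x)}\!\left[\log \frac{1}{K} \sum_{k=1}^{K} \frac{p_\theta(x, z_k)}{q_\phi(z_k \mid x)}\right] \le \log p_\theta(x),

which recovers the ordinary ELBO for $K = 1$ and tightens monotonically as $K$ grows.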
Auto-Encoding Variational Bayes
TLDR
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
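The core trick this paper introduced, reparameterising the latent sample so that gradients flow through the sampling step, can be sketched in a few lines of NumPy; the linear "encoder" and "decoder" below are hypothetical stand-ins for neural networks, not the authors' architecture.

# Minimal sketch of the reparameterisation trick behind Auto-Encoding
# Variational Bayes, using NumPy only. The toy encoder/decoder are
# hypothetical fixed linear maps, standing in for neural networks.
import numpy as np

rng = np.random.default_rng(0)
d_x, d_z = 8, 2

# Hypothetical amortised encoder: x -> (mu, log_var) via fixed linear maps.
W_mu = rng.normal(size=(d_z, d_x)) * 0.1
W_lv = rng.normal(size=(d_z, d_x)) * 0.1
# Hypothetical decoder: z -> mean of a unit-variance Gaussian over x.
W_dec = rng.normal(size=(d_x, d_z)) * 0.1

def elbo_estimate(x):
    """Single-sample ELBO estimate with a standard-normal prior on z."""
    mu, log_var = W_mu @ x, W_lv @ x
    eps = rng.standard_normal(d_z)
    z = mu + np.exp(0.5 * log_var) * eps          # reparameterised sample
    x_mean = W_dec @ z
    log_px_given_z = -0.5 * np.sum((x - x_mean) ** 2 + np.log(2 * np.pi))
    # Closed-form KL(q(z|x) || N(0, I)).
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return log_px_given_z - kl

x = rng.normal(size=d_x)
print("ELBO estimate:", elbo_estimate(x))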
Two problems with variational expectation maximisation for time-series models
Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground, retaining distributional information about uncertainty in…
Debiasing Evidence Approximations: On Importance-weighted Autoencoders and Jackknife Variational Inference
TLDR
Jackknife variational inference (JVI) is developed, a family of bias-reduced estimators reducing the bias to $O(K^{-(m+1)})$ for any given $m$ in the importance-weighted autoencoder bounds.
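As a sketch of the idea, assuming the standard jackknife construction, the first-order debiased estimator combines the K-sample importance-weighted bound with its leave-one-out (K-1)-sample versions:

% First-order jackknife debiasing of the importance-weighted bound.
\hat{\mathcal{L}}^{\mathrm{JVI}}_{K,1} = K\,\hat{\mathcal{L}}_{K} - \frac{K-1}{K} \sum_{j=1}^{K} \hat{\mathcal{L}}_{K-1}^{(\setminus j)},

where $\hat{\mathcal{L}}_{K-1}^{(\setminus j)}$ is the bound computed from all samples except the $j$-th; this cancels the leading $O(K^{-1})$ bias term, and higher orders extend the same pattern.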
Filtering Variational Objectives
TLDR
A family of lower bounds defined by a particle filter's estimator of the marginal likelihood, the filtering variational objectives (FIVOs), are considered, which take the same arguments as the ELBO, but can exploit a model's sequential structure to form tighter bounds.
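A minimal numerical sketch of such an objective, assuming a toy one-dimensional linear-Gaussian state space model (our choice for illustration, not the paper's experimental setup): the FIVO-style quantity is the log of a bootstrap particle filter's marginal-likelihood estimate.

# Sketch of a FIVO-style estimate: the log of a bootstrap particle filter's
# marginal-likelihood estimator on a toy 1-D linear-Gaussian state space model.
import numpy as np

rng = np.random.default_rng(1)
T, K = 25, 64                      # sequence length, number of particles
a, q_std, r_std = 0.9, 0.5, 1.0    # transition coefficient, noise scales

# Simulate data from the toy state space model.
z_true = np.zeros(T)
x = np.zeros(T)
for t in range(T):
    prev = a * z_true[t - 1] if t > 0 else 0.0
    z_true[t] = prev + q_std * rng.standard_normal()
    x[t] = z_true[t] + r_std * rng.standard_normal()

def log_gauss(v, mean, std):
    return -0.5 * (((v - mean) / std) ** 2 + np.log(2 * np.pi * std ** 2))

def fivo_estimate(x):
    """Log of the particle filter's unbiased estimate of p(x_{1:T})."""
    particles = np.zeros(K)
    log_evidence = 0.0
    for t in range(T):
        # Bootstrap proposal: sample each particle from the transition prior.
        particles = a * particles + q_std * rng.standard_normal(K)
        log_w = log_gauss(x[t], particles, r_std)
        log_sum = np.logaddexp.reduce(log_w)
        log_evidence += log_sum - np.log(K)
        # Multinomial resampling with the normalised weights.
        w = np.exp(log_w - log_sum)
        w /= w.sum()
        particles = particles[rng.choice(K, size=K, p=w)]
    return log_evidence

print("FIVO estimate of log p(x_{1:T}):", fivo_estimate(x))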
Variational Tracking and Prediction with Generative Disentangled State-Space Models
TLDR
This work empirically shows that the Markovian state-space assumption enables faithful and much improved long-term prediction well beyond the training horizon and correctly decomposes frames into objects, even in the presence of occlusions.
Re-examination of the Role of Latent Variables in Sequence Modeling
TLDR
Over a diverse set of sequential data, including human speech, MIDI music, handwriting trajectory and frame-permuted speech, the results show that stochastic recurrent models fail to exhibit any practical advantage despite the claimed theoretical superiority.