Corpus ID: 14992224

Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data

@article{Karl2017DeepVB,
  title={Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data},
  author={Maximilian Karl and Maximilian S{\"o}lch and Justin Bayer and Patrick van der Smagt},
  journal={ArXiv},
  year={2017},
  volume={abs/1605.06432}
}
We introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning of latent Markovian state space models. […] This also enables realistic long-term prediction.
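As a rough sketch of the kind of stochastic transition DVBF backpropagates through (this is not the authors' code: the layer sizes, the simplified noise model conditioned on (z, u), and all names below are illustrative assumptions; in the paper the transition noise is inferred from the observations), one can picture a latent state space model whose transition noise is drawn with the reparameterisation trick so that gradients flow through time:

# Minimal, illustrative sketch of a DVBF-style stochastic latent transition (PyTorch).
import torch
import torch.nn as nn

class StochasticTransition(nn.Module):
    """z_next = f(z, u, w), with w sampled via the reparameterisation trick
    so that gradients flow through the transition across time steps."""
    def __init__(self, z_dim=4, u_dim=1, w_dim=4, hidden=64):
        super().__init__()
        # Noise model over (z, u); this simplification only keeps the sketch self-contained.
        self.noise_params = nn.Sequential(
            nn.Linear(z_dim + u_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * w_dim))            # mean and log-variance of w
        self.transition = nn.Sequential(
            nn.Linear(z_dim + u_dim + w_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, z_dim))                # next latent state

    def forward(self, z, u):
        mean, logvar = self.noise_params(torch.cat([z, u], dim=-1)).chunk(2, dim=-1)
        w = mean + torch.randn_like(mean) * (0.5 * logvar).exp()   # reparameterise
        z_next = self.transition(torch.cat([z, u, w], dim=-1))
        # KL of the Gaussian noise posterior against a standard normal prior
        kl = 0.5 * (mean.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)
        return z_next, kl

# Usage: roll the transition forward over a short control sequence.
model = StochasticTransition()
z = torch.zeros(8, 4)                                # batch of initial latent states
for t in range(10):
    z, kl = model(z, torch.randn(8, 1))              # random controls, for illustration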

Citations

Linear Variational State Space Filtering
TLDR
Linear Variational State Space Filtering (L-VSSF) is introduced, a new method for unsupervised learning, identification, and filtering of latent Markov state space models from raw pixels, together with an explicit instantiation of this model using linear latent dynamics and Gaussian distribution parameterizations.
Recursive Variational Bayesian Dual Estimation for Nonlinear Dynamics and Non-Gaussian Observations
TLDR
This work develops a flexible online learning framework for latent nonlinear state dynamics and filtered latent states, using stochastic gradient variational Bayes to jointly optimize the parameters of the nonlinear dynamics, the observation model, and the recognition model.
Latent Matters: Learning Deep State-Space Models
TLDR
The extended Kalman VAE (EKVAE) is introduced, which combines amortised variational inference with classic Bayesian filtering/smoothing to model dynamics more accurately than RNN-based DSSMs.
Physics-guided Deep Markov Models for Learning Nonlinear Dynamical Systems with Uncertainty
Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces
TLDR
This work proposes a new deep approach to Kalman filtering that can be learned directly end-to-end via backpropagation without additional approximations; it uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations, thereby avoiding hard-to-backpropagate, computationally heavy, and potentially unstable matrix inversions.
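As a hedged illustration of the factorized update described above (the identity observation model per feature and all names are my assumptions, not the paper's code), a diagonal covariance lets the Kalman gain and correction reduce to element-wise scalar operations, with no matrix inversion:

# Factorized (per-dimension) Kalman update; everything is element-wise.
import numpy as np

def factorized_kalman_update(mean, var, obs, obs_var):
    """Kalman update for a latent state with independent dimensions: the usual
    gain K = P H^T (H P H^T + R)^-1 collapses to a per-dimension division."""
    gain = var / (var + obs_var)                  # scalar gain per dimension
    new_mean = mean + gain * (obs - mean)         # innovation-weighted correction
    new_var = (1.0 - gain) * var                  # posterior variance shrinks
    return new_mean, new_var

# Usage: one update step in a 256-dimensional latent feature space.
mean, var = np.zeros(256), np.ones(256)           # prior over latent features
obs = np.random.randn(256)                        # observation embedded in the same space
mean, var = factorized_kalman_update(mean, var, obs, obs_var=0.1 * np.ones(256))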
The neural moving average model for scalable variational inference of state space models
TLDR
This work proposes an extension to state space models of time series data based on a novel generative model for latent temporal states: the neural moving average model, which permits a subsequence to be sampled without drawing from the entire distribution, enabling training iterations to use mini-batches of the time series at low computational cost.
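A toy linear moving-average sketch of why this construction allows mini-batching (the cited model replaces the linear combination with a neural network; the coefficients and names here are illustrative only): each latent state depends on a fixed window of i.i.d. noise variables, so a subsequence can be sampled without generating the rest of the series.

# MA(q) toy example: sampling a window of latent states needs only local noise.
import numpy as np

def sample_ma_window(start, length, theta=(0.5, 0.3, 0.2), seed=0):
    """Sample z_start, ..., z_{start+length-1} of a linear MA(q) process. Each z_t
    depends only on eps_{t-q}, ..., eps_t, so only length + q noise values are drawn."""
    q = len(theta)
    rng = np.random.default_rng(seed + start)     # window-local randomness
    eps = rng.standard_normal(length + q)         # noise covering the window plus lag
    z = np.empty(length)
    for i in range(length):
        t = i + q                                 # index into the local noise buffer
        z[i] = eps[t] + sum(theta[j] * eps[t - 1 - j] for j in range(q))
    return z

# Usage: draw a mini-batch window starting at an arbitrary position in the series.
window = sample_ma_window(start=1000, length=50)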
Self-Supervised Hybrid Inference in State-Space Models
TLDR
Despite the model's simplicity, it obtains competitive results on the chaotic Lorenz system compared to a fully supervised approach and outperforms a method based on variational inference.
Recurrent Neural Filters: Learning Independent Bayesian Filtering Steps for Time Series Prediction
TLDR
The Recurrent Neural Filter (RNF) is introduced, a novel recurrent autoencoder architecture that learns distinct representations for each Bayesian filtering step, captured by a series of encoders and decoders.
Self-Supervised Inference in State-Space Models
TLDR
This work performs approximate inference in state-space models with nonlinear state transitions, using a local linearity approximation parameterized by neural networks and a maximum likelihood objective that requires no supervision in the form of uncorrupted observations or ground-truth latent states.
Variational Structured Stochastic Network
TLDR
Variational Structured Stochastic Network (VSSN), a new method for modeling high dimensional structured data that can overcome intractable inference distributions via stochastic variational inference, is introduced.
...

References

Showing 1-10 of 29 references
Auto-Encoding Variational Bayes
TLDR
A stochastic variational inference and learning algorithm is introduced that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
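For reference, a minimal sketch of the reparameterisation trick at the heart of this estimator, with a Gaussian recognition network and a Bernoulli decoder (the architecture and dimensions are placeholder assumptions, not the paper's experiments):

# Minimal auto-encoding variational Bayes sketch (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Gaussian recognition network, Bernoulli decoder, and the reparameterisation
    trick that makes the ELBO differentiable with respect to the encoder."""
    def __init__(self, x_dim=784, z_dim=8, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()      # z = mu + sigma * eps
        recon = F.binary_cross_entropy_with_logits(
            self.dec(z), x, reduction='none').sum(-1)             # -log p(x | z)
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)
        return (recon + kl).mean()                                # negative ELBO

# Usage: one stochastic gradient step on a random mini-batch.
vae = TinyVAE()
loss = vae(torch.rand(32, 784))
loss.backward()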
Stochastic Backpropagation and Approximate Inference in Deep Generative Models
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning.
Learning Stochastic Recurrent Networks
TLDR
The proposed model is a generalisation of deterministic recurrent neural networks with latent variables, resulting in Stochastic Recurrent Networks (STORNs), and is evaluated on four polyphonic musical data sets and motion capture data.
Structured VAEs: Composing Probabilistic Graphical Models and Variational Autoencoders
TLDR
A new framework for unsupervised learning is developed that composes probabilistic graphical models with deep learning methods and combines their respective strengths to learn flexible feature models and bottom-up recognition networks.
An Unsupervised Ensemble Learning Method for Nonlinear Dynamic State-Space Models
TLDR
Experiments with chaotic data show that the new Bayesian ensemble learning method is able to blindly estimate the factors and the dynamic process that generated the data and clearly outperforms currently available nonlinear prediction techniques in this very difficult test problem.
Deep Kalman Filters
TLDR
A unified algorithm is introduced to efficiently learn a broad spectrum of Kalman filters; the work also investigates the efficacy of temporal generative models for counterfactual inference and introduces the "Healing MNIST" dataset, in which long-term structure, noise, and actions are applied to sequences of digits.
Composing graphical models with neural networks for structured representations and fast inference
TLDR
A general modeling and inference framework that composes probabilistic graphical models with deep learning methods and combines their respective strengths is proposed, giving a scalable algorithm that leverages stochastic variational inference, natural gradients, graphical model message passing, and the reparameterization trick.
Variational Learning for Switching State-Space Models
TLDR
A new statistical model for time series is introduced that iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each of these linear regimes; the results suggest that variational approximations are a viable method for inference and learning in switching state-space models.
Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images
TLDR
Embed to Control is introduced, a method for model learning and control of non-linear dynamical systems from raw pixel images that is derived directly from an optimal control formulation in latent space and exhibits strong performance on a variety of complex control problems.
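A hedged sketch of a locally linear latent transition of the kind this line of work builds on (shapes and names are illustrative assumptions, not the E2C implementation): a network predicts state-dependent matrices A(z), B(z) and an offset o(z), so the dynamics are linear around the current state and local optimal control machinery can be applied.

# Locally linear latent dynamics: z' ~= A(z) z + B(z) u + o(z)  (PyTorch).
import torch
import torch.nn as nn

class LocallyLinearDynamics(nn.Module):
    def __init__(self, z_dim=4, u_dim=1, hidden=64):
        super().__init__()
        self.z_dim, self.u_dim = z_dim, u_dim
        # One network emits all transition parameters as a flat vector.
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, z_dim * z_dim + z_dim * u_dim + z_dim))

    def forward(self, z, u):
        p = self.net(z)
        n = self.z_dim * self.z_dim
        A = p[..., :n].reshape(-1, self.z_dim, self.z_dim)         # state matrix A(z)
        B = p[..., n:n + self.z_dim * self.u_dim].reshape(-1, self.z_dim, self.u_dim)
        o = p[..., -self.z_dim:]                                   # offset o(z)
        return (A @ z.unsqueeze(-1)).squeeze(-1) + (B @ u.unsqueeze(-1)).squeeze(-1) + o

# Usage: propagate a batch of latent states under a control input.
dyn = LocallyLinearDynamics()
z_next = dyn(torch.randn(16, 4), torch.randn(16, 1))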
Learning Multilevel Distributed Representations for High-Dimensional Sequences
TLDR
A new family of non-linear sequence models is described that is substantially more powerful than hidden Markov models or linear dynamical systems, and its performance is demonstrated using synthetic video sequences of two balls bouncing in a box.
...