Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data
@article{Karl2017DeepVB,
  title   = {Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data},
  author  = {Maximilian Karl and Maximilian S{\"o}lch and Justin Bayer and Patrick van der Smagt},
  journal = {ArXiv},
  year    = {2017},
  volume  = {abs/1605.06432}
}
We introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning of latent Markovian state space models. Key result: this also enables realistic long-term prediction.
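In code, the core mechanism reads roughly as follows. This is a minimal, illustrative PyTorch sketch under our own naming; the module shapes, dimensions, and network choices are assumptions, not the paper's architecture. The observation influences the next state only through a reparameterized innovation variable, so the learned dynamics stay Markovian and can be rolled forward for long-term prediction.

```python
import torch
import torch.nn as nn

class DVBFTransition(nn.Module):
    """Minimal sketch of a DVBF-style transition step (illustrative sizes,
    not the paper's exact architecture).

    The observation influences the next state only through the stochastic
    innovation w_t, so z_{t+1} = f(z_t, u_t, w_t) stays Markovian, and the
    reparameterization trick lets gradients flow through the transition.
    """

    def __init__(self, z_dim=4, u_dim=1, w_dim=4, feat_dim=16, hidden=64):
        super().__init__()
        # recognition network: proposes innovation noise from the observation
        self.recognition = nn.Sequential(
            nn.Linear(z_dim + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * w_dim))
        # deterministic transition consuming state, control, and innovation
        self.dynamics = nn.Sequential(
            nn.Linear(z_dim + u_dim + w_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, z_dim))

    def forward(self, z_t, u_t, x_feat):
        # q(w_t | z_t, x_{t+1}): reparameterized Gaussian sample
        mu, log_var = self.recognition(torch.cat([z_t, x_feat], -1)).chunk(2, -1)
        w_t = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        # Markovian update: z_{t+1} depends only on (z_t, u_t, w_t)
        z_next = self.dynamics(torch.cat([z_t, u_t, w_t], -1))
        return z_next, mu, log_var

step = DVBFTransition()
z1, mu, log_var = step(torch.randn(8, 4), torch.randn(8, 1), torch.randn(8, 16))
```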
258 Citations
Linear Variational State Space Filtering
- Computer Science, ArXiv
- 2022
L-VSSF is introduced: a new method for unsupervised learning, identification, and filtering of latent Markov state space models from raw pixels, with an explicit instantiation using linear latent dynamics and Gaussian distribution parameterizations.
Recursive Variational Bayesian Dual Estimation for Nonlinear Dynamics and Non-Gaussian Observations
- Computer Science
- 2017
This work develops a flexible online learning framework for latent nonlinear state dynamics and filtered latent states, using stochastic gradient variational Bayes to jointly optimize the parameters of the nonlinear dynamics, the observation model, and the recognition model.
Latent Matters: Learning Deep State-Space Models
- Computer Science, NeurIPS
- 2021
The extended Kalman VAE (EKVAE) is introduced, which combines amortised variational inference with classic Bayesian filtering/smoothing to model dynamics more accurately than RNN-based DSSMs.
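For reference, the classic extended-Kalman measurement update that such hybrid models build on looks as follows. This is a minimal NumPy sketch of the filtering half only; the EKVAE's actual combination with amortized inference is not reproduced here.

```python
import numpy as np

def ekf_update(mu, P, y, h, H_jac, R):
    """Extended Kalman measurement update (classic; minimal sketch).

    Linearizes the observation model h around the prior mean via its
    Jacobian, then applies the standard Kalman correction.
    """
    H = H_jac(mu)                             # Jacobian of h at mu
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    mu_post = mu + K @ (y - h(mu))            # corrected mean
    P_post = (np.eye(len(mu)) - K @ H) @ P    # corrected covariance
    return mu_post, P_post

# toy example: observe the squared first state component
h = lambda x: np.array([x[0] ** 2])
H_jac = lambda x: np.array([[2 * x[0], 0.0]])
mu, P = ekf_update(np.array([1.0, 0.0]), np.eye(2),
                   np.array([1.2]), h, H_jac, 0.1 * np.eye(1))
```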
Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces
- Computer Science, ICML
- 2019
This work proposes a new deep approach to Kalman filtering that can be learned end-to-end via backpropagation without additional approximations. It uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations, avoiding hard-to-backpropagate, computationally heavy, and potentially unstable matrix inversions.
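The computational point is easy to see in code. Below is a minimal NumPy sketch of a per-dimension Kalman update, assuming a diagonal covariance and an identity observation of each latent dimension; this simplifies the RKN's actual factorization but shows why no matrix inversion is needed.

```python
import numpy as np

def scalar_kalman_update(mu, var, obs, obs_var):
    """Per-dimension Kalman update for a factorized (diagonal-covariance) state.

    With independent scalar dimensions, the gain K = var / (var + obs_var)
    is elementwise, so no matrix inversion (or its backprop) is required.
    """
    gain = var / (var + obs_var)          # elementwise Kalman gain
    mu_post = mu + gain * (obs - mu)      # posterior mean
    var_post = (1.0 - gain) * var         # posterior variance
    return mu_post, var_post

# 256-dimensional factorized latent state, updated with scalar ops only
mu, var = np.zeros(256), np.ones(256)
obs, obs_var = np.random.randn(256), 0.5 * np.ones(256)
mu, var = scalar_kalman_update(mu, var, obs, obs_var)
```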
Physics-guided Deep Markov Models for Learning Nonlinear Dynamical Systems with Uncertainty
- Computer Science, Mechanical Systems and Signal Processing
- 2022
The neural moving average model for scalable variational inference of state space models
- Computer Science, UAI
- 2021
This work proposes an extension to state space models of time series data based on a novel generative model for latent temporal states: the neural moving average model, which permits a subsequence to be sampled without drawing from the entire distribution, enabling training iterations to use mini-batches of the time series at low computational cost.
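The mini-batching property follows from the finite receptive field of a moving-average construction. Here is a toy NumPy sketch of that property alone; the map `f` below is an arbitrary stand-in for the paper's neural network.

```python
import numpy as np

Q = 5  # receptive field of the moving-average latent model (illustrative)

def sample_window(length, f, rng):
    """Sample a window of latent states from an MA(Q)-style model (sketch).

    Each z_t depends only on the Q+1 most recent i.i.d. noise draws, so a
    length-L subsequence needs just L + Q draws; the window's absolute
    position in the series never enters, and no full-sequence sampling
    is required.
    """
    eps = rng.standard_normal(length + Q)  # local noise only
    return np.array([f(eps[i:i + Q + 1]) for i in range(length)])

# a toy nonlinear map standing in for the paper's neural network
f = lambda w: np.tanh(w).sum()
z = sample_window(length=32, f=f, rng=np.random.default_rng(0))
```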
Self-Supervised Hybrid Inference in State-Space Models
- Computer Science, ArXiv
- 2021
Despite the model's simplicity, it obtains competitive results on the chaotic Lorenz system compared to a fully supervised approach and outperforms a method based on variational inference.
Recurrent Neural Filters: Learning Independent Bayesian Filtering Steps for Time Series Prediction
- Computer Science, 2020 International Joint Conference on Neural Networks (IJCNN)
- 2020
The Recurrent Neural Filter (RNF) is introduced: a novel recurrent autoencoder architecture that learns distinct representations for each Bayesian filtering step, captured by a series of encoders and decoders.
Variational Structured Stochastic Network
- Computer Science
- 2017
Variational Structured Stochastic Network (VSSN) is introduced: a new method for modeling high-dimensional structured data that handles intractable inference distributions via stochastic variational inference.
Iterative Inference Models
- Computer Science
- 2017
This work proposes iterative inference models, which learn how to optimize a variational lower bound by repeatedly encoding gradients; it demonstrates their inference optimization capabilities and shows that they outperform standard inference models on typical benchmark datasets.
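A minimal sketch of the idea, simplified to a unit-variance Gaussian posterior; the networks and sizes below are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

Z_DIM, X_DIM = 8, 32                   # illustrative sizes
decoder = nn.Linear(Z_DIM, X_DIM)      # toy generative mean for p(x|z)
refine = nn.Linear(2 * Z_DIM, Z_DIM)   # learned update from (mu, grad)

def iterative_inference(x, steps=5):
    """Sketch of an iterative inference model (simplified, unit-variance q).

    Rather than predicting q(z|x) in one feed-forward pass, an update
    network repeatedly encodes the ELBO gradient w.r.t. the variational
    mean and emits a correction -- learning how to optimize the bound.
    """
    mu = torch.zeros(x.shape[0], Z_DIM, requires_grad=True)
    for _ in range(steps):
        z = mu + torch.randn_like(mu)                    # reparameterized sample
        log_lik = -((decoder(z) - x) ** 2).sum()         # Gaussian log-likelihood
        kl = 0.5 * (mu ** 2).sum()                       # KL(q || N(0, I)), unit var
        grad, = torch.autograd.grad(log_lik - kl, mu, create_graph=True)
        mu = mu + refine(torch.cat([mu, grad], dim=-1))  # encoded-gradient update
    return mu

mu = iterative_inference(torch.randn(16, X_DIM))
```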
References
Showing 1–10 of 29 references
Auto-Encoding Variational Bayes
- Computer Science, ICLR
- 2014
A stochastic variational inference and learning algorithm is introduced that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
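Since several of the methods above, DVBF included, build on this estimator, a minimal sketch may be useful. Sizes and the linear encoder/decoder are toy assumptions, not the paper's experimental setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Linear(784, 2 * 16)  # encoder -> mean and log-variance (toy sizes)
dec = nn.Linear(16, 784)      # decoder -> Bernoulli logits

def elbo(x):
    """One SGVB estimate of the evidence lower bound (minimal sketch)."""
    mu, log_var = enc(x).chunk(2, dim=-1)
    # reparameterization trick: z = mu + sigma * eps keeps gradients flowing
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
    rec = -F.binary_cross_entropy_with_logits(dec(z), x, reduction="sum")
    kl = 0.5 * (mu ** 2 + log_var.exp() - 1 - log_var).sum()  # KL(q || N(0,I))
    return rec - kl  # maximize with any stochastic gradient optimizer

loss = -elbo(torch.rand(64, 784))  # e.g. a binarized-MNIST-shaped batch
```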
Stochastic Backpropagation and Approximate Inference in Deep Generative Models
- Computer Science, ICML
- 2014
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and…
Learning Stochastic Recurrent Networks
- Computer Science, NIPS
- 2014
The proposed model is a generalisation of deterministic recurrent neural networks with latent variables, resulting in Stochastic Recurrent Networks (STORNs), and is evaluated on four polyphonic musical data sets and motion capture data.
Structured VAEs: Composing Probabilistic Graphical Models and Variational Autoencoders
- Computer Science
- 2016
A new framework for unsupervised learning is developed that composes probabilistic graphical models with deep learning methods and combines their respective strengths to learn flexible feature models and bottom-up recognition networks.
An Unsupervised Ensemble Learning Method for Nonlinear Dynamic State-Space Models
- Computer Science, Engineering, Neural Computation
- 2002
Experiments with chaotic data show that the new Bayesian ensemble learning method is able to blindly estimate the factors and the dynamic process that generated the data and clearly outperforms currently available nonlinear prediction techniques in this very difficult test problem.
Deep Kalman Filters
- Computer Science, ArXiv
- 2015
A unified algorithm is introduced to efficiently learn a broad spectrum of Kalman filters. The work also investigates the efficacy of temporal generative models for counterfactual inference and introduces the "Healing MNIST" dataset, in which long-term structure, noise, and actions are applied to sequences of digits.
Composing graphical models with neural networks for structured representations and fast inference
- Computer Science, NIPS
- 2016
A general modeling and inference framework that composes probabilistic graphical models with deep learning methods and combines their respective strengths is proposed, giving a scalable algorithm that leverages stochastic variational inference, natural gradients, graphical model message passing, and the reparameterization trick.
Variational Learning for Switching State-Space Models
- Computer Science, Neural Computation
- 2000
A new statistical model for time series is introduced that iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each linear regime. The results suggest that variational approximations are a viable method for inference and learning in switching state-space models.
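A generative sketch of the model class in NumPy; the parameters below are toy values and the paper's variational learning procedure is not shown.

```python
import numpy as np

def sample_switching_ssm(T, A, Q, P, rng):
    """Generate from a switching state-space model (illustrative sketch).

    A discrete regime s_t follows a Markov chain with transition matrix P;
    each regime m has its own linear dynamics x_{t+1} = A[m] x_t + noise.
    """
    d = A[0].shape[0]
    x, s = np.zeros(d), 0
    xs, ss = [], []
    for _ in range(T):
        s = rng.choice(len(A), p=P[s])                        # switch regime
        x = A[s] @ x + rng.multivariate_normal(np.zeros(d), Q[s])
        xs.append(x)
        ss.append(s)
    return np.array(xs), np.array(ss)

# two regimes: slow decay vs. rotation (toy parameters)
A = [0.9 * np.eye(2), np.array([[0.0, -1.0], [1.0, 0.0]])]
Q = [0.01 * np.eye(2)] * 2
P = np.array([[0.95, 0.05], [0.10, 0.90]])
xs, ss = sample_switching_ssm(200, A, Q, P, np.random.default_rng(0))
```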
Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images
- Computer Science, Mathematics, NIPS
- 2015
Embed to Control is introduced, a method for model learning and control of non-linear dynamical systems from raw pixel images that is derived directly from an optimal control formulation in latent space and exhibits strong performance on a variety of complex control problems.
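The locally linear transition at the heart of the method is easy to sketch. This is an illustrative PyTorch version; the sizes and network shapes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LocallyLinearTransition(nn.Module):
    """Sketch of an E2C-style locally linear latent transition (illustrative).

    A network predicts state-dependent matrices (A, B) and an offset o so
    that z_{t+1} ~= A(z_t) z_t + B(z_t) u_t + o(z_t): globally nonlinear,
    but linear around each z_t, which suits optimal-control formulations.
    """

    def __init__(self, z_dim=4, u_dim=1, hidden=64):
        super().__init__()
        self.z_dim, self.u_dim = z_dim, u_dim
        n_out = z_dim * z_dim + z_dim * u_dim + z_dim
        self.net = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_out))

    def forward(self, z, u):
        p = self.net(z)
        zz, zu = self.z_dim ** 2, self.z_dim * self.u_dim
        A = p[:, :zz].view(-1, self.z_dim, self.z_dim)
        B = p[:, zz:zz + zu].view(-1, self.z_dim, self.u_dim)
        o = p[:, zz + zu:]
        return (A @ z.unsqueeze(-1) + B @ u.unsqueeze(-1)).squeeze(-1) + o

z_next = LocallyLinearTransition()(torch.randn(8, 4), torch.randn(8, 1))
```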
Learning Multilevel Distributed Representations for High-Dimensional Sequences
- Computer Science, AISTATS
- 2007
A new family of non-linear sequence models that are substantially more powerful than hidden Markov models or linear dynamical systems are described, and their performance is demonstrated using synthetic video sequences of two balls bouncing in a box.