# Deep Kalman Filters

```bibtex
@article{Krishnan2015DeepKF,
  title   = {Deep Kalman Filters},
  author  = {Rahul G. Krishnan and Uri Shalit and David A. Sontag},
  journal = {ArXiv},
  volume  = {abs/1511.05121},
  year    = {2015}
}
```

Kalman Filters are one of the most influential models of time-varying phenomena. They admit an intuitive probabilistic interpretation, have a simple functional form, and enjoy widespread adoption in a variety of disciplines. Motivated by recent variational methods for learning deep generative models, we introduce a unified algorithm to efficiently learn a broad spectrum of Kalman filters. Of particular interest is the use of temporal generative models for counterfactual inference. We…
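For context, the classical linear-Gaussian Kalman filter that the paper generalizes admits closed-form predict/update recursions. Below is a minimal sketch; the names `A`, `C`, `Q`, `R` (transition, emission, and noise covariance matrices) are illustrative conventions, not notation taken from the paper:

```python
import numpy as np

def kalman_step(mu, Sigma, y, A, C, Q, R):
    """One predict/update cycle of a linear-Gaussian Kalman filter.

    Model:  z_t = A z_{t-1} + noise(Q)   (transition)
            y_t = C z_t     + noise(R)   (emission)
    """
    # Predict: propagate the previous posterior through the linear dynamics.
    mu_pred = A @ mu
    Sigma_pred = A @ Sigma @ A.T + Q
    # Update: correct the prediction with the new observation y.
    S = C @ Sigma_pred @ C.T + R               # innovation covariance
    K = Sigma_pred @ C.T @ np.linalg.inv(S)    # Kalman gain
    mu_new = mu_pred + K @ (y - C @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_pred
    return mu_new, Sigma_new
```

A deep Kalman filter replaces the linear maps `A` and `C` with neural networks; these closed-form updates then no longer apply, which is why the paper turns to variational inference.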

## 237 Citations

Black Box Variational Inference for State Space Models

- Computer Science
- 2016

A structured Gaussian variational approximate posterior is proposed that carries the same intuition as the standard Kalman filter-smoother but permits us to use the same inference approach to approximate the posterior of much more general, nonlinear latent variable generative models.

Structured Inference Networks for Nonlinear State Space Models

- Computer Science, AAAI
- 2017

A unified algorithm is introduced to efficiently learn a broad class of linear and non-linear state space models, including variants where the emission and transition distributions are modeled by deep neural networks.

Related Work Our model can be viewed as a Deep Kalman Filter

- Computer Science
- 2019

Leveraging Bayesian inference, Variational Autoencoders, and Concrete relaxations, it is shown how to learn a richer and more meaningful state space, e.g. encoding joint constraints and collisions with walls in a maze, from partial and high-dimensional observations.

Self-Supervised Hybrid Inference in State-Space Models

- Computer Science, ArXiv
- 2021

Despite the model’s simplicity, it obtains competitive results on the chaotic Lorenz system compared to a fully supervised approach and outperforms a method based on variational inference.

Long Short-Term Memory Kalman Filters: Recurrent Neural Estimators for Pose Regularization

- Computer Science, 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017

This work proposes to learn rich, dynamic representations of the motion and noise models from data using long short-term memory, which allows representations that depend on all previous observations and all previous states.

A Novel Variational Family for Hidden Nonlinear Markov Models

- Computer Science, ArXiv
- 2018

A novel variational inference framework for the explicit modeling of time series, Variational Inference for Nonlinear Dynamics (VIND), is proposed that is able to uncover nonlinear observation and transition functions from sequential data.

Estimating Nonlinear Dynamics with the ConvNet Smoother

- Computer Science
- 2017

A new dynamical smoothing method that exploits the remarkable capabilities of convolutional neural networks to approximate complex non-linear functions and can be applied in situations in which either the latent dynamical model or the observation model cannot be easily expressed in closed form.

Inference from Stationary Time Sequences via Learned Factor Graphs

- Computer Science, ArXiv
- 2020

An inference algorithm based on learned stationary factor graphs, referred to as StaSPNet, is presented, which learns to implement the sum-product scheme from labeled data and can be applied to sequences of different lengths.

Recurrent Neural Filters: Learning Independent Bayesian Filtering Steps for Time Series Prediction

- Computer Science, 2020 International Joint Conference on Neural Networks (IJCNN)
- 2020

The Recurrent Neural Filter (RNF) is introduced: a novel recurrent autoencoder architecture that learns distinct representations for each Bayesian filtering step, captured by a series of encoders and decoders.

Variational Joint Filtering

- Computer Science
- 2017

This work develops a flexible online learning framework for latent nonlinear state dynamics and filtered latent states using the stochastic gradient variational Bayes approach, jointly optimizing the parameters of the nonlinear dynamical system, the observation model, and the black-box recognition model.

## References

Showing 1–10 of 43 references.

Stochastic Backpropagation and Approximate Inference in Deep Generative Models

- Computer Science, ICML
- 2014

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and…

Auto-Encoding Variational Bayes

- Computer Science, ICLR
- 2014

A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
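The central device of that paper, the reparameterization trick with a closed-form KL term, can be sketched in a few lines. This is a hedged illustration with NumPy standing in for an autodiff framework; the function names are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, I),
    so that gradients can flow through mu and sigma rather than the sampler."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over dimensions.
    This is the regularization term of the variational objective (ELBO)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

In a VAE-style model, `mu` and `log_var` would be produced by a recognition network, and the KL term is combined with a reconstruction likelihood to form the ELBO.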

Variational Inference with Normalizing Flows

- Computer Science, Mathematics, ICML
- 2015

It is demonstrated that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.

An Unsupervised Ensemble Learning Method for Nonlinear Dynamic State-Space Models

- Computer Science, Engineering, Neural Computation
- 2002

Experiments with chaotic data show that the new Bayesian ensemble learning method is able to blindly estimate the factors and the dynamic process that generated the data and clearly outperforms currently available nonlinear prediction techniques in this very difficult test problem.

Variational Bayesian learning of nonlinear hidden state-space models for model predictive control

- Engineering, Neurocomputing
- 2009

Deep Temporal Sigmoid Belief Networks for Sequence Modeling

- Computer Science, NIPS
- 2015

Deep dynamic generative models are developed to learn sequential dependencies in time-series data. The multi-layered model is designed by constructing a hierarchy of temporal sigmoid belief networks…

The unscented Kalman filter for nonlinear estimation

- Mathematics, Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373)
- 2000

This paper points out the flaws in using the extended Kalman filter (EKF) and introduces an improvement, the unscented Kalman filter (UKF), proposed by Julier and Uhlmann (1997). A central and vital…

Learning Stochastic Recurrent Networks

- Computer Science, NIPS
- 2014

The proposed model is a generalisation of deterministic recurrent neural networks with latent variables, resulting in Stochastic Recurrent Networks (STORNs), and is evaluated on four polyphonic musical data sets and motion capture data.

Adam: A Method for Stochastic Optimization

- Computer Science, ICLR
- 2015

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
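The Adam update can be sketched in a few lines. This is a simplified single-tensor version for illustration; the hyperparameter defaults follow the paper, but the function signature is ours:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update (Kingma & Ba, 2015) for a parameter tensor theta.

    t is the 1-indexed step count, used for bias correction of the
    exponentially-decayed moment estimates m (mean) and v (uncentered variance).
    """
    m = b1 * m + (1 - b1) * grad          # first-moment estimate
    v = b2 * v + (1 - b2) * grad**2       # second-moment estimate
    m_hat = m / (1 - b1**t)               # bias-corrected moments
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Because the step is scaled by `sqrt(v_hat)`, effective step sizes are roughly bounded by `lr`, which is part of why Adam is a common default for training the variational models surveyed above.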

Density Estimation by Dual Ascent of the Log-Likelihood

- Computer Science
- 2010

A methodology is developed to assign, from an observed sample, a joint-probability distribution to a set of continuous variables, by mapping the original variables onto a jointly-Gaussian set.