Corpus ID: 15578083

Deep Kalman Filters

@article{Krishnan2015DeepKF,
  title={Deep Kalman Filters},
  author={Rahul G. Krishnan and Uri Shalit and David A. Sontag},
  journal={ArXiv},
  year={2015},
  volume={abs/1511.05121}
}
Kalman Filters are one of the most influential models of time-varying phenomena. They admit an intuitive probabilistic interpretation, have a simple functional form, and enjoy widespread adoption in a variety of disciplines. Motivated by recent variational methods for learning deep generative models, we introduce a unified algorithm to efficiently learn a broad spectrum of Kalman filters. Of particular interest is the use of temporal generative models for counterfactual inference. We… 
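To make the model family concrete, below is a minimal sketch of the kind of generative process the abstract describes: a Gaussian latent state whose transition and emission means come from small neural networks. All dimensions, weight initialisations, and the fixed noise scale are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    """A tiny two-layer network standing in for the learned transition/emission."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Illustrative sizes and randomly initialised (not learned) weights.
z_dim, x_dim, h_dim, T = 2, 3, 8, 10
Wt1, bt1 = rng.normal(size=(z_dim, h_dim)), np.zeros(h_dim)
Wt2, bt2 = rng.normal(size=(h_dim, z_dim)), np.zeros(z_dim)
We1, be1 = rng.normal(size=(z_dim, h_dim)), np.zeros(h_dim)
We2, be2 = rng.normal(size=(h_dim, x_dim)), np.zeros(x_dim)

# Ancestral sampling: z_t ~ N(f(z_{t-1}), s^2 I), x_t ~ N(g(z_t), s^2 I).
z = rng.normal(size=z_dim)  # z_1 ~ N(0, I)
xs = []
for t in range(T):
    x_mean = mlp(z, We1, be1, We2, be2)   # emission mean g(z_t)
    xs.append(x_mean + 0.1 * rng.normal(size=x_dim))
    z_mean = mlp(z, Wt1, bt1, Wt2, bt2)   # transition mean f(z_t)
    z = z_mean + 0.1 * rng.normal(size=z_dim)

print(np.stack(xs).shape)  # (T, x_dim): one sampled observation sequence
```

If the transition and emission functions were linear, this would reduce to the classical linear-Gaussian state space model underlying the Kalman filter.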

Citations

Black Box Variational Inference for State Space Models
A structured Gaussian variational approximate posterior is proposed that carries the same intuition as the standard Kalman filter-smoother but permits us to use the same inference approach to approximate the posterior of much more general, nonlinear latent variable generative models.
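For context, the filter-smoother recursion that posterior mirrors operates on a linear-Gaussian model; here is a minimal predict/update step of the classical Kalman filter, with toy constant-velocity matrices chosen purely for illustration:

```python
import numpy as np

def kf_step(mu, P, y, A, C, Q, R):
    """One Kalman filter step: predict through dynamics, update on observation y."""
    mu_pred = A @ mu                      # predicted state mean
    P_pred = A @ P @ A.T + Q              # predicted state covariance
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    mu_new = mu_pred + K @ (y - C @ mu_pred)
    P_new = (np.eye(len(mu)) - K @ C) @ P_pred
    return mu_new, P_new

# Toy 2D constant-velocity model observing position only.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2) * 0.01, np.eye(1) * 0.1
mu, P = np.zeros(2), np.eye(2)
mu, P = kf_step(mu, P, y=np.array([0.8]), A=A, C=C, Q=Q, R=R)
print(mu, P)
```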
Structured Inference Networks for Nonlinear State Space Models
A unified algorithm is introduced to efficiently learn a broad class of linear and non-linear state space models, including variants where the emission and transition distributions are modeled by deep neural networks.
Related Work Our model can be viewed as a Deep Kalman Filter
Leveraging Bayesian inference, variational autoencoders, and Concrete relaxations, it is shown how to learn a richer and more meaningful state space, e.g. encoding joint constraints and collisions with walls in a maze, from partial and high-dimensional observations.
Self-Supervised Hybrid Inference in State-Space Models
Despite the model’s simplicity, it obtains competitive results on the chaotic Lorenz system compared to a fully supervised approach and outperforms a method based on variational inference.
Long Short-Term Memory Kalman Filters: Recurrent Neural Estimators for Pose Regularization
This work proposes to learn rich, dynamic representations of the motion and noise models from data using long short-term memory, which allows representations that depend on all previous observations and all previous states.
A Novel Variational Family for Hidden Nonlinear Markov Models
A novel variational inference framework for the explicit modeling of time series, Variational Inference for Nonlinear Dynamics (VIND), is proposed that is able to uncover nonlinear observation and transition functions from sequential data.
Estimating Nonlinear Dynamics with the ConvNet Smoother
A new dynamical smoothing method that exploits the remarkable capabilities of convolutional neural networks to approximate complex non-linear functions and can be applied in situations in which either the latent dynamical model or the observation model cannot be easily expressed in closed form.
Inference from Stationary Time Sequences via Learned Factor Graphs
An inference algorithm based on learned stationary factor graphs, referred to as StaSPNet, is presented; it learns to implement the sum-product scheme from labeled data and can be applied to sequences of different lengths.
Recurrent Neural Filters: Learning Independent Bayesian Filtering Steps for Time Series Prediction
The Recurrent Neural Filter (RNF) is introduced: a novel recurrent autoencoder architecture that learns distinct representations for each Bayesian filtering step, captured by a series of encoders and decoders.
Variational Joint Filtering
This work develops a flexible online learning framework for latent nonlinear state dynamics and filtered latent states, using the stochastic gradient variational Bayes approach to jointly optimize the parameters of the nonlinear dynamical system, the observation model, and the black-box recognition model.

References

Showing 1–10 of 43 references
Stochastic Backpropagation and Approximate Inference in Deep Generative Models
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning.
Auto-Encoding Variational Bayes
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
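As a rough illustration of that algorithm's objective, the sketch below estimates a single-datapoint evidence lower bound (ELBO) with the Gaussian reparameterisation trick. The linear encoder and decoder are stand-ins for neural networks, and the unit-variance likelihood is an assumption made for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def elbo_sample(x, enc_mu, enc_logvar, decode):
    """One-sample Monte Carlo estimate of the ELBO for a single datapoint."""
    mu, logvar = enc_mu(x), enc_logvar(x)
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps           # reparameterised sample
    x_mean = decode(z)
    # Gaussian log-likelihood with unit variance (illustrative choice).
    log_px_z = -0.5 * np.sum((x - x_mean) ** 2 + np.log(2 * np.pi))
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ) in closed form.
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return log_px_z - kl

# Toy linear encoder/decoder standing in for neural networks.
A = rng.normal(size=(4, 2))
x = rng.normal(size=4)
print(elbo_sample(x,
                  enc_mu=lambda x: A.T @ x,
                  enc_logvar=lambda x: np.zeros(2),
                  decode=lambda z: A @ z))
```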
Variational Inference with Normalizing Flows
It is demonstrated that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.
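A hedged sketch of the basic building block from that line of work: a planar flow transforms a base sample invertibly and tracks the log-determinant of the Jacobian, so the transformed density stays tractable under the change-of-variables formula. The parameter values here are arbitrary, and the invertibility constraint on w and u is omitted for brevity.

```python
import numpy as np

def planar_flow(z, w, b, u):
    """One planar-flow step f(z) = z + u * tanh(w.z + b), plus its log|det J|."""
    a = np.tanh(z @ w + b)
    f_z = z + u * a
    # d/dz tanh(w.z + b) = (1 - tanh^2) * w, so det J = 1 + u . psi(z).
    # (Invertibility requires w.u >= -1; not enforced in this sketch.)
    psi = (1.0 - a ** 2) * w
    log_det = np.log(np.abs(1.0 + psi @ u))
    return f_z, log_det

rng = np.random.default_rng(2)
z0 = rng.normal(size=2)                       # base sample from N(0, I)
w, b, u = np.array([1.0, -0.5]), 0.3, np.array([0.5, 0.2])
z1, log_det = planar_flow(z0, w, b, u)
# Change of variables: log q1(z1) = log q0(z0) - log|det J|.
log_q0 = -0.5 * (z0 @ z0) - np.log(2 * np.pi)
print(z1, log_q0 - log_det)
```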
An Unsupervised Ensemble Learning Method for Nonlinear Dynamic State-Space Models
Experiments with chaotic data show that the new Bayesian ensemble learning method is able to blindly estimate the factors and the dynamic process that generated the data and clearly outperforms currently available nonlinear prediction techniques in this very difficult test problem.
Deep Temporal Sigmoid Belief Networks for Sequence Modeling
Deep dynamic generative models are developed to learn sequential dependencies in time-series data. The multi-layered model is designed by constructing a hierarchy of temporal sigmoid belief networks (TSBNs).
The unscented Kalman filter for nonlinear estimation
E. Wan, R. Van Der Merwe. Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373), 2000.
This paper points out the flaws in using the extended Kalman filter (EKF) and introduces an improvement, the unscented Kalman filter (UKF), proposed by Julier and Uhlmann (1997). A central and vital operation performed in the Kalman filter is the propagation of a Gaussian random variable (GRV) through the system dynamics.
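To make the contrast with EKF linearisation concrete, here is a sketch of the unscented transform at the core of the UKF: it propagates 2n+1 deterministically chosen sigma points through the nonlinearity and recomputes the mean and covariance from the pushed-through points. The kappa weighting below is one common parameterisation, not necessarily the paper's.

```python
import numpy as np

def unscented_transform(mu, P, f, kappa=1.0):
    """Propagate N(mu, P) through a nonlinearity f via 2n+1 sigma points."""
    n = mu.shape[0]
    S = np.linalg.cholesky((n + kappa) * P)      # scaled matrix square root
    sigma = np.vstack([mu, mu + S.T, mu - S.T])  # (2n+1, n) sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)                   # centre-point weight
    Y = np.array([f(s) for s in sigma])          # pushed-through points
    mean = w @ Y
    cov = (w[:, None] * (Y - mean)).T @ (Y - mean)
    return mean, cov

# Example: a nonlinear map where a first-order EKF linearisation drops curvature.
mu, P = np.array([1.0, 0.0]), np.eye(2) * 0.2
f = lambda x: np.array([np.sin(x[0]), x[0] * x[1]])
print(unscented_transform(mu, P, f))
```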
Learning Stochastic Recurrent Networks
The proposed model is a generalisation of deterministic recurrent neural networks with latent variables, resulting in Stochastic Recurrent Networks (STORNs), and is evaluated on four polyphonic musical data sets and motion capture data.
Adam: A Method for Stochastic Optimization
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
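The update rule it describes is compact enough to write out; below is a minimal sketch of Adam on a toy quadratic, using the default hyperparameters reported in the paper.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: biased moment estimates, bias correction, then the step."""
    m = b1 * m + (1 - b1) * grad              # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2         # second-moment estimate
    m_hat = m / (1 - b1 ** t)                 # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(theta) = ||theta||^2 / 2, whose gradient is theta itself.
theta = np.array([1.0, -2.0])
m = v = np.zeros_like(theta)
for t in range(1, 1001):
    theta, m, v = adam_step(theta, grad=theta, m=m, v=v, t=t)
print(theta)  # approaches the minimiser at the origin
```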
Density Estimation by Dual Ascent of the Log-Likelihood
A methodology is developed to assign, from an observed sample, a joint-probability distribution to a set of continuous variables, by mapping the original variables onto a jointly-Gaussian set.