Corpus ID: 86625937

Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces

@article{Becker2019RecurrentKN,
  title={Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces},
  author={Philipp Becker and Harit Pandya and Gregor H. W. Gebhardt and Cheng Zhao and C. James Taylor and Gerhard Neumann},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.07357}
}
In order to integrate uncertainty estimates into deep time-series modelling, Kalman filters (KFs) have been combined with deep learning models; however, such approaches typically rely on approximate inference techniques such as variational inference, which makes learning more complex and often less scalable due to approximation errors. Key Method: Our approach uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations and thus avoids hard to…
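To illustrate the key idea of a factorized latent state, here is a minimal sketch assuming a fully diagonal Gaussian latent state and an identity observation model; the names are illustrative and this is not the paper's exact parametrization. With these assumptions, the Kalman observation update reduces to elementwise scalar operations with no matrix inversion:

import numpy as np

def factorized_kalman_update(mean, var, obs_mean, obs_var):
    # Elementwise Kalman observation update for a factorized Gaussian
    # latent state. All arguments have shape (d,): prior mean/variance
    # plus an observed latent mean and its observation variance.
    gain = var / (var + obs_var)           # one scalar gain per dimension
    new_mean = mean + gain * (obs_mean - mean)
    new_var = (1.0 - gain) * var           # posterior variance, elementwise
    return new_mean, new_var

# Example with a 5-dimensional factorized latent state.
mu, sigma = np.zeros(5), np.ones(5)
z, r = np.array([0.5, -0.2, 0.1, 0.0, 0.3]), np.full(5, 0.25)
print(factorized_kalman_update(mu, sigma, z, r))

Because every operation is elementwise, the update costs O(d) rather than the O(d^3) of a general covariance update, which is what makes a high-dimensional latent state affordable.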

Citations

Self-Supervised Hybrid Inference in State-Space Models
TLDR
Despite the model’s simplicity, it obtains competitive results on the chaotic Lorenz system compared to a fully supervised approach and outperforms a method based on variational inference.
Unsupervised Learned Kalman Filtering
TLDR
It is numerically demonstrated that when the noise statistics are unknown, unsupervised KalmanNet achieves performance similar to that of KalmanNet trained with supervised learning.
Uncertainty in Data-Driven Kalman Filtering for Partially Known State-Space Models
TLDR
It is demonstrated that when the system dynamics are known, KalmanNet, which learns its mapping from data without access to the statistics, provides uncertainty estimates similar to those of the KF, and that in the presence of evolution-model mismatch it provides a more accurate error estimate.
Action-Conditional Recurrent Kalman Networks For Forward and Inverse Dynamics Learning
TLDR
Two architectures are presented, one for forward model learning and one for inverse model learning, which significantly outperform existing model learning frameworks as well as analytical models in terms of prediction performance on a variety of real robot dynamics models.
Switching Recurrent Kalman Networks
TLDR
This work proposes the Switching Recurrent Kalman Network (SRKN), a scalable and interpretable deep state-space model for nonlinear and multimodal time-series data that switches among several Kalman filters, each modelling a different aspect of the dynamics in a factorized latent state.
Deep Switching Auto-Regressive Factorization: Application to Time Series Forecasting
We introduce deep switching auto-regressive factorization (DSARF), a deep generative model for spatio-temporal data with the capability to unravel recurring patterns in the data and perform robust predictions.
Deep Measurement Updates for Bayes Filters
TLDR
This work proposes the novel approach Deep Measurement Update (DMU) as a general update rule for a wide range of systems and demonstrates how the DMU models can be trained efficiently to be sensitive to condition variables without having to rely on a stochastic information bottleneck.
A Dynamic Stream Weight Backprop Kalman Filter for Audiovisual Speaker Tracking
TLDR
A deep neural-network-based implementation of the Kalman filter with dynamic stream weights is presented, whose parameters can be learned via standard backpropagation; it shows performance comparable to state-of-the-art recurrent neural networks, with the additional advantages of requiring fewer parameters and providing explicit uncertainty information.
Deep Variational Luenberger-type Observer for Stochastic Video Prediction
TLDR
This work builds upon a variational encoder, which transforms the input video into a latent feature space, and a Luenberger-type observer, which captures the dynamic evolution of the latent features; this enables the decomposition of videos into static features and dynamics in an unsupervised manner.
Hidden Parameter Recurrent State Space Models for Changing Dynamics Scenarios
TLDR
This work introduces the Hidden Parameter Recurrent State Space Models (HiP-RSSMs), a framework that parametrizes a family of related dynamical systems with a low-dimensional set of latent factors and outperforms RSSMs and competing multi-task models on several challenging robotic benchmarks, both on real-world systems and in simulation.

References

Showing 1-10 of 25 references
Black Box Variational Inference for State Space Models
TLDR
A structured Gaussian variational approximate posterior is proposed that carries the same intuition as the standard Kalman filter-smoother but permits us to use the same inference approach to approximate the posterior of much more general, nonlinear latent variable generative models.
Structured Inference Networks for Nonlinear State Space Models
TLDR
A unified algorithm is introduced to efficiently learn a broad class of linear and non-linear state space models, including variants where the emission and transition distributions are modeled by deep neural networks.
Backprop KF: Learning Discriminative Deterministic State Estimators
TLDR
This work presents an alternative approach where the parameters of the latent state distribution are directly optimized as a deterministic computation graph, resulting in a simple and effective gradient descent algorithm for training discriminative state estimators.
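As a hedged illustration of this idea (a toy one-dimensional filter, not the paper's full architecture; the variable names and data are invented for the example), a Kalman filter can be unrolled as a deterministic, differentiable computation graph and its log noise parameters fit by gradient descent on the one-step prediction error:

import torch

torch.manual_seed(0)
log_q = torch.zeros((), requires_grad=True)   # log process-noise variance
log_r = torch.zeros((), requires_grad=True)   # log observation-noise variance
opt = torch.optim.Adam([log_q, log_r], lr=0.05)

# Toy observations: a noisy sine wave.
ys = torch.sin(torch.linspace(0, 6, 100)) + 0.3 * torch.randn(100)

for step in range(100):
    mu, var = torch.zeros(()), torch.ones(())
    loss = torch.zeros(())
    for y in ys:
        var = var + log_q.exp()             # time update (random-walk dynamics)
        loss = loss + (mu - y) ** 2         # score the one-step prediction
        gain = var / (var + log_r.exp())    # measurement update, scalar gain
        mu = mu + gain * (y - mu)
        var = (1.0 - gain) * var
    opt.zero_grad()
    loss.backward()
    opt.step()

Everything in the loop is a deterministic, differentiable operation, so standard backpropagation trains the estimator without any sampling or variational approximation.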
From Pixels to Torques: Policy Learning with Deep Dynamical Models
TLDR
This paper introduces a data-efficient, model-based reinforcement learning algorithm that learns a closed-loop control policy from pixel information only, and facilitates fully autonomous learning from pixels to torques.
Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data
TLDR
Deep Variational Bayes Filters is introduced, a new method for unsupervised learning and identification of latent Markovian state space models that overcomes intractable inference distributions via variational inference and enables realistic long-term prediction.
The Kernel Kalman Rule - Efficient Nonparametric Inference with Recursive Least Squares
TLDR
The kernel Kalman rule (KKR) is presented as an alternative to the KBR, and it is shown on a nonlinear state estimation task with high-dimensional observations that the approach provides significantly improved estimation accuracy while significantly decreasing computational demands.
Probabilistic Recurrent State-Space Models
TLDR
This work proposes a novel model formulation and a scalable training algorithm based on doubly stochastic variational inference and Gaussian processes that allows one to fully capture the latent state temporal correlations in state-space models.
Auto-Encoding Variational Bayes
TLDR
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
Deep State Space Models for Time Series Forecasting
TLDR
A novel approach to probabilistic time series forecasting that combines state space models with deep learning by parametrizing a per-time-series linear state space model with a jointly-learned recurrent neural network, which compares favorably to the state-of-the-art.
Long Short-Term Memory
TLDR
A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
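For reference, here is a minimal sketch of a single LSTM cell step (the common variant with a forget gate; names and shapes are illustrative). The additive update of the cell state c is the "constant error carousel" that lets gradients flow across long time lags:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    # One LSTM step. W has shape (4*d, d_in + d), b has shape (4*d,),
    # where d is the hidden size and d_in the input size.
    d = h.shape[0]
    z = W @ np.concatenate([x, h]) + b
    i = sigmoid(z[:d])            # input gate
    f = sigmoid(z[d:2*d])         # forget gate
    o = sigmoid(z[2*d:3*d])       # output gate
    g = np.tanh(z[3*d:])          # candidate cell values
    c_new = f * c + i * g         # additive update: the error carousel
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Example usage with random weights.
d_in, d = 3, 4
rng = np.random.default_rng(0)
W, b = 0.1 * rng.standard_normal((4 * d, d_in + d)), np.zeros(4 * d)
h, c = lstm_step(rng.standard_normal(d_in), np.zeros(d), np.zeros(d), W, b)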