• Corpus ID: 239016616

Unsupervised Learned Kalman Filtering

Guy Revach, Nir Shlezinger, Timur Locher, Xiaoyong Ni, Ruud van Sloun, Yonina C. Eldar
In this paper we adapt KalmanNet, a recently proposed deep neural network (DNN)-aided system whose architecture follows the operation of the model-based Kalman filter (KF), to learn its mapping in an unsupervised manner, i.e., without requiring ground-truth states. The unsupervised adaptation exploits the hybrid model-based/data-driven architecture of KalmanNet, which internally predicts the next observation as the KF does. These internal features are then used to…
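A minimal sketch of this idea: the training loss can be built from the error in predicting the next observation, which is available without any ground-truth states. The linear model, the variable names (F, H, K), and the fixed scalar gain standing in for the learned network below are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def unsupervised_loss(ys, F, H, K):
    """Mean squared next-observation prediction error of a Kalman-style
    filter whose gain K is the learnable component (here a placeholder
    for the DNN). No ground-truth states are used anywhere."""
    x = np.zeros(F.shape[0])     # state estimate x_{t|t}
    loss, n = 0.0, 0
    for y in ys:
        x_pred = F @ x           # predict state:       x_{t|t-1}
        y_pred = H @ x_pred      # predict observation: y_{t|t-1}
        e = y - y_pred           # innovation -- the only training signal
        loss += float(e @ e)
        n += 1
        x = x_pred + K @ e       # update with the (learned) gain
    return loss / n

# Toy example: a 1-D constant state observed directly.
rng = np.random.default_rng(0)
ys = 1.0 + 0.1 * rng.standard_normal((50, 1))
F, H = np.eye(1), np.eye(1)
good = unsupervised_loss(ys, F, H, K=np.array([[0.5]]))
bad = unsupervised_loss(ys, F, H, K=np.array([[0.0]]))
```

A gain that actually tracks the state (K=0.5) yields a much smaller observation-prediction loss than one that ignores the measurements (K=0), which is exactly the signal an unsupervised trainer can descend on.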

Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces
This work proposes a new deep approach to Kalman filtering that can be learned end-to-end via backpropagation without additional approximations. It uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations, thus avoiding hard-to-backpropagate, computationally heavy, and potentially unstable matrix inversions.
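To illustrate why factorization helps: if the latent covariance is (assumed) diagonal and each latent dimension is observed independently, the Kalman update collapses to elementwise scalar arithmetic with no matrix inversion. The setup and names below are illustrative, not taken from the paper.

```python
import numpy as np

def scalar_kalman_update(mu, var, z, r):
    """Elementwise Kalman update. mu/var are prior mean and (diagonal)
    variance per latent dimension; z is the observation with noise
    variance r. All inputs share the same shape; no matrix inverse."""
    k = var / (var + r)            # scalar gain, per dimension
    mu_post = mu + k * (z - mu)    # posterior mean
    var_post = (1.0 - k) * var     # posterior variance
    return mu_post, var_post

mu_post, var_post = scalar_kalman_update(
    mu=np.zeros(3), var=np.ones(3), z=np.ones(3), r=np.ones(3)
)
```

Because every operation is a plain elementwise multiply/add, gradients flow through the update cheaply and stably, which is the property the paper exploits for end-to-end training.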
KalmanNet: Neural Network Aided Kalman Filtering for Partially Known Dynamics
It is numerically demonstrated that KalmanNet overcomes nonlinearities and model mismatch, outperforming classic filtering methods operating with both mismatched and accurate domain knowledge.
Backprop KF: Learning Discriminative Deterministic State Estimators
This work presents an alternative approach where the parameters of the latent state distribution are directly optimized as a deterministic computation graph, resulting in a simple and effective gradient descent algorithm for training discriminative state estimators.
Discriminative Training of Kalman Filters
This paper proposes a method for automatically learning the noise parameters of a Kalman filter and demonstrates on a commercial wheeled rover that the learned noise covariance parameters significantly outperform an earlier, carefully and laboriously hand-designed set.
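A sketch of what "discriminative" tuning means here: the noise parameter is chosen to minimize the filter's own state-estimation error on labeled trajectories, rather than by fitting the generative model. The 1-D random-walk model and the simple candidate search below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def filter_mse(r, xs_true, ys, q=0.01):
    """State-estimation MSE of a 1-D random-walk Kalman filter with
    measurement-noise variance r; supervised by ground-truth states."""
    x, p = 0.0, 1.0                  # state mean and variance
    err = 0.0
    for x_true, y in zip(xs_true, ys):
        p += q                       # predict (random-walk dynamics)
        k = p / (p + r)              # Kalman gain
        x += k * (y - x)             # measurement update
        p *= (1.0 - k)
        err += (x - x_true) ** 2     # discriminative criterion
    return err / len(ys)

rng = np.random.default_rng(1)
xs_true = np.cumsum(0.1 * rng.standard_normal(200))  # random walk
ys = xs_true + 0.5 * rng.standard_normal(200)        # noisy measurements
candidates = [0.01, 0.05, 0.25, 1.0, 4.0]
best_r = min(candidates, key=lambda r: filter_mse(r, xs_true, ys))
```

In place of this toy grid search, the paper (and gradient-based follow-ups such as EKFNet) optimizes the same kind of criterion directly with respect to the covariance parameters.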
Long Short-Term Memory Kalman Filters: Recurrent Neural Estimators for Pose Regularization
This work proposes to learn rich, dynamic representations of the motion and noise models from data using long short-term memory (LSTM), which allows representations that depend on all previous observations and all previous states.
EKFNet: Learning System Noise Statistics from Measurement Data
  • Liang Xu, R. Niu
  • Computer Science
    ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2021
A new learning framework, EKFNet, is proposed for automatically estimating the best process and measurement noise covariance pair from the real measurement data, trained by using backpropagation through time (BPTT).
Deep Kalman Filters
A unified algorithm is introduced to efficiently learn a broad spectrum of Kalman filters; the work also investigates the efficacy of temporal generative models for counterfactual inference and introduces the "Healing MNIST" dataset, in which long-term structure, noise, and actions are applied to sequences of digits.
KFNet: Learning Temporal Camera Relocalization Using Kalman Filtering
  • Lei Zhou, Zixin Luo, +5 authors Long Quan
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
KFNet extends the scene coordinate regression problem to the time domain, recursively establishing 2D–3D correspondences for pose determination in temporal relocalization; its network architecture is based on Kalman filtering in the context of Bayesian learning.
Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data
Deep Variational Bayes Filters is introduced, a new method for unsupervised learning and identification of latent Markovian state space models that can overcome intractable inference distributions via variational inference and enables realistic long-term prediction.