Learning Relational Kalman Filtering

@inproceedings{Choi2015LearningRK,
  title={Learning Relational Kalman Filtering},
  author={Jaesik Choi and Eyal Amir and Tianfang Xu and Albert J. Valocchi},
  booktitle={AAAI Conference on Artificial Intelligence},
  year={2015}
}
The Kalman Filter (KF) is pervasively used to control a vast array of consumer, health and defense products. By grouping sets of symmetric state variables, the Relational Kalman Filter (RKF) enables us to scale the exact KF for large-scale dynamic systems. In this paper, we provide a parameter learning algorithm for RKF, and a regrouping algorithm that prevents the degeneration of the relational structure for efficient filtering. The proposed algorithms significantly expand the… 
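The abstract builds on the standard Kalman filter recursion; as a reminder of the baseline being lifted, here is a minimal sketch of one predict/update step in textbook notation (variable names are illustrative, not taken from the paper):

```python
import numpy as np

def kalman_step(mu, Sigma, z, A, Q, H, R):
    """One predict/update cycle of a standard Kalman filter.

    mu, Sigma : prior state mean and covariance
    z         : new observation
    A, Q      : linear dynamics and process-noise covariance
    H, R      : observation model and observation-noise covariance
    """
    # Predict: propagate the estimate through the linear dynamics.
    mu_pred = A @ mu
    Sigma_pred = A @ Sigma @ A.T + Q
    # Update: correct the prediction with the observation z.
    S = H @ Sigma_pred @ H.T + R             # innovation covariance
    K = Sigma_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu_pred + K @ (z - H @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_pred
    return mu_new, Sigma_new
```

The RKF idea described above amortizes exactly this recursion over groups of symmetric state variables rather than running it per ground variable.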


The Automatic Statistician: A Relational Perspective

This work proposes two relational kernel learning methods which can model multiple time-series data sets by finding common, shared causes of changes, and shows that the relational kernel learning methods find more accurate models for regression problems on several real-world data sets.

LiMa: Sequential Lifted Marginal Filtering on Multiset State Descriptions

Lifted Marginal Filtering (LiMa) is inspired by Lifted Inference and combines techniques known from Computational State Space Models and Multiset Rewriting Systems to perform efficient sequential inference on a parametric multiset state description.

Lifted Bayesian Filtering in Multiset Rewriting Systems

A lifted state representation is devised, based on a suitable decomposition of multiset states, such that some factors of the distribution are exchangeable and thus afford an efficient representation that groups together similar entities whose properties follow an exchangeable joint distribution.

Automatic Construction of Nonparametric Relational Regression Models for Multiple Time Series

This work proposes two relational kernel learning methods which can model multiple time-series data sets by finding common, shared causes of changes, and shows that the relational kernel learning methods find more accurate models for regression problems on several real-world data sets.

Lifted Marginal Filtering for Asymmetric Models by Clustering-Based Merging

This paper proposes a method to retain the lifted representation in LiMa, introduces a novel distance measure for lifted states that does not require completely grounding the distribution first, and shows how a single representative for a group of lifted states can be computed.

Lifted Message Passing for Hybrid Probabilistic Inference

This work develops approximate lifted inference schemes based on particle sampling that perform comparably to existing state-of-the-art models for Gaussian MLNs, while having the flexibility to be applied to models with arbitrary potential functions.

Human Activity and Context Recognition using Lifted Marginal Filtering

This work shows for the first time the application of LiMa to a complex real-world activity recognition setting based on real IMU data, and demonstrates that LiMa needs fewer states to represent the exact filtering distribution and achieves higher activity recognition accuracy when only limited resources are available to represent the state distribution.

Lifted Filtering via Exchangeable Decomposition

The core idea is to borrow the concept of Maximally Parallel Multiset Rewriting Systems and to enhance it by concepts from Rao-Blackwellization and Lifted Inference, giving a representation of state distributions that enables efficient inference.

State-Space Abstractions for Probabilistic Inference: A Systematic Review

A systematic literature review is performed to outline the state of the art in probabilistic inference methods exploiting symmetries, and new high-level categories are provided that classify the approaches based on their common properties.

Lifted Hybrid Variational Inference

It is demonstrated that the proposed variational methods are highly scalable and can exploit approximate model symmetries even in the presence of a large amount of continuous evidence, outperforming existing message-passing-based approaches in a variety of settings.

References

Showing 1-10 of 24 references

Lifted Relational Kalman Filtering

This work proposes Relational Gaussian Models to represent and model dynamic systems with large numbers of variables efficiently and devise an exact lifted Kalman Filtering algorithm which takes only linear time in the number of random variables at every timestep.
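To make the grouping idea concrete, here is a minimal, hypothetical sketch (not the paper's actual data structure) of a Gaussian state that stores one parameter pair per group of exchangeable variables, so that identical linear dynamics cost work proportional to the number of groups rather than the number of ground variables:

```python
import numpy as np

class RelationalGaussian:
    """Illustrative 'relational' Gaussian state: one (mean, variance) pair
    per group of exchangeable variables instead of one per ground variable."""

    def __init__(self, group_sizes, means, variances):
        self.group_sizes = group_sizes   # number of ground variables per group
        self.means = np.asarray(means, dtype=float)
        self.variances = np.asarray(variances, dtype=float)

    def ground_means(self):
        """Expand to the full (ground) mean vector, for comparison."""
        return np.repeat(self.means, self.group_sizes)

    def predict(self, a, q):
        """Apply identical scalar dynamics x' = a*x + noise(q) to every
        variable: O(#groups) work instead of O(#variables)."""
        self.means = a * self.means
        self.variances = a * a * self.variances + q
```

This is only the flavor of the representation; the actual RKF operates on full relational Gaussian models with block-structured covariances.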

Learning Probabilistic Relational Models

This paper describes both parameter estimation and structure learning -- the automatic induction of the dependency structure in a model -- and shows how the learning procedure can exploit standard database retrieval techniques for efficient learning from large datasets.

Parameter estimation for linear dynamical systems

The Expectation Maximization (EM) algorithm for estimating the parameters of linear dynamical systems (LDS) is introduced, and its relation to factor analysis and other data modeling techniques is pointed out.
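As a flavor of the M-step in such EM procedures, here is a hedged sketch (standard least-squares form, not the reference's exact derivation) of re-estimating the transition matrix from smoothed state estimates:

```python
import numpy as np

def m_step_dynamics(X):
    """Illustrative M-step for the transition matrix of a linear dynamical
    system: given smoothed state estimates X (shape T x d), return the
    least-squares A minimizing ||X[t+1] - A X[t]|| over all t."""
    X0, X1 = X[:-1], X[1:]
    # Closed form: A = (sum_t x_{t+1} x_t^T) (sum_t x_t x_t^T)^{-1}
    return (X1.T @ X0) @ np.linalg.inv(X0.T @ X0)
```

In the full algorithm, the expected sufficient statistics come from a Kalman smoother (E-step) rather than from observed states; the update above shows only the algebraic shape of the M-step.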

Lifted Relational Variational Inference

This work presents an efficient relational variational inference algorithm that factors large-scale probability models into simpler variational models composed of mixtures of iid (Bernoulli) random variables.

A Unifying Review of Linear Gaussian Models

A new model for static data, known as sensible principal component analysis, is introduced, along with a novel concept of spatially adaptive observation noise; the review also shows how independent component analysis is a variation of the same basic generative model.

Lifted Inference for Relational Continuous Models

This paper presents the first exact inference algorithm for RCMs at a lifted level, so it scales up to large models of real-world applications, and it outperforms both a ground-level inference algorithm and an algorithm built with previously known lifted methods.

Multi-Evidence Lifted Message Passing, with Application to PageRank and the Kalman Filter

The benefits of this multi-evidence lifted inference are shown for several important AI tasks, such as computing personalized PageRanks and Kalman filters via multi-evidence lifted Gaussian belief propagation.

Hybrid Markov Logic Networks

Experiments in a mobile robot mapping domain--involving joint classification, clustering and regression--illustrate the power of hybrid MLNs as a modeling language, and the accuracy and efficiency of the inference algorithms.

A New Approach to Linear Filtering and Prediction Problems

The classical filtering and prediction problem is re-examined using the Bode-Shannon representation of random processes and the "state-transition" method of analysis of dynamic systems.

Lifted Probabilistic Inference

This paper is intended to give a (not necessarily complete) overview of, and invitation to, the emerging field of lifted probabilistic inference: inference techniques that exploit symmetries in graphical models in order to speed up inference, ultimately by orders of magnitude.