Learning GP-BayesFilters via Gaussian process latent variable models

Abstract

GP-BayesFilters are a general framework for integrating Gaussian process prediction and observation models into Bayesian filtering techniques, including particle filters and extended and unscented Kalman filters. GP-BayesFilters have been shown to be extremely well suited for systems for which accurate parametric models are difficult to obtain. GP-BayesFilters learn non-parametric models from training data containing sequences of control inputs, observations, and ground truth states. The need for ground truth states limits the applicability of GP-BayesFilters to systems for which the ground truth can be estimated without significant overhead. In this paper we introduce GPBF-LEARN, a framework for training GP-BayesFilters without ground truth states. Our approach extends Gaussian process latent variable models to the setting of dynamical robotics systems. We show how weak labels for the ground truth states can be incorporated into the GPBF-LEARN framework. The approach is evaluated on a difficult tracking task: tracking a slotcar based only on inertial measurement unit (IMU) observations. We also demonstrate special capabilities enabled by this framework, including time alignment and control replay for both the slotcar and a robotic arm.
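
The following is a schematic summary, in our own notation, of the model underlying GP-BayesFilters and the GPBF-LEARN training idea described above; the symbols are illustrative and not quoted from the paper. A GP-BayesFilter places Gaussian process priors on the prediction (dynamics) model f and the observation model h:

\[ x_t = f(x_{t-1}, u_{t-1}) + \varepsilon_t, \qquad f \sim \mathcal{GP}(0, k_f), \]
\[ z_t = h(x_t) + \delta_t, \qquad h \sim \mathcal{GP}(0, k_h), \]

where x_t denotes the latent state, u_t the control input, z_t the observation, and \varepsilon_t, \delta_t Gaussian noise. With ground truth states available, f and h are learned by standard GP regression on consecutive state/control/observation tuples; GPBF-LEARN instead treats the state sequence X = x_{1:T} as a latent variable and, in the spirit of GPLVMs, optimizes it jointly with the GP hyperparameters, roughly maximizing p(Z | X) p(X | U), with weak labels entering as additional priors on X.

As a concrete illustration of the fully supervised case (ground truth available), the Python sketch below fits the two GP models with scikit-learn. The data, dimensions, and variable names are hypothetical stand-ins, and this is not the authors' implementation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical training sequences: ground-truth states X, controls U,
    # IMU-like observations Z (random stand-in data for illustration).
    T, dx, du, dz = 200, 3, 2, 6
    rng = np.random.default_rng(0)
    X = rng.standard_normal((T, dx))
    U = rng.standard_normal((T, du))
    Z = rng.standard_normal((T, dz))

    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)

    # Prediction model f: (x_{t-1}, u_{t-1}) -> x_t, learned by GP regression.
    gp_f = GaussianProcessRegressor(kernel=kernel).fit(
        np.hstack([X[:-1], U[:-1]]), X[1:])

    # Observation model h: x_t -> z_t.
    gp_h = GaussianProcessRegressor(kernel=kernel).fit(X, Z)

    # Predictive mean and uncertainty of the next state, as consumed by an
    # EKF/UKF/particle filter in the GP-BayesFilter framework.
    x_next_mean, x_next_std = gp_f.predict(
        np.hstack([X[-1:], U[-1:]]), return_std=True)

GPBF-LEARN removes the need for the ground-truth states X in the sketch above by optimizing them as latent variables.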

DOI: 10.1007/s10514-010-9213-0


Cite this paper

@article{Ko2009LearningGV,
  title   = {Learning GP-BayesFilters via Gaussian process latent variable models},
  author  = {Jonathan Ko and Dieter Fox},
  journal = {Auton. Robots},
  year    = {2009},
  volume  = {30},
  pages   = {3-23}
}