On-Line Learning of Linear Dynamical Systems: Exponential Forgetting in Kalman Filters

@article{Kozdoba2019OnLineLO,
title={On-Line Learning of Linear Dynamical Systems: Exponential Forgetting in Kalman Filters},
author={Mark Kozdoba and Jakub Marecek and Tigran T. Tchrakian and Shie Mannor},
journal={ArXiv},
year={2019},
volume={abs/1809.05870}
}
• Published 16 September 2018
• Computer Science
• ArXiv
The Kalman filter is a key tool for time-series forecasting and analysis. […] Based on this insight, we devise an on-line algorithm for improper learning of a linear dynamical system (LDS) that considers only a few of the most recent observations. We use our decay results to provide the first regret bounds w.r.t. Kalman filters in learning an LDS; that is, we compare the results of our algorithm to the best, in hindsight, Kalman filter for a given signal. The algorithm is also practical: its per…
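The abstract describes improper learning in which the next observation is predicted from only the few most recent ones. A minimal illustrative sketch of that general idea, an online autoregressive predictor trained by gradient descent, is shown below. This is not the paper's algorithm or its regret-optimal update; the window size `k`, learning rate, and squared-error gradient step are all assumptions for illustration.

```python
import numpy as np

def online_ar_predict(y, k=5, lr=0.05):
    """Predict y[t] from a linear combination of the k most recent
    observations, updating the coefficients by online gradient descent
    on the squared prediction error. Illustrative sketch only."""
    w = np.zeros(k)
    preds = np.zeros(len(y))
    for t in range(k, len(y)):
        x = y[t - k:t][::-1]           # the k most recent observations
        preds[t] = w @ x               # forecast made before seeing y[t]
        grad = 2.0 * (preds[t] - y[t]) * x
        w -= lr * grad                 # online gradient step
    return preds

# Usage: track a noisy sinusoid online.
t = np.arange(500)
y = np.sin(0.1 * t) + 0.05 * np.random.default_rng(0).standard_normal(500)
p = online_ar_predict(y, k=5, lr=0.05)
```

The point of the sketch is the structural one made in the abstract: because the influence of old observations decays, a predictor over a short window of recent observations can already track the signal well.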


Robust guarantees for learning an autoregressive filter
• Computer Science
ALT
• 2020
This work directly learns an autoregressive filter for time-series prediction under unknown dynamics, using an $L^\infty$-based objective rather than ordinary least squares to make the filter robust to worst-case inputs.
SLIP: Learning to Predict in Unknown Dynamical Systems with Long-Term Memory
• Computer Science
NeurIPS
• 2020
An efficient and practical (polynomial-time) algorithm for online prediction in unknown, partially observed linear dynamical systems (LDS) under stochastic noise; its regret analysis relies on Mendelson's small-ball method, providing sharp error bounds without concentration, boundedness, or exponential-forgetting assumptions.
Online Learning of the Kalman Filter with Logarithmic Regret
• Computer Science, Mathematics
ArXiv
• 2020
This work is the first to provide logarithmic regret guarantees for the widely used Kalman filter, and shows that it is possible to achieve a regret of the order of $\mathrm{poly}\log(N)$ with high probability, where $N$ is the number of observations collected.
Sample Complexity of Kalman Filtering for Unknown Systems
• Computer Science, Mathematics
L4DC
• 2020
It is shown that when the system identification step produces sufficiently accurate estimates, or when the underlying true KF is sufficiently robust, a Certainty Equivalent (CE) KF, i.e., one designed using the estimated parameters directly, enjoys provable sub-optimality guarantees.
Finite Sample Analysis of Stochastic System Identification
• Computer Science, Mathematics
2019 IEEE 58th Conference on Decision and Control (CDC)
• 2019
This analysis uses recent results from random matrix theory, self-normalized martingales, and SVD robustness to show that, with high probability, the estimation errors decrease at a rate of $1/\sqrt{N}$ up to logarithmic terms.
Variance Estimation For Online Regression via Spectrum Thresholding
• Mathematics, Computer Science
ArXiv
• 2019
The global system operator, i.e., the operator that maps the noise vectors to the output, is studied; as a result, the first known variance estimators with finite-sample complexity guarantees are derived.
Improved rates for identification of partially observed linear dynamical systems
This work develops the first algorithm that, given a single trajectory of length $T$ with Gaussian observation noise, achieves a near-optimal rate of $\widetilde{O}\left(\sqrt{\frac{d}{T}}\right)$ in $\mathcal{H}_2$ error for the learned system.
Learning Linear Models Using Distributed Iterative Hessian Sketching
• Computer Science, Mathematics
L4DC
• 2022
It is shown that a randomized, distributed Newton algorithm based on Hessian sketching produces optimal solutions, converges geometrically, and is trivially parallelizable.
Online prediction of time series with assumed behavior
• Computer Science
Eng. Appl. Artif. Intell.
• 2020
Linear Systems can be Hard to Learn
• Mathematics, Computer Science
2021 60th IEEE Conference on Decision and Control (CDC)
• 2021
This paper analyzes when system identification is statistically easy or hard, in the finite sample regime, and shows that the sample complexity of robustly controllable linear systems is upper bounded by an exponential function of the controllability index.

References

SHOWING 1-10 OF 21 REFERENCES
Kalman Filtering with Real-time Applications
• Mathematics
• 1987
Kalman Filtering with Real-Time Applications presents a thorough discussion of the mathematical theory and computational schemes of Kalman filtering, including a direct method consisting of a series of elementary steps, and an indirect method based on innovation projection.
Exponential smoothing: The state of the art
This paper offers a critical review of exponential smoothing since the original work by Brown and Holt in the 1950s, concluding that the parameter ranges and starting values typically used in practice are arbitrary and may detract from accuracy.
Online ARIMA Algorithms for Time Series Prediction
• Computer Science
AAAI
• 2016
This paper proposes online learning algorithms for estimating ARIMA models under relaxed assumptions on the noise terms, making them suitable for a wider range of applications while enjoying high computational efficiency.
A Unifying Review of Linear Gaussian Models
• Computer Science
Neural Computation
• 1999
A new model for static data is introduced, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise, which shows how independent component analysis is also a variation of the same basic generative model.
Online Instrumental Variable Regression with Applications to Online Linear System Identification
• Computer Science
AAAI
• 2016
This work develops Online Instrumental Variable Regression (OIVR), an algorithm capable of updating the learned estimator with streaming data. Its efficacy is demonstrated in combination with popular no-regret online algorithms, both for learning predictive dynamical system models and on a prototypical econometric instrumental-variable regression problem.
Online Learning of Linear Dynamical Systems
• Computer Science
NIPS
• 2017
This work presents an efficient and practical algorithm for the online prediction of discrete-time linear dynamical systems that has near-optimal regret bounds compared to the best LDS in hindsight, while overparameterizing by only a small logarithmic factor.
Online Learning for Time Series Prediction
• Computer Science
COLT
• 2013
This work develops effective online learning algorithms for predicting a time series using the ARMA (autoregressive moving average) model, without assuming that the noise terms are Gaussian, identically distributed, or even independent.
Bayesian Forecasting And Dynamic Models
Subspace Identification for Linear Systems: Theory ― Implementation ― Applications
• Computer Science
• 2011
This book focuses on the theory, implementation, and applications of subspace identification algorithms for linear time-invariant finite-dimensional dynamical systems, which allow for a fast, straightforward, and accurate determination of linear multivariable models from measured input-output data.