Autonomous Tracking and State Estimation With Generalized Group Lasso

@article{Gao2021AutonomousTA,
  title={Autonomous Tracking and State Estimation With Generalized Group Lasso},
  author={Rui Gao and Simo S{\"a}rkk{\"a} and Rub{\'e}n Claveria-Vega and Simon J. Godsill},
  journal={IEEE Transactions on Cybernetics},
  year={2021},
  volume={PP}
}
We address the problem of autonomous tracking and state estimation for marine vessels, autonomous vehicles, and other dynamic signals under a (structured) sparsity assumption. The aim is to improve the tracking and estimation accuracy with respect to the classical Bayesian filters and smoothers. We formulate the estimation problem as a dynamic generalized group Lasso problem and develop a class of smoothing-and-splitting methods to solve it. The Levenberg-Marquardt iterated extended Kalman… 
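The smoothing-and-splitting idea described in the abstract can be illustrated with an ADMM-style scheme that alternates a quadratic state-estimation subproblem (a batch linear-Gaussian smoothing step) with a group soft-thresholding proximal step. The following is a minimal sketch under assumed linear dynamics; all function and variable names are illustrative and do not reproduce the authors' implementation.

```python
import numpy as np

def group_soft_threshold(v, kappa):
    # Proximal operator of kappa * ||v||_2 (block soft-thresholding).
    norm = np.linalg.norm(v)
    return max(1.0 - kappa / norm, 0.0) * v if norm > 0 else v

def dynamic_group_lasso(y, A, H, lam=0.5, rho=1.0, iters=100):
    """ADMM-style smoothing-and-splitting sketch for
        min_x  0.5 * sum_t ||y_t - H x_t||^2
             + 0.5 * sum_t ||x_t - A x_{t-1}||^2
             + lam * sum_t ||x_t||_2.
    The x-update solves the stacked quadratic (smoothing) subproblem,
    the z-update applies the group prox, and u is the scaled dual."""
    T, _ = y.shape
    n = A.shape[0]
    N = T * n
    # Assemble the stacked quadratic system: (C + rho I) x = b + rho (z - u).
    C = np.zeros((N, N))
    b = np.zeros(N)
    for t in range(T):
        s = slice(t * n, (t + 1) * n)
        C[s, s] += H.T @ H            # measurement term
        b[s] += H.T @ y[t]
        if t > 0:                     # dynamics term linking x_{t-1}, x_t
            p = slice((t - 1) * n, t * n)
            C[s, s] += np.eye(n)
            C[p, p] += A.T @ A
            C[s, p] -= A
            C[p, s] -= A.T
    z = np.zeros(N)
    u = np.zeros(N)
    for _ in range(iters):
        x = np.linalg.solve(C + rho * np.eye(N), b + rho * (z - u))
        for t in range(T):            # group prox, one group per time step
            s = slice(t * n, (t + 1) * n)
            z[s] = group_soft_threshold(x[s] + u[s], lam / rho)
        u += x - z                    # scaled dual ascent
    return z.reshape(T, n)
```

Larger values of `lam` drive entire per-time-step state blocks to exactly zero, which is the structured-sparsity effect the group penalty is meant to produce.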


SSGCNet: A Sparse Spectra Graph Convolutional Network for Epileptic EEG Signal Classification
TLDR
This article proposes a weighted neighborhood field graph (WNFG) to represent EEG signals, reducing redundant edges between graph nodes and achieving lower time complexity and memory usage than conventional solutions.

References

Showing 1-10 of 46 references
Iterated Extended Kalman Smoother-Based Variable Splitting for $L_1$-Regularized State Estimation
TLDR
This paper first formulates such problems as minimization of the sum of linear or nonlinear quadratic error terms and an extra regularizer, and then presents novel algorithms that solve the linear and nonlinear cases.
Regularized State Estimation And Parameter Learning Via Augmented Lagrangian Kalman Smoother Method
TLDR
A new augmented Lagrangian Kalman smoother method is developed for estimating the state and learning the parameters of a linear dynamic system with generalized $L_1$-regularization, where the primal variable update is reformulated as a Kalman smoother.
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
TLDR
The proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems, and first-order prediction steps are designed that rely on the Hessian of the cost function without requiring the computation of its inverse.
Weighted Optimization-Based Distributed Kalman Filter for Nonlinear Target Tracking in Collaborative Sensor Networks
TLDR
A weighted optimization-based distributed Kalman filter algorithm (WODKF) is proposed that enlarges the data size at each sensor using the measurements and state estimates received from its connected sensors rather than a time window; a sensor selection method is added to decrease the computational load of the filter and increase the scalability of the sensor network.
Dynamic Filtering of Time-Varying Sparse Signals via $\ell _1$ Minimization
TLDR
Two algorithms for dynamic filtering of sparse signals, based on efficient ℓ1 optimization methods, are presented; they provide the first strong performance analysis of dynamic filtering algorithms for time-varying sparse signals as well as state-of-the-art performance in this emerging application.
Doubly Robust Smoothing of Dynamical Processes via Outlier Sparsity Constraints
TLDR
Novel fixed-lag and fixed-interval smoothing algorithms that are robust to outliers simultaneously present in the measurements and in the state dynamics and which rely on coordinate descent and the alternating direction method of multipliers, are developed.
Bayesian Filtering and Smoothing
  • S. Särkkä
  • Computer Science
    Institute of Mathematical Statistics textbooks
  • 2013
TLDR
This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework and learns what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages.
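As context for the filters discussed above, a single predict-update cycle of the linear Kalman filter, the primitive that iterated extended variants repeatedly linearize around, can be sketched as follows; the function signature is illustrative, not taken from the book.

```python
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    """One predict + update cycle of the linear Kalman filter.
    m, P: prior mean and covariance; y: measurement;
    A, Q: dynamics matrix and process noise covariance;
    H, R: measurement matrix and measurement noise covariance."""
    # Predict: propagate the state estimate through the dynamics.
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update: correct the prediction with the new measurement.
    S = H @ P_pred @ H.T + R          # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new
```

In the nonlinear (extended/iterated) variants, `A` and `H` become Jacobians of the dynamics and measurement functions evaluated at the current estimate.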
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
TLDR
It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
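The suitability of ADMM for such problems can be illustrated with the classic Lasso splitting, which alternates a ridge-type linear solve with elementwise soft-thresholding; this is a textbook-style sketch, not code from the cited work.

```python
import numpy as np

def soft_threshold(v, kappa):
    # Elementwise proximal operator of kappa * ||v||_1.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(Phi, y, lam=0.1, rho=1.0, iters=200):
    """Minimize 0.5*||Phi x - y||^2 + lam*||x||_1 via the split x = z."""
    n = Phi.shape[1]
    Minv = np.linalg.inv(Phi.T @ Phi + rho * np.eye(n))  # cached once
    q = Phi.T @ y
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = Minv @ (q + rho * (z - u))        # quadratic x-update
        z = soft_threshold(x + u, lam / rho)  # l1 prox (z-update)
        u += x - z                            # scaled dual update
    return z
```

The same three-step pattern (quadratic solve, prox, dual update) generalizes directly to the group and generalized penalties used in the main paper.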
...