# Pursuit of Low-Rank Models of Time-Varying Matrices Robust to Sparse and Measurement Noise

```bibtex
@article{Akhriev2020PursuitOL,
  title={Pursuit of Low-Rank Models of Time-Varying Matrices Robust to Sparse and Measurement Noise},
  author={Albert Akhriev and Jakub Marecek and Andrea Simonetto},
  journal={ArXiv},
  year={2020},
  volume={abs/1809.03550}
}
```
• Published 2020
• Computer Science, Mathematics
• ArXiv
In tracking of time-varying low-rank models of time-varying matrices, we present a method robust to both uniformly-distributed measurement noise and arbitrarily-distributed "sparse" noise. In theory, we bound the tracking error. In practice, our use of randomised coordinate descent is scalable and allows for encouraging results on changedetection.net, a benchmark.
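As a rough illustration of the randomised coordinate descent the abstract mentions, the sketch below fits a factorisation Y ≈ L Rᵀ by updating the factors of one randomly chosen entry per step, with a Huber loss standing in for robustness to sparse outliers. The function names, step size, and the Huber surrogate are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def huber_grad(r, delta):
    # Gradient of the Huber loss: quadratic near zero, linear in the tails,
    # which caps the influence of any single large "sparse" outlier.
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def rcd_lowrank(Y, rank=2, iters=30000, step=0.05, delta=2.0, seed=0):
    """Fit Y ~ L @ R.T by randomised coordinate descent on a Huber loss."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    L = 0.1 * rng.standard_normal((m, rank))
    R = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        i = rng.integers(m)
        j = rng.integers(n)
        g = huber_grad(L[i] @ R[j] - Y[i, j], delta)  # robust residual at one entry
        Li = L[i].copy()
        L[i] -= step * g * R[j]   # only row i of L ...
        R[j] -= step * g * Li     # ... and row j of R are touched per step
    return L, R
```

Because each step touches only one row of each factor, the per-iteration cost is O(rank), which is the kind of scalability the abstract claims for streams of frames.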
#### 5 Citations

Estimation of Sensitivities: Low-rank Approach and Online Algorithms for Streaming Measurements
• Computer Science
• 2020
An online proximal-gradient method is proposed to estimate sensitivities on-the-fly from real-time measurements and convergence results in terms of dynamic regret are offered in this case.
Matrix Completion Under Interval Uncertainty: Highlights
• Computer Science
• ECML/PKDD
• 2018
An overview of inequality-constrained matrix completion, with a particular focus on alternating least-squares (ALS) methods; an ALS algorithm, MACO, by Marecek et al. outperforms others.
Low-Rank Methods in Event Detection and Subsampled Point-to-Subspace Proximity Tests
• Computer Science
• 2018
The proposed algorithm uses a variant of low-rank factorisation, which considers interval uncertainty sets around "known entries", on a suitable flattening of the input data to obtain a low-rank model and bound the one-sided error as a function of the number of coordinates employed using techniques from learning theory and computational geometry.
Time-Varying Convex Optimization: Time-Structured Algorithms and Applications
• Computer Science, Mathematics
• Proceedings of the IEEE
• 2020
A broad class of state-of-the-art algorithms for time-varying optimization is reviewed, with an eye to performing both algorithmic development and performance analysis, to exemplify wide engineering relevance of analytical tools and pertinent theoretical foundations.
On Sampling Complexity of the Semidefinite Affine Rank Feasibility Problem
• Computer Science
• AAAI
• 2019
An analytical bound on the number of relaxations that are sufficient to solve in order to obtain a solution of a generic instance of the semidefinite affine rank feasibility problem or prove that there is no solution is proposed.

#### References

Showing 1-10 of 94 references
Global Optimality of Local Search for Low Rank Matrix Recovery
• Computer Science, Mathematics
• NIPS
• 2016
It is shown that there are no spurious local minima in the non-convex factorized parametrization of low-rank matrix recovery from incoherent linear measurements, which yields a polynomial time global convergence guarantee for stochastic gradient descent.
Convergence of Gradient Descent for Low-Rank Matrix Approximation
• Mathematics, Computer Science
• IEEE Transactions on Information Theory
• 2015
A proof of global convergence of gradient search for low-rank matrix approximation is provided based on the interpretation of the problem as an optimization on the Grassmann manifold and the Fubini-Study distance on this space.
Large-Scale Convex Minimization with a Low-Rank Constraint
• Mathematics, Computer Science
• ICML
• 2011
This work proposes an efficient greedy algorithm which can scale to large matrices arising in several applications such as matrix completion for collaborative filtering and robust low-rank matrix approximation.
Recovery of Low-Rank Plus Compressed Sparse Matrices With Application to Unveiling Traffic Anomalies
• Computer Science, Mathematics
• IEEE Transactions on Information Theory
• 2013
First-order algorithms are developed to solve the nonsmooth convex optimization problem with provable iteration complexity guarantees, and its ability to outperform existing alternatives is corroborated.
An Online Algorithm for Separating Sparse and Low-Dimensional Signal Sequences From Their Sum
• Computer Science, Mathematics
• IEEE Transactions on Signal Processing
• 2014
This paper designs and extensively evaluates an online algorithm, called practical recursive projected compressive sensing (Prac-ReProCS), for recovering a time sequence of sparse vectors St and a …
An Online Parallel and Distributed Algorithm for Recursive Estimation of Sparse Signals
• Yang Yang
• 2015
In this paper, we consider a recursive estimation problem for linear regression where the signal to be estimated admits a sparse representation and measurement samples are only sequentially available.
On a Problem of Weighted Low-Rank Approximation of Matrices
• Aritra Dutta, Xin Li
• Mathematics, Computer Science
• SIAM J. Matrix Anal. Appl.
• 2017
An algorithm based on the alternating direction method is proposed to solve the weighted low-rank approximation problem and compared with state-of-the-art general algorithms such as the weighted total alternating least squares and the EM algorithm.
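The weighted low-rank approximation problem underlying this reference can be sketched with a generic weighted alternating least squares loop; the notation below (entrywise weights `W`, factors `U`, `V`) and the ridge term are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def weighted_als(Y, W, rank=2, iters=100, reg=1e-8, seed=0):
    """Minimise ||W * (Y - U @ V.T)||_F^2 (entrywise weights W) by ALS.

    Each row of U, then of V, solves its own weighted normal equations,
    so zero-weight entries are simply ignored by the fit.
    """
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = reg * np.eye(rank)  # tiny ridge keeps the solves well-posed
    for _ in range(iters):
        for i in range(m):
            VW = V * W[i][:, None]          # rows of V scaled by weights of row i
            U[i] = np.linalg.solve(VW.T @ V + I, VW.T @ Y[i])
        for j in range(n):
            UW = U * W[:, j][:, None]
            V[j] = np.linalg.solve(UW.T @ U + I, UW.T @ Y[:, j])
    return U, V
```

Setting a weight to zero at a suspected outlier removes that entry from the objective entirely, which is one simple way such weights can mimic the robustness of an $\ell_1$-style fit.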
A Batch-Incremental Video Background Estimation Model Using Weighted Low-Rank Approximation of Matrices
• Computer Science, Mathematics
• 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
• 2017
This work builds a batch-incremental background estimation model by using a special weighted low-rank approximation of matrices that is superior to the existing state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.
Robust Matrix Factorization with Unknown Noise
• Mathematics, Computer Science
• 2013 IEEE International Conference on Computer Vision
• 2013
A low-rank matrix factorization problem is formulated with Mixture of Gaussians (MoG) noise, which is a universal approximator for any continuous distribution and is hence able to model a wider range of real noise distributions.
Weighted Low-Rank Approximation of Matrices and Background Modeling
This work demonstrates through extensive experiments that by inserting a simple weight in the Frobenius norm, it can be made robust to the outliers similar to the $\ell_1$ norm.