Corpus ID: 14523288

Robust Matrix Decomposition with Outliers

@article{Hsu2010RobustMD,
  title={Robust Matrix Decomposition with Outliers},
  author={Daniel J. Hsu and Sham M. Kakade and Tong Zhang},
  journal={ArXiv},
  year={2010},
  volume={abs/1011.1518}
}
Suppose a given observation matrix can be decomposed as the sum of a low-rank matrix and a sparse matrix (outliers), and the goal is to recover these individual components from the observed sum. Such additive decompositions have applications in a variety of numerical problems including system identification, latent variable graphical modeling, and principal components analysis. We study conditions under which recovering such a decomposition is possible via a combination of $\ell_1$ norm and trace norm minimization.
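The combination the abstract refers to is typically implemented by minimizing a weighted sum of the trace (nuclear) norm of the low-rank part and the $\ell_1$ norm of the sparse part. Below is a minimal numpy sketch of such a solver using alternating proximal steps on an augmented Lagrangian; the names `svd_shrink`, `soft_thresh`, and `pcp_admm`, the default weight `lam`, and the penalty `mu` are illustrative assumptions, not the algorithm from this paper.

```python
import numpy as np

def svd_shrink(X, tau):
    """Proximal operator of the trace (nuclear) norm: soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_thresh(X, tau):
    """Proximal operator of the l1 norm: entrywise soft thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp_admm(M, lam=None, mu=1.0, n_iter=500, tol=1e-7):
    """Split M into low-rank L plus sparse S by iterating on
    min ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))  # common default in the PCP literature
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # dual variable for the constraint L + S = M
    for _ in range(n_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        S = soft_thresh(M - L + Y / mu, lam / mu)
        R = M - L - S  # primal residual
        Y += mu * R
        if np.linalg.norm(R) <= tol * max(np.linalg.norm(M), 1.0):
            break
    return L, S

# Toy check: rank-5 matrix plus ~5% gross outliers.
rng = np.random.default_rng(0)
L0 = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 100))
S0 = np.zeros((100, 100))
idx = rng.random((100, 100)) < 0.05
S0[idx] = 10.0 * rng.standard_normal(idx.sum())
L_hat, S_hat = pcp_admm(L0 + S0)
```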
Robust PCA With Partial Subspace Knowledge
TLDR
This work proposes a simple but useful modification of the PCP idea, called modified-PCP, that allows one to use partial knowledge about the column space of the low-rank matrix L, and derives a correctness result showing that, when the available subspace knowledge is accurate, modified-PCP requires significantly weaker incoherence assumptions than PCP.
High Dimensional Low Rank Plus Sparse Matrix Decomposition
TLDR
A scalable subspace-pursuit approach that transforms the decomposition problem into a subspace learning problem is proposed, and adaptive sampling makes the required number of sampled columns/rows invariant to the distribution of the data.
Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions
TLDR
A general theorem is derived that bounds the Frobenius-norm error of an estimate of the matrix pair in this class of high-dimensional matrix decomposition problems, obtained by solving a convex optimization problem that combines the nuclear norm with a general decomposable regularizer.
Compressive principal component pursuit
TLDR
This work analyzes the performance of the natural convex heuristic for recovering a target matrix that is a superposition of low-rank and sparse components from a small set of linear measurements, and proves that this heuristic exactly recovers the low-rank and sparse terms.
Square Root Principal Component Pursuit: Tuning-Free Noisy Robust Matrix Recovery
TLDR
The authors' simulations corroborate the claim that a universal choice of the regularization parameter yields near-optimal performance across a range of noise levels, indicating that the method's empirical performance exceeds the (somewhat loose) bound proved in the paper.
Low-Rank Matrix Recovery From Errors and Erasures
TLDR
A new unified performance guarantee is provided for when minimizing the nuclear norm plus the $\ell_1$ norm succeeds in exact recovery; it gives the first guarantees for (1) recovery when only a vanishing fraction of the entries of a corrupted matrix is observed, and (2) deterministic matrix completion.
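For concreteness, the "nuclear norm plus $\ell_1$ norm" program with erasures referenced in this summary can be written directly in a convex modeling language. The following is a toy sketch, assuming cvxpy, synthetic data, and an illustrative weight $\lambda = 1/\sqrt{n}$; none of these choices are taken from the paper.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 40
M0 = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))  # low-rank truth
S0 = np.zeros((n, n))
bad = rng.random((n, n)) < 0.05
S0[bad] = 5.0 * rng.standard_normal(bad.sum())                  # sparse corruptions
mask = (rng.random((n, n)) < 0.7).astype(float)                 # erasures: ~70% observed
M = M0 + S0

L = cp.Variable((n, n))
S = cp.Variable((n, n))
lam = 1.0 / np.sqrt(n)  # illustrative weight
objective = cp.Minimize(cp.norm(L, "nuc") + lam * cp.sum(cp.abs(S)))
constraints = [cp.multiply(mask, L + S) == cp.multiply(mask, M)]  # agree on observed entries
cp.Problem(objective, constraints).solve()
```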
Recursive Robust PCA or Recursive Sparse Recovery in Large but Structured Noise
TLDR
A simple modification of the original ReProCS idea is studied that assumes knowledge of a subspace change model on the $L_t$'s; it is shown that the proposed approach exactly recovers the support set of $S_t$ at all times, and that the reconstruction errors of both $S_t$ and $L_t$ are upper bounded by a small, time-invariant value.
Robust Matrix Completion with Corrupted Columns
TLDR
The results show that, even with a vanishing fraction of observed entries, it is nevertheless possible to succeed at matrix completion while the number of corrupted columns grows, and that the corrupted columns may be chosen in a completely adversarial manner.
Performance guarantees for undersampled recursive sparse recovery in large but structured noise
TLDR
This work introduces a solution to the problem of recursively reconstructing a time sequence of sparse vectors $S_t$ from measurements of the form $M_t = A S_t + B L_t$, where $A$ and $B$ are known measurement matrices and $L_t$ lies in a slowly changing low-dimensional subspace.
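To make the measurement model concrete, the snippet below generates a toy sequence $M_t = A S_t + B L_t$. The dimensions, sparsity level, and the use of a fixed subspace (the actual setting lets the subspace of the $L_t$'s change slowly) are illustrative assumptions, and no recovery algorithm is shown.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, r, T = 100, 60, 3, 50
A = rng.standard_normal((m, n))  # known measurement matrix for the sparse part
B = rng.standard_normal((m, n))  # known measurement matrix for the low-rank part
P = np.linalg.qr(rng.standard_normal((n, r)))[0]  # basis of the low-dimensional subspace

for t in range(T):
    s_t = np.zeros(n)
    support = rng.choice(n, size=5, replace=False)
    s_t[support] = rng.standard_normal(5)  # sparse vector to be recovered
    l_t = P @ rng.standard_normal(r)       # lies in span(P); drifts slowly in the real model
    M_t = A @ s_t + B @ l_t                # observed measurement at time t
```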
Low-Rank Structure Learning via Nonconvex Heuristic Recovery
TLDR
Experimental results on low-rank structure learning demonstrate that the nonconvex heuristic methods (with $0 < p < 1$), especially the log-sum heuristic recovery algorithm, generally perform much better than the convex-norm-based method for both data with higher rank and data with denser corruptions.

References

SHOWING 1-10 OF 15 REFERENCES
Robust principal component analysis?
TLDR
It is proved that, under suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit, which, among all feasible decompositions, simply minimizes a weighted combination of the nuclear norm and the $\ell_1$ norm; this suggests the possibility of a principled approach to robust principal component analysis.
Stable Principal Component Pursuit
TLDR
This result shows that the proposed convex program recovers the low-rank matrix even though a positive fraction of its entries are arbitrarily corrupted, with an error bound proportional to the noise level; it is the first result showing that classical Principal Component Analysis, which is optimal for small i.i.d. noise, can be made robust to gross sparse errors.
Dense error correction for low-rank matrices via Principal Component Pursuit
TLDR
It is shown that the same convex program, with a slightly improved weighting parameter, exactly recovers the low-rank matrix even if “almost all” of its entries are arbitrarily corrupted, provided the signs of the errors are random.
Exact matrix completion via convex optimization
TLDR
It is proved that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries, and that objects other than signals and images can be perfectly reconstructed from very limited information.
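The convex program analyzed in that work, minimizing the nuclear norm subject to agreement with the revealed entries, is the completion-only special case of the earlier sketch (no sparse corruption term). Below is a minimal cvxpy sketch, with the problem size and sampling rate being arbitrary illustrative choices.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 ground truth
mask = (rng.random((n, n)) < 0.5).astype(float)                # ~50% of entries revealed

X = cp.Variable((n, n))
problem = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),           # nuclear-norm surrogate for rank
                     [cp.multiply(mask, X) == cp.multiply(mask, M)])
problem.solve()
print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```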
Recovering Low-Rank Matrices From Few Coefficients in Any Basis
D. Gross, IEEE Transactions on Information Theory, 2011
TLDR
It is shown that an unknown $n \times n$ matrix of rank $r$ can be efficiently reconstructed from only $O(n r \nu \log^2 n)$ randomly sampled expansion coefficients with respect to any given matrix basis, where the parameter $\nu$ quantifies the "degree of incoherence" between the unknown matrix and the basis.
Latent variable graphical model selection via convex optimization
TLDR
The modeling framework can be viewed as a combination of dimensionality reduction and graphical modeling (to capture remaining statistical structure not attributable to the latent variables), and the proposed estimator consistently estimates both the number of hidden components and the conditional graphical model structure among the observed variables.
Regression Shrinkage and Selection via the Lasso
TLDR
A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
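A minimal sketch of the lasso in the constrained form described above, assuming cvxpy; the problem sizes, the $\ell_1$ budget `t`, and the variable names are illustrative choices, not from the paper.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 100
beta_true = np.zeros(p)
beta_true[:5] = [3, -2, 1.5, 4, -1]            # sparse ground-truth coefficients
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta = cp.Variable(p)
t = 10.0  # l1 budget; in practice tuned, e.g. by cross-validation
problem = cp.Problem(cp.Minimize(cp.sum_squares(y - X @ beta)),  # residual sum of squares
                     [cp.norm1(beta) <= t])                      # l1 constraint
problem.solve()
```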
Rank-Sparsity Incoherence for Matrix Decomposition
TLDR
This paper studies the problem of decomposing a matrix, formed by adding an unknown sparse matrix to an unknown low-rank matrix, into its sparse and low-rank components.
Handbook of the Geometry of Banach Spaces
Basic concepts in the geometry of Banach spaces (W.B. Johnson, J. Lindenstrauss); Positive operators (Y.A. Abramovitch, C.D. Aliprantis); Lp spaces (D. Alspach, E. Odell); Convex geometry and …