Robust principal component analysis?

@article{Cands2011RobustPC,
  title={Robust principal component analysis?},
  author={Emmanuel J. Cand{\`e}s and Xiaodong Li and Yi Ma and John Wright},
  journal={J. ACM},
  year={2011},
  volume={58},
  pages={11:1-11:37}
}
This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of… 
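
To make the convex program concrete, the following is a minimal sketch (not the authors' reference implementation) of Principal Component Pursuit solved with a standard alternating-directions augmented-Lagrangian scheme. The weight lam = 1/sqrt(max(m, n)) follows the paper; the penalty parameter mu, the tolerance, the iteration cap, and the helper names are illustrative assumptions.

  import numpy as np

  def soft_threshold(X, tau):
      # Elementwise soft-thresholding: proximal operator of tau * ||.||_1.
      return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

  def svd_threshold(X, tau):
      # Singular value thresholding: proximal operator of tau * ||.||_* (nuclear norm).
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

  def principal_component_pursuit(M, lam=None, mu=None, tol=1e-7, max_iter=500):
      # Decompose M into a low-rank L and a sparse S by minimizing
      # ||L||_* + lam * ||S||_1 subject to L + S = M.
      m, n = M.shape
      if lam is None:
          lam = 1.0 / np.sqrt(max(m, n))  # weighting suggested in the paper
      if mu is None:
          mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)  # heuristic penalty (assumption)
      L = np.zeros_like(M, dtype=float)
      S = np.zeros_like(M, dtype=float)
      Y = np.zeros_like(M, dtype=float)  # dual variable enforcing L + S = M
      for _ in range(max_iter):
          L = svd_threshold(M - S + Y / mu, 1.0 / mu)
          S = soft_threshold(M - L + Y / mu, lam / mu)
          residual = M - L - S
          Y = Y + mu * residual
          if np.linalg.norm(residual) <= tol * np.linalg.norm(M):
              break
      return L, S

On a matrix formed by adding sparse gross corruptions to a low-rank matrix, the returned L and S should match the two components up to the tolerance, provided the incoherence and sparsity conditions of the paper hold.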

Robust principal component analysis?: Recovering low-rank matrices from sparse errors

The methodology and results suggest a principled approach to robust principal component analysis, since they show that one can efficiently and exactly recover the principal components of a low-rank data matrix even when a positive fraction of the entries are corrupted.

Stable Principal Component Pursuit

This result shows that the proposed convex program recovers the low-rank matrix, with an error bound proportional to the noise level, even though a positive fraction of its entries are arbitrarily corrupted; this is the first result showing that classical Principal Component Analysis, which is optimal for small i.i.d. noise, can be made robust to gross sparse errors.
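
The stability result refers to the noisy variant of Principal Component Pursuit, commonly stated as follows (a sketch, with $\delta$ the assumed bound on the noise level and $\lambda$ the same trade-off weight as in the noiseless program):

\[
\min_{L,\,S}\ \|L\|_{*} + \lambda \|S\|_{1}
\quad \text{subject to} \quad \|M - L - S\|_{F} \le \delta,
\]

with the recovery error of the estimated pair $(L, S)$ bounded by a constant multiple of $\delta$.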

Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optimization

It is proved that most matrices A can be efficiently and exactly recovered from most error sign-and-support patterns by solving a simple convex program, for which a fast and provably convergent algorithm is given.

A Robust Local Linear Data Decomposition

A new Principal Component Pursuit-style decomposition represents the data matrix as the sum of a sparse additive component and a structured sparse representation that uses the data itself as a dictionary, which makes it possible to describe data lying on a nonlinear manifold.

Robust principal component analysis via re-weighted minimization algorithms

  • D. Katselis, C. Beck
  • Computer Science
    2015 54th IEEE Conference on Decision and Control (CDC)
  • 2015
The proposed methods perform at least as well as state-of-the-art schemes for Robust PCA, while allowing for larger rank and sparsity regimes of the component matrices under exact recovery requirements.

Efficient algorithms for robust and stable principal component pursuit problems

Numerical results on problems with millions of variables and constraints, such as foreground extraction from surveillance video, shadow and specularity removal from face images, and video denoising from heavily corrupted data, show that the algorithms developed are competitive with current state-of-the-art solvers for RPCP and SPCP in terms of accuracy and speed.

Solving Principal Component Pursuit in Linear Time via $l_1$ Filtering

It is proved that, under suitable conditions, this problem can be exactly solved by Principal Component Pursuit (PCP), i.e., by minimizing a combination of the nuclear norm and the $l_1$ norm, and a novel algorithm called $l_1$ filtering is proposed, which is the first algorithm that can solve a nuclear norm minimization problem in linear time.

Real-time Robust Principal Components' Pursuit

  • Chenlu Qiu, N. Vaswani
  • Computer Science
    2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
  • 2010
A solution that automatically handles correlated sparse outliers is proposed, motivated as a tool for video surveillance applications in which the background image sequence forms the low-rank part and the moving objects, persons, or abnormalities form the sparse part.

Robust Principal Component Analysis with Missing Data

This paper proposes a robust principal component analysis (RPCA) plus matrix completion framework to recover low-rank and sparse matrices from missing and grossly corrupted observations, and develops two alternating direction augmented Lagrangian (ADAL) algorithms to solve the proposed problems efficiently.
...

References

Showing 1-10 of 70 references

Fast Convex Optimization Algorithms for Exact Recovery of a Corrupted Low-Rank Matrix

Two complementary approaches for solving the problem of recovering a low-rank matrix with a fraction of its entries arbitrarily corrupted are developed and compared, both several orders of magnitude faster than the previous state-of-the-art algorithm for this problem.

Matrix Completion With Noise

This paper surveys the literature on matrix completion, introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise, and shows that, in practice, nuclear-norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples.

A Singular Value Thresholding Algorithm for Matrix Completion

This paper develops a simple, easy-to-implement first-order algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank, and provides a framework in which these algorithms can be understood in terms of well-known Lagrange multiplier algorithms.
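
For reference, a sketch of the shrinkage step at the heart of that algorithm, as it is commonly stated: given the SVD $X = U\,\mathrm{diag}(\sigma_i)\,V^{\top}$, the singular value thresholding operator is

\[
\mathcal{D}_{\tau}(X) = U\,\mathrm{diag}\big((\sigma_i - \tau)_{+}\big)\,V^{\top},
\]

and the matrix completion iteration alternates $X^{k} = \mathcal{D}_{\tau}(Y^{k-1})$ with the dual update $Y^{k} = Y^{k-1} + \delta_k\,\mathcal{P}_{\Omega}(M - X^{k})$, where $\mathcal{P}_{\Omega}$ keeps only the observed entries and the step sizes $\delta_k$ and threshold $\tau$ are tuning parameters.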

Robust L1 norm factorization in the presence of outliers and missing data by alternative convex programming

  • Qifa Ke, T. Kanade
  • Computer Science
    2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)
  • 2005
This paper formulates matrix factorization as an L1 norm minimization problem that is solved efficiently by alternative convex programming; the approach is robust without requiring initial weighting, handles missing data straightforwardly, and provides a framework in which constraints and prior knowledge can be conveniently incorporated.

Exact Matrix Completion via Convex Optimization

It is proved that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries, and that objects other than signals and images can be perfectly reconstructed from very limited information.
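
The convex program referred to here is nuclear norm minimization constrained to agree with the observed entries; as a sketch, with $\Omega$ the set of observed positions:

\[
\min_{X}\ \|X\|_{*}
\quad \text{subject to} \quad X_{ij} = M_{ij}, \quad (i,j) \in \Omega.
\]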

Sparse and low-rank matrix decomposition via alternating direction method

The alternating direction method (ADM) approach is proposed for accomplishing the sparse and low-rank recovery by fully exploiting the separable structure of the convex relaxation problem.

The Power of Convex Relaxation: Near-Optimal Matrix Completion

This paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors).

Recovering Low-Rank Matrices From Few Coefficients in Any Basis

  • D. Gross
  • Computer Science
    IEEE Transactions on Information Theory
  • 2011
It is shown that an unknown low-rank matrix can be efficiently reconstructed from only a small number of randomly sampled expansion coefficients with respect to any given matrix basis, where the required number of samples is governed by a parameter that quantifies the “degree of incoherence” between the unknown matrix and the basis.

Compressive Sensing for Background Subtraction

A method to directly recover background-subtracted images using compressive sensing (CS) is described, along with its applications in communication-constrained multi-camera computer vision problems; the approach is also suitable for image coding in communication-constrained settings.
...