Fast and Provable Tensor Robust Principal Component Analysis via Scaled Gradient Descent

@article{Dong2022FastAP,
  title={Fast and Provable Tensor Robust Principal Component Analysis via Scaled Gradient Descent},
  author={Harry Dong and Tian Tong and Cong Ma and Yuejie Chi},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.09109}
}
An increasing number of data science and machine learning problems rely on computation with tensors, which better capture the multi-way relationships and interactions of data than matrices. When tapping into this critical advantage, a key challenge is to develop computationally efficient and provably correct algorithms for extracting useful information from tensor data that are simultaneously robust to corruptions and ill-conditioning. This paper tackles tensor robust principal component analysis… 
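
To make the scaled gradient descent idea concrete, below is a minimal matrix-case sketch of a ScaledGD-style robust PCA loop in NumPy. Everything here is an illustrative assumption: the function names are hypothetical, the corruption-removal step is a simple keep-the-largest-entries heuristic, and the paper's actual algorithm operates on Tucker-factorized tensors with its own initialization and thresholding, so this conveys only the flavor of the scaled update.

    import numpy as np

    def keep_largest_entries(M, frac):
        # Hypothetical corruption estimate: keep the fraction `frac` of
        # entries with the largest magnitudes and zero out the rest.
        k = int(frac * M.size)
        if k == 0:
            return np.zeros_like(M)
        thresh = np.partition(np.abs(M).ravel(), -k)[-k]
        return np.where(np.abs(M) >= thresh, M, 0.0)

    def scaled_gd_rpca(Y, rank, alpha=0.1, eta=0.5, iters=200):
        # Recover Y ≈ L @ R.T + S, with L, R low-rank factors and S sparse.
        # Spectral initialization after removing an initial corruption estimate.
        S = keep_largest_entries(Y, alpha)
        U, sig, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = U[:, :rank] * np.sqrt(sig[:rank])
        R = Vt[:rank].T * np.sqrt(sig[:rank])
        for _ in range(iters):
            S = keep_largest_entries(Y - L @ R.T, alpha)  # re-estimate sparse part
            G = L @ R.T + S - Y                           # residual of the fit
            # Scaled gradient steps: right-preconditioning by (R^T R)^{-1} and
            # (L^T L)^{-1} is what removes the condition-number dependence.
            L, R = (L - eta * G @ R @ np.linalg.inv(R.T @ R),
                    R - eta * G.T @ L @ np.linalg.inv(L.T @ L))
        return L @ R.T, S

    # Toy usage: a rank-2 matrix with 5% large sparse corruptions.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
    S_true = np.where(rng.random((50, 40)) < 0.05, 10.0, 0.0)
    X_hat, _ = scaled_gd_rpca(X + S_true, rank=2)
    print(np.linalg.norm(X_hat - X) / np.linalg.norm(X))

Dropping the two preconditioners recovers plain factored gradient descent, whose convergence rate degrades as the ground truth becomes ill-conditioned.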

References

Showing 1-10 of 45 references

Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation from Incomplete Measurements

TLDR: This paper develops a scaled gradient descent (ScaledGD) algorithm to directly recover the tensor factors with tailored spectral initializations, and shows that it provably converges at a linear rate independent of the condition number of the ground-truth tensor for two canonical problems.
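
For reference, in the matrix case with factorization $X = L R^\top$ and loss $f$, the ScaledGD update this entry refers to takes the form below; the tensor version preconditions each Tucker factor analogously (notation here is assumed, not taken from the entry):

\[
L_{t+1} = L_t - \eta \, \nabla_L f(L_t, R_t) \, (R_t^\top R_t)^{-1}, \qquad
R_{t+1} = R_t - \eta \, \nabla_R f(L_t, R_t) \, (L_t^\top L_t)^{-1}.
\]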

Tensor vs. Matrix Methods: Robust Tensor Decomposition under Block Sparse Perturbations

TLDR: It is established that tensor methods can tolerate a higher level of gross corruptions compared to matrix methods.

Accelerating Ill-Conditioned Robust Low-Rank Tensor Regression

  • Tian Tong, Cong Ma, Yuejie Chi
  • Computer Science
  • ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2022
TLDR: A provably efficient algorithm is proposed that directly estimates the tensor factors by solving a nonsmooth and nonconvex composite optimization problem minimizing the least absolute deviation loss.
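
A plausible shape of the least absolute deviation objective described here, with $y$ the vector of $m$ linear measurements, $\mathcal{A}$ the measurement operator, and the unknown tensor in Tucker form $(\mathcal{G};\, U_1, U_2, U_3)$ (symbols assumed for illustration):

\[
\min_{\mathcal{G},\, U_1,\, U_2,\, U_3} \; \frac{1}{m} \big\| y - \mathcal{A}\big( (\mathcal{G};\, U_1,\, U_2,\, U_3) \big) \big\|_1 .
\]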

Robust Low-Rank Tensor Recovery: Models and Algorithms

TLDR: This paper proposes tailored optimization algorithms with global convergence guarantees for solving both the constrained and the Lagrangian formulations of the problem, and proposes a nonconvex model that can often improve on the recovery results of the convex models.

Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization

TLDR: This work proves that, under suitable assumptions, both the low-rank and the sparse components can be recovered exactly by simply solving a convex program whose objective is a weighted combination of the tensor nuclear norm and the l1-norm.
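
Written out, that convex program has the principal component pursuit form in tensor notation, where $\mathcal{Y}$ is the observation and $\lambda$ trades off the two terms (this line of work typically sets $\lambda$ on the order of $1/\sqrt{\max(n_1, n_2)\, n_3}$ for an $n_1 \times n_2 \times n_3$ tensor):

\[
\min_{\mathcal{L},\, \mathcal{S}} \; \|\mathcal{L}\|_* + \lambda \|\mathcal{S}\|_1
\quad \text{subject to} \quad \mathcal{L} + \mathcal{S} = \mathcal{Y}.
\]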

Generalized Low-rank plus Sparse Tensor Estimation by Fast Riemannian Optimization

TLDR: A fast algorithm is proposed that integrates Riemannian gradient descent with a novel gradient pruning procedure, achieving non-trivial error bounds for heavy-tailed tensor PCA whenever the noise has a finite (2+ε)-th moment.

Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optimization

TLDR: It is proved that most matrices A can be efficiently and exactly recovered from most error sign-and-support patterns by solving a simple convex program, for which a fast and provably convergent algorithm is given.
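
The matrix program here is the classical one, with observation $D$, low-rank part $A$, and sparse error $E$ (the now-standard weight in this literature is $\lambda = 1/\sqrt{\max(n_1, n_2)}$ for an $n_1 \times n_2$ matrix):

\[
\min_{A,\, E} \; \|A\|_* + \lambda \|E\|_1
\quad \text{subject to} \quad A + E = D.
\]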

On Tensor Completion via Nuclear Norm Minimization

TLDR: A convex optimization approach to tensor completion is investigated by directly minimizing a tensor nuclear norm, and it is proved that this leads to an improved sample size requirement; the analysis relies on a series of algebraic and probabilistic techniques which may be of independent interest and useful in other tensor-related problems.
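
Concretely, nuclear norm tensor completion solves a program of the following shape, with $\Omega$ the set of observed entries (the specific tensor nuclear norm used in this reference may differ from other definitions):

\[
\min_{\mathcal{X}} \; \|\mathcal{X}\|_*
\quad \text{subject to} \quad \mathcal{X}_{ijk} = \mathcal{Y}_{ijk} \;\; \text{for all } (i,j,k) \in \Omega.
\]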

Low-Rank Matrix Recovery With Scaled Subgradient Methods: Fast and Robust Convergence Without the Condition Number

Many problems in data science can be treated as estimating a low-rank matrix from highly incomplete, sometimes even corrupted, observations. One popular approach is to resort to matrix factorization…

Tensor Robust Principal Component Analysis: Better recovery with atomic norm regularization

TLDR: The results improve on existing performance guarantees for tensor RPCA, including those for matrix RPCA, and show that atomic-norm regularization provides better recovery for tensor-structured data sets than other approaches based on matricization.