Fast Robust Tensor Principal Component Analysis via Fiber CUR Decomposition

@article{Cai2021FastRT,
  title={Fast Robust Tensor Principal Component Analysis via Fiber CUR Decomposition},
  author={HanQin Cai and Zehan Chao and Longxiu Huang and Deanna Needell},
  journal={2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)},
  year={2021},
  pages={189-197}
}
  • HanQin Cai, Zehan Chao, Longxiu Huang, Deanna Needell
  • Published 23 August 2021
  • Computer Science
  • 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
We study the problem of tensor robust principal component analysis (TRPCA), which aims to separate an underlying low-multilinear-rank tensor and a sparse outlier tensor from their sum. In this work, we propose a fast non-convex algorithm, coined Robust Tensor CUR (RTCUR), for large-scale TRPCA problems. RTCUR considers a framework of alternating projections and utilizes the recently developed tensor Fiber CUR decomposition to dramatically lower the computational complexity. The performance… 
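The key ingredient named in the abstract, the tensor Fiber CUR decomposition, reconstructs a low-multilinear-rank tensor from a small core subtensor and a handful of mode-wise fibers, which is what lets RTCUR avoid full-size SVDs inside its alternating-projections loop. Below is a minimal NumPy sketch of a Fiber CUR approximation for a 3-way tensor under simple uniform sampling; it is an illustration only, not the authors' implementation, and the helper names, the `oversample` factor, and the sampling scheme are assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` unfolding: columns are the mode-`mode` fibers of T."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    moved = np.moveaxis(T, mode, 0)
    out = (M @ moved.reshape(T.shape[mode], -1)).reshape((M.shape[0],) + moved.shape[1:])
    return np.moveaxis(out, 0, mode)

def fiber_cur_approx(T, ranks, oversample=3, rng=None):
    """Illustrative Fiber CUR: T ~ R x_1 (C_1 U_1^+) x_2 (C_2 U_2^+) x_3 (C_3 U_3^+),
    where R = T[I_1, I_2, I_3] is a sampled core subtensor, C_i collects sampled
    mode-i fibers, and U_i = C_i[I_i, :] is the intersection submatrix."""
    rng = np.random.default_rng(rng)
    # sample slightly more indices than the target multilinear rank in each mode
    I = [rng.choice(T.shape[m], size=min(oversample * ranks[m], T.shape[m]), replace=False)
         for m in range(3)]
    T_hat = T[np.ix_(*I)]                      # core subtensor R
    for m in range(3):
        Tm = unfold(T, m)
        J = rng.choice(Tm.shape[1], size=min(oversample * ranks[m], Tm.shape[1]), replace=False)
        C = Tm[:, J]                           # sampled mode-m fibers
        U = C[I[m], :]                         # rows of C indexed by I_m
        T_hat = mode_dot(T_hat, C @ np.linalg.pinv(U), m)
    return T_hat

# quick check on a synthetic low-multilinear-rank tensor
rng = np.random.default_rng(0)
core = rng.standard_normal((3, 3, 3))
factors = [rng.standard_normal((40, 3)) for _ in range(3)]
T = core
for m in range(3):
    T = mode_dot(T, factors[m], m)
T_hat = fiber_cur_approx(T, ranks=(3, 3, 3), rng=1)
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))  # ~0 when rank(U_i) matches the multilinear rank
```

Because only the sampled core and fibers are ever touched, each low-rank update scales with the sample sizes rather than with the full tensor, which is the source of the claimed speedup.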

Citations

Riemannian CUR Decompositions for Robust Principal Component Analysis

A novel non-convex Robust PCA algorithm, coined Riemannian CUR (RieCUR), which utilizes ideas from Riemannian optimization and robust CUR decompositions and achieves state-of-the-art performance on Robust PCA in terms of both computational complexity and outlier tolerance.

Learned Robust PCA: A Scalable Deep Unfolding Approach for High-Dimensional Outlier Detection

A scalable and learnable non-convex approach for high-dimensional RPCA problems, called Learned Robust PCA (LRPCA), which is highly efficient and whose free parameters can be effectively learned via deep unfolding.

Generalized Pseudoskeleton Decompositions

We characterize some variations of pseudoskeleton (also called CUR) decompositions for matrices and tensors over arbitrary fields. These characterizations extend previous results to…

Early Warning of Real Estate Market Development Risk Based on Network News Topic Mining and Neural Network

  • Shuli Shen
  • Computer Science, Economics
    Mathematical Problems in Engineering
  • 2022
It is proposed that a text vector model combining keywords with neural-network-based word vectors can more accurately mine news data, quickly obtain news information, and provide prediction and early warning for industries such as real estate.

Structured Gradient Descent for Fast Robust Low-Rank Hankel Matrix Completion

The convenient Hankel structure is exploited and a novel non-convex algorithm, coined Hankel Structured Gradient Descent (HSGD), is proposed for large-scale robust Hankel matrix completion problems; it is highly efficient in both computation and sample complexity compared to the state of the art.

Cross Tensor Approximation Methods for Compression and Dimensionality Reduction

This paper reviews and extends state-of-the-art deterministic and randomized algorithms for CTA with intuitive graphical illustrations, and discusses several possible generalizations of cross matrix approximation (CMA) to tensors, including CTAs based on fiber selection, slice-tube selection, and lateral-horizontal slice selection.

References

SHOWING 1-10 OF 35 REFERENCES

Mode-wise Tensor Decompositions: Multi-dimensional Generalizations of CUR Decompositions

This paper studies the characterization, perturbation analysis, and an efficient sampling strategy for two primary tensor CUR approximations, namely Chidori and Fiber CUR, and characterizes exact tensor CUR decompositions for low-multilinear-rank tensors.

Robust Low-Rank Tensor Recovery: Models and Algorithms

This paper proposes tailored optimization algorithms with global convergence guarantees for solving both the constrained and the Lagrangian formulations of the problem and proposes a nonconvex model that can often improve the recovery results from the convex models.

Tensor vs. Matrix Methods: Robust Tensor Decomposition under Block Sparse Perturbations

It is established that tensor methods can tolerate a higher level of gross corruptions compared to matrix methods.

Tensor Robust Principal Component Analysis with a New Tensor Nuclear Norm

This paper considers the Tensor Robust Principal Component Analysis (TRPCA) problem, which aims to exactly recover the low-rank and sparse components from their sum, and proposes a model based on the recently proposed tensor-tensor product, which includes matrix RPCA as a special case.

A Two-Stage Approach to Robust Tensor Decomposition

A two-stage approach that combines HoRPCA with Higher Order SVD (HoSVD) to address challenges of robust tensor decomposition and high computational complexity in multidimensional data analysis.

Rapid Robust Principal Component Analysis: CUR Accelerated Inexact Low Rank Estimation

A novel non-convex algorithm, coined Iterated Robust CUR (IRCUR), is proposed, which dramatically improves the computational efficiency in comparison with the existing algorithms, and achieves this acceleration by employing CUR decomposition when updating the low rank component.
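For context on why a CUR step is cheap: a matrix CUR decomposition rebuilds a rank-r matrix from a few sampled columns C, rows R, and their intersection U as A ≈ C U^† R, so only the sampled entries ever need to be formed. A minimal sketch with uniform sampling and illustrative sizes (not the IRCUR algorithm itself):

```python
import numpy as np

def cur_approx(A, n_rows, n_cols, rng=None):
    """Illustrative matrix CUR: A ~ C @ pinv(U) @ R with C = A[:, J],
    R = A[I, :], and U = A[I, J]; exact when rank(U) equals rank(A)."""
    rng = np.random.default_rng(rng)
    I = rng.choice(A.shape[0], size=n_rows, replace=False)
    J = rng.choice(A.shape[1], size=n_cols, replace=False)
    C, R, U = A[:, J], A[I, :], A[np.ix_(I, J)]
    return C @ np.linalg.pinv(U) @ R

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 400))  # rank-5 test matrix
err = np.linalg.norm(A - cur_approx(A, n_rows=15, n_cols=15, rng=1)) / np.linalg.norm(A)
print(err)  # ~0 for this generic low-rank matrix
```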

Novel Methods for Multilinear Data Completion and De-noising Based on Tensor-SVD

This paper outlines a tensor nuclear norm penalized algorithm for video completion from missing entries and shows superior performance of the method compared to the matrix robust PCA adapted to this setting as proposed in [4].

Robust CUR Decomposition: Theory and Imaging Applications

This paper examines the qualitative behavior of the Robust CUR decompositions on the benchmark videos and face datasets, and finds that the method works as well as standard Robust PCA while being significantly faster.

Robust Tensor Decomposition with Gross Corruption

It is shown that under certain conditions, the true low-rank tensor as well as the sparse corruption tensor can be recovered simultaneously and the theory can precisely predict the scaling behavior in practice.

Tensor-CUR decompositions for tensor-based data

In the hyperspectral data application, the tensor-CUR decomposition is used to compress the data, and it is shown that classification quality is not substantially reduced even after substantial data compression.