Exactly Robust Kernel Principal Component Analysis

@article{Fan2020ExactlyRK,
  title={Exactly Robust Kernel Principal Component Analysis},
  author={Jicong Fan and Tommy W. S. Chow},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2020},
  volume={31},
  pages={749-761}
}
  • Jicong Fan, Tommy W. S. Chow
  • Published 1 March 2018
  • Computer Science
  • IEEE Transactions on Neural Networks and Learning Systems
Robust principal component analysis (RPCA) can recover low-rank matrices when they are corrupted by sparse noise. In practice, however, many matrices are of high rank and hence cannot be recovered by RPCA. We propose a novel method called robust kernel principal component analysis (RKPCA) to decompose a partially corrupted matrix as a sparse matrix plus a high- or full-rank matrix with low latent dimensionality. RKPCA can be applied to many problems such as noise removal and subspace…
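For context, the baseline RPCA decomposition that RKPCA generalizes can be computed by principal component pursuit. Below is a minimal NumPy sketch using ADMM, alternating singular value thresholding for the low-rank part with soft thresholding for the sparse part; the function names and parameter defaults are illustrative choices, not the authors' implementation.

import numpy as np

def soft_threshold(X, tau):
    # Entrywise shrinkage: sign(x) * max(|x| - tau, 0).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    # Singular value thresholding: soft-threshold the singular values of X.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

def rpca_pcp(M, lam=None, mu=None, n_iter=500, tol=1e-7):
    # Principal component pursuit via ADMM: minimize ||L||_* + lam * ||S||_1
    # subject to L + S = M, with multiplier Y and penalty parameter mu.
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = m * n / (4.0 * np.abs(M).sum())
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        R = M - L - S  # primal residual
        Y = Y + mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S

On a matrix that is truly low rank plus sparse corruption, L recovers the clean part and S the corruption; RKPCA targets the harder case where the clean component is high or full rank but has low latent dimensionality.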

Citations

Robust Non-Linear Matrix Factorization for Dictionary Learning, Denoising, and Clustering
TLDR
This work proposes a new robust nonlinear factorization method called Robust Non-Linear Matrix Factorization (RNLMF), which constructs a dictionary for the data space by factoring a kernelized feature space and scales to matrices with thousands of rows and columns.
Robust Kernel Principal Component Analysis With ℓ2,1-Regularized Loss Minimization
TLDR
This paper reformulates KPCA in Euclidean space to construct a robust KPCA method, where an error measurement is introduced into the loss function and $\ell_{2,1}$-regularization is added to it.
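For reference, the $\ell_{2,1}$ norm of an error matrix is the sum of its column-wise $\ell_2$ norms (conventions differ on whether rows or columns are grouped), which encourages entire columns of the error to vanish. A minimal NumPy computation:

import numpy as np

def l21_norm(E):
    # Sum of column-wise l2 norms; use axis=1 for a row-grouped convention.
    return np.linalg.norm(E, axis=0).sum()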
Factor Group-Sparse Regularization for Efficient Low-Rank Matrix Recovery
TLDR
Compared to the max norm and the factored formulation of the nuclear norm, factor group-sparse regularizers are more efficient, accurate, and robust to the initial guess of rank.
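The factored formulation of the nuclear norm mentioned here is the standard variational identity

$$\|X\|_* = \min_{U,V:\; UV^{\top}=X} \tfrac{1}{2}\big(\|U\|_F^2 + \|V\|_F^2\big),$$

which lets low-rank penalties be optimized over small factors $U$ and $V$ rather than over the full matrix.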
A Robust Method for Kernel Principal Component Analysis
  • Duo Wang, Toshihisa Tanaka
  • Computer Science
    2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)
  • 2020
TLDR
This work proposes a robust loss function combined with $\ell_{2,1}$-regularization for KPCA, inspired by sparse PCA via variable projection, and demonstrates the effectiveness of the proposed method through a numerical example on outlier detection.
Enhanced PSSV for Incomplete Data Analysis
TLDR
This work imposes variance regularization in PSSV by analyzing PSSV and the truncated nuclear norm, and shows the effectiveness of the proposed approach on background extraction from incomplete videos and data, image denoising, and clustering.
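Assuming PSSV denotes the partial sum of singular values, the regularizer penalizes only the tail singular values beyond a target rank $r$ of an $m \times n$ matrix:

$$\mathrm{PSSV}_r(X) = \sum_{i=r+1}^{\min(m,n)} \sigma_i(X),$$

leaving the $r$ largest singular values unpenalized.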
Surface Defects Detection Using Non-convex Total Variation Regularized RPCA With Kernelization
TLDR
An unsupervised surface defect detection method based on nonconvex total variation (TV)-regularized RPCA with kernelization, named KRPCA-NTV, is proposed; it outperforms competing methods in terms of accuracy and generalizability.
Principal Component Analysis Based on T$\ell_1$-norm Maximization
TLDR
Numerical experiments show that the proposed PCA based on the T$\ell_1$-norm clearly outperforms PCA-p and $\ell_p$SPCA, as well as PCA and PCA-$\ell_1$.
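Assuming T$\ell_1$ refers to the transformed $\ell_1$ function commonly used as a nonconvex surrogate for sparsity, it acts elementwise as

$$\rho_a(x) = \frac{(a+1)\,|x|}{a+|x|}, \qquad a>0,$$

interpolating between an $\ell_0$-like penalty as $a \to 0^{+}$ and an $\ell_1$-like penalty as $a \to \infty$.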
Fault Detection of Wind Turbines by Subspace Reconstruction-Based Robust Kernel Principal Component Analysis
TLDR
A fault detection framework based on subspace reconstruction-based robust kernel principal component analysis (SR-RKPCA) is proposed for wind turbine SCADA data; it extracts nonlinear features under discontinuous interference to improve the stability of the fault detection model.
Robust 2DPCA by Tℓ₁ Criterion Maximization for Image Recognition
TLDR
Two-dimensional principal component analysis based on the Tℓ₁ criterion is proposed, and the experimental results show that its performance is superior to that of classical 2DPCA, 2DPCA-L1, 2DPCA-L1-S, N-2-DPCA, G2DPCA, and Angle-2DPCA.

References

SHOWING 1-10 OF 57 REFERENCES
Reinforced Robust Principal Component Pursuit
TLDR
It is argued that it is necessary to study the presence of outliers not only in the observed data matrix but also in the orthogonal complement subspace of the authentic principal subspace, because the latter can seriously skew the estimation of the principal components.
Robust principal component analysis?
TLDR
It is proved that, under suitable assumptions, it is possible to exactly recover both the low-rank and the sparse components, among all feasible decompositions, by solving a very convenient convex program called Principal Component Pursuit; this suggests the possibility of a principled approach to robust principal component analysis.
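The convex program in question, Principal Component Pursuit, decomposes an observed matrix $M \in \mathbb{R}^{n_1 \times n_2}$ via

$$\min_{L,S}\; \|L\|_* + \lambda\|S\|_1 \quad \text{subject to} \quad L+S=M,$$

with the standard weight $\lambda = 1/\sqrt{\max(n_1,n_2)}$.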
R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization
TLDR
Experiments on several real-life datasets show that R1-PCA can effectively handle outliers, and that L1-norm K-means leads to poor results while R1-K-means outperforms standard K-means.
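The rotational invariant $R_1$ norm behind R1-PCA sums per-sample (unsquared) $\ell_2$ residuals; for a residual matrix $E$ with columns $e_i$,

$$\|E\|_{R_1} = \sum_i \|e_i\|_2,$$

which, unlike the entrywise $L_1$ norm, is invariant to rotations of the data while still downweighting outlying samples.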
Robust Subspace Clustering With Complex Noise
TLDR
Experimental results on three commonly used data sets show that the proposed novel optimization model for robust subspace clustering outperforms state-of-the-art subspace clustering methods.
A closed form solution to robust subspace estimation and clustering
TLDR
This work solves the problem of fitting one or more subspaces to a collection of data points drawn from the subspaces and corrupted by noise/outliers, using an augmented Lagrangian optimization framework that combines the proposed polynomial thresholding operator with the more traditional shrinkage-thresholding operator.
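The shrinkage-thresholding operator mentioned here is the proximal map of the $\ell_1$ norm, applied entrywise:

$$\mathcal{S}_{\tau}(x) = \operatorname{sign}(x)\,\max(|x|-\tau,\,0);$$

the paper's polynomial thresholding operator plays an analogous proximal role for its own penalty.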
Robust Principal Component Analysis with Complex Noise
TLDR
This work proposes a generative RPCA model under the Bayesian framework by modeling data noise as a mixture of Gaussians (MoG), a universal approximator of continuous distributions; the model is thus able to fit a wide range of noises such as Laplacian, Gaussian, and sparse noise, and any combination of them.
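The MoG noise model referenced above assumes each residual entry is drawn from a mixture of zero-mean Gaussians,

$$p(e) = \sum_{k=1}^{K} \pi_k\,\mathcal{N}(e \mid 0, \sigma_k^2), \qquad \sum_{k} \pi_k = 1,$$

whose components can mimic Gaussian, Laplacian, or sparse corruption as the weights and variances vary.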
Robust Kernel Low-Rank Representation
TLDR
This work proposes the robust kernel LRR (RKLRR) approach and develops an efficient optimization algorithm to solve it based on the alternating direction method, showing that both subproblems in the optimization algorithm can be solved efficiently and exactly.
Learning Structured Low-Rank Representation via Matrix Factorization
TLDR
This paper proposes to learn structured LRR by factorizing the nuclear norm regularized matrix, which leads to the proposed non-convex formulation NLRR, a general framework for unifying a variety of popular algorithms including LRR, dictionary learning, robust principal component analysis, sparse subspace clustering, etc.