Efficient L1-Norm Principal-Component Analysis via Bit Flipping

@article{Markopoulos2017EfficientLP,
  title={Efficient L1-Norm Principal-Component Analysis via Bit Flipping},
  author={Panos P. Markopoulos and Sandipan Kundu and Shubham Chamadia and Dimitris A. Pados},
  journal={IEEE Transactions on Signal Processing},
  year={2017},
  volume={65},
  pages={4252--4264}
}
It was shown recently that the $K$ L1-norm principal components (L1-PCs) of a real-valued data matrix $\mathbf{X} \in \mathbb{R}^{D \times N}$ ($N$ data samples of $D$ dimensions) can be exactly calculated with…
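
For context, the optimization the paper targets, as established in the authors' earlier work on L1-subspace signal processing (see References), is $\mathbf{Q}_{L1} = \arg\max_{\mathbf{Q} \in \mathbb{R}^{D \times K},\, \mathbf{Q}^\top\mathbf{Q} = \mathbf{I}_K} \|\mathbf{Q}^\top\mathbf{X}\|_1$. This is equivalent to the binary problem $\mathbf{B}_{\mathrm{opt}} = \arg\max_{\mathbf{B} \in \{\pm 1\}^{N \times K}} \|\mathbf{X}\mathbf{B}\|_*$, where $\|\cdot\|_*$ denotes the nuclear norm; the L1-PCs are then recovered by the Procrustes step $\mathbf{Q}_{L1} = \mathbf{U}\mathbf{V}^\top$, with $\mathbf{X}\mathbf{B}_{\mathrm{opt}} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$ a compact SVD.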

Citations

Grassmann Manifold Optimization for Fast $L_1$-Norm Principal Component Analysis

The proposed Grassmann manifold optimization method is computationally more efficient and produces results with lower reprojection error than previous methods; its cost is relatively independent of dataset size, making it well suited to the big-data problems commonly encountered today.
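
The snippet names the technique but not the iteration. A minimal, hypothetical Riemannian-gradient sketch for the L1-PCA objective $\|\mathbf{Q}^\top\mathbf{X}\|_1$ is given below; the step size, QR retraction, and subgradient choice are assumptions, not the authors' exact scheme.

    import numpy as np

    def grassmann_l1pca_step(X, Q, step=1e-2):
        """One hypothetical Riemannian subgradient-ascent step for
        maximizing ||Q^T X||_1 over orthonormal Q (D x K).
        Euclidean subgradient of sum_i ||Q^T x_i||_1 is X sign(X^T Q);
        it is projected onto the tangent space at Q, then a QR
        retraction restores orthonormality."""
        G = X @ np.sign(X.T @ Q)                # D x K Euclidean subgradient
        G_tan = G - Q @ (Q.T @ G)               # tangent-space projection at Q
        Qn, R = np.linalg.qr(Q + step * G_tan)  # retraction back to the manifold
        return Qn * np.sign(np.diag(R))         # canonical sign fix of the QR factor

Iterating this step until the objective stops increasing gives a simple manifold-ascent baseline.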

Reduced-Rank L1-Norm Principal-Component Analysis With Performance Guarantees

The proposed method combines the denoising capabilities and low computation cost of standard PCA with the outlier-resistance of L1-PCA.
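
One plausible reading of that two-stage combination, sketched with a hypothetical l1pca_solver handle (the paper's actual construction and its performance guarantees may differ):

    import numpy as np

    def reduced_rank_l1pca(X, r, K, l1pca_solver):
        """Hypothetical two-stage pipeline: denoise X (D x N) with a cheap
        standard rank-r PCA approximation, then extract K outlier-resistant
        L1-PCs from the reduced-rank data with l1pca_solver (for example,
        a bit-flipping routine)."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_r = (U[:, :r] * s[:r]) @ Vt[:r, :]    # rank-r L2 approximation of X
        return l1pca_solver(X_r, K)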

GrIP-PCA: Grassmann Iterative P-Norm Principal Component Analysis

The Grassmann Iterative P-norm PCA (GrIP-PCA) method is presented, which uses an iterative Grassmann manifold optimization approach to find the solution to the highly non-convex PCA problem.

Revisiting L2,1-Norm Robustness With Vector Outlier Regularization

  • Bo Jiang, C. Ding
  • Computer Science
    IEEE Transactions on Neural Networks and Learning Systems
  • 2020
A new vector outlier regularization (VOR) framework is proposed along with an equivalent continuous formulation, based on which it is proved that the $L_{2,1}$-norm function is the limiting case of the proposed VOR function.

Towards Robust Discriminative Projections Learning via Non-Greedy $\ell_{2,1}$-Norm…

A novel robust LDA is proposed, together with an efficient iterative optimization algorithm that solves a general ratio minimization problem, and its convergence is rigorously proved.

Detecting Anomaly in Chemical Sensors via L1-Kernel-Based Principal Component Analysis

This letter introduces a new multiplication-free kernel, related to the $\ell_1$-norm, for the anomaly detection task, and shows that the kernel-PCA method achieves a higher area-under-the-curve (AUC) score than the baseline regular-PCA method.
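
As a rough illustration only: multiplication-free operators in this line of work typically take the form $a \oplus b = \mathrm{sign}(a)\,\mathrm{sign}(b)\,(|a| + |b|)$, which connects to the $\ell_1$ norm via $\sum_i x_i \oplus x_i = 2\|\mathbf{x}\|_1$. The sketch below assumes that definition; the letter's exact kernel may differ.

    import numpy as np

    def mf_kernel(x, y):
        """Hypothetical multiplication-free kernel from the operator
        a (+) b = sign(a) sign(b) (|a| + |b|), summed over coordinates;
        note k(x, x) = 2 * ||x||_1. The sign product reduces to an XOR
        of sign bits in fixed-point hardware, hence 'multiplication-free'."""
        return float(np.sum(np.sign(x) * np.sign(y) * (np.abs(x) + np.abs(y))))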

Optimal sparse L1-norm principal-component analysis

  • Shubham Chamadia, D. Pados
  • Computer Science
    2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2017
We present an algorithm that computes exactly (optimally) the $S$-sparse ($1 \le S < D$) maximum-$L_1$-norm-projection principal component of a real-valued data matrix $\mathbf{X} \in \mathbb{R}^{D \times N}$ that contains…

A PTAS for $\ell_p$-Low Rank Approximation

It is observed that there is no approximation algorithm for the Generalized Binary $\ell_0$-Rank-$k$ Approximation problem running in time…, and that for finite fields of constant size, under the ETH, any fixed constant-factor approximation algorithm requires $2^{k^{\delta}}$ time for some constant $\delta > 0$.

Novel Algorithms for Exact and Efficient L1-Norm-Based Tucker2 Decomposition

An efficient (quadratic-cost/near-exact) algorithm that approximates the solution to rank-1 L1-Tucker2 by means of a converging sequence of optimal single-bit flips is developed, accompanied by a formal convergence proof and complexity analysis.

Average Case Column Subset Selection for Entrywise $\ell_1$-Norm Loss

This is the first algorithm of any kind achieving a $(1+\epsilon)$-approximate column subset selection for entrywise $\ell_1$-norm loss low-rank approximation.
...

References


L1-Norm Principal-Component Analysis via Bit Flipping

L1-BF is presented: a novel, near-optimal algorithm that calculates the $K$ L1-PCs of $\mathbf{X}$ with cost $\mathcal{O}(ND\min\{N, D\} + N^2(K^4 + DK^2) + DNK^3)$, comparable to that of standard (L2-norm) principal-component analysis.
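
A minimal brute-force sketch of the bit-flipping idea follows; the incremental nuclear-norm updates, initialization, and termination rules that give the quoted complexity are omitted, and each candidate flip here naively re-evaluates $\|\mathbf{X}\mathbf{B}\|_*$.

    import numpy as np

    def l1_bf_sketch(X, K, max_iter=100, seed=0):
        """Greedy single-bit-flip ascent on ||X B||_* over B in {+-1}^(N x K),
        followed by the Procrustes step mapping the best B to orthonormal
        L1-PCs (assumes D >= K). Each candidate flip re-evaluates the nuclear
        norm from scratch, for clarity rather than efficiency."""
        D, N = X.shape
        rng = np.random.default_rng(seed)
        B = rng.choice([-1.0, 1.0], size=(N, K))
        nuc = np.linalg.norm(X @ B, ord='nuc')
        for _ in range(max_iter):
            best_gain, best_idx = 0.0, None
            for n in range(N):
                for k in range(K):
                    B[n, k] *= -1.0             # tentative single-bit flip
                    gain = np.linalg.norm(X @ B, ord='nuc') - nuc
                    B[n, k] *= -1.0             # undo the flip
                    if gain > best_gain:
                        best_gain, best_idx = gain, (n, k)
            if best_idx is None:                # no improving flip: local optimum
                break
            B[best_idx] *= -1.0
            nuc += best_gain
        U, _, Vt = np.linalg.svd(X @ B, full_matrices=False)
        return U @ Vt                           # D x K orthonormal L1-PCs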

Some Options for L1-subspace Signal Processing

It is proved that the case of engineering interest, fixed dimension $D$ and asymptotically large sample support $N$, is not NP-hard, and an optimal algorithm of complexity $O(N^D)$ is presented.

Fast computation of the L1-principal component of real-valued data

This paper presents, for the first time in the literature, a fast greedy single-bit-flipping conditionally optimal iterative algorithm for the computation of the L1 principal component with complexity $O(N^3)$, and demonstrates the effectiveness of the developed algorithm with applications to data dimensionality reduction and direction-of-arrival estimation.

Optimal Algorithms for L1-subspace Signal Processing

This work starts with the computation of the L1 maximum-projection principal component of a data matrix containing $N$ signal samples of dimension $D$ and presents in explicit form an optimal algorithm of computational cost $2^N$ for the case where the sample size is less than the fixed dimension.
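
For the small-sample regime the snippet refers to, the $2^N$-cost exact computation for $K = 1$ can be sketched directly, using the known identity that the optimal component is $\mathbf{X}\mathbf{b}^\ast / \|\mathbf{X}\mathbf{b}^\ast\|_2$ with $\mathbf{b}^\ast$ maximizing $\|\mathbf{X}\mathbf{b}\|_2$ over $\mathbf{b} \in \{\pm 1\}^N$.

    import numpy as np
    from itertools import product

    def l1pc_exhaustive(X):
        """Exact L1-PC for K = 1 by exhaustive search over antipodal binary
        vectors, cost O(2^N); practical only for small sample support N.
        Fixing b[0] = +1 halves the search since b and -b give equal norms."""
        D, N = X.shape
        best_norm, q = -1.0, None
        for tail in product((-1.0, 1.0), repeat=N - 1):
            b = np.array((1.0,) + tail)
            v = X @ b
            nrm = np.linalg.norm(v)
            if nrm > best_norm:
                best_norm, q = nrm, v / nrm
        return q                                # unit-norm L1 principal component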

Fast parallel processing using GPU in computing L1-PCA bases

This paper accelerates the computation of the L1-PCA bases on a GPU by proposing a fast PCA-L1 algorithm that yields theoretically identical bases while reducing computation time roughly to a quarter.

A pure L1-norm principal component analysis

Efficient computation of robust low-rank matrix approximations in the presence of missing data using the L1 norm

This paper presents a method for calculating the low-rank factorization of a matrix which minimizes the L1 norm in the presence of missing data and shows that the proposed algorithm can be efficiently implemented using existing optimization software.

Robust Principal Component Analysis with Non-Greedy l1-Norm Maximization

A robust principal component analysis with non-greedy $\ell_1$-norm maximization is proposed; experimental results on real-world datasets show that the non-greedy method always obtains a much better solution than the greedy method.
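
The non-greedy procedure admits a compact fixed-point sketch under the standard formulation $\max_{\mathbf{W}^\top\mathbf{W} = \mathbf{I}} \sum_i \|\mathbf{W}^\top \mathbf{x}_i\|_1$; initialization and convergence checks are simplified here.

    import numpy as np

    def nongreedy_l1pca(X, K, iters=100, seed=0):
        """Sketch of non-greedy l1-norm maximization: alternate the sign
        matrix S = sign(X^T W) with the Procrustes update W = U V^T from
        the SVD of X S, updating all K projections jointly rather than
        one component at a time."""
        D, N = X.shape
        rng = np.random.default_rng(seed)
        W = np.linalg.qr(rng.standard_normal((D, K)))[0]  # orthonormal start
        for _ in range(iters):
            M = X @ np.sign(X.T @ W)                      # D x K
            U, _, Vt = np.linalg.svd(M, full_matrices=False)
            W_new = U @ Vt
            if np.allclose(W_new, W):                     # fixed point reached
                break
            W = W_new
        return W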

R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization

Experiments on several real-life datasets show that R1-PCA can effectively handle outliers, and that L1-norm K-means leads to poor results while R1-K-means outperforms standard K-means.
...