Efficient L1-Norm Principal-Component Analysis via Bit Flipping

@article{Markopoulos2017EfficientLP,
  title={Efficient L1-Norm Principal-Component Analysis via Bit Flipping},
  author={Panos P. Markopoulos and S. Kundu and Shubham Chamadia and D. Pados},
  journal={IEEE Transactions on Signal Processing},
  year={2017},
  volume={65},
  pages={4252-4264}
}
It was shown recently that the $K$ L1-norm principal components (L1-PCs) of a real-valued data matrix $\mathbf{X} \in \mathbb{R}^{D \times N}$ ($N$ data samples of $D$ dimensions) can be exactly calculated with …
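For orientation, a minimal sketch of the bit-flipping idea follows, assuming the known equivalence between L1-PCA and nuclear-norm maximization over antipodal (±1) matrices: the K L1-PCs are recovered from the SVD of X B, where B ∈ {±1}^{N×K} maximizes the nuclear norm of X B, and entries of B are flipped one at a time to increase that objective. Function and variable names are illustrative, and the per-flip full-SVD evaluation is for clarity only; the actual L1-BF algorithm uses incremental updates to reach its stated complexity.

import numpy as np

# Illustrative sketch of L1-PCA via single-bit flipping (not the authors'
# released implementation).
def l1pca_bitflip(X, K, max_iters=1000, seed=0):
    D, N = X.shape
    rng = np.random.default_rng(seed)
    B = rng.choice([-1.0, 1.0], size=(N, K))        # antipodal initialization

    def nuc(Bmat):
        # objective: nuclear norm of X @ B (sum of singular values)
        return np.linalg.norm(X @ Bmat, ord="nuc")

    best = nuc(B)
    for _ in range(max_iters):
        best_flip, best_val = None, best
        for n in range(N):                           # try every single-bit flip
            for k in range(K):
                B[n, k] = -B[n, k]
                val = nuc(B)
                B[n, k] = -B[n, k]                   # undo the trial flip
                if val > best_val:
                    best_flip, best_val = (n, k), val
        if best_flip is None:                        # no flip improves: stop
            break
        B[best_flip] = -B[best_flip]                 # commit the best flip
        best = best_val

    # Procrustes step: orthonormal L1-PCs from the SVD of X @ B
    U, _, Vt = np.linalg.svd(X @ B, full_matrices=False)
    Q = U @ Vt                                       # D x K basis (assumes D >= K)
    return Q, B

Calling Q, B = l1pca_bitflip(X, K) returns a D×K orthonormal basis Q whose columns serve as approximate L1-PCs.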
Grassmann Manifold Optimization for Fast $L_1$-Norm Principal Component Analysis
TLDR
The proposed Grassmann manifold optimization method is computationally more efficient and produces results with lower reprojection error than previous methods, relatively independent of dataset size and well suited for various big-data problems commonly encountered today.
Reduced-Rank L1-Norm Principal-Component Analysis With Performance Guarantees
TLDR
The proposed method combines the denoising capabilities and low computation cost of standard PCA with the outlier-resistance of L1-PCA.
GrIP-PCA: Grassmann Iterative P-Norm Principal Component Analysis
Principal component analysis is one of the most commonly used methods for dimensionality reduction in signal processing. However, the most commonly used PCA formulation is based on the …
Revisiting L2,1-Norm Robustness With Vector Outlier Regularization
  • Bo Jiang, C. Ding
  • Medicine, Computer Science
  • IEEE Transactions on Neural Networks and Learning Systems
  • 2020
TLDR
A new vector outlier regularization (VOR) framework is proposed and an equivalent continuous formulation is proved, based on which the $L_{2,1}$-norm function is shown to be the limiting case of the proposed VOR function.
Towards Robust Discriminative Projections Learning via Non-Greedy $\ell_{2,1}$-Norm …
Linear Discriminant Analysis (LDA) is one of the most successful supervised dimensionality reduction methods and has been widely used in many real-world applications. However, …
Differentially Private Robust Low-Rank Approximation
In this paper, we study the following robust low-rank matrix approximation problem: given a matrix $A \in \mathbb{R}^{n \times d}$, find a rank-$k$ matrix $B$, while satisfying differential privacy, such …
Optimal sparse L1-norm principal-component analysis
  • Shubham Chamadia, D. Pados
  • Mathematics, Computer Science
  • 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2017
We present an algorithm that computes exactly (optimally) the $S$-sparse ($1 \le S < D$) maximum-L1-norm-projection principal component of a real-valued data matrix $\mathbf{X} \in \mathbb{R}^{D \times N}$ that contains …
Novel Algorithms for Exact and Efficient L1-NORM-BASED Tucker2 Decomposition
TLDR
An efficient (quadratic-cost/near-exact) algorithm that approximates the solution to rank-1 L1-TUCKER2 by means of a converging sequence of optimal single-bit flips is developed, accompanied by a formal convergence proof and complexity analysis.
Average Case Column Subset Selection for Entrywise ℓ1-Norm Loss
TLDR
This is the first algorithm of any kind achieving a $(1+\epsilon)$-approximate column subset selection for the entrywise $\ell_1$-norm loss low-rank approximation.
Computational advances in sparse L1-norm principal-component analysis of multi-dimensional data
  • Shubham Chamadia, D. Pados
  • Computer Science
  • 2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
  • 2017
TLDR
An efficient suboptimal algorithm of complexity $O(N^2(N + D))$ is presented and its strong resistance to faulty measurements/outliers in the data matrix is demonstrated.

References

L1-Norm Principal-Component Analysis via Bit Flipping
TLDR
L1-BF is presented: a novel, near-optimal algorithm that calculates the $K$ L1-PCs of $\mathbf{X}$ with cost $O(ND\min\{N, D\} + N^2(K^4 + DK^2) + DNK^3)$, comparable to that of standard (L2-norm) Principal-Component Analysis.
Some Options for L1-subspace Signal Processing
TLDR
It is proved that the case of engineering interest of fixed dimension D and asymptotically large sample support N is not NP-hard, and an optimal algorithm of complexity $O(N^D)$ is presented.
Optimal Algorithms for L1-subspace Signal Processing
TLDR
This work starts with the computation of the L1 maximum-projection principal component of a data matrix containing N signal samples of dimension D and presents in explicit form an optimal algorithm of computational cost $2^N$ for the case where the sample size is less than the fixed dimension.
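For reference, the single-component (K = 1) reformulation underlying both the exhaustive search above and the bit-flipping methods below can be written as a sketch in the notation of the main abstract:

\[
\max_{\mathbf{q} \in \mathbb{R}^{D},\; \|\mathbf{q}\|_2 = 1} \|\mathbf{X}^{\top}\mathbf{q}\|_1
\;=\; \max_{\mathbf{b} \in \{\pm 1\}^{N}} \|\mathbf{X}\mathbf{b}\|_2,
\qquad
\mathbf{q}_{\mathrm{opt}} = \frac{\mathbf{X}\mathbf{b}_{\mathrm{opt}}}{\|\mathbf{X}\mathbf{b}_{\mathrm{opt}}\|_2},
\]

so the exact solution reduces to a search over the $2^N$ antipodal vectors $\mathbf{b}$, while bit-flipping algorithms ascend the same objective by changing one entry of $\mathbf{b}$ at a time.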
Fast computation of the L1-principal component of real-valued data
TLDR
This paper presents for the first time in the literature a fast greedy single-bit-flipping conditionally optimal iterative algorithm for the computation of the L1 principal component with complexity $O(N^3)$ and demonstrates the effectiveness of the developed algorithm with applications to the general field of data dimensionality reduction and direction-of-arrival estimation.
Fast parallel processing using GPU in computing L1-PCA bases
TLDR
This paper accelerates the computation of the L1-PCA bases on a GPU by proposing a fast PCA-L1 algorithm that yields theoretically identical bases while reducing computation time roughly to a quarter.
Efficient computation of robust low-rank matrix approximations in the presence of missing data using the L1 norm
TLDR
This paper presents a method for calculating the low-rank factorization of a matrix which minimizes the L1 norm in the presence of missing data and shows that the proposed algorithm can be efficiently implemented using existing optimization software.
Robust Principal Component Analysis with Non-Greedy l1-Norm Maximization
TLDR
A robust principal component analysis with non-greedy l1-norm maximization is proposed; experimental results on real-world datasets show that the non-greedy method consistently obtains much better solutions than the greedy method.
Improve robustness of sparse PCA by L1-norm maximization
TLDR
This paper proposes a new sparse PCA method that attempts to capture the maximal L1-norm variance of the data, which is intrinsically less sensitive to noise and outliers.
R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization
TLDR
Experiments on several real-life datasets show that R1-PCA can effectively handle outliers; it is also shown that L1-norm K-means leads to poor results while R1-K-means outperforms standard K-means.
A Pure L1-norm Principal Component Analysis.
TLDR
Tests show that L1-PCA* is the indicated procedure in the presence of unbalanced outlier contamination; an application of this idea, which fits data to subspaces of successively smaller dimension, is also presented.