# Optimal Algorithms for L1-subspace Signal Processing

```bibtex
@article{Markopoulos2014OptimalAF,
  title   = {Optimal Algorithms for L1-subspace Signal Processing},
  author  = {Panos P. Markopoulos and George N. Karystinos and Dimitris A. Pados},
  journal = {IEEE Transactions on Signal Processing},
  year    = {2014},
  volume  = {62},
  pages   = {5046-5058}
}
```

We describe ways to define and calculate L1-norm signal subspaces that are less sensitive to outlying data than L2-calculated subspaces. We start with the computation of the L1 maximum-projection principal component of a data matrix containing N signal samples of dimension D. We show that while the general problem is formally NP-hard in asymptotically large N, D, the case of engineering interest of fixed dimension D and asymptotically large sample size N is not. In particular, for the case…
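The central equivalence behind the paper's results is that the L1 maximum-projection principal component of a D × N data matrix X is obtained from a binary "antipodal" vector: the optimal unit vector q maximizing ‖Xᵀq‖₁ equals Xb*/‖Xb*‖₂, where b* maximizes ‖Xb‖₂ over b ∈ {±1}ᴺ. A minimal brute-force sketch of that equivalence follows (function name is mine; this exhaustive 2ᴺ search only illustrates the reformulation, not the paper's polynomial-in-N algorithm for fixed D):

```python
import numpy as np
from itertools import product

def l1_pc_exhaustive(X):
    """Exact L1 principal component of a D x N data matrix X.

    Uses the equivalence  max_{||q||_2 = 1} ||X^T q||_1
                        = max_{b in {+-1}^N} ||X b||_2,
    with the optimum attained at q = X b* / ||X b*||_2.
    Exponential in N -- for illustration on tiny examples only.
    """
    D, N = X.shape
    best_val, best_b = -np.inf, None
    for bits in product([-1.0, 1.0], repeat=N):  # all sign vectors
        b = np.array(bits)
        val = np.linalg.norm(X @ b)              # ||X b||_2
        if val > best_val:
            best_val, best_b = val, b
    q = X @ best_b
    return q / np.linalg.norm(q)                 # unit-norm L1-PC
```

Because the search is global, the returned q attains at least as large an L1 projection ‖Xᵀq‖₁ as any other unit-norm direction.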

## 164 Citations

### Some Options for L1-subspace Signal Processing

- Computer Science · ISWCS
- 2013

It is proved that the case of engineering interest of fixed dimension D and asymptotically large sample support N is not NP-hard, and an optimal algorithm of complexity $O(N^D)$ is presented.

### Fast computation of the L1-principal component of real-valued data

- Computer Science · 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2014

This paper presents, for the first time in the literature, a fast greedy single-bit-flipping conditionally optimal iterative algorithm for the computation of the L1 principal component with complexity $O(N^3)$, and demonstrates the effectiveness of the developed algorithm with applications to data dimensionality reduction and direction-of-arrival estimation.

### L1-Norm Principal-Component Analysis via Bit Flipping

- Computer Science · 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA)
- 2016

L1-BF is presented: a novel, near-optimal algorithm that calculates the K L1-PCs of X with cost $O(ND\min\{N,D\} + N^2(K^4 + DK^2) + DNK^3)$, comparable to that of standard (L2-norm) Principal-Component Analysis.
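The bit-flipping idea referenced above can be sketched as a greedy local search: start from a random sign vector b and repeatedly flip whichever single bit increases ‖Xb‖₂, stopping at a local optimum. The sketch below (names are mine) is a simplified single-component (K = 1) illustration of that idea, not the published L1-BF algorithm:

```python
import numpy as np

def l1_pc_bitflip(X, seed=None):
    """Greedy single-bit-flipping approximation of the L1 principal
    component of a D x N matrix X.  Simplified K = 1 sketch of the
    bit-flipping idea; the published L1-BF algorithm computes K
    components and includes further refinements."""
    rng = np.random.default_rng(seed)
    D, N = X.shape
    b = rng.choice([-1.0, 1.0], size=N)   # random antipodal start
    val = np.linalg.norm(X @ b)
    improved = True
    while improved:                       # repeat until no flip helps
        improved = False
        for n in range(N):
            b[n] = -b[n]                  # tentatively flip bit n
            new_val = np.linalg.norm(X @ b)
            if new_val > val + 1e-12:
                val = new_val             # keep the improving flip
                improved = True
            else:
                b[n] = -b[n]              # revert
    q = X @ b
    return q / np.linalg.norm(q)
```

Each pass costs O(ND) per tested flip, so the search is fast in practice, but it converges only to a bit-flip-local optimum rather than the global L1-PC.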

### Optimal Algorithms for Binary, Sparse, and L1-Norm Principal Component Analysis

- Computer Science
- 2014

This work shows that in all these problems, the optimal solution can be obtained in polynomial time if the rank of the data matrix is constant, and presents optimal algorithms that are fully parallelizable and memory efficient, hence readily implementable.

### L1-norm principal-component analysis in L2-norm-reduced-rank data subspaces

- Computer Science, Mathematics · Commercial + Scientific Sensing and Imaging
- 2017

Reduced-rank L1-PCA aims to leverage both the low computational cost of standard (L2-norm) rank-d approximation and the outlier resistance of L1-PCA, and is calculable exactly with reduced complexity $O(N^{(d-1)K+1})$.

### Computational advances in sparse L1-norm principal-component analysis of multi-dimensional data

- Computer Science · 2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
- 2017

An efficient suboptimal algorithm of complexity $O(N^2(N + D))$ is presented, and its strong resistance to faulty measurements/outliers in the data matrix is demonstrated.

### Estimating L1-Norm Best-Fit Lines for Data

- Computer Science
- 2017

This paper presents a procedure to estimate the L1-norm best-fit one-dimensional subspace (a line through the origin) to data, based on an optimization criterion involving linear programming but which can be performed using simple ratios and sortings.

### A Simple and Fast Algorithm for L1-Norm Kernel PCA

- Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2020

A novel reformulation of L1-norm kernel PCA is provided through which an equivalent, geometrically interpretable problem is obtained and a “fixed-point” type algorithm that iteratively computes a binary weight for each observation is presented.

### Low rank approximation with entrywise l1-norm error

- Computer Science · STOC
- 2017

The first provable approximation algorithms for ℓ1-low-rank approximation are given, showing that it is possible to achieve approximation factor α = (log d) · poly(k) in nnz(A) + (n + d) · poly(k) time, and improving the approximation ratio to O(1) with a poly(nd)-time algorithm.

### Adaptive L1-Norm Principal-Component Analysis With Online Outlier Rejection

- Computer Science · IEEE Journal of Selected Topics in Signal Processing
- 2018

This paper proposes new methods for both incremental and adaptive L1-PCA; the adaptive method combines the merits of the incremental one with the additional ability to track changes in the nominal signal subspace.

## References

Showing 1–10 of 56 references.

### Robust subspace computation using L1 norm

- Computer Science
- 2003

This paper presents two algorithms to optimize the L1 norm metric: the weighted median algorithm and the quadratic programming algorithm, and shows that it is robust to outliers and can handle missing data.

### Efficient computation of robust low-rank matrix approximations in the presence of missing data using the L1 norm

- Computer Science · 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
- 2010

This paper presents a method for calculating the low-rank factorization of a matrix which minimizes the L1 norm in the presence of missing data and shows that the proposed algorithm can be efficiently implemented using existing optimization software.

### Robust Principal Component Analysis with Non-Greedy l1-Norm Maximization

- Computer Science · IJCAI
- 2011

A robust principal component analysis with non-greedy ℓ1-norm maximization is proposed; experimental results on real-world datasets show that the non-greedy method consistently obtains much better solutions than the greedy method.

### An efficient algorithm for L1-norm principal component analysis

- Computer Science · 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2012

Numerical and visual results show that L1-PCA is consistently better than standard PCA, and the robustness against image occlusions is verified.

### R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization

- Computer Science · ICML
- 2006

Experiments on several real-life datasets show that R1-PCA can effectively handle outliers, and that L1-norm K-means leads to poor results while R1-K-means outperforms standard K-means.

### On first-order algorithms for l1/nuclear norm minimization

- Computer Science, Mathematics · Acta Numerica
- 2013

This paper gives a detailed description of two attractive first-order optimization techniques for solving ℓ1/nuclear-norm minimization problems and discusses their application domains.

### Robust L1 norm factorization in the presence of outliers and missing data by alternative convex programming

- Computer Science · 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)
- 2005

This paper formulates matrix factorization as an L1-norm minimization problem solved efficiently by alternative convex programming; the method is robust without requiring initial weighting, handles missing data straightforwardly, and provides a framework in which constraints and prior knowledge can be conveniently incorporated.

### Linear discriminant analysis using rotational invariant L1 norm

- Computer Science · Neurocomputing
- 2010