Optimal Algorithms for L1-subspace Signal Processing

@article{Markopoulos2014OptimalAF,
  title={Optimal Algorithms for L1-subspace Signal Processing},
  author={Panos P. Markopoulos and George N. Karystinos and Dimitris A. Pados},
  journal={IEEE Transactions on Signal Processing},
  year={2014},
  volume={62},
  pages={5046-5058}
}
We describe ways to define and calculate L1-norm signal subspaces that are less sensitive to outlying data than L2-calculated subspaces. We start with the computation of the L1 maximum-projection principal component of a data matrix containing N signal samples of dimension D. We show that while the general problem is formally NP-hard in asymptotically large N, D, the case of engineering interest of fixed dimension D and asymptotically large sample size N is not. In particular, for the case…
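The paper's central reformulation is that the L1 principal component of a D×N matrix X, i.e., the unit vector w maximizing ||X^T w||_1, equals X b*/||X b*||_2, where b* maximizes ||X b||_2 over the sign vectors b in {±1}^N. Below is a minimal sketch of the resulting search; it naively enumerates all 2^(N-1) sign vectors and is practical only for toy N, whereas the paper's fixed-D optimal algorithm restricts attention to an O(N^D)-size candidate set. Function and variable names here are our own.

```python
# Hedged sketch: optimal L1 principal component by exhaustive sign enumeration.
# Only practical for tiny N; the paper's polynomial algorithm is far smarter.
import itertools
import numpy as np

def l1pc_exhaustive(X: np.ndarray) -> np.ndarray:
    """Optimal L1 principal component of X (D x N) via b in {+-1}^N."""
    D, N = X.shape
    best_val, best_b = -np.inf, None
    # Fix b[0] = +1: b and -b yield the same ||X b||_2.
    for tail in itertools.product((-1.0, 1.0), repeat=N - 1):
        b = np.array((1.0,) + tail)
        val = np.linalg.norm(X @ b)
        if val > best_val:
            best_val, best_b = val, b
    w = X @ best_b
    return w / np.linalg.norm(w)

# Tiny usage example (D = 2, N = 4).
X = np.array([[1.0, 2.0, -1.0, 0.5],
              [0.5, 1.0,  3.0, -2.0]])
w = l1pc_exhaustive(X)
print(w, np.abs(X.T @ w).sum())  # the achieved L1 projection value
```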
Citations

Some Options for L1-subspace Signal Processing
It is proved that the case of engineering interest of fixed dimension D and asymptotically large sample support N is not NP-hard, and an optimal algorithm of complexity O(N^D) is presented.
Fast computation of the L1-principal component of real-valued data
This paper presents, for the first time in the literature, a fast greedy single-bit-flipping conditionally optimal iterative algorithm for computing the L1 principal component with complexity O(N^3), and demonstrates its effectiveness in applications to data dimensionality reduction and direction-of-arrival estimation.
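To make the bit-flipping idea concrete, here is a hedged sketch: start from a sign vector, repeatedly flip any single bit that improves ||X b||_2, and stop at a local (conditionally optimal) maximum. The initialization from the L2 principal component and the naive full recomputation of the norm are our simplifications; the paper reaches O(N^3) overall via incremental updates.

```python
# Hedged sketch: greedy single-bit-flipping local search for the L1 PC.
# Each evaluation here recomputes ||X b||_2 from scratch (O(DN)); the
# paper's algorithm updates it incrementally. Initialization is our choice.
import numpy as np

def l1pc_bitflip(X: np.ndarray, max_iters: int = 1000) -> np.ndarray:
    D, N = X.shape
    # Warm start: sign pattern of the projections onto the L2 principal component.
    u = np.linalg.svd(X, full_matrices=False)[0][:, 0]
    b = np.sign(X.T @ u)
    b[b == 0] = 1.0
    best = np.linalg.norm(X @ b)
    for _ in range(max_iters):
        improved = False
        for n in range(N):
            b[n] = -b[n]                   # tentatively flip bit n
            val = np.linalg.norm(X @ b)
            if val > best:
                best, improved = val, True  # keep the flip
            else:
                b[n] = -b[n]                # undo the flip
        if not improved:
            break                           # local maximum reached
    w = X @ b
    return w / np.linalg.norm(w)
```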
L1-Norm Principal-Component Analysis via Bit Flipping
L1-BF is presented: a novel, near-optimal algorithm that calculates the K L1 principal components of X with cost O(ND min{N, D} + N^2(K^4 + DK^2) + DNK^3), comparable to that of standard (L2-norm) principal-component analysis.
Optimal Algorithms for Binary, Sparse, and L1-Norm Principal Component Analysis
This work shows that in all these problems the optimal solution can be obtained in polynomial time if the rank of the data matrix is constant, and presents optimal algorithms that are fully parallelizable and memory efficient, hence readily implementable.
Optimal sparse L1-norm principal-component analysis
  • Shubham Chamadia, D. Pados
  • Mathematics, Computer Science
  • 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2017
We present an algorithm that computes exactly (optimally) the S-sparse (1 ≤ S < D) maximum-L1-norm-projection principal component of a real-valued data matrix X ∈ ℝ^{D×N} that contains…
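Since fixing a size-S row support reduces the problem to plain L1-PCA of the row-restricted matrix, a brute-force reference implementation only needs to enumerate supports and sign vectors. The sketch below is exact but exponential, intended only to make the problem statement concrete; the paper's algorithm is the efficient route. All names are ours.

```python
# Hedged sketch: exact S-sparse L1 principal component by brute force.
# For each size-S support, solve plain L1-PCA of the row-restricted matrix.
import itertools
import numpy as np

def sparse_l1pc(X: np.ndarray, S: int) -> np.ndarray:
    D, N = X.shape
    best_val, best_w = -np.inf, None
    for support in itertools.combinations(range(D), S):
        Xs = X[list(support), :]                 # S x N row restriction
        for tail in itertools.product((-1.0, 1.0), repeat=N - 1):
            b = np.array((1.0,) + tail)
            v = Xs @ b
            nv = np.linalg.norm(v)
            if nv == 0:
                continue
            val = np.abs(Xs.T @ (v / nv)).sum()  # true L1 objective on support
            if val > best_val:
                w = np.zeros(D)
                w[list(support)] = v / nv
                best_val, best_w = val, w
    return best_w
```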
Computational advances in sparse L1-norm principal-component analysis of multi-dimensional data
  • Shubham Chamadia, D. Pados
  • Computer Science
  • 2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
  • 2017
An efficient suboptimal algorithm of complexity O(N^2(N + D)) is presented and its strong resistance to faulty measurements/outliers in the data matrix is demonstrated.
L1-norm principal-component analysis in L2-norm-reduced-rank data subspaces
Standard Principal-Component Analysis (PCA) is known to be very sensitive to outliers among the processed data. On the other hand, it has been recently shown that L1-norm-based PCA (L1-PCA) exhibits…
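A minimal sketch of one natural reading of this two-step idea: compress X onto its rank-d L2 (SVD) subspace, run L1-PCA on the d-dimensional coordinates where the search is far cheaper, and map the result back. The exhaustive inner search is used only to keep the sketch self-contained; any L1-PCA routine could be substituted, and the names are our own.

```python
# Hedged sketch: L1-PCA carried out inside an L2-reduced-rank data subspace.
import itertools
import numpy as np

def l1pc_in_l2_subspace(X: np.ndarray, d: int) -> np.ndarray:
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Ud = U[:, :d]                      # orthonormal basis of the rank-d L2 subspace
    Y = Ud.T @ X                       # d x N reduced-rank coordinates
    _, N = Y.shape
    best_val, best_b = -np.inf, None
    for tail in itertools.product((-1.0, 1.0), repeat=N - 1):
        b = np.array((1.0,) + tail)
        val = np.linalg.norm(Y @ b)
        if val > best_val:
            best_val, best_b = val, b
    q = Y @ best_b
    return Ud @ (q / np.linalg.norm(q))  # L1 PC expressed in the original space
```

Note that for any w = Ud c we have X^T w = Y^T c, so the inner search in d dimensions scores exactly the same L1 objective as the restricted search in the original space.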
Low rank approximation with entrywise l1-norm error
The first provable approximation algorithms for ℓ1-low rank approximation are given, showing that it is possible to achieve approximation factor α = (log d) · poly(k) in nnz(A) + (n + d) poly(k) time, and improving the approximation ratio to O(1) with a poly(nd)-time algorithm.
Estimating L1-Norm Best-Fit Lines for Data
The general formulation for finding the L1-norm best-fit subspace for a point set in m dimensions is a nonlinear, nonconvex, nonsmooth optimization problem. In this paper we present a procedure to…
Adaptive L1-Norm Principal-Component Analysis With Online Outlier Rejection
This paper proposes new methods for both incremental and adaptive L1-PCA; the adaptive method combines the merits of the incremental one with the additional ability to track changes in the nominal signal subspace.

References

Showing 1-10 of 77 references
Efficient computation of robust low-rank matrix approximations in the presence of missing data using the L1 norm
This paper presents a method for calculating the low-rank factorization of a matrix which minimizes the L1 norm in the presence of missing data, and shows that the proposed algorithm can be efficiently implemented using existing optimization software.
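For intuition, the rank-1 special case of L1 factorization with missing data admits a simple alternating scheme: with one factor fixed, each entry of the other factor is an exact weighted median over the observed entries. The sketch below is a simplified stand-in under that rank-1 restriction, not the paper's method (which handles general rank via existing optimization software); all names are ours.

```python
# Hedged sketch: alternating weighted medians for rank-1 entrywise-L1
# factorization with missing data. Fixing v, the minimizer of
# sum_j |A[i,j] - u_i * v_j| is the weighted median of A[i,j]/v_j
# with weights |v_j| over observed j (and symmetrically for v).
import numpy as np

def weighted_median(values: np.ndarray, weights: np.ndarray) -> float:
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def l1_rank1(A: np.ndarray, mask: np.ndarray, iters: int = 50):
    """A: m x n data; mask: boolean m x n (True = observed)."""
    m, n = A.shape
    u, v = np.ones(m), np.ones(n)
    for _ in range(iters):
        for i in range(m):
            j = mask[i] & (v != 0)
            if j.any():
                u[i] = weighted_median(A[i, j] / v[j], np.abs(v[j]))
        for k in range(n):
            i = mask[:, k] & (u != 0)
            if i.any():
                v[k] = weighted_median(A[i, k] / u[i], np.abs(u[i]))
    return u, v   # A ~ outer(u, v) in entrywise L1 on the observed entries
```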
Robust subspace computation using L1 norm
Linear subspaces have many important applications in computer vision, such as structure from motion, motion estimation, layer extraction, object recognition, and object tracking. Singular Value…
A pure L1-norm principal component analysis
A procedure called L1-PCA* is presented, based on the application of this idea, that fits data to subspaces of successively smaller dimension; it is implemented and tested on a diverse problem suite.
Robust Principal Component Analysis with Non-Greedy l1-Norm Maximization
A robust principal component analysis with non-greedy l1-norm maximization is proposed; experimental results on real-world datasets show that the non-greedy method consistently obtains much better solutions than the greedy method.
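Our understanding of the non-greedy iteration is a fixed point that alternates between the current projection signs and a closed-form orthogonal Procrustes step; a hedged sketch follows (the L2 warm start is our choice, not necessarily the paper's).

```python
# Hedged sketch: non-greedy maximization of sum_i ||W^T x_i||_1 over
# orthonormal W (D x K). Fixing the signs B, the best W maximizes
# tr(W^T X B^T), whose closed-form solution is the polar factor from an SVD.
import numpy as np

def l1pca_nongreedy(X: np.ndarray, K: int, iters: int = 100) -> np.ndarray:
    D, N = X.shape
    W = np.linalg.svd(X, full_matrices=False)[0][:, :K]   # L2 warm start
    for _ in range(iters):
        B = np.sign(W.T @ X)            # K x N signs of current projections
        B[B == 0] = 1.0
        M = X @ B.T                      # D x K
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        W_new = U @ Vt                   # Procrustes maximizer of tr(W^T M)
        if np.allclose(W_new, W):
            break                        # fixed point reached
        W = W_new
    return W
```

Each step cannot decrease the objective, which is why the iteration converges to a fixed point rather than cycling.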
Robust L1-norm factorization in the presence of outliers and missing data by alternative convex programming
  • Q. Ke, T. Kanade
  • Mathematics, Computer Science
  • 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)
  • 2005
This paper formulates matrix factorization as an L1-norm minimization problem that is solved efficiently by alternative convex programming; the method is robust without requiring initial weighting, handles missing data straightforwardly, and provides a framework in which constraints and prior knowledge can be conveniently incorporated.
An efficient algorithm for L1-norm principal component analysis
  • L. Yu, Miao Zhang, C. Ding
  • Mathematics, Computer Science
  • 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2012
Numerical and visual results show that L1-PCA is consistently better than standard PCA, and its robustness against image occlusions is verified.
On first-order algorithms for l1/nuclear norm minimization
This paper gives a detailed description of two attractive first-order optimization techniques for solving l1/nuclear norm minimization problems and discusses their application domains.
Improve robustness of sparse PCA by L1-norm maximization
This paper proposes a new sparse PCA method that attempts to capture the maximal L1-norm variance of the data, which is intrinsically less sensitive to noise and outliers.
R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization
Experiments on several real-life datasets show that R1-PCA can effectively handle outliers; it is also shown that L1-norm K-means leads to poor results while R1-K-means outperforms standard K-means.
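A hedged, simplified IRLS-style sketch of minimizing the rotational-invariant R1 objective sum_i ||x_i - U U^T x_i||_2: reweight each sample inversely to its current residual norm and re-solve an eigenproblem. Ding et al.'s actual subspace-iteration algorithm differs in details (e.g., Huber-style weighting), so treat this as an approximation of the idea with names of our choosing.

```python
# Hedged sketch: IRLS-style minimization of the R1 norm. Each reweighted
# step maximizes tr(U^T C U) for the sample-weighted covariance C, whose
# solution is the top-K eigenvectors; the weights majorize the L2 residual norms.
import numpy as np

def r1_pca(X: np.ndarray, K: int, iters: int = 50, eps: float = 1e-8):
    """X is D x N (columns are centered samples); returns orthonormal U (D x K)."""
    U = np.linalg.svd(X, full_matrices=False)[0][:, :K]   # L2 initialization
    for _ in range(iters):
        R = X - U @ (U.T @ X)                 # residuals off the current subspace
        w = 1.0 / np.maximum(np.linalg.norm(R, axis=0), eps)
        C = (X * w) @ X.T                     # reweighted covariance-like matrix
        _, vecs = np.linalg.eigh(C)
        U = vecs[:, -K:]                      # top-K eigenvectors
    return U
```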
Linear discriminant analysis using rotational invariant L1 norm
A novel rotational invariant L1 norm (i.e., R1 norm) based discriminant criterion (referred to as DCL1) is proposed, which better characterizes the intra-class compactness and the inter-class separability by using the rotational invariant L1 norm rather than the Frobenius norm.