A Robust Parallel Algorithm for Combinatorial Compressed Sensing

@article{MendozaSmith2018ARP,
  title={A Robust Parallel Algorithm for Combinatorial Compressed Sensing},
  author={Rodrigo Mendoza-Smith and Jared Tanner and Florian Wechsung},
  journal={IEEE Transactions on Signal Processing},
  year={2018},
  volume={66},
  pages={2167--2177}
}
It was shown in previous work that a vector $\mathbf{x} \in \mathbb{R}^n$ with at most $k < n$ nonzeros can be recovered from an expander sketch $\mathbf{A}\mathbf{x}$ in $\mathcal{O}(\mathrm{nnz}(\mathbf{A})\log k)$ …
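As a minimal illustration of the setting in the abstract, the sketch below builds the kind of measurement matrix used in combinatorial compressed sensing, the adjacency matrix of a left $d$-regular bipartite graph, and forms the sketch $\mathbf{y} = \mathbf{A}\mathbf{x}$ of a $k$-sparse vector. The sizes $n, m, d, k$ are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions for this sketch, not from the paper).
n, m, d, k = 256, 64, 8, 5

# Adjacency matrix of a left d-regular bipartite graph: each of the n
# columns has exactly d ones, in rows chosen uniformly at random.
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

# A k-sparse signal and its expander sketch y = A x. Note nnz(A) = n*d,
# the quantity appearing in the O(nnz(A) log k) decode complexity.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
y = A @ x
```

Because every column has the same small number of nonzeros, the sketch costs $\mathcal{O}(\mathrm{nnz}(\mathbf{A})) = \mathcal{O}(nd)$ operations rather than the dense $\mathcal{O}(mn)$.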
The Permuted Striped Block Model and its Factorization - Algorithms with Recovery Guarantees
TLDR: The PSB data model is defined as a particular distribution over this class of matrices, motivated by its implications for community detection, provable binary dictionary learning with real-valued sparse coding, and blind combinatorial compressed sensing.
On the Construction of Sparse Matrices From Expander Graphs
TLDR: A new reduced sample complexity for the number of nonzeros per column of these matrices is derived, precisely $d = \mathcal{O}\left(\log_s(N/s)\right)$; this gives insight into why small $d$ performed well in numerical experiments involving such matrices.
Expander ℓ0-Decoding
TLDR: Two new algorithms, Serial-$\ell_0$ and Parallel-$\ell_0$, are introduced for solving a large underdetermined linear system of equations $\mathbf{y} = \mathbf{A}\mathbf{x} \in \mathbb{R}^m$ when it is known that $\mathbf{x}$ has at most $k \ll m$ nonzero entries and $\mathbf{A}$ is the adjacency matrix of an unbalanced left $d$-regular expander graph.
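The peeling idea behind such $\ell_0$-decoders can be sketched in a few lines: in the noiseless setting, a nonzero $x_j$ appears verbatim in many of column $j$'s measurements, so a value that recurs at least $\omega$ times in a column's residual can be attributed to that column and subtracted off. The following is a toy rendering of that idea, not the authors' implementation; the actual Serial-$\ell_0$ uses a more refined threshold schedule and data structures.

```python
import numpy as np
from collections import Counter

def serial_l0(A, y, omega, max_sweeps=20):
    """Toy serial l0-style expander decoder (illustrative only).

    A is an m-by-n 0/1 adjacency matrix with d ones per column. A value
    appearing at least omega times among a column's residual entries is
    attributed to that column and peeled off the residual.
    """
    m, n = A.shape
    xhat = np.zeros(n)
    r = y.astype(float).copy()
    for _ in range(max_sweeps):
        updated = False
        for j in range(n):
            rows = np.flatnonzero(A[:, j])
            counts = Counter(round(v, 10) for v in r[rows] if v != 0)
            if not counts:
                continue
            val, cnt = counts.most_common(1)[0]
            if cnt >= omega:
                xhat[j] += val     # attribute the repeated value to column j
                r[rows] -= val     # peel it off the residual
                updated = True
        if not updated:
            break
    return xhat

# Tiny hand-built instance: 4 columns with 3 ones each, in 9 rows.
A = np.zeros((9, 4))
A[[0, 1, 2], 0] = 1.0
A[[3, 4, 5], 1] = 1.0
A[[6, 7, 8], 2] = 1.0
A[[0, 3, 6], 3] = 1.0
x_true = np.array([2.0, 0.0, 7.0, 4.0])
xhat = serial_l0(A, A @ x_true, omega=2)
```

On this toy instance the decoder peels off the three active columns in a single sweep and recovers `x_true` exactly; the last column only becomes decodable after the first two values have been subtracted, which is the "dislodging" effect the algorithms exploit.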
Encoder blind combinatorial compressed sensing
TLDR: It is proved that D-EBF recovers both the encoder and the sparse coding matrix at the optimal measurement rate with high probability in $n$, from a near-optimal number $N = \Omega(nk\log(n))$ of measurement vectors.
Robust Multichannel EEG Compressed Sensing in the Presence of Mixed Noise
TLDR: This paper proposes a novel multichannel EEG CS method based on sparse and low-rank representation in the presence of mixed noise (SLRMN), which takes both Gaussian noise and impulsive noise into consideration, and develops the alternating direction method of multipliers (ADMM) to solve the proposed SLRMN.
FFOCT imaging based on compressed sensing
TLDR: Based on the high-resolution tomographic image obtained from the organ tissue, compressed sensing theory is used to perform compression-reconstruction simulation, and it is verified that the amount of data stored in the three-dimensional image of the diseased tissue is effectively reduced without changing the image resolution.
Facets of high-dimensional Gaussian polytopes
We study the number of facets of the convex hull of n independent standard Gaussian points in d-dimensional Euclidean space. In particular, we are interested in the expected number of facets when the …
A directly seeking spectrum holes algorithm by compressive sampling
  • Hua Zhao, S. Wang, L. Li
  • Computer Science
  • Int. J. Commun. Syst.
  • 2019

References

SHOWING 1-10 OF 34 REFERENCES
Efficient and Robust Compressed Sensing Using Optimized Expander Graphs
TLDR: This paper improves upon the earlier result by considering expander graphs with expansion coefficient beyond 3/4 and shows that, with the same number of measurements, only $2k$ recovery iterations are required, a significant improvement when $n$ is large.
Combining geometry and combinatorics: A unified approach to sparse signal recovery
TLDR: A unification of geometric and combinatorial approaches to sparse signal recovery is presented, which yields new measurement-matrix constructions and recovery algorithms that are superior in either the number of measurements or the computational efficiency of the decoder.
GPU accelerated greedy algorithms for compressed sensing
TLDR: The software solves high-dimensional problems in fractions of a second, which permits large-scale testing at dimensions currently unavailable in the literature, and exhibits up to 70× acceleration over standard MATLAB central processing unit implementations using automatic multi-threading.
Exponential Bounds Implying Construction of Compressed Sensing Matrices, Error-Correcting Codes, and Neighborly Polytopes by Random Sampling
  • D. Donoho, J. Tanner
  • Mathematics, Computer Science
  • IEEE Transactions on Information Theory
  • 2010
TLDR: Finite-$N$ bounds on the expected discrepancy between the number of $k$-faces of the projected polytope $AQ$ and its generator $Q$, for $Q = T^{N-1}$ and $C^N$, are developed, which imply the existence of interesting geometric objects.
Expander ℓ0-Decoding
TLDR: Two new algorithms, Serial-$\ell_0$ and Parallel-$\ell_0$, are introduced for solving a large underdetermined linear system of equations $\mathbf{y} = \mathbf{A}\mathbf{x} \in \mathbb{R}^m$ when it is known that $\mathbf{x}$ has at most $k \ll m$ nonzero entries and $\mathbf{A}$ is the adjacency matrix of an unbalanced left $d$-regular expander graph.
Vanishingly Sparse Matrices and Expander Graphs, With Application to Compressed Sensing
TLDR: This work revisits the probabilistic construction of sparse random matrices in which each column has a fixed number of nonzeros whose row indices are drawn uniformly at random with replacement, and presents formulas for the expected cardinality of the set of neighbors for these graphs.
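Under the with-replacement sampling model just described, the basic expected-neighborhood computation is a short first-moment argument: a set $S$ of $s$ columns makes $ds$ uniform row draws, and a given row is a neighbor of $S$ unless all $ds$ draws miss it, giving $\mathbb{E}|N(S)| = m\left(1 - (1 - 1/m)^{ds}\right)$. This is an elementary consequence of the model, checked numerically below; the paper's formulas treat this and related quantities in more detail.

```python
import numpy as np

def expected_neighbors(m, d, s):
    """E|N(S)| when s columns each draw d row indices uniformly with
    replacement from m rows: a row is a neighbor unless all d*s draws
    miss it, which happens with probability (1 - 1/m)**(d*s)."""
    return m * (1.0 - (1.0 - 1.0 / m) ** (d * s))

# Monte Carlo check of the formula under the assumed sampling model.
rng = np.random.default_rng(1)
m, d, s, trials = 100, 7, 5, 2000
sims = [len(np.unique(rng.integers(m, size=d * s))) for _ in range(trials)]
```

The gap between $\mathbb{E}|N(S)|$ and the union bound $ds$ quantifies how far these vanishingly sparse matrices are from perfect expansion.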
Iterative Hard Thresholding for Compressed Sensing
TLDR: This paper presents a theoretical analysis of the iterative hard thresholding algorithm applied to the compressed sensing recovery problem and shows that the algorithm gives near-optimal error guarantees and is robust to observation noise.
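The iteration analyzed in that paper is compact enough to state in full: alternate a gradient step on $\tfrac{1}{2}\|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2$ with a projection onto $k$-sparse vectors. A minimal sketch, with a fixed unit step as a simplification (the analysis assumes a scaling such as $\|\mathbf{A}\|_2 < 1$):

```python
import numpy as np

def hard_threshold(z, k):
    """Keep the k largest-magnitude entries of z and zero the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

def iht(A, y, k, step=1.0, iters=200):
    """Plain IHT: repeat x <- H_k(x + step * A^T (y - A x))."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), k)
    return x

# Degenerate sanity check: with A the first 3 rows of a 6x6 identity and a
# 2-sparse signal supported on the measured coordinates, the first
# iteration already lands on the true signal, which is a fixed point.
A = np.eye(3, 6)
x_true = np.array([0.0, 5.0, -2.0, 0.0, 0.0, 0.0])
x_rec = iht(A, A @ x_true, k=2)
```

The projection $H_k$ is what distinguishes this from plain gradient descent; each iterate stays exactly $k$-sparse, which is also what makes the per-iteration cost dominated by the two matrix-vector products.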
Performance comparisons of greedy algorithms in compressed sensing
TLDR: A large-scale empirical investigation into the behavior of three state-of-the-art greedy algorithms, NIHT, HTP, and CSMPSP, is conducted; the algorithm selection maps presented here are the first of their kind for compressed sensing.
A Mathematical Introduction to Compressive Sensing
TLDR: A Mathematical Introduction to Compressive Sensing gives a detailed account of the core theory upon which the field is built and serves as a reliable resource for practitioners and researchers in these disciplines who want to acquire a careful understanding of the subject.
Normalized Iterative Hard Thresholding: Guaranteed Stability and Performance
TLDR: With this modification, empirical evidence suggests that the algorithm is faster than many other state-of-the-art approaches while showing similar performance, and the modified algorithm retains theoretical performance guarantees similar to those of the original.