Expander ℓ0-Decoding

by Rodrigo Mendoza-Smith and Jared Tanner
Expander Recovery Performance of Bipartite Graphs With Girth Greater Than 4
This paper studies the expander recovery performance of bipartite graphs with girth greater than 4; such graphs correspond to binary matrices whose pairwise column correlations are either 0 or 1.
On the Construction of Sparse Matrices From Expander Graphs
A new, reduced sample complexity for the number of nonzeros per column of these matrices is derived, precisely $d = \mathcal{O}\left(\log_s(N/s)\right)$; this gives insight into why small $d$ performed well in numerical experiments involving such matrices.
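For intuition, the scaling $d = \mathcal{O}(\log_s(N/s))$ can be evaluated numerically; the sketch below assumes a constant factor `C` that is purely illustrative, not a constant derived in the paper:

```python
import math

def nnz_per_column(N, s, C=1.0):
    # d = O(log_s(N/s)) nonzeros per column; C is an assumed
    # constant factor, not one from the paper's analysis.
    return max(1, math.ceil(C * math.log(N / s) / math.log(s)))

# Even for N = 10^6 columns and sparsity s = 50, this scaling
# suggests only a handful of nonzeros per column.
d = nnz_per_column(N=1_000_000, s=50)
```

This matches the observation in the abstract that very small $d$ already performs well in practice.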
The Permuted Striped Block Model and its Factorization - Algorithms with Recovery Guarantees
The permuted striped block (PSB) data model is defined as a particular distribution over this class of matrices, motivated by its implications for community detection, provable binary dictionary learning with real-valued sparse coding, and blind combinatorial compressed sensing.
Weighted sparse recovery with expanders
We derive the first sparse recovery guarantees for weighted $\ell_1$ minimization with sparse random matrices and the class of weighted sparse signals, using a weighted version of the null space property.
A Robust Parallel Algorithm for Combinatorial Compressed Sensing
The robust-$\ell_0$ decoding algorithm is presented, which robustifies parallel-$\ell_0$ when the sketch is corrupted by additive noise.
Sparse matrices for weighted sparse recovery
We derive the first sparse recovery guarantees for weighted $\ell_1$ minimization with sparse random matrices and the class of weighted sparse signals, using a weighted version of the null space property.


Sudocodes – Fast Measurement and Reconstruction of Sparse Signals
This work proposes a non-adaptive construction of a sparse $\Phi$ comprising only the values 0 and 1; hence the computation of $y$ involves only sums of subsets of the elements of $x$.
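The 0/1 structure means the sketch $y = \Phi x$ can be computed with additions alone. A minimal illustrative sketch follows; the random column construction here is hypothetical and simpler than the paper's actual sudocode matrix:

```python
import random

def binary_sketch(x, d, m, seed=0):
    # y = Phi x for a random 0/1 matrix Phi with d ones per column,
    # stored implicitly: column j adds x[j] into d of the m rows.
    rng = random.Random(seed)
    y = [0.0] * m
    for j, xj in enumerate(x):
        for i in rng.sample(range(m), d):  # d distinct row indices
            y[i] += xj                     # sums only, no multiplications
    return y

x = [0.0] * 20
x[3], x[11] = 5.0, -2.0          # a 2-sparse signal
y = binary_sketch(x, d=3, m=8)
```

Because each column contributes its entry to exactly $d$ rows, the sketch preserves simple aggregate information such as $\sum_i y_i = d \sum_j x_j$.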
Efficient erasure correcting codes
A simple erasure recovery algorithm for codes derived from cascades of sparse bipartite graphs is introduced and a simple criterion involving the fractions of nodes of different degrees on both sides of the graph is obtained which is necessary and sufficient for the decoding process to finish successfully with high probability.
Vanishingly Sparse Matrices and Expander Graphs, With Application to Compressed Sensing
This work revisits the probabilistic construction of sparse random matrices where each column has a fixed number of nonzeros whose row indices are drawn uniformly at random with replacement and presents formulas for the expected cardinality of the set of neighbors for these graphs.
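The with-replacement construction makes such expectations easy to sanity-check. The standard balls-in-bins expectation below, $n\bigl(1 - (1 - 1/n)^{dk}\bigr)$ distinct neighbors for $k$ columns of $d$ draws each, is a textbook identity rather than necessarily the paper's exact formula, and it agrees closely with simulation:

```python
import random

def expected_neighbors(n, d, k):
    # dk row indices drawn uniformly with replacement from n rows:
    # expected number of distinct rows (neighbors) that are hit.
    return n * (1.0 - (1.0 - 1.0 / n) ** (d * k))

def simulated_neighbors(n, d, k, trials=2000, seed=1):
    # Monte Carlo estimate of the same quantity.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += len({rng.randrange(n) for _ in range(d * k)})
    return total / trials
```

For example, with $n = 100$ rows, $d = 7$, and $k = 10$ columns, both values land near 50 distinct neighbors.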
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
Neighborly Polytopes And Sparse Solution Of Underdetermined Linear Equations
For large $d$, the overwhelming majority of systems of linear equations with $d$ equations and $4d/3$ unknowns have the following property: if there is a solution with fewer than $0.49d$ nonzeros, it is the unique minimum $\ell_1$-norm solution.
Verification Decoding of High-Rate LDPC Codes With Applications in Compressed Sensing
The high-rate scaling law for MP decoding of LDPC codes on the binary erasure channel and the q-ary symmetric channel is derived and leads to the result that strictly sparse signals can be reconstructed efficiently with high probability using a constant oversampling ratio.
Efficient and Robust Compressed Sensing using High-Quality Expander Graphs
This paper improves upon the result shown earlier by considering expander graphs with expansion coefficient beyond 3/4 and shows that, with the same number of measurements, only $O(k)$ recovery iterations are required, which is a significant improvement when $n$ is large.
Model-Based Compressive Sensing
A model-based CS theory is introduced that parallels the conventional theory and provides concrete guidelines on how to create model-based recovery algorithms with provable performance guarantees, together with a new class of structured compressible signals and a new sufficient condition for robust structured compressible signal recovery that is the natural counterpart to the restricted isometry property of conventional CS.
Performance comparisons of greedy algorithms in compressed sensing
A large-scale empirical investigation is conducted into the behavior of three state-of-the-art greedy algorithms: Normalized Iterative Hard Thresholding (NIHT), Hard Thresholding Pursuit (HTP), and CSMPSP.
Combining geometry and combinatorics: A unified approach to sparse signal recovery
A unification of geometric and combinatorial approaches to sparse signal recovery is presented, which results in new measurement matrix constructions and algorithms for signal recovery which are superior in either the number of measurements or computational efficiency of decoders.