# Algorithmic linear dimension reduction in the l_1 norm for sparse vectors

@article{Gilbert2006AlgorithmicLD, title={Algorithmic linear dimension reduction in the l\_1 norm for sparse vectors}, author={Anna C. Gilbert and Martin Strauss and Joel A. Tropp and Roman Vershynin}, journal={ArXiv}, year={2006}, volume={abs/cs/0608079} }

Using a number of different algorithms, we can approximately recover a sparse signal with limited noise, i.e., a vector of length d with at least d − m zeros or near-zeros, using little more than m log(d) nonadaptive linear measurements rather than the d measurements needed to recover an arbitrary signal of length d. We focus on two important properties of such algorithms.

- Uniformity. A single measurement matrix should work simultaneously for all signals.
- Computational efficiency. The time to…
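The measurement model in the abstract can be sketched numerically. Below is a minimal NumPy illustration; all parameter values (d, m, the constant 4) are hypothetical, chosen only to show the scale m log(d) ≪ d:

```python
import numpy as np

# Toy illustration of the setup described above: a length-d signal with at
# most m nonzeros, measured by a single fixed random matrix with roughly
# m*log(d) rows instead of d rows.
rng = np.random.default_rng(0)
d, m = 1024, 10
n_meas = int(4 * m * np.log(d))      # ~ m log d nonadaptive measurements

x = np.zeros(d)
support = rng.choice(d, size=m, replace=False)
x[support] = rng.standard_normal(m)  # an m-sparse signal

# One measurement matrix, drawn once and reused for all signals ("uniformity").
Phi = rng.standard_normal((n_meas, d)) / np.sqrt(n_meas)
y = Phi @ x                          # nonadaptive linear measurements

print(n_meas, d)                     # far fewer rows than the ambient dimension
```

A dense Gaussian Φ is only a convenient stand-in for this sketch; the paper itself concerns structured measurement ensembles that also admit fast (sublinear-time) recovery.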

#### 160 Citations

Sublinear time, measurement-optimal, sparse recovery for all

- Computer Science, Mathematics
- SODA
- 2012

The first algorithm for this problem that uses the optimum number of measurements (up to constant factors) and runs in sublinear time o(N) when k is sufficiently less than N is given.

Sublinear Time, Measurement-Optimal, Sparse Recovery For All

- 2011

An approximate sparse recovery system in the ℓ1 norm makes a small number of measurements of a noisy vector with at most k large entries and recovers those heavy hitters approximately. Formally, it…

Sparse Recovery Using Sparse Random Matrices

- Computer Science
- LATIN
- 2010

An overview of results in the area, together with a new algorithm, called "SSMP" (Sequential Sparse Matching Pursuit), which works well on real data, with recovery quality often outperforming that of more complex algorithms such as ℓ1 minimization.

Recovering K-sparse N-length vectors in O(K log N) time: Compressed sensing using sparse-graph codes

- Computer Science
- 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2016

A new design framework is proposed that simultaneously leads to low measurement cost and low computational cost, and guarantees successful recovery with high probability using O(K log N) measurements with a computational complexity of O(K log N).

Information-theoretic limits on sparse support recovery: Dense versus sparse measurements

- Mathematics, Computer Science
- 2008 IEEE International Symposium on Information Theory
- 2008

The analysis allows general scaling of the quadruplet (n, p, k, γ), and reveals three different regimes, corresponding to whether measurement sparsity has no effect, a minor effect, or a dramatic effect on the information-theoretic limits of the subset recovery problem.

Information-Theoretic Limits on Sparse Signal Recovery: Dense versus Sparse Measurement Matrices

- Mathematics, Computer Science
- IEEE Transactions on Information Theory
- 2010

This paper provides sharper necessary conditions for exact support recovery using general (including non-Gaussian) dense measurement matrices, and proves necessary conditions on the number of observations n required for asymptotically reliable recovery using a class of γ-sparsified measurement matrices.

For-All Sparse Recovery in Near-Optimal Time

- Computer Science, Mathematics
- ACM Trans. Algorithms
- 2017

An approximate sparse recovery system in the ℓ1 norm consists of parameters k, ε, N; an m-by-N measurement matrix Φ; and a recovery algorithm R. This work considers the "for all" model, in which a single matrix Φ, possibly constructed non-explicitly using the probabilistic method, is used for all signals x.

Advances in sparse signal recovery methods

- Computer Science
- 2009

This thesis focuses on sparse recovery, where the goal is to recover sparse vectors exactly and nearly-sparse vectors approximately, and introduces a class of binary sparse matrices as valid measurement matrices that provide a non-linear sparse recovery scheme.

SHO-FA: Robust compressive sensing with order-optimal complexity, measurements, and bits

- Mathematics, Computer Science
- 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
- 2012

The SHO-FA algorithm is the first to simultaneously have the following properties: it requires only O(k) measurements; the bit precision of each measurement and each arithmetic operation is O(log(n) + P) (here 2^(−P) corresponds to the desired relative error in the reconstruction of x); the computational complexity of decoding is O(k) arithmetic operations; and the reconstruction goal is simply to recover a single component of x instead of all of x.

Sudocodes: Low-Complexity Algorithms for Compressive Sampling and Reconstruction

Sudocodes are new compressive sampling schemes for measurement and reconstruction of sparse signals using algorithms on graphs. Consider a sparse signal x ∈ R^N containing K ≪ N large coefficients and…

#### References

Showing 1–10 of 48 references

Combinatorial Algorithms for Compressed Sensing

- Mathematics
- 2006

In sparse approximation theory, the fundamental problem is to reconstruct a signal A ∈ R^n from linear measurements ⟨A, ψ_i⟩ with respect to a dictionary of ψ_i's. Recently, there is focus on the…

Sparse reconstruction by convex relaxation: Fourier and Gaussian measurements

- Mathematics
- 2006 40th Annual Conference on Information Sciences and Systems
- 2006

This paper proves the best known guarantees for exact reconstruction of a sparse signal f from few non-adaptive universal linear measurements. We consider Fourier measurements (random sample of…

Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

- Mathematics, Computer Science
- IEEE Transactions on Information Theory
- 2006

If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
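The "simple linear program" referenced in this entry is basis pursuit: minimize ‖x‖₁ subject to Φx = y. A hedged SciPy sketch follows, splitting x = u − v with u, v ≥ 0 to linearize the ℓ1 objective; all problem sizes are made up for the demo:

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit as an LP: min ||x||_1  s.t.  Phi @ x = y,
# rewritten with x = u - v, u >= 0, v >= 0, objective sum(u) + sum(v).
rng = np.random.default_rng(2)
n_meas, d, m = 60, 128, 4                 # hypothetical sizes
Phi = rng.standard_normal((n_meas, d)) / np.sqrt(n_meas)
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
y = Phi @ x

c = np.ones(2 * d)                        # sum(u) + sum(v) equals ||u - v||_1 at optimum
A_eq = np.hstack([Phi, -Phi])             # encodes Phi @ (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:d] - res.x[d:]
print(np.linalg.norm(x - x_hat))          # near zero when recovery is exact
```

With enough random measurements relative to the sparsity, the LP optimum coincides with the original sparse vector; the splitting trick doubles the variable count but keeps the problem a standard-form LP.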

Reconstruction and subgaussian processes

- Mathematics
- 2005

This Note presents a randomized method to approximate any vector v from some set T ⊂ R^n. The data one is given is the set T and k scalar products (⟨X_i, v⟩)_{i=1}^k, where (X_i)_i…

Geometric approach to error-correcting codes and reconstruction of signals

- Mathematics
- 2005

The results of this paper can be stated in three equivalent ways—in terms of the sparse recovery problem, the error-correction problem, and the problem of existence of certain extremal (neighborly)…

Neighborly Polytopes And Sparse Solution Of Underdetermined Linear Equations

- Computer Science
- 2005

For large d, the overwhelming majority of systems of linear equations with d equations and 4d/3 unknowns have the following property: if there is a solution with fewer than .49d nonzeros, it is the unique minimum ℓ1 solution.

Near-optimal sparse fourier representations via sampling

- Computer Science, Mathematics
- STOC '02
- 2002

An algorithm for finding a B-term Fourier representation R of a given discrete signal A, such that ‖A − R‖₂² is within a factor (1 + ε) of the best possible ‖A − R_opt‖₂².

Thresholds for the Recovery of Sparse Solutions via L1 Minimization

- Mathematics
- 2006 40th Annual Conference on Information Sciences and Systems
- 2006

The ubiquitous least squares method for systems of linear equations returns solutions which typically have all non-zero entries. However, solutions with the least number of non-zeros allow for…

Sparse nonnegative solution of underdetermined linear equations by linear programming.

- Mathematics, Medicine
- Proceedings of the National Academy of Sciences of the United States of America
- 2005

It is shown that outward k-neighborliness is equivalent to the statement that, whenever y = Ax has a nonnegative solution with at most k nonzeros, it is the nonnegative solution to y = Ax having minimal sum.

Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit

- Mathematics, Computer Science
- IEEE Transactions on Information Theory
- 2007

This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln…
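Since OMP recurs throughout this list, a compact sketch may help. This is a generic textbook-style implementation, not code from the cited paper, and the driver's parameter values are made up:

```python
import numpy as np

def omp(Phi, y, m, tol=1e-10):
    """Orthogonal matching pursuit: greedily pick the column of Phi most
    correlated with the residual, then re-fit by least squares on the
    selected support, repeating at most m times."""
    d = Phi.shape[1]
    support = []
    residual = y.copy()
    for _ in range(m):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef          # project y off the support
        if np.linalg.norm(residual) < tol:
            break
    x_hat = np.zeros(d)
    x_hat[support] = coef
    return x_hat

# Hypothetical sizes: m nonzeros in dimension d, with n_meas Gaussian measurements.
rng = np.random.default_rng(1)
n_meas, d, m = 120, 512, 5
Phi = rng.standard_normal((n_meas, d)) / np.sqrt(n_meas)
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
x_hat = omp(Phi, Phi @ x, m)
print(np.linalg.norm(x - x_hat))     # recovery error (tiny when OMP succeeds)
```

The least-squares re-fit on the whole selected support is what distinguishes OMP from plain matching pursuit: once a column enters the support, the residual is exactly orthogonal to it.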