Corpus ID: 3096

Algorithmic linear dimension reduction in the l_1 norm for sparse vectors

@article{Gilbert2006AlgorithmicLD,
  title={Algorithmic linear dimension reduction in the l\_1 norm for sparse vectors},
  author={Anna C. Gilbert and Martin Strauss and Joel A. Tropp and Roman Vershynin},
  journal={ArXiv},
  year={2006},
  volume={abs/cs/0608079}
}
Using a number of different algorithms, we can approximately recover a sparse signal with limited noise, i.e., a vector of length d with at least d − m zeros or near-zeros, using little more than m log(d) nonadaptive linear measurements rather than the d measurements needed to recover an arbitrary signal of length d. We focus on two important properties of such algorithms.
• Uniformity. A single measurement matrix should work simultaneously for all signals.
• Computational Efficiency. The time to…
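As a rough illustration of the measurement model described in the abstract, here is a minimal numpy sketch. The random sign matrix, the constant 4, and the noise level are arbitrary choices for illustration, not the paper's construction (which is structured so that recovery is also fast):

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 10_000, 20                       # signal length, number of large entries
n = int(4 * m * np.log(d))              # ~ m log(d) nonadaptive measurements

# A single fixed matrix for all signals ("uniformity"). A random sign
# matrix is used purely for illustration here.
Phi = rng.choice([-1.0, 1.0], size=(n, d)) / np.sqrt(n)

# An m-sparse signal with limited noise.
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
noise = 1e-3 * rng.standard_normal(d)

y = Phi @ (x + noise)                   # n << d linear measurements
print(n, d)                             # e.g. 736 measurements vs. d = 10000
```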
Sublinear time, measurement-optimal, sparse recovery for all
TLDR
The first algorithm for this problem that uses the optimal number of measurements (up to constant factors) and runs in sublinear time o(N) when k is sufficiently smaller than N is given.
Sublinear Time, Measurement-Optimal, Sparse Recovery For All (Jul 2011)
An approximate sparse recovery system in the ℓ1 norm makes a small number of measurements of a noisy vector with at most k large entries and recovers those heavy hitters approximately. Formally, it…
Sparse Recovery Using Sparse Random Matrices
TLDR
An overview of the results in the area is given, and a new algorithm called "SSMP" (Sequential Sparse Matching Pursuit) is described; it works well on real data, with recovery quality often outperforming that of more complex algorithms, such as ℓ1 minimization.
Recovering K-sparse N-length vectors in O(K log N) time: Compressed sensing using sparse-graph codes
  • Xiao Li, K. Ramchandran
  • Computer Science
  • 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2016
TLDR
A new design framework is proposed that simultaneously leads to low measurement cost and low computational cost, and guarantees successful recovery with high probability using O(K log N) measurements with a computational complexity of O(K log N).
Information-theoretic limits on sparse support recovery: Dense versus sparse measurements
TLDR
The analysis allows general scaling of the quadruplet (n, p, k, γ) and reveals three different regimes, corresponding to whether measurement sparsity has no effect, a minor effect, or a dramatic effect on the information-theoretic limits of the subset recovery problem.
Information-Theoretic Limits on Sparse Signal Recovery: Dense versus Sparse Measurement Matrices
TLDR
This paper provides sharper necessary conditions for exact support recovery using general (including non-Gaussian) dense measurement matrices, and proves necessary conditions on the number of observations n required for asymptotically reliable recovery using a class of γ-sparsified measurement matrices.
For-All Sparse Recovery in Near-Optimal Time
TLDR
An approximate sparse recovery system in the ℓ1 norm consists of parameters k, ε, N; an m-by-N measurement matrix Φ; and a recovery algorithm R. The paper considers the "for all" model, in which a single matrix Φ, possibly constructed non-explicitly using the probabilistic method, is used for all signals x.
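For context, the ℓ1/ℓ1 "for all" guarantee referred to here is usually stated along the following lines. This is a sketch only: the exact constants and norm pairing vary from paper to paper, and x_k denotes the best k-term approximation of x.

```latex
% l1/l1 "for all" guarantee (sketch; constants vary across papers).
% x_k is the best k-term approximation of x, R the recovery algorithm.
\| x - R(\Phi x) \|_1 \le (1 + \epsilon) \, \| x - x_k \|_1
\quad \text{for all } x \in \mathbb{R}^N .
```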
Advances in sparse signal recovery methods
TLDR
This thesis focuses on sparse recovery, where the goal is to recover sparse vectors exactly and to approximately recover nearly-sparse vectors, and it introduces a class of binary sparse matrices as valid measurement matrices that provide a non-linear sparse recovery scheme.
SHO-FA: Robust compressive sensing with order-optimal complexity, measurements, and bits
TLDR
The SHO-FA algorithm is the first to simultaneously have the following properties: it requires only O(k) measurements; the bit-precision of each measurement and each arithmetic operation is O(log(n) + P), where 2^(−P) corresponds to the desired relative error in the reconstruction of x; the computational complexity of decoding is O(k) arithmetic operations; and the reconstruction goal is simply to recover a single component of x instead of all of x.
Sudocodes: Low Complexity Algorithms for Compressive Sampling and Reconstruction
Sudocodes are new compressive sampling schemes for measurement and reconstruction of sparse signals using algorithms on graphs. Consider a sparse signal x ∈ R^N containing K ≪ N large coefficients and…

References

SHOWING 1-10 OF 48 REFERENCES
Combinatorial Algorithms for Compressed Sensing
In sparse approximation theory, the fundamental problem is to reconstruct a signal A ∈ R^n from linear measurements ⟨A, ψ_i⟩ with respect to a dictionary of ψ_i's. Recently, there is focus on the…
Sparse reconstruction by convex relaxation: Fourier and Gaussian measurements
This paper proves the best known guarantees for exact reconstruction of a sparse signal f from few non-adaptive universal linear measurements. We consider Fourier measurements (random sample of…
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
  • E. Candès, T. Tao
  • Mathematics, Computer Science
  • IEEE Transactions on Information Theory
  • 2006
TLDR
If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
Reconstruction and subgaussian processes
This Note presents a randomized method to approximate any vector v from some set T ⊂ R^n. The data one is given is the set T and the k scalar products ⟨X_i, v⟩, i = 1, …, k, where (X_i)_i…
Geometric approach to error-correcting codes and reconstruction of signals
The results of this paper can be stated in three equivalent ways: in terms of the sparse recovery problem, the error-correction problem, and the problem of existence of certain extremal (neighborly) polytopes.
Neighborly Polytopes And Sparse Solution Of Underdetermined Linear Equations
TLDR
For large d, the overwhelming majority of systems of linear equations with d equations and 4d/3 unknowns have the following property: if there is a solution with fewer than .49d nonzeros, it is the unique minimum ℓ1 solution.
Near-optimal sparse fourier representations via sampling
TLDR
An algorithm for finding a Fourier representation R of B terms for a given discrete signal A, such that ||A − R||_2^2 is within the factor (1 + ε) of the best possible ||A − R_opt||_2^2.
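The benchmark in that guarantee is the best B-term Fourier representation R_opt, which can be computed exhaustively with a full FFT. The numpy sketch below (arbitrary test parameters, illustrative variable names) constructs only that benchmark error, not the cited sublinear-sampling algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

N, B = 1024, 8
# A test signal that is nearly B-sparse in the Fourier basis, plus noise.
spectrum = np.zeros(N, dtype=complex)
spectrum[rng.choice(N, size=B, replace=False)] = (
    rng.standard_normal(B) + 1j * rng.standard_normal(B)
)
a = np.fft.ifft(spectrum) + 1e-3 * rng.standard_normal(N)

# Best B-term representation R_opt: keep the B largest FFT coefficients.
coeffs = np.fft.fft(a)
keep = np.argsort(np.abs(coeffs))[-B:]
R_opt = np.zeros(N, dtype=complex)
R_opt[keep] = coeffs[keep]

err_opt = np.linalg.norm(a - np.fft.ifft(R_opt)) ** 2
print(err_opt)    # the ||A - R_opt||_2^2 benchmark the TLDR refers to
```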
Thresholds for the Recovery of Sparse Solutions via L1 Minimization
  • D. Donoho, J. Tanner
  • Mathematics
  • 2006 40th Annual Conference on Information Sciences and Systems
  • 2006
The ubiquitous least squares method for systems of linear equations returns solutions which typically have all non-zero entries. However, solutions with the least number of non-zeros allow for…
Sparse nonnegative solution of underdetermined linear equations by linear programming.
  • D. Donoho, J. Tanner
  • Mathematics, Medicine
  • Proceedings of the National Academy of Sciences of the United States of America
  • 2005
TLDR
It is shown that outward k-neighborliness is equivalent to the statement that, whenever y = Ax has a nonnegative solution with at most k nonzeros, it is the nonnegative solution to y = Ax having minimal sum.
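The "nonnegative solution with minimal sum" in that statement is a plain linear program, so the claim is easy to test numerically. A small scipy sketch follows; the problem sizes and the Gaussian choice of A are arbitrary assumptions for illustration:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

n, d, k = 40, 100, 5
A = rng.standard_normal((n, d))

# A nonnegative k-sparse vector x0 and its measurements y = A x0.
x0 = np.zeros(d)
x0[rng.choice(d, size=k, replace=False)] = rng.random(k) + 0.5
y = A @ x0

# Minimal-sum nonnegative solution: minimize 1'x  s.t.  A x = y, x >= 0.
res = linprog(c=np.ones(d), A_eq=A, b_eq=y, bounds=(0, None))
print(np.allclose(res.x, x0, atol=1e-6))   # True when A is "neighborly" enough
```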
Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit
  • J. Tropp, A. Gilbert
  • Mathematics, Computer Science
  • IEEE Transactions on Information Theory
  • 2007
This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
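A minimal numpy sketch of OMP as described. The Gaussian measurement ensemble, the constant 2, and the fixed iteration count are simplifications for illustration, not the paper's exact setup:

```python
import numpy as np

def omp(Phi, y, m):
    """Greedy OMP: pick the column most correlated with the residual,
    then re-fit by least squares on all columns selected so far."""
    support = []
    residual = y.copy()
    for _ in range(m):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(3)
d, m = 1000, 10
n = int(2 * m * np.log(d))                 # O(m ln d) random measurements
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
x_hat = omp(Phi, Phi @ x, m)
print(np.allclose(x, x_hat, atol=1e-8))    # True with high probability
```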