Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

@article{Candes2006NearOptimalSR,
  title={Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?},
  author={Emmanuel J. Cand{\`e}s and Terence Tao},
  journal={IEEE Transactions on Information Theory},
  year={2006},
  volume={52},
  pages={5406-5425}
}
  • E. Candès, T. Tao
  • Published 25 October 2004
  • Computer Science
  • IEEE Transactions on Information Theory
Suppose we are given a vector f in a class F ⊆ ℝ^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear…
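As a concrete illustration of the recovery problem in this abstract, here is a minimal sketch: draw random Gaussian measurements of a sparse vector and reconstruct it by ℓ1 minimization recast as a linear program. The dimensions, the Gaussian ensemble, and the SciPy solver choice are illustrative assumptions, not the paper's own code.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, k = 128, 48, 5            # ambient dim, measurements, sparsity (assumed)

# k-sparse ground truth and a random Gaussian measurement matrix
f = np.zeros(N)
f[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((n, N)) / np.sqrt(n)
y = A @ f

# Basis pursuit  min ||x||_1  s.t.  Ax = y,  recast as an LP via x = u - v
# with u, v >= 0, so that ||x||_1 = sum(u) + sum(v).
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]

print("recovery error:", np.linalg.norm(x_hat - f))
```

With a number of measurements on the order of k log(N/k), the LP typically recovers f exactly, which is the phenomenon the paper quantifies.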
Compressed Sensing
  • D. Donoho
  • Mathematics
    Computer Vision, A Reference Guide
  • 2014
TLDR
It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit in signal processing.
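Spelled out, the linear program the TLDR mentions is Basis Pursuit (notation assumed, matching the sketch above):

$$
\min_{x} \|x\|_{1} \quad \text{subject to} \quad \Phi x = y,
$$

equivalently, after the standard split $x = u - v$ with $u, v \ge 0$,

$$
\min_{u, v \ge 0} \ \mathbf{1}^{\top}(u + v) \quad \text{subject to} \quad \Phi (u - v) = y .
$$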
The limits of error correction with lp decoding
TLDR
This work investigates the relationship between the fraction of errors and the recovering ability of ℓp-minimization (0 < p ≤ 1), which returns a vector x that minimizes the ℓp-norm of y − Ax.
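The TLDR does not spell out how the ℓp-minimization is carried out; one common heuristic for 0 < p ≤ 1 is iteratively reweighted least squares, sketched below under that assumption (the function name and smoothing parameter are illustrative, and this is not the paper's decoder):

```python
import numpy as np

def irls_lp_decode(A, y, p=1.0, iters=50, eps=1e-8):
    """Approximately minimize ||y - A x||_p via iteratively
    reweighted least squares (a smoothed surrogate for 0 < p <= 1)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]     # ordinary LS start
    for _ in range(iters):
        r = y - A @ x
        w = (r * r + eps) ** (p / 2 - 1)         # weights ~ |r_i|^{p-2}
        Aw = A * w[:, None]                      # W A
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)  # A^T W A x = A^T W y
    return x
```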
Signal recovery from random projections
TLDR
It is empirically possible to recover an object from about 3M-5M projections onto generically chosen vectors with an accuracy which is as good as that obtained by the ideal M-term wavelet approximation.
Practical Signal Recovery from Random Projections
TLDR
It is demonstrated empirically that it is possible to recover an object from about 3M–5M projections onto generically chosen vectors with the same accuracy as the ideal M-term wavelet approximation.
Equivalent mean breakdown points for linear codes and compressed sensing by ℓ1 optimization
  • R. Ashino, R. Vaillancourt
  • Computer Science
    2010 10th International Symposium on Communications and Information Technologies
  • 2010
TLDR
To have equivalently high mean breakdown points by ℓ1 linear programming, the authors use uniformly distributed random matrices A ∈ ℝ^((m−n)×m) and matrices B ∈ ℝ^(m×n) with orthonormal columns spanning the null space of A.
Randomness-in-Structured Ensembles for compressed sensing of images
  • A. Moghadam, H. Radha
  • Computer Science
    2009 16th IEEE International Conference on Image Processing (ICIP)
  • 2009
TLDR
It is proved that RISE-based compressed sensing requires only m = ck samples (where c is not a function of n) to perfectly recover a k-sparse image signal, which is fewer samples than the popular greedy algorithm Orthogonal Matching Pursuit (OMP) requires.
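Since OMP is the named baseline, here is a minimal textbook sketch of it (illustrative only, not the RISE algorithm from the paper):

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Orthogonal Matching Pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit by least squares."""
    m, n = A.shape
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef          # orthogonalized residual
        if np.linalg.norm(r) < tol:
            break
    x = np.zeros(n)
    x[support] = coef
    return x
```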
A sublinear algorithm for sparse reconstruction with ℓ2/ℓ2 recovery guarantees
TLDR
A family of deterministic sensing matrices satisfying the StRIP that are based on Delsarte-Goethals codes (binary chirps) and a k-sparse reconstruction algorithm with sublinear complexity are considered, and bounds on the ℓ2 accuracy of approximation in terms of the measurement noise and the accuracy of the best k-sparse approximation are derived.
On the Global Minimizers of Real Robust Phase Retrieval With Sparse Noise
TLDR
A class of real robust phase retrieval problems is studied under a Gaussian assumption on the coding matrix, when the received signal is sparsely corrupted by noise, showing that the robust phase retrieval objectives are sharp with respect to their minimizers with high probability.
Compressed Sensing: How Sharp Is the Restricted Isometry Property?
TLDR
An asymmetric form of RIP that gives tighter bounds than the usual symmetric one is presented, and the best known bounds on the RIP constants for matrices from the Gaussian ensemble are given.
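For context, the property in question: a matrix Φ satisfies the restricted isometry property of order k with constant δ_k if, for every k-sparse vector x,

$$
(1 - \delta_k)\,\|x\|_2^2 \;\le\; \|\Phi x\|_2^2 \;\le\; (1 + \delta_k)\,\|x\|_2^2 ;
$$

the asymmetric form studied here tracks separate lower and upper constants in place of the single δ_k, which is what yields the tighter bounds.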
Algorithmic linear dimension reduction in the l_1 norm for sparse vectors
TLDR
A new method for recovering m-sparse signals that is simultaneously uniform and quick is developed; vectors of support m in dimension d can be linearly embedded into O(m log d) dimensions with polylogarithmic distortion.
...

References

SHOWING 1-10 OF 83 REFERENCES
Near-optimal sparse fourier representations via sampling
TLDR
An algorithm for finding a Fourier representation R of B terms for a given discrete signal A, such that ‖A − R‖₂² is within the factor (1 + ε) of the best possible ‖A − R_opt‖₂².
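The benchmark in this guarantee is the best B-term Fourier approximation, which (unlike the paper's sublinear-time algorithm) can be computed directly from a full FFT; a small sketch, with names assumed:

```python
import numpy as np

def best_B_term_fourier(a, B):
    # Best B-term Fourier approximation: keep the B largest-magnitude
    # FFT coefficients and zero out the rest. This is the benchmark the
    # (1 + eps) guarantee is measured against.
    ahat = np.fft.fft(a)
    keep = np.argsort(np.abs(ahat))[-B:]
    rhat = np.zeros_like(ahat)
    rhat[keep] = ahat[keep]
    return np.fft.ifft(rhat)
```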
Decoding by linear programming
TLDR
It is shown that f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program); numerical experiments suggest that this recovery procedure works unreasonably well, and f is recovered exactly even in situations where a significant fraction of the output is corrupted.
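Concretely, for the corrupted-output model y = Af + e with a sparse error vector e, the convex program this paper solves is ℓ1 decoding:

$$
\min_{g} \; \| y - A g \|_{\ell_1},
$$

whose minimizer equals f exactly provided the fraction of corrupted entries stays below a fixed threshold.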
Sparse reconstruction by convex relaxation: Fourier and Gaussian measurements
TLDR
The first guarantees for universal measurements (i.e. which work for all sparse functions) with reasonable constants are proved, based on the technique of geometric functional analysis and probability in Banach spaces.
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
TLDR
It is shown how one can reconstruct a piecewise constant object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.
For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution
TLDR
The techniques include the use of random proportional embeddings and almost‐spherical sections in Banach space theory, and deviation bounds for the eigenvalues of random Wishart matrices.
Uncertainty principles and ideal atomic decomposition
TLDR
It is proved that if S is representable as a highly sparse superposition of atoms from this time-frequency dictionary, then there is only one such highly sparse representation of S, and it can be obtained by solving the convex optimization problem of minimizing the l/sup 1/ norm of the coefficients among all decompositions.
Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization
  • D. Donoho, Michael Elad
  • Computer Science
    Proceedings of the National Academy of Sciences of the United States of America
  • 2003
TLDR
This article obtains parallel results in a more general setting, where the dictionary D can arise from two or several bases, frames, or even less structured systems, and sketches three applications: separating linear features from planar ones in 3D data, noncooperative multiuser encoding, and identification of over-complete independent component models.
Quantitative Robust Uncertainty Principles and Optimally Sparse Decompositions
In this paper we develop a robust uncertainty principle for finite signals in ℂ^N which states that, for nearly all choices T, Ω ⊂ {0, …, N−1} such that |T| + …
Atomic Decomposition by Basis Pursuit
TLDR
Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions.
Image Reconstruction With Ridgelets
When we capture or transmit a digital image of an object, we sometimes lose a fraction of the pixels to noise or error. It is therefore of interest to develop methods that can reconstruct the…
...