Just relax: convex programming methods for identifying sparse signals in noise

@article{Tropp2006JustRC,
  title={Just relax: convex programming methods for identifying sparse signals in noise},
  author={Joel A. Tropp},
  journal={IEEE Transactions on Information Theory},
  year={2006},
  volume={52},
  pages={1030-1051}
}
  • J. Tropp
  • Published 1 March 2006
  • Computer Science
  • IEEE Transactions on Information Theory
This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is… 
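The convex relaxation at the center of this line of work replaces the combinatorial sparsity penalty with an ℓ1 norm. Writing Φ for the dictionary of elementary signals, s for the noisy observation, and γ > 0 for a regularization weight (notation assumed here, following common usage in this literature), the penalized least-squares program takes the form:

```latex
\min_{x} \; \frac{1}{2} \, \| s - \Phi x \|_2^2 \; + \; \gamma \, \| x \|_1
```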
DISTRIBUTED ALGORITHMS FOR SPARSE RECONSTRUCTION
TLDR
This paper proposes and analyzes some distributed algorithms that solve a sparse convex approximation problem known as the “basis pursuit” problem and extends the well-known results of convergence of the Diagonal Quadratic Approximation and the Nonlinear Gauss-Seidel methods to cover a new class of non-differentiable functions, which the authors call “rigid functions”.
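The "basis pursuit" problem referenced in this summary is the equality-constrained ℓ1 program, which becomes a linear program after the standard split of the variable into positive and negative parts (x = u − v with u, v ≥ 0); a common reformulation, in generic notation, is:

```latex
\min_{u, v \ge 0} \; \mathbf{1}^{T} (u + v)
\quad \text{subject to} \quad A (u - v) = b
```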
Estimation of sparse distributions
TLDR
An efficient sparse signal recovery algorithm is developed; in most settings the extended algorithm outperforms other conventional algorithms by a large margin, and the applicability of the proposed algorithms to some practical problems is explored.
Algorithms for sparse and low-rank optimization: convergence, complexity and applications
TLDR
Efficient algorithms for solving sparse and low-rank optimization problems and the convergence and iteration complexity properties of these algorithms are given and heuristics for determining the rank of the matrix when its true rank is not known are proposed.
Analog Sparse Approximation with Applications to Compressed Sensing
TLDR
In the simulated task of recovering synthetic and MRI data acquired via compressive sensing techniques, continuous-time dynamical systems for solving sparse approximation problems, if implemented in analog VLSI, could potentially perform recovery at time scales of 10-20 μs, supporting data rates of 50-100 kHz.
Iterative Thresholding for Sparse Approximations
TLDR
This paper studies two iterative algorithms that minimize the cost functions of interest and adapts them, showing on one example that this adaptation can be used to achieve results that lie between those obtained with Matching Pursuit and those found with Orthogonal Matching Pursuit, while retaining the computational complexity of the Matching Pursuit algorithm.
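As a concrete illustration of the iterative thresholding idea summarized above, here is a minimal pure-Python sketch of iterative soft-thresholding (ISTA) for the ℓ1-penalized least-squares problem; the function names, step size, and toy dimensions are illustrative assumptions, not taken from the paper:

```python
# ISTA sketch: minimize 0.5*||Ax - b||^2 + lam*||x||_1 by alternating a
# gradient step on the quadratic term with entrywise soft-thresholding.
# Plain lists are used so the sketch stays self-contained.

def matvec(A, x):
    # multiply a matrix (list of rows) by a vector
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def soft(z, t):
    # soft-thresholding operator: shrink z toward zero by t
    return max(abs(z) - t, 0.0) * (1.0 if z > 0 else -1.0)

def ista(A, b, lam, step, iters=100):
    n = len(A[0])
    At = [[A[i][j] for i in range(len(A))] for j in range(n)]  # transpose
    x = [0.0] * n
    for _ in range(iters):
        r = [ri - bi for ri, bi in zip(matvec(A, x), b)]  # residual Ax - b
        g = matvec(At, r)                                  # gradient A^T(Ax - b)
        x = [soft(xi - step * gi, step * lam) for xi, gi in zip(x, g)]
    return x
```

With the step size at most the reciprocal of the largest eigenvalue of AᵀA, the iterates converge to a minimizer; on an orthonormal dictionary the method reduces to a single soft-thresholding of the analysis coefficients.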
Dynamic Updating for ℓ1 Minimization
TLDR
A suite of dynamic algorithms for solving ℓ1-minimization programs for streaming sets of measurements is presented, along with dynamic updating schemes for decoding problems, where an arbitrary signal is to be recovered from redundant coded measurements that have been corrupted by sparse errors.
Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems
TLDR
This paper proposes gradient projection algorithms for the bound-constrained quadratic programming (BCQP) formulation of these problems and tests variants of this approach that select the line-search parameters in different ways, including techniques based on the Barzilai-Borwein method.
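The bound-constrained quadratic program mentioned in this summary comes from applying the positive/negative split x = u − v to the ℓ1-penalized least-squares objective; in generic notation (A, b, and τ assumed, matching the usual compressed-sensing setup), the BCQP reads:

```latex
\min_{u, v \ge 0} \; \frac{1}{2} \, \| b - A (u - v) \|_2^2 \; + \; \tau \, \mathbf{1}^{T} (u + v)
```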
Sparse signal recovery using sparse random projections
TLDR
A fast algorithm for the approximation of compressible signals based on sparse random projections is developed, where the signal is assumed to be well-approximated by a sparse vector in an orthonormal transform domain, along with a general framework for deriving information-theoretic lower bounds for sparse recovery.

References

Showing 1-10 of 99 references
Just relax: convex programming methods for subset selection and sparse approximation
TLDR
It is demonstrated that the solution of the convex program frequently coincides with the solution of the original approximation problem, and comparable new results for a greedy algorithm, Orthogonal Matching Pursuit, are stated.
On the stability of the basis pursuit in the presence of noise
Stable recovery of sparse overcomplete representations in the presence of noise
TLDR
This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system, and shows that similar stability is also available using the basis pursuit and matching pursuit algorithms.
Recovery of exact sparse representations in the presence of bounded noise
  • J. Fuchs
  • Computer Science
    IEEE Transactions on Information Theory
  • 2005
TLDR
The purpose of this contribution is to extend some recent results on sparse representations of signals in redundant bases, developed in the noise-free case, to the case of noisy observations, finding a bound on the number of nonzero entries in x₀.
Greed is good: algorithmic results for sparse approximation
  • J. Tropp
  • Computer Science
    IEEE Transactions on Information Theory
  • 2004
TLDR
This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries and develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal.
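To illustrate the greedy scheme analyzed in this reference, here is a minimal pure-Python sketch of orthogonal matching pursuit: at each step it selects the atom most correlated with the residual, then re-fits all selected atoms by least squares. The helper names and toy dictionary are illustrative assumptions, not taken from the paper:

```python
# Orthogonal matching pursuit (OMP) sketch: greedily select atoms and
# re-solve a small least-squares problem on the current support each step.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def solve(M, y):
    # solve the small dense system M z = y by Gaussian elimination
    n = len(y)
    M = [row[:] + [yi] for row, yi in zip(M, y)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        z[r] = (M[r][n] - sum(M[r][c] * z[c] for c in range(r + 1, n))) / M[r][r]
    return z

def omp(atoms, b, k):
    # atoms: list of dictionary columns; b: observed signal; k: sparsity level
    support, residual, coef = [], b[:], []
    for _ in range(k):
        # greedy selection: atom most correlated with the current residual
        j = max((i for i in range(len(atoms)) if i not in support),
                key=lambda i: abs(dot(atoms[i], residual)))
        support.append(j)
        # orthogonal projection: least-squares fit on the support (normal equations)
        G = [[dot(atoms[p], atoms[q]) for q in support] for p in support]
        coef = solve(G, [dot(atoms[p], b) for p in support])
        approx = [sum(c * atoms[p][t] for c, p in zip(coef, support))
                  for t in range(len(b))]
        residual = [bt - at for bt, at in zip(b, approx)]
    return dict(zip(support, coef))
```

The least-squares re-fit is what distinguishes OMP from plain matching pursuit: once an atom enters the support, the coefficients of all selected atoms are recomputed jointly, so the residual stays orthogonal to every selected atom.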
Sparse solutions to linear inverse problems with multiple measurement vectors
TLDR
This work considers in depth the extension of two classes of algorithms-Matching Pursuit and FOCal Underdetermined System Solver-to the multiple measurement case so that they may be used in applications such as neuromagnetic imaging, where multiple measurement vectors are available, and solutions with a common sparsity structure must be computed.
Maximal Sparsity Representation via ℓ1 Minimization
TLDR
This paper extends previous results and proves a similar relationship for the most general dictionary D and shows that previous results are emerging as special cases of the new extended theory.