# Decoding by linear programming

@article{Cands2005DecodingBL, title={Decoding by linear programming}, author={Emmanuel J. Cand{\`e}s and Terence Tao}, journal={IEEE Transactions on Information Theory}, year={2005}, volume={51}, pages={4203-4215} }

This paper considers a natural error correcting problem with real-valued input/output. We wish to recover an input vector f ∈ ℝⁿ from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem (‖x‖ℓ…
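The ℓ1-minimization decoder sketched above can be cast as an ordinary linear program: introduce slack variables t ≥ |y − Ag| componentwise and minimize Σᵢ tᵢ. A minimal sketch in Python using `scipy.optimize.linprog` (the dimensions, error count, and random ensemble below are illustrative assumptions, not values from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    """Recover f from y = A f + e by solving min_g ||y - A g||_1 as an LP.

    Variables z = [g; t] with t >= |y - A g| componentwise:
        minimize  sum(t)
        s.t.       A g - t <= y
                  -A g - t <= -y
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)],
                     [-A, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * n + [(0, None)] * m  # g free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

# Illustrative instance: Gaussian coding matrix, a few gross errors.
rng = np.random.default_rng(0)
m, n = 50, 10
A = rng.standard_normal((m, n))
f = rng.standard_normal(n)
e = np.zeros(m)
e[rng.choice(m, size=4, replace=False)] = 10 * rng.standard_normal(4)
y = A @ f + e
f_hat = l1_decode(A, y)
```

With the error support this small relative to m, the paper's sufficient conditions are comfortably met for a Gaussian A and the decoder returns f exactly (up to solver tolerance).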

## 6,666 Citations

Encoding the ℓp ball from limited measurements

- Computer Science · Data Compression Conference (DCC'06)
- 2006

A strategy for encoding elements of the ℓp ball which is universal in that the encoding procedure is completely generic and does not depend on p (the sparsity of the signal), and it achieves near-optimal minimax performance simultaneously for all p < 1.

The limits of error correction with lp decoding

- Computer Science, Mathematics · 2010 IEEE International Symposium on Information Theory
- 2010

This work investigates the relationship between the fraction of errors and the recovery ability of ℓp-minimization (0 < p ≤ 1), which returns a vector x that minimizes the ℓp-norm of y − Ax.

Highly Robust Error Correction by Convex Programming

- Computer Science · IEEE Transactions on Information Theory
- 2008

This paper discusses a stylized communications problem where one wishes to transmit a real-valued signal to a remote receiver and shows that if one encodes the information as Af, where A is a suitable coding matrix, there are two decoding schemes that allow the recovery of the block of pieces of information with nearly the same accuracy.

Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

- Computer Science · IEEE Transactions on Information Theory
- 2006

If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.

Phase Transitions in Error Correcting and Compressed Sensing by ℓ1 Linear Programming

- Computer Science · Int. J. Wavelets Multiresolution Inf. Process.
- 2013

It is numerically observed that the breakdown points (50% success rate) in recovering the input vector x ∈ ℝⁿ from the corrupted oversampled measurement y lie on the Donoho–Tanner curves when reflected in their midpoint.

Compressed Sensing

- Mathematics · Computer Vision, A Reference Guide
- 2014

It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
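The linear program this entry refers to, Basis Pursuit, minimizes ‖x‖₁ subject to Ax = y. A hedged sketch, using the standard positive/negative split x = u − v to obtain an LP (the dimensions and sparsity level below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. A x = y by splitting x = u - v, u, v >= 0.

    Then ||x||_1 = sum(u + v) at the optimum, so the objective is linear.
    """
    n_meas, N = A.shape
    c = np.ones(2 * N)
    A_eq = np.hstack([A, -A])        # A (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:N], res.x[N:]
    return u - v

# Illustrative instance: recover a k-sparse vector from few random projections.
rng = np.random.default_rng(1)
N, k, n_meas = 128, 5, 50
A = rng.standard_normal((n_meas, N)) / np.sqrt(n_meas)
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x
x_hat = basis_pursuit(A, y)
```

With n_meas well above the k log(N/k) scale, the sparse vector is recovered exactly (up to solver tolerance) from far fewer measurements than the ambient dimension.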

Equivalent mean breakdown points for linear codes and compressed sensing by ℓ1 optimization

- Computer Science · 2010 10th International Symposium on Communications and Information Technologies
- 2010

To have equivalently high mean breakdown points by ℓ1 linear programming, the authors use uniformly distributed random matrices A ∈ ℝ^((m−n)×m) and matrices B ∈ ℝ^(m×n) with orthonormal columns spanning the null space of A.

Sparse recovery via convex optimization

- Computer Science
- 2009

The method of ℓ1 analysis is introduced, and it is shown to guarantee good recovery of a signal from a few measurements when the signal can be well represented in a dictionary.

Geometric approach to error-correcting codes and reconstruction of signals

- Computer Science
- 2005

An approach to error-correcting codes is developed from the viewpoint of geometric functional analysis (asymptotic convex geometry); it belongs to a common ground of coding theory, signal processing, combinatorial geometry, and geometric functional analysis.

Sublinear-Time Sparse Recovery, and Its Power in the Design of Exact Algorithms

- Computer Science
- 2019

Several new contributions to the field of sparse recovery are described, along with how sparse recovery techniques can be of great significance in the design of exact algorithms, outside the scope of the problems for which they were first created.

## References

Showing 1–10 of 53 references

Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information

- Computer Science · IEEE Transactions on Information Theory
- 2006

It is shown how one can reconstruct a piecewise constant object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.

Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

- Computer Science · IEEE Transactions on Information Theory
- 2006

If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.

Using Linear Programming to Decode Binary Linear Codes

- Computer Science · IEEE Transactions on Information Theory
- 2005

The definition of a pseudocodeword unifies other such notions known for iterative algorithms, including "stopping sets," "irreducible closed walks," "trellis cycles," "deviation sets," and "graph covers," and gives rise to a fractional distance that is a lower bound on the classical distance.

Uncertainty principles and ideal atomic decomposition

- Computer Science · IEEE Trans. Inf. Theory
- 2001

It is proved that if S is representable as a highly sparse superposition of atoms from this time-frequency dictionary, then there is only one such highly sparse representation of S, and it can be obtained by solving the convex optimization problem of minimizing the ℓ1 norm of the coefficients among all decompositions.

Decoding error-correcting codes via linear programming

- Computer Science
- 2003

This thesis investigates the application of linear programming (LP) relaxation to the problem of decoding an error-correcting code, and provides specific LP decoders for two major families of codes: turbo codes and low-density parity-check codes.
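The LP relaxation investigated in this thesis replaces each parity check with its local codeword polytope; the decoder then minimizes a linear channel cost over the intersection of these polytopes (the "fundamental polytope"). A minimal sketch of the construction for a [7,4] Hamming code (the parity-check matrix and the BSC-style costs below are illustrative choices; with a noiseless received word the box-unconstrained optimum is feasible, so the LP provably returns the codeword itself):

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

# One common parity-check matrix for a [7,4] Hamming code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def lp_decode(H, llr):
    """Feldman-style LP decoding: minimize sum_i llr[i] * x[i] over the
    fundamental polytope (intersection of the checks' local polytopes)."""
    m, n = H.shape
    A_ub, b_ub = [], []
    for j in range(m):
        nbhd = np.flatnonzero(H[j])
        # For every odd-sized subset S of the check's neighborhood N(j):
        #   sum_{i in S} x_i - sum_{i in N(j)\S} x_i <= |S| - 1
        for r in range(1, len(nbhd) + 1, 2):
            for S in combinations(nbhd, r):
                row = np.zeros(n)
                row[list(S)] = 1.0
                row[[i for i in nbhd if i not in S]] = -1.0
                A_ub.append(row)
                b_ub.append(len(S) - 1)
    res = linprog(llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=(0, 1), method="highs")
    return res.x

c = np.array([1, 0, 1, 1, 0, 1, 0])   # a codeword: (H @ c) % 2 == 0
llr = np.where(c == 0, 1.0, -1.0)     # BSC-style log-likelihood costs
x_hat = lp_decode(H, llr)
```

An integral LP solution is guaranteed to be the maximum-likelihood codeword; fractional optima (pseudocodewords) are exactly the failure modes the thesis analyzes.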

LP Decoding Corrects a Constant Fraction of Errors

- Computer Science · IEEE Transactions on Information Theory
- 2004

We show that for low-density parity-check (LDPC) codes whose Tanner graphs have sufficient expansion, the linear programming (LP) decoder of Feldman, Karger, and Wainwright can correct a constant…

LP decoding achieves capacity

- Computer Science · SODA '05
- 2005

This work gives a linear programming (LP) decoder that achieves the capacity (optimal rate) of a wide range of probabilistic binary communication channels and provides a new combinatorial characterization of error events that is of independent interest, and which is expected to lead to further improvements.

The Smallest Eigenvalue of a Large Dimensional Wishart Matrix

- Mathematics
- 1985

For each s = 1, 2, …, let n = n(s) be a positive integer such that n/s → y > 0 as s → ∞. Let Vₙ be an n × s matrix whose entries are i.i.d. N(0, 1) random variables and let Mₙ = (1/s) Vₙ Vₙᵀ. The…
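This paper's result is that for n/s → y ∈ (0, 1], the smallest eigenvalue of Mₙ = (1/s) Vₙ Vₙᵀ converges almost surely to (1 − √y)². A quick numerical sketch (the sample sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, s = 500, 2000                      # aspect ratio y = n/s = 0.25
V = rng.standard_normal((n, s))
M = (V @ V.T) / s                     # Wishart-type sample covariance matrix
lam_min = np.linalg.eigvalsh(M)[0]    # eigvalsh returns eigenvalues ascending
limit = (1 - np.sqrt(n / s)) ** 2     # almost-sure limit, = 0.25 here
```

At this size the empirical smallest eigenvalue already sits within a few percent of the limit, which is what makes such matrices useful as well-conditioned random coding matrices in the main paper's setting.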

Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization

- Computer Science · Proceedings of the National Academy of Sciences of the United States of America
- 2003

This article obtains parallel results in a more general setting, where the dictionary D can arise from two or several bases, frames, or even less structured systems, and sketches three applications: separating linear features from planar ones in 3D data, noncooperative multiuser encoding, and identification of overcomplete independent component models.