Good Error-Correcting Codes Based on Very Sparse Matrices

@article{Mackay1999GoodEC,
  title={Good Error-Correcting Codes Based on Very Sparse Matrices},
  author={D. J. C. MacKay},
  journal={IEEE Trans. Inf. Theory},
  year={1999},
  volume={45},
  pages={399-431}
}
  • D. MacKay
  • Published 29 June 1997
  • Mathematics, Computer Science
  • IEEE Trans. Inf. Theory
We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay-Neal (1995)) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are "very good", in that sequences of codes exist which, when optimally decoded, achieve information rates up…
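The abstract notes that decoding both code families can be tackled with a practical sum-product algorithm. The following is a minimal illustrative sketch of sum-product (belief-propagation) decoding in the log-likelihood-ratio domain; the tiny parity-check matrix, the channel crossover probability, and all names below are invented for illustration and are not taken from the paper's constructions.

```python
# Minimal sketch: sum-product (belief-propagation) decoding of a binary
# linear code defined by a sparse parity-check matrix H, in the LLR domain.
# H, p, and the received word are illustrative only.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],   # toy 3 x 6 parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])

def sum_product_decode(H, llr_channel, max_iters=50):
    """Return a hard-decision estimate of the transmitted codeword."""
    m, n = H.shape
    edges = [(i, j) for i in range(m) for j in range(n) if H[i, j]]
    msg_vc = {e: llr_channel[e[1]] for e in edges}   # variable -> check
    msg_cv = {e: 0.0 for e in edges}                 # check -> variable

    for _ in range(max_iters):
        # Check-node update (tanh rule), excluding the target edge.
        for i, j in edges:
            prod = 1.0
            for jp in range(n):
                if H[i, jp] and jp != j:
                    prod *= np.tanh(msg_vc[(i, jp)] / 2.0)
            msg_cv[(i, j)] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))

        # Variable-node update, then a tentative hard decision.
        for i, j in edges:
            msg_vc[(i, j)] = llr_channel[j] + sum(
                msg_cv[(ip, j)] for ip in range(m) if H[ip, j] and ip != i)
        total = llr_channel + np.array(
            [sum(msg_cv[(i, j)] for i in range(m) if H[i, j]) for j in range(n)])
        x_hat = (total < 0).astype(int)
        if not np.any(H @ x_hat % 2):   # every parity check satisfied
            break
    return x_hat

# Example: all-zero codeword over a BSC with crossover probability p = 0.1,
# one bit flipped.  Positive LLR means "this bit is probably 0".
p = 0.1
received = np.array([0, 0, 1, 0, 0, 0])
llr = np.where(received == 0, 1.0, -1.0) * np.log((1 - p) / p)
print(sum_product_decode(H, llr))   # expected: [0 0 0 0 0 0]
```

Clipping the tanh product avoids infinities in arctanh; practical decoders typically bound LLR magnitudes for the same reason.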
Graph-based codes and iterative decoding
TLDR: A new technique, called the typical set bound, is developed for analyzing the asymptotic performance of code ensembles based on their weight enumerators; it is powerful enough to reproduce Shannon's noisy coding theorem for the class of binary-input symmetric channels.
ASYMPTOTIC AND FINITE-LENGTH OPTIMIZATION OF LDPC CODES
This thesis addresses the problem of information transmission over noisy channels. In 1993, Berrou, Glavieux and Thitimajshima discovered Turbo-codes. These codes made it possible to achieve a very…
Gallager Codes – Recent Results
TLDR: This paper reviews low-density parity-check codes, repeat-accumulate codes, and turbo codes, emphasising recent advances and describing the empirical power-laws obeyed by decoding times of sparse graph codes.
Reed-Muller Codes Achieve Capacity on Erasure Channels
TLDR: This work shows that symmetry alone implies near-optimal performance in any sequence of linear codes where the blocklengths are strictly increasing, the code rates converge, and the permutation group of each code is doubly transitive.
Design and performance of turbo Gallager codes
  • G. Colavolpe
  • Computer Science
  • IEEE Transactions on Communications
  • 2004
TLDR: With properly chosen component convolutional codes, a turbo code can be successfully decoded by means of the decoding algorithm used for LDPC codes, i.e., the belief-propagation algorithm working on the code's Tanner graph.
Product accumulate codes: a class of codes with near-capacity performance and low decoding complexity
TLDR: This work proposes PA codes as a class of prospective codes with good performance, low decoding complexity, regular structure, and flexible rate adaptivity for all rates above 1/2, and shows that these codes provide performance similar to turbo codes but with significantly less decoding complexity and with a lower error floor.
On the weight spectrum of good linear binary codes
TLDR: It is shown that a sequence of codes is good when transmitted over a memoryless binary-symmetric channel (BSC) or an additive white Gaussian noise (AWGN) channel if and only if the slope of its spectrum is finite everywhere and its minimum Hamming distance goes to infinity with no requirement on its rate growth.
Gallager codes for CDMA applications: generalizations, constructions and performance
TLDR: Simulation results show that, compared with short frame length turbo codes, Gallager codes achieve a 0.5 dB improvement in AWGN channels and an improvement of over 2 dB in fading channels in the signal-to-noise ratio required to achieve frame error rates of 10^-3.
New class of turbo-like codes with universally good performance and high-speed decoding
Modern turbo-like codes (TLCs), including concatenated convolutional codes and low density parity check (LDPC) codes, have been shown to approach the Shannon limit on the additive white Gaussian…

References

Showing 1-10 of 84 references
Good error-correcting codes based on very sparse matrices
  • D. MacKay
  • Computer Science
  • Proceedings of IEEE International Symposium on Information Theory
  • 1997
TLDR: It can be proved that, given an optimal decoder, Gallager's low density parity check codes asymptotically approach the Shannon limit.
Good codes can be produced by a few permutations
TLDR: It is shown that good codes, even those meeting the random coding bound, can be produced with relatively few (linear in the block length) permutations from a single codeword, which explains why good codes of a low complexity (such as those produced by) are hard to miss if selected at random.
Theory of Error-correcting Codes
The field of channel coding started with Claude Shannon's 1948 landmark paper. Fifty years of efforts and invention have finally produced coding schemes that closely approach Shannon's channel…
On the Design of Turbo Codes
In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the…
A recursive approach to low complexity codes
  • R. M. Tanner
  • Mathematics, Computer Science
  • IEEE Trans. Inf. Theory
  • 1981
TLDR: It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities.
Codes and Decoding on General Graphs
TLDR: It is shown that many iterative decoding algorithms are special cases of two generic algorithms, the min-sum and sum-product algorithms, which also include non-iterative algorithms such as Viterbi decoding.
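As a complement to the entry above, here is a hedged sketch of the min-sum check-node rule it refers to: the sum-product tanh/product update is replaced by a sign-and-minimum rule. The function name and the example messages are invented for illustration.

```python
# Min-sum approximation of the sum-product check-node update: each outgoing
# message takes the sign product and the minimum magnitude of the other
# incoming LLRs.  Names and values are illustrative only.
import numpy as np

def min_sum_check_update(incoming_llrs):
    """Check-to-variable messages computed from the other incoming LLRs."""
    incoming = np.asarray(incoming_llrs, dtype=float)
    out = np.empty_like(incoming)
    for k in range(len(incoming)):
        others = np.delete(incoming, k)   # leave out the target edge
        out[k] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

print(min_sum_check_update([2.0, -0.5, 1.5]))   # -> [-0.5  1.5 -0.5]
```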
Codes and iterative decoding on general graphs
TLDR: A general framework, based on ideas of Tanner, for the description of codes and iterative decoding ("turbo coding") is developed, which clarifies, in particular, which a priori probabilities are admissible and how they are properly dealt with.
Error-correcting capabilities of concatenated codes with MDS outer codes on memoryless channels with maximum-likelihood decoding
  • C. Thommesen
  • Mathematics, Computer Science
  • IEEE Trans. Inf. Theory
  • 1987
TLDR: It is proved that, asymptotically, the Gallager random coding theorem can be obtained for all rates by linear concatenated codes, and the expurgated coding theorem is likewise proved to be valid for all rates on regular channels.
We Can Think of Good Codes, and Even Decode Them
TLDR: It is shown that combining a maximum distance separable code (e.g., a Reed-Solomon one) with an almost arbitrary one-to-one mapping of its q-ary symbols into a 2-dimensional constellation is a satisfactory solution provided q is large enough.
Low-density parity check codes over GF(q)
TLDR: A significant improvement over the performance of the binary codes is found, including a rate 1/4 code with bit error probability < 10^-5 at Eb/N0 = 0.2 dB.
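For context on the operating point quoted above: under the common unit-energy BPSK/AWGN convention with code rate R, an Eb/N0 given in dB corresponds to a noise standard deviation sigma = 1/sqrt(2 · R · (Eb/N0)_linear). A small sketch of that conversion (the helper name is invented):

```python
import math

def awgn_sigma(rate, ebn0_db):
    """Noise standard deviation for unit-energy BPSK at the given code rate."""
    ebn0_linear = 10 ** (ebn0_db / 10.0)
    return 1.0 / math.sqrt(2.0 * rate * ebn0_linear)

print(awgn_sigma(0.25, 0.2))   # about 1.38 for a rate-1/4 code at 0.2 dB
```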