Generalized Kraft Inequality and Arithmetic Coding

@article{Rissanen1976GeneralizedKI,
  title={Generalized Kraft Inequality and Arithmetic Coding},
  author={J. Rissanen},
  journal={IBM J. Res. Dev.},
  year={1976},
  volume={20},
  pages={198-203}
}
  • J. Rissanen
  • Published 1976
  • Mathematics, Computer Science
  • IBM J. Res. Dev.
Algorithms for encoding and decoding finite strings over a finite alphabet are described. The coding operations are arithmetic involving rational numbers $l_i$ as parameters such that $\sum_i 2^{-l_i} \le 2^{-\epsilon}$. This coding technique requires no blocking, and the per-symbol length of the encoded string approaches the associated entropy within $\epsilon$. The coding speed is comparable to that of conventional coding methods.
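The interval-narrowing mechanism behind this abstract can be sketched in a few lines. The Python below is only a toy illustration under an assumed three-symbol model: it uses exact rationals to show how each symbol shrinks the code interval and why the output length tracks the ideal $-\log_2 P(\text{message})$ to within a couple of bits. Rissanen's algorithm achieves this with fixed-precision arithmetic and no blocking, which this sketch does not attempt to reproduce.

```python
from fractions import Fraction
from math import ceil, log2

# Assumed toy model; the probabilities are illustrative, not from the paper.
PROBS = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}

def encode(message, probs=PROBS):
    """Encode `message` as the bits of a dyadic point inside its interval."""
    cum, total = {}, Fraction(0)        # cumulative mass of preceding symbols
    for s, p in probs.items():
        cum[s] = total
        total += p

    low, width = Fraction(0), Fraction(1)
    for s in message:
        low += width * cum[s]           # move to the sub-interval assigned to s
        width *= probs[s]               # shrink it by P(s)

    # Smallest k with 2^-k <= width/2 guarantees that ceil(low * 2^k) / 2^k
    # lands inside [low, low + width), so k bits identify the message.
    k = 0
    while Fraction(1, 2) ** k > width / 2:
        k += 1
    return format(ceil(low * (1 << k)), "b").zfill(k)

msg = "abacab"
ideal = sum(-log2(float(PROBS[s])) for s in msg)   # entropy-ideal length
print(msg, "->", encode(msg), f"(ideal {ideal:.0f} bits)")
```

Because the final interval width equals P(message), the codeword needs at most about $-\log_2 P(\text{message}) + 2$ bits, which is the per-symbol entropy claim of the abstract up to a vanishing overhead.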
Citations

Arithmetic coding into fixed-length codewords
TLDR
The idea here is to apply arithmetic coding piecewise, by cutting the process regularly, and the result consists of fixed-length sequences of bits, representing variable-length substrings of the source.
Fast and Space-Efficient Adaptive Arithmetic Coding
TLDR
An implementation of the method is suggested whose coding time is an order of magnitude less than that of known methods, achieved by using a data structure called an “imaginary sliding window”, which makes it possible to significantly reduce the memory size of the encoder and decoder.
A multiplication-free multialphabet arithmetic code
TLDR
A recursion for arithmetic codes used for data compression is described which requires no multiplication or division, even in the case of nonbinary alphabets, and is applicable in conjunction with stationary and nonstationary models alike.
Arithmetic stream coding using fixed precision registers
  • F. Rubin
  • Mathematics, Computer Science
  • IEEE Trans. Inf. Theory
  • 1979
TLDR
Algorithms are presented for encoding and decoding strings of characters as real binary fractions, using registers of fixed precision, and have storage requirements and computation time $O(n \log_2 N)$ for string length n and alphabet size N.
Arithmetic Coding
The earlier introduced arithmetic coding idea has been generalized to a very broad and flexible coding technique which includes virtually all known variable-rate noiseless coding techniques as special cases.
Efficient Decoding of Lexicographical Rank in Binary Combinatorial Coding
  • Can Özbey
  • 2020 5th International Conference on Computer Science and Engineering (UBMK)
  • 2020
TLDR
A method that reduces the decoding complexity of an entropy encoding technique, namely combinatorial coding, is presented in order to increase compression efficiency without suffering from intolerable decoding latency.
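As a baseline for what such decoders compute, the plain unranking loop over binomial coefficients is sketched below; this is the textbook procedure for combinatorial (enumerative) coding, not the reduced-complexity method of the paper.

```python
from math import comb

def unrank(rank, n, w):
    """Recover the length-n, weight-w binary string with the given
    0-based lexicographic rank (plain unranking, for illustration only)."""
    bits = []
    for i in range(n):
        zeros_first = comb(n - i - 1, w)   # strings that put a 0 at position i
        if rank < zeros_first:
            bits.append("0")
        else:
            bits.append("1")               # skip past all the 0-here strings
            rank -= zeros_first
            w -= 1
    return "".join(bits)

# Among the C(4, 2) = 6 weight-2 strings of length 4, rank 4 is "1010".
print(unrank(4, 4, 2))
```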
Introduction to Arithmetic Coding - Theory and Practice
entropy coding, compression, complexity
This introduction to arithmetic coding is divided into two parts. The first explains how and why arithmetic coding works. We start presenting it in very general terms …
Analysis of arithmetic coding for data compression
  • P. Howard, J. Vitter
  • Mathematics, Computer Science
  • [1991] Proceedings. Data Compression Conference
  • 1991
TLDR
The authors analyze the amount of compression possible when arithmetic coding is used for text compression in conjunction with various input models, and prove that adaptive codes are as good as decrementing semi-adaptive codes.
Chapter 4 – Arithmetic Coding
TLDR
This chapter looks at the basic ideas behind arithmetic coding, studies some of the properties of arithmetic codes, and describes an implementation of the method.

References

A method for the construction of minimum-redundancy codes
  • D. Huffman
  • Computer Science
  • Proceedings of the IRE
  • 1952
TLDR
A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
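For concreteness, the classic construction can be written with a binary heap of subtree weights; the frequency table below is an assumed example, not data from the paper.

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Build a minimum-redundancy (Huffman) prefix code from symbol weights."""
    tie = count()                       # tie-breaker so trees are never compared
    heap = [(w, next(tie), sym) for sym, w in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # merge the two lightest subtrees
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(tie), (left, right)))
    code = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node: recurse on children
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                # leaf: record the symbol's codeword
            code[node] = prefix or "0"
    walk(heap[0][2], "")
    return code

# Assumed toy weights.
print(huffman_code({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
```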
Universal codeword sets and representations of the integers
  • P. Elias
  • Mathematics, Computer Science
  • IEEE Trans. Inf. Theory
  • 1975
TLDR
An application is the construction of a uniformly universal sequence of codes for countable memoryless sources, in which the nth code has a ratio of average codeword length to source rate bounded by a function of n for all sources with positive rate.
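One of the simplest codes in this family is the Elias γ code, which prefixes the binary form of a positive integer with enough zeros to make it self-delimiting; the small sketch below shows the encoding and its inverse (the construction is standard, the function names are our own).

```python
def elias_gamma(n):
    """Elias gamma code: binary(n) preceded by len(binary(n)) - 1 zeros,
    making the codeword self-delimiting (about 2*log2(n) + 1 bits)."""
    if n < 1:
        raise ValueError("defined for positive integers only")
    binary = format(n, "b")              # leading bit is always 1
    return "0" * (len(binary) - 1) + binary

def elias_gamma_decode(code):
    """Inverse of elias_gamma for a single codeword."""
    k = 0
    while code[k] == "0":                # count the zero prefix
        k += 1
    return int(code[k:2 * k + 1], 2)     # the next k + 1 bits are binary(n)

for n in (1, 2, 5, 17):
    assert elias_gamma_decode(elias_gamma(n)) == n
    print(n, "->", elias_gamma(n))
```

The δ code from the same family encodes the length itself with γ, reducing the overhead for large n.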
An algorithm for source coding
  • J. Schalkwijk
  • Mathematics, Computer Science
  • IEEE Trans. Inf. Theory
  • 1972
TLDR
This work derives a simple algorithm for the ranking of binary sequences of length n and weight w and uses it for source encoding a memoryless binary source that generates 0's and 1's with probability p = 1 - q.
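The ranking itself reduces to sums of binomial coefficients; the function below illustrates that lexicographic-index idea (a generic formulation, not a transcription of Schalkwijk's algorithm). Sending the rank in about $\log_2 \binom{n}{w}$ bits is the $(\log |S|)/n$ compression rate discussed in Cover's entry just below.

```python
from math import comb

def rank(bits):
    """0-based lexicographic index of a binary string among all strings
    of the same length and weight (number of 1's)."""
    n, w = len(bits), bits.count("1")
    index = 0
    for i, b in enumerate(bits):
        if b == "1":
            index += comb(n - i - 1, w)   # strings with a 0 here come first
            w -= 1
    return index

# "1010" is preceded by 0011, 0101, 0110 and 1001 among the weight-2 strings.
print(rank("1010"))   # -> 4
```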
Enumerative source encoding
  • T. Cover
  • Computer Science
  • IEEE Trans. Inf. Theory
  • 1973
TLDR
This work provides an explicit scheme for calculating the index of any sequence in S according to its position in the lexicographic ordering of S, thus resulting in a data compression of $(\log |S|)/n$.
On the Number of Bits Required to Implement an Associative Memory
  • Computer Structures Group
  • 1972
Information Theory and Coding
  • McGraw-Hill Book Co., Inc., New York
  • 1963