# Simple Worst-Case Optimal Adaptive Prefix-Free Coding

```bibtex
@article{Gagie2022SimpleWO,
  title={Simple Worst-Case Optimal Adaptive Prefix-Free Coding},
  author={Travis Gagie},
  journal={2022 Data Compression Conference (DCC)},
  year={2022},
  pages={453-453}
}
```

• T. Gagie · Published 7 September 2021 · 2022 Data Compression Conference (DCC)
Suppose we want to store a string <tex>$S[1..n]$</tex> over an alphabet of size <tex>$\sigma$</tex> using adaptive prefix-free coding with fast encoding and decoding. If we are not too concerned about compression, we can process <tex>$S$</tex> in blocks of <tex>$\sigma$</tex> characters as follows: we encode or decode the first block with a Shannon code for the uniform distribution; to encode or decode the <tex>$i$</tex>th block, for <tex>$i > 1$</tex>, we build a Shannon code for the…
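The block scheme described above can be sketched in Python. This is a minimal illustration, not the paper's algorithm: `shannon_code` builds a classical Shannon code (a symbol with probability <tex>$p$</tex> gets the first <tex>$\lceil \log_2(1/p) \rceil$</tex> bits of the binary expansion of the cumulative probability of the more probable symbols), and `encode_blocks` is one plausible reading of the truncated description, assuming each later block is coded with the frequencies observed in earlier blocks, with uniform pseudo-counts so the first block gets the uniform code.

```python
from math import ceil, log2
from collections import Counter

def shannon_code(freqs):
    """Classical Shannon code: process symbols in decreasing order of
    probability; a symbol with probability p receives a codeword of
    length ceil(log2(1/p)) taken from the binary expansion of the
    cumulative probability of the symbols before it (prefix-free by
    the Kraft inequality)."""
    total = sum(freqs.values())
    code, cum = {}, 0.0
    for sym, f in sorted(freqs.items(), key=lambda kv: -kv[1]):
        p = f / total
        length = max(1, ceil(log2(1 / p)))
        # bit k of the binary expansion of cum, for k = 1..length
        code[sym] = ''.join(str(int(cum * 2 ** (k + 1)) % 2)
                            for k in range(length))
        cum += p
    return code

def encode_blocks(s, alphabet):
    """Hypothetical block-based adaptive encoder: the first block of
    sigma characters uses a Shannon code for the uniform distribution;
    each later block uses a Shannon code rebuilt from the counts of
    all previously seen blocks (one reading of the truncated abstract)."""
    sigma = len(alphabet)
    seen = Counter({a: 1 for a in alphabet})  # uniform pseudo-counts
    out = []
    for i in range(0, len(s), sigma):
        code = shannon_code(seen)          # code depends only on past blocks,
        block = s[i:i + sigma]             # so a decoder can rebuild it too
        out.append(''.join(code[c] for c in block))
        seen.update(block)
    return ''.join(out)
```

Because the code for each block is built only from earlier blocks, a decoder maintaining the same counts can reconstruct the same code before decoding each block.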

## References

• 2009
This paper presents the first algorithm for adaptive prefix coding that encodes and decodes each character in optimal worst-case time while producing an encoding whose length is also worst-case optimal.
Variations on a theme by Huffman
Four new results about Huffman codes are presented, along with a simple algorithm for adapting a Huffman code to slowly varying estimates of the source probabilities.
Bounding the compression loss of the FGK algorithm
• Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096), 1999
An amortized analysis proves that the good performance of FGK is explained by the fact that the total number of bits <tex>$D_t$</tex> transmitted by the FGK algorithm for a message with <tex>$t$</tex> symbols is bounded below by <tex>$S_t - n + 1$</tex>, where <tex>$S_t$</tex> is the number of bits required by the static Huffman method.
Design and analysis of dynamic Huffman codes
A new one-pass algorithm for constructing dynamic Huffman codes is introduced and analyzed, and it is shown that the number of bits used by the new algorithm to encode a message containing <tex>$t$</tex> letters is fewer than <tex>$t$</tex> bits more than that used by the conventional two-pass Huffman scheme, independent of the alphabet size.
Dynamic Trees with Almost-Optimal Access Cost
• ESA, 2018
This paper shows how to maintain an almost-optimal weighted binary search tree under access operations and insertions of new elements, where the approximation is an additive constant.
A Fast Algorithm for Adaptive Prefix Coding
• 2006 IEEE International Symposium on Information Theory, 2006
This is the first algorithm that adaptively encodes a text in <tex>$O(m)$</tex> time and achieves an almost optimal bound on the encoding length in the worst case.
Lower Bounds on the Redundancy of Huffman Codes With Known and Unknown Probabilities
• IEEE Access, 2019
A method is presented for obtaining tight lower bounds on the minimum redundancy achievable by a Huffman code when the probability distribution underlying an alphabet is only partially known and the occurrence probabilities of some symbols are unknown.
Variable-length binary encodings
• 1959
This paper gives a theoretical treatment of several properties that describe certain variable-length binary encodings of the sort that could be used for the storage or transmission of information.
Dynamic Shannon coding
• T. Gagie
• Data Compression Conference, 2004. Proceedings. DCC 2004
A Mathematical Theory of Communication
• 2006
It is proved that some positive data rate can be achieved with arbitrarily small error probability, and that there is an upper bound on the data rate beyond which no encoding scheme achieves sufficiently small error probability.