Corpus ID: 245704309

Understanding Entropy Coding With Asymmetric Numeral Systems (ANS): a Statistician's Perspective

@article{Bamler2022UnderstandingEC,
  title={Understanding Entropy Coding With Asymmetric Numeral Systems (ANS): a Statistician's Perspective},
  author={Robert Bamler},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.01741}
}
Entropy coding is the backbone of data compression. Novel machine-learning-based compression methods often use a new entropy coder called Asymmetric Numeral Systems (ANS) [Duda et al., 2015], which provides very close to optimal bitrates and simplifies [Townsend et al., 2019] advanced compression techniques such as bits-back coding. However, researchers with a background in machine learning often struggle to understand how ANS works, which prevents them from exploiting its full versatility. This…
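The abstract breaks off before showing the mechanism itself, so the following is a minimal sketch of a streaming range-variant ANS (rANS) coder, for orientation only. The constants (PRECISION, RANS_L, STREAM_BITS) and the lookup table symbol_of_slot are illustrative choices, not taken from the paper.

    # Minimal streaming rANS sketch (illustrative, not the paper's implementation).
    # Symbol s is assumed to have a quantized frequency freq[s] out of 2**PRECISION,
    # with cumulative frequency cum[s] = sum of freq over all preceding symbols.

    PRECISION = 12                # probabilities quantized to multiples of 2**-12
    MASK = (1 << PRECISION) - 1
    RANS_L = 1 << 23              # lower bound of the normalized state interval
    STREAM_BITS = 8               # renormalize one byte at a time

    def encode(symbols, freq, cum):
        state = RANS_L
        stream = []                          # emitted bytes, used as a stack
        for s in reversed(symbols):          # ANS encodes in reverse order
            # Renormalize so the state update below stays in the valid interval.
            while state >= ((RANS_L >> PRECISION) << STREAM_BITS) * freq[s]:
                stream.append(state & 0xFF)
                state >>= STREAM_BITS
            # Core ANS update: "push" symbol s onto the state.
            state = (state // freq[s] << PRECISION) + state % freq[s] + cum[s]
        return state, stream

    def decode(state, stream, freq, cum, n, symbol_of_slot):
        symbols = []
        for _ in range(n):
            slot = state & MASK              # the slot identifies the symbol
            s = symbol_of_slot[slot]
            # Invert the encoder's update: "pop" symbol s off the state.
            state = freq[s] * (state >> PRECISION) + slot - cum[s]
            while state < RANS_L and stream:     # refill from the byte stack
                state = (state << STREAM_BITS) | stream.pop()
            symbols.append(s)
        return symbols

A round trip with a toy model, P(a) = 1/2 and P(b) = P(c) = 1/4, quantized to 12 bits:

    freq = {'a': 2048, 'b': 1024, 'c': 1024}
    cum = {'a': 0, 'b': 2048, 'c': 3072}
    symbol_of_slot = ['a'] * 2048 + ['b'] * 1024 + ['c'] * 1024
    state, stream = encode(list('abacab'), freq, cum)
    assert decode(state, stream, freq, cum, 6, symbol_of_slot) == list('abacab')

The stack discipline (encode in reverse, decode forward) is what makes ANS mesh so naturally with bits-back coding.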

Citations

An Introduction to Neural Data Compression

TLDR
This introduction hopes to fill in the necessary background by reviewing basic coding topics such as entropy coding and rate-distortion theory and related machine-learning ideas such as bits-back coding and perceptual metrics, and by providing a guide through representative works in the literature so far.

References

Showing 1–10 of 29 references

The use of asymmetric numeral systems as an accurate replacement for Huffman coding

TLDR
The proposed ANS-based coding can be interpreted as adding fractional bits to a Huffman coder, combining the speed of Huffman coding (HC) with the accuracy of arithmetic coding (AC), and it can be implemented with much lower computational complexity.
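To make the "fractional bits" point concrete, here is a standard textbook example (not from the cited paper). For a heavily skewed binary source with p = (0.9, 0.1), the entropy is

    H(p) = -0.9 \log_2 0.9 - 0.1 \log_2 0.1 \approx 0.469 \text{ bits/symbol},

but Huffman coding must assign an integer code length to each symbol and therefore cannot spend less than 1 bit/symbol here, whereas ANS (like arithmetic coding) approaches 0.469 bits/symbol.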

Bit-Swap: Recursive Bits-Back Coding for Lossless Compression with Hierarchical Latent Variables

TLDR
Bit-Swap is proposed, a new compression scheme that generalizes BB-ANS and achieves strictly better compression rates for hierarchical latent variable models with Markov chain structure, yielding lossless compression rates that are empirically superior to those of existing techniques.

Practical Lossless Compression with Latent Variables using Bits Back Coding

TLDR
Bits Back with ANS (BB-ANS) is presented, a scheme to perform lossless compression with latent variable models at a near-optimal rate; it is concluded that, with a sufficiently high-quality generative model, this scheme could be used to achieve substantial improvements in compression rate with acceptable running time.
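The rate accounting behind the "near-optimal rate" claim can be summarized in one line (standard notation, not taken from the TLDR: p is the latent-variable model and q the approximate posterior over latents z). Encoding x together with a latent z costs -\log_2 p(x, z) bits, but the decoder recovers the -\log_2 q(z \mid x) bits that were consumed to sample z, so the expected net rate is

    \mathbb{E}_{q(z \mid x)}\!\left[ -\log_2 p(x, z) + \log_2 q(z \mid x) \right] = -\mathrm{ELBO}(x) \;\ge\; -\log_2 p(x),

with equality exactly when q(z \mid x) matches the true posterior p(z \mid x).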

Full Resolution Image Compression with Recurrent Neural Networks

TLDR
This is the first neural network architecture able to outperform JPEG at image compression across most bitrates on the rate-distortion curve for the Kodak dataset, with and without the aid of entropy coding.

Insights from Generative Modeling for Neural Video Compression

TLDR
This work presents recent neural video codecs as instances of a generalized stochastic temporal autoregressive transform, proposes several architectures that yield state-of-the-art compression performance on full-resolution video, and discusses their trade-offs and ablations.

Joint Autoregressive and Hierarchical Priors for Learned Image Compression

TLDR
It is found that in terms of compression performance, autoregressive and hierarchical priors are complementary and can be combined to exploit the probabilistic structure in the latents better than all previous learned models.

Variational image compression with a scale hyperprior

TLDR
It is demonstrated that this model leads to state-of-the-art image compression when measuring visual quality using the popular MS-SSIM index, and yields rate-distortion performance surpassing published ANN-based methods when evaluated using a more traditional metric based on squared error (PSNR).

Improving Inference for Neural Image Compression

TLDR
This work identifies three approximation gaps that limit performance in the conventional approach to compression and proposes improvements to each based on ideas related to iterative inference, stochastic annealing for discrete optimization, and bits-back coding, resulting in the first application of bits-back coding to lossy compression.

Range encoding: an algorithm for removing redundancy from a digitised message

TLDR
Range encoding is an algorithm for removing redundancy from a message: a message composed of letters drawn from an alphabet is encoded into, and decoded from, a string of digits in base ten.

End-to-end Optimized Image Compression

TLDR
Across an independent set of test images, it is found that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods, and a dramatic improvement in visual quality is observed, supported by objective quality estimates using MS-SSIM.