JPEG XL next-generation image compression architecture and coding tools

@inproceedings{Alakuijala2019JPEGXN,
  title={JPEG XL next-generation image compression architecture and coding tools},
  author={Jyrki Alakuijala and Ruud van Asseldonk and Sami Boukortt and Martin Bruse and Iulia M. Comsa and Moritz Firsching and Thomas Fischbacher and Evgenii Kliuchnikov and Sebastian Gomez and Robert Obryk and Krzysztof Potempa and Alexander Rhatushnyak and Jon Sneyers and Zoltan Szabadka and Lode Vandevenne and Luca Versari and Jan Wassenberg},
  booktitle={Optical Engineering + Applications},
  year={2019}
}
An update on the JPEG XL standardization effort: JPEG XL is a practical approach focused on scalable web distribution and efficient compression of high-quality images. It will provide various benefits compared to existing image formats: significantly smaller size at equivalent subjective quality; fast, parallelizable decoding and encoding configurations; features such as progressive, lossless, animation, and reversible transcoding of existing JPEG; support for high-quality applications…

Citations

Benchmarking JPEG XL image compression
JPEG XL was designed to benefit from multicore and SIMD, and actually decodes faster than JPEG; the resulting speeds on ARM and x86 CPUs are reported.
Comprehensive assessment of image compression algorithms
This paper analyzes why all attempts to replace JPEG have been limited so far, and discusses additional features other than compression efficiency that need to be present in any modern image coding algorithm to increase its chances of success.
Comparison of Lossless Image Formats
It turned out that FLIF is currently the most efficient format for lossless image compression, even though the FLIF developers have stopped its development in favor of JPEG XL.
Multi-Mode Intra Prediction for Learning-Based Image Compression
This paper develops a new intra-picture prediction scheme that combines two CNN-based prediction modes with all intra modes previously implemented in the High Efficiency Video Coding (HEVC) standard.
Security and Forensics Exploration of Learning-based Image Coding
Advances in media compression indicate significant potential to drive future media coding standards, e.g., the Joint Photographic Experts Group's learning-based image coding technologies (JPEG-AI) and …
Lossless Image Compression by Joint Prediction of Pixel and Context Using Duplex Neural Networks
This paper presents a new lossless image compression method based on learning pixel values and contexts through multilayer perceptrons (MLPs); it performs better than conventional non-learning algorithms, and also than recent learning-based compression methods, with practical computation time.
iFlow: Numerically Invertible Flows for Efficient Lossless Compression via a Uniform Coder
iFlow, a new method for efficient lossless compression using normalizing flows, is introduced; it achieves state-of-the-art compression ratios and is 5× faster than other high-performance schemes.
Lossless Coding of Light Fields based on 4D Minimum Rate Predictors
Common representations of light fields use four-dimensional data structures, where a given pixel is closely related not only to its spatial neighbours within the same view, but also to its angular …
Zuckerli: A New Compressed Representation for Graphs
It is shown that Zuckerli-compressed graphs are 10% to 29% smaller (more than 20% in most cases), with resource usage for decompression comparable to that of WebGraph.
Lossless Compression with Latent Variable Models
This work extends BB-ANS to hierarchical latent variable models, enabling state-of-the-art lossless compression of full-size colour images from the ImageNet dataset, and describes Craystack, a modular software framework developed for rapid prototyping of compression with deep generative models.

References

Showing 1-10 of 18 references
Committee Draft of JPEG XL Image Coding System
The JPEG XL architecture is traditional block-transform coding with upgrades to each component, providing various benefits compared to existing image formats.
JPEG on STEROIDS: Common optimization techniques for JPEG image compression
  • T. Richter
  • Computer Science
  • 2016 IEEE International Conference on Image Processing (ICIP)
  • 2016
A short review of the known optimization technologies for JPEG, evaluated on the basis of the JPEG XT demo implementation, which puts the compression gains into perspective against more modern compression formats such as JPEG 2000.
DCT Coefficient Prediction for JPEG Image Coding
  • G. Lakhani
  • Computer Science
  • 2007 IEEE International Conference on Image Processing
  • 2007
This note explores the correlation between adjacent rows (or columns) at block boundaries to predict the DCT coefficients of the first row/column of DCT blocks, reducing the average JPEG DC residual for images compressed at the default quality level.
The JPEG still picture compression standard
The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications.
Noise generation for compression algorithms
A physically and biologically inspired technique that learns a noise model at the encoding step of the compression algorithm and generates the appropriate amount of additive noise at the decoding step; it can significantly increase the realism of the decompressed image at the cost of a few bytes of additional memory, regardless of the original image size.
A low multiplicative complexity fast recursive DCT-2 algorithm
A fast Discrete Cosine Transform (DCT) algorithm is introduced that can be of particular interest in image processing; it is based on algebraic signal processing theory (ASP).
The use of asymmetric numeral systems as an accurate replacement for Huffman coding
The proposed ANS-based coding can be interpreted as adding fractional bits to a Huffman coder, combining the speed of HC with the accuracy offered by AC, and can be implemented with much less computational complexity.
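The ANS idea summarized above can be illustrated with a toy range-ANS (rANS) coder: a single big-integer state into which symbols are pushed and popped. This is an illustrative Python sketch (the names `FREQS`, `encode`, `decode` are ours, and real coders add state renormalization and bit I/O), not the paper's implementation:

```python
# Toy rANS coder: symbols have integer frequencies summing to M.
# Encoding pushes a symbol into one big-integer state x; decoding
# pops symbols back out in the opposite order.
FREQS = {"a": 4, "b": 2, "c": 2}
M = sum(FREQS.values())
CDF, acc = {}, 0
for s in FREQS:
    CDF[s] = acc
    acc += FREQS[s]

def encode(msg):
    x = 0
    for s in reversed(msg):  # encode in reverse so decoding reads forward
        x = (x // FREQS[s]) * M + CDF[s] + (x % FREQS[s])
    return x

def decode(x, n):
    out = []
    for _ in range(n):
        slot = x % M  # which symbol's CDF interval does the state fall in?
        s = next(t for t in FREQS if CDF[t] <= slot < CDF[t] + FREQS[t])
        out.append(s)
        x = FREQS[s] * (x // M) + slot - CDF[s]
    return "".join(out)

msg = "abacab"
assert decode(encode(msg), len(msg)) == msg
```

Because the state grows by roughly log2(M/freq) bits per symbol, frequent symbols cost fractional bits, which is exactly the accuracy advantage over whole-bit Huffman codes.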
Arithmetic Coding
The earlier introduced arithmetic coding idea has been generalized to a very broad and flexible coding technique which includes virtually all known variable-rate noiseless coding techniques as …
A non-local algorithm for image denoising
  • A. Buades, B. Coll, J. Morel
  • Mathematics, Computer Science
  • 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)
  • 2005
A new measure, the method noise, is proposed to evaluate and compare the performance of digital image denoising methods, and a new algorithm, non-local means (NL-means), based on a non-local averaging of all pixels in the image, is proposed.
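The non-local averaging idea can be sketched in one dimension: each sample is replaced by a weighted average of all samples, with weights determined by how similar their surrounding patches are. A toy NumPy version (the function name `nl_means_1d` and the simplified `patch`/`h` parameters are ours; the paper operates on 2-D images with Gaussian-weighted patch distances):

```python
import numpy as np

def nl_means_1d(signal, patch=3, h=0.5):
    """Denoise each sample as a weighted average of ALL samples,
    weighted by the similarity of the patches surrounding them."""
    n = len(signal)
    pad = patch // 2
    padded = np.pad(signal, pad, mode="reflect")
    # one patch (window of `patch` samples) centred on each position
    patches = np.stack([padded[i:i + patch] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = np.sum((patches - patches[i]) ** 2, axis=1)  # patch distances
        w = np.exp(-d2 / (h * h))  # similar patches get weight near 1
        out[i] = np.sum(w * signal) / np.sum(w)
    return out
```

The "method noise" of the paper is then simply `signal - nl_means_1d(signal)`; for a good denoiser it should look like pure noise.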
A method for the construction of minimum-redundancy codes
  • D. Huffman
  • Computer Science
  • Proceedings of the IRE
  • 1952
A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
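The minimum-redundancy construction is the familiar greedy merge: repeatedly combine the two least-frequent subtrees until one tree remains. A minimal Python sketch using a binary heap (the function name `huffman_code` is ours, not from the paper):

```python
import heapq

def huffman_code(freqs):
    """Return a prefix-free binary code minimizing average code length."""
    # heap entries are (weight, tiebreak, subtree); leaves are symbols
    heap = [(w, i, s) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # repeatedly merge the two least-frequent subtrees
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"  # single-symbol alphabet edge case
    walk(heap[0][2], "")
    return codes
```

For frequencies {a: 5, b: 2, c: 1, d: 1} the frequent symbol `a` receives a 1-bit code while the rare `c` and `d` receive 3-bit codes, which is what minimizes the average digits per message.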