JPEG XL next-generation image compression architecture and coding tools

@inproceedings{Alakuijala2019JPEGXN,
  title={JPEG XL next-generation image compression architecture and coding tools},
  author={Jyrki Alakuijala and Ruud van Asseldonk and Sami Boukortt and Martin Bruse and Iulia M. Comsa and Moritz Firsching and Thomas Fischbacher and Evgenii Kliuchnikov and Sebastian Gomez and Robert Obryk and Krzysztof Potempa and Alexander Rhatushnyak and Jon Sneyers and Zoltan Szabadka and Lode Vandevenne and Luca Versari and Jan Wassenberg},
  booktitle={Optical Engineering + Applications},
  year={2019}
}
An update on the JPEG XL standardization effort: JPEG XL is a practical approach focused on scalable web distribution and efficient compression of high-quality images. It will provide various benefits compared to existing image formats: significantly smaller size at equivalent subjective quality; fast, parallelizable decoding and encoding configurations; features such as progressive, lossless, animation, and reversible transcoding of existing JPEG; support for high-quality applications… 

Benchmarking JPEG XL image compression

JPEG XL was designed to benefit from multicore and SIMD processing; it actually decodes faster than JPEG, and the resulting decoding speeds on ARM and x86 CPUs are reported.
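
A rough way to reproduce this kind of measurement is to time the reference decoder at different thread counts. The sketch below shells out to the djxl tool from libjxl; the binary name and the --num_threads flag are assumptions about current libjxl tooling (verify with djxl --help), and sample.jxl is a placeholder input.

```python
import subprocess
import time

def time_decode(jxl_path: str, out_path: str, threads: int) -> float:
    """Time one JPEG XL decode via the djxl CLI from libjxl.

    Assumes a local `djxl` binary is on PATH; `--num_threads` is
    believed to set the worker-thread count in current libjxl
    tools, but verify against `djxl --help`.
    """
    start = time.perf_counter()
    subprocess.run(
        ["djxl", jxl_path, out_path, f"--num_threads={threads}"],
        check=True,
        capture_output=True,
    )
    return time.perf_counter() - start

if __name__ == "__main__":
    # sample.jxl is a placeholder input; compare thread scaling.
    for n in (1, 2, 4, 8):
        print(f"{n} threads: {time_decode('sample.jxl', 'out.png', n):.3f} s")
```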

Practical Learned Lossless JPEG Recompression with Multi-Level Cross-Channel Entropy Model in the DCT Domain

  • Lina Guo, Xinjie Shi, Yan Wang
  • Computer Science
    2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2022
This work proposes a deep learning based JPEG recompression method that operates in the DCT domain, together with a Multi-Level Cross-Channel Entropy Model that compresses the most informative Y component first, achieving state-of-the-art performance.

Picture Quality of 360° Images Compressed by Emerging Compression Algorithms

This paper provides an objective-based comparison of emerging compression algorithms (JPEG XL, HEIC, AVIF), implemented in a graphic viewer and editor, for 360° images, showing high efficiency and applicability for omnidirectional images.

Comprehensive assessment of image compression algorithms

This paper analyzes why all attempts to replace JPEG have been limited so far, and discusses additional features other than compression efficiency that need to be present in any modern image coding algorithm to increase its chances of success.

Comparison of Lossless Image Formats

It turned out that FLIF is currently the most efficient format for lossless image compression, even though the FLIF developers have stopped its development in favor of JPEG XL.

Evolution of AVIF Encoder: Speed and Memory Optimizations

An overview of speed optimizations that were contributed to the libaom encoder is provided and methods to reduce the complexity of prediction mode and residual-transform search are described.

A Comparative Study on Lossless compression mode in WebP, Better Portable Graphics (BPG), and JPEG XL Image Compression Algorithms

The results indicate that JPEG XL has the best compression ratio (CR) on average compared to the other two algorithms for images with 8 bits per channel. Unlike BPG and WebP, JPEG XL also offered true lossless compression for HDR images with 16 bits per channel; the other algorithms did not support that bit depth and generated images with only 8 bits per channel.

Learned Lossless JPEG Transcoding via Joint Lossy and Residual Compression

This work proposes a learned lossless JPEG transcoding framework via Joint Lossy and Residual Compression, and is the first to utilize learned end-to-end lossy transform coding to reduce the redundancy of DCT coefficients in a compact representational domain.

Exploiting context dependence for image compression with upsampling

This article discusses simple, inexpensive, general techniques for image compression with upsampling, which saved on average 0.645 bits/difference for the final upscaling step on 48 standard 8-bit grayscale images, compared to assuming a fixed Laplace distribution.
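
To see where such savings come from, the hedged sketch below codes synthetic upsampling residuals with a Laplace model whose scale is estimated per context instead of globally. The two-context split ("smooth" vs. "edge") and the plug-in scale estimate are illustrative stand-ins, not the paper's actual predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

def dlaplace_bits(r, b):
    """Ideal code length (bits) of integer residuals r under a
    discretized two-sided geometric (Laplace) model with scale b."""
    q = np.exp(-1.0 / b)
    p = np.exp(-np.abs(r) / b) * (1 - q) / (1 + q)
    return -np.log2(p)

# Synthetic upsampling residuals: "smooth" contexts have small
# magnitude, "edge" contexts large. The split is the illustrative
# context rule, not the paper's actual predictor.
smooth = rng.laplace(0, 1.5, 40_000).round().astype(int)
edges  = rng.laplace(0, 8.0, 10_000).round().astype(int)
allr   = np.concatenate([smooth, edges])

b_fixed = max(np.mean(np.abs(allr)), 0.1)          # one global scale
cost_fixed = dlaplace_bits(allr, b_fixed).mean()

cost_ctx = np.mean(np.concatenate([                # per-context scales
    dlaplace_bits(smooth, max(np.mean(np.abs(smooth)), 0.1)),
    dlaplace_bits(edges,  max(np.mean(np.abs(edges)),  0.1)),
]))

print(f"fixed Laplace:    {cost_fixed:.3f} bits/difference")
print(f"context-adaptive: {cost_ctx:.3f} bits/difference")
```

Matching the scale to the local context shaves a fraction of a bit off every coded difference, which is the same flavor of gain the paper reports.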

Evaluating the Practicality of Learned Image Compression

Neural architecture search (NAS) is introduced to design more efficient networks with lower latency, and quantization is leveraged to accelerate the inference process.

References

Committee Draft of JPEG XL Image Coding System

The JPEG XL architecture is traditional block-transform coding with upgrades to each component, providing various benefits compared to existing image formats.

JPEG on STEROIDS: Common optimization techniques for JPEG image compression

  • T. Richter
  • Computer Science
    2016 IEEE International Conference on Image Processing (ICIP)
  • 2016
A short review of known JPEG optimization technologies, evaluated on the basis of the JPEG XT demo implementation; the achieved compression gains are put into perspective against more modern compression formats such as JPEG 2000.

DCT Coefficient Prediction for JPEG Image Coding

  • G. Lakhani
  • Computer Science
    2007 IEEE International Conference on Image Processing
  • 2007
This note explores the correlation between adjacent rows (or columns) at block boundaries to predict the DCT coefficients of the first row/column of DCT blocks, reducing the average JPEG DC residual for images compressed at the default quality level.
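
A much-simplified illustration of the boundary idea: the DC term of an orthonormal 8x8 DCT is just 8 times the block mean, so on smooth content it can be predicted from the neighboring block's boundary column and only a small residual coded. The predictor below is a hypothetical reduction of the paper's scheme to the DC coefficient only.

```python
import numpy as np
from scipy.fft import dctn

def block_dct(block):
    """Orthonormal 2-D DCT-2 of an 8x8 block (JPEG-style, up to scaling)."""
    return dctn(block, norm="ortho")

def predict_dc_from_left(left_block):
    """Hypothetical simplification of boundary-based prediction:
    estimate a block's mean (hence its DC term) from the rightmost
    pixel column of the left neighbor, assuming local smoothness."""
    boundary = left_block[:, -1].astype(float)
    return boundary.mean() * 8.0   # DC of orthonormal 8x8 DCT = 8 * mean

rng = np.random.default_rng(1)
row = np.cumsum(rng.normal(0, 2, 16)) + 128     # smooth 1-D ramp
img = np.tile(row, (8, 1))                      # two adjacent 8x8 blocks
left, right = img[:, :8], img[:, 8:]

dc_true = block_dct(right)[0, 0]
dc_pred = predict_dc_from_left(left)
print(f"true DC {dc_true:.1f}, predicted {dc_pred:.1f}, "
      f"residual {dc_true - dc_pred:.1f}")
```

Coding the small residual instead of the full DC value is what shrinks the average JPEG DC cost.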

The JPEG still picture compression standard

The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications.

Noise generation for compression algorithms

A physically and biologically inspired technique that learns a noise model at the encoding step of the compression algorithm and then generates the appropriate amount of additive noise at the decoding step. This can significantly increase the realism of the decompressed image at the cost of a few bytes of additional memory, regardless of the original image size.
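
A toy version of the mechanism, with a single Gaussian sigma standing in for the paper's learned noise model: the encoder estimates the noise level it is about to smooth away, transmits it in a few bytes, and the decoder re-synthesizes that much grain. The difference-based estimator is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def estimate_noise_sigma(img):
    """Crude encoder-side noise estimate from horizontal first
    differences (adjacent-pixel differences of i.i.d. noise have
    variance 2*sigma^2); a stand-in for the paper's learned model."""
    d = np.diff(img.astype(float), axis=1)
    return float(np.std(d) / np.sqrt(2.0))

def resynthesize(decoded, sigma):
    """Decoder side: add back the transmitted amount of noise."""
    return decoded + rng.normal(0.0, sigma, decoded.shape)

noisy = 128 + rng.normal(0, 12, (64, 64))     # grainy source image
sigma = estimate_noise_sigma(noisy)            # a few bytes to transmit
denoised = np.full_like(noisy, 128.0)          # what a codec would keep
print(f"estimated sigma {sigma:.1f}; regrained std "
      f"{np.std(resynthesize(denoised, sigma) - 128):.1f}")
```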

A low multiplicative complexity fast recursive DCT-2 algorithm

A fast Discrete Cosine Transform (DCT) algorithm is introduced that can be of particular interest in image processing; it is based on algebraic signal processing (ASP) theory.
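
For reference, the transform such algorithms compute is the DCT-2, here in its common unnormalized form:

```latex
% DCT-2 of x_0, \dots, x_{N-1} in its common unnormalized form.
% Fast algorithms cut the naive O(N^2) cost, chiefly by reducing
% the number of multiplications.
X_k = \sum_{n=0}^{N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n + \tfrac{1}{2}\right) k\right],
\qquad k = 0, 1, \dots, N-1.
```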

The use of asymmetric numeral systems as an accurate replacement for Huffman coding

The proposed ANS-based coding can be interpreted as adding fractional bits to a Huffman coder, combining the speed of HC with the accuracy offered by AC, and can be implemented with much lower computational complexity.
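
The core of range ANS fits in a few lines. Below is a minimal, non-streaming rANS sketch that keeps the whole state in one Python big integer; production coders renormalize the state into machine words and emit bytes as they go, which this sketch omits.

```python
def rans_encode(symbols, freq, cum, total):
    """Encode symbols into one big integer state (last symbol first,
    so that decoding emits them in the original order)."""
    x = total  # any starting state >= 1 works; we decode len() symbols
    for s in reversed(symbols):
        x = (x // freq[s]) * total + cum[s] + (x % freq[s])
    return x

def rans_decode(x, n, freq, cum, total):
    """Recover n symbols from the integer state."""
    out = []
    for _ in range(n):
        slot = x % total
        # Find the symbol whose cumulative range contains the slot.
        s = next(s for s in freq if cum[s] <= slot < cum[s] + freq[s])
        out.append(s)
        x = freq[s] * (x // total) + slot - cum[s]
    return out

# Skewed frequencies show ANS's advantage over Huffman: symbol 'a'
# ideally costs -log2(6/8) ~= 0.415 bits, below Huffman's 1-bit floor.
freq = {"a": 6, "b": 1, "c": 1}
cum, total = {"a": 0, "b": 6, "c": 7}, 8

msg = list("aaabacaa")
state = rans_encode(msg, freq, cum, total)
print(f"state needs {state.bit_length()} bits")
assert rans_decode(state, len(msg), freq, cum, total) == msg
```

Because each step scales the state by total/freq[s], frequent symbols cost fractionally less than one bit, which is exactly the accuracy Huffman coding gives up.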

Arithmetic Coding

The earlier introduced arithmetic coding idea has been generalized to a very broad and flexible coding technique which includes virtually all known variable-rate noiseless coding techniques as special cases.

A non-local algorithm for image denoising

  • A. Buades, B. Coll, J. Morel
  • Computer Science
    2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)
  • 2005
A new measure, the method noise, is proposed to evaluate and compare the performance of digital image denoising methods, and a new algorithm, the non-local means (NL-means), based on a non-local averaging of all pixels in the image, is proposed.
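
The NL-means update itself is a single weighted average: each pixel is replaced by a mean over all pixels, weighted by the similarity of their surrounding patches. The naive O(n^2) numpy sketch below is practical only for small grayscale images; real implementations restrict the comparison to a search window.

```python
import numpy as np

def nl_means(img, patch=3, h=40.0):
    """Naive non-local means for a small grayscale float image.

    Every pixel becomes a weighted average of all pixels, with
    weights exp(-d2/h^2) from mean squared patch differences
    (a sketch of Buades et al.; no search window, no speedups)."""
    r = patch // 2
    pad = np.pad(img, r, mode="reflect")
    # Collect the patch around every pixel, flattened to a vector.
    patches = np.array([
        pad[i:i + patch, j:j + patch].ravel()
        for i in range(img.shape[0])
        for j in range(img.shape[1])
    ])
    # Pairwise mean squared patch distances -> similarity weights.
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).mean(axis=2)
    w = np.exp(-d2 / (h * h))
    w /= w.sum(axis=1, keepdims=True)
    return (w @ img.ravel()).reshape(img.shape)

rng = np.random.default_rng(2)
clean = np.tile(np.linspace(0, 255, 16), (16, 1))   # smooth gradient
noisy = clean + rng.normal(0, 20, clean.shape)
print("noise std before:", round(np.std(noisy - clean), 1),
      "after:", round(np.std(nl_means(noisy) - clean), 1))
```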

A method for the construction of minimum-redundancy codes

  • D. Huffman
  • Computer Science, Business
    Proceedings of the IRE
  • 1952
A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
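
The construction is a repeated merge of the two least-frequent subtrees, easily expressed with a heap. A minimal sketch with an illustrative frequency table:

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Build a minimum-redundancy (Huffman) code from symbol counts
    by repeatedly merging the two least-frequent subtrees."""
    tie = count()  # tie-breaker so heapq never compares code dicts
    heap = [(f, next(tie), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

freqs = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}
code = huffman_code(freqs)
total = sum(freqs.values())
avg = sum(freqs[s] * len(code[s]) for s in freqs) / total
for s in sorted(code):
    print(s, code[s])
print(f"average length: {avg:.3f} coding digits/message")
```

For these counts the average comes out to 2.24 coding digits per message, against a source entropy of about 2.22 bits; the gap is the integer-length penalty that the ANS entry above addresses.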