Rate Distortion Characteristic Modeling for Neural Image Compression

@inproceedings{Jia2022RateDC,
  title={Rate Distortion Characteristic Modeling for Neural Image Compression},
  author={Chuanmin Jia and Ziqing Ge and Shanshe Wang and Siwei Ma and Wen Gao},
  booktitle={2022 Data Compression Conference (DCC)},
  year={2022},
  pages={202-211}
}
End-to-end optimized neural image compression (NIC) has recently achieved superior lossy compression performance. In this paper, we consider the problem of rate-distortion (R-D) characteristic analysis and modeling for NIC. We formulate the essential mathematical functions that describe the R-D behavior of NIC using deep networks, so that arbitrary bit-rate points can be elegantly realized with a single trained network by leveraging such a model. We propose a plug-in module to learn… 
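The core idea of R-D characteristic modeling can be sketched as fitting a parametric function to a codec's (rate, distortion) operating points and then reading off arbitrary bit-rate points from the fitted curve. The exponential form D(R) = a·exp(−b·R) used below is a common parametric choice for R-D modeling, not necessarily the function the paper derives, and the sample points are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical (rate in bpp, MSE distortion) anchor points measured
# from a trained codec at a few operating points.
rates = np.array([0.1, 0.25, 0.5, 1.0, 2.0])
dists = np.array([120.0, 70.0, 35.0, 12.0, 3.0])

# Fit log D = log a - b*R (i.e. D(R) = a * exp(-b * R)) by least squares.
neg_b, log_a = np.polyfit(rates, np.log(dists), 1)
a, b = np.exp(log_a), -neg_b

def rd_model(rate):
    """Predicted distortion at an arbitrary bit-rate."""
    return a * np.exp(-b * rate)

# Query the curve at a rate that was never measured directly.
print(rd_model(0.75))
```

Once such a characteristic is fitted, a single network paired with the model can target any point on the curve instead of training one network per rate.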
2 Citations


Universal Efficient Variable-Rate Neural Image Compression

TLDR
Two universal modules named Energy-based Channel Gating and Bit-rate Modulator are proposed; they can be directly embedded into existing end-to-end image compression models, enabling a single model to output arbitrary bit-rates with reduced computation.

FPX-NIC: An FPGA-Accelerated 4K Ultra-High-Definition Neural Video Coding System

TLDR
This paper presents FPX-NIC, an FPGA-accelerated NIC framework designed for hardware encoding, which consists of a novel NIC scheme and an energy-efficient neural network (NN) deployment method that improves both processing speed and energy efficiency.

References

Showing 1-10 of 41 references

Learning Convolutional Networks for Content-Weighted Image Compression

TLDR
The bit rate of different parts of the image is adapted to local content and allocated under the guidance of a content-weighted importance map, so that the sum of the importance map can serve as a continuous alternative to discrete entropy estimation for controlling the compression rate.

Conditional Probability Models for Deep Image Compression

TLDR
This paper proposes a new technique to navigate the rate-distortion trade-off for an image compression auto-encoder by using a context model: a 3D-CNN which learns a conditional probability model of the latent distribution of the auto-encoder.

Variable Rate Deep Image Compression With a Conditional Autoencoder

TLDR
The proposed scheme provides a better rate-distortion trade-off than the traditional variable-rate image compression codecs such as JPEG2000 and BPG and shows comparable and sometimes better performance than the state-of-the-art learned image compression models that deploy multiple networks trained for varying rates.
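The variable-rate mechanism summarized above can be illustrated with a minimal sketch: one set of weights serves all rates because feature maps are modulated by per-lambda scale/shift parameters selected at inference time. The lookup table of (gamma, beta) pairs below is a hypothetical stand-in for learned conditioning parameters, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical learned modulation parameters for three lambda values
# (larger lambda = higher target rate in this toy setup).
conditioning = {
    0.01: (np.array([0.5, 0.8]), np.array([0.0, 0.1])),
    0.05: (np.array([1.0, 1.0]), np.array([0.0, 0.0])),
    0.10: (np.array([1.6, 1.3]), np.array([0.2, 0.0])),
}

def modulate(features, lam):
    # Channel-wise affine transform selected by the target-rate lambda;
    # the backbone weights themselves are shared across all rates.
    gamma, beta = conditioning[lam]
    return features * gamma + beta

feats = np.array([[1.0, 2.0], [3.0, 4.0]])  # toy (spatial, channel) features
low_rate = modulate(feats, 0.01)
high_rate = modulate(feats, 0.10)
```

The design choice is that only the small conditioning tables grow with the number of supported rates, while the expensive transform network is trained once.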

End-to-end Optimized Image Compression

TLDR
Across an independent set of test images, it is found that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods, and a dramatic improvement in visual quality is observed, supported by objective quality estimates using MS-SSIM.

Full Resolution Image Compression with Recurrent Neural Networks

TLDR
This is the first neural network architecture that is able to outperform JPEG at image compression across most bitrates on the rate-distortion curve on the Kodak dataset images, with and without the aid of entropy coding.

Learned Image Compression With Discretized Gaussian Mixture Likelihoods and Attention Modules

TLDR
This paper proposes to use discretized Gaussian Mixture Likelihoods to parameterize the distributions of latent codes, which can achieve a more accurate and flexible entropy model and achieves a state-of-the-art performance against existing learned compression methods.
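The discretized-likelihood idea in this reference admits a compact sketch: a quantized latent value's probability mass is the mixture CDF evaluated over the unit interval around it, and its rate cost is the negative log of that mass. The mixture weights, means, and scales below are illustrative placeholders, not learned parameters.

```python
import math

def gaussian_cdf(x, mu, sigma):
    # Standard Gaussian CDF via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def discretized_gmm_likelihood(y, weights, means, scales):
    # P(y) = sum_k w_k * (CDF_k(y + 0.5) - CDF_k(y - 0.5))
    return sum(
        w * (gaussian_cdf(y + 0.5, mu, s) - gaussian_cdf(y - 0.5, mu, s))
        for w, mu, s in zip(weights, means, scales)
    )

# Three-component mixture; the entropy coder would spend about
# -log2(P(y)) bits on this symbol.
p = discretized_gmm_likelihood(0.0, [0.5, 0.3, 0.2], [0.0, 1.0, -2.0], [1.0, 0.5, 2.0])
bits = -math.log2(p)
```

Evaluating the CDF over unit intervals keeps the model differentiable in its parameters while assigning valid probability mass to every integer symbol.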

Learning End-to-End Lossy Image Compression: A Benchmark

TLDR
By introducing a coarse-to-fine hyperprior model for entropy estimation and signal reconstruction, this paper achieves improved rate-distortion performance, especially on high-resolution images, and provides an opportunity to take a further step towards higher-efficiency image compression.

Deep Generative Models for Distribution-Preserving Lossy Compression

TLDR
This work proposes and studies the problem of distribution-preserving lossy compression to optimize the rate-distortion tradeoff under the constraint that the reconstructed samples follow the distribution of the training data, and recovers both ends of the spectrum.

End-to-End Optimized Versatile Image Compression With Wavelet-Like Transform

TLDR
iWave++ is proposed as a new end-to-end optimized image compression scheme, in which iWave, a trained wavelet-like transform, converts images into coefficients without any information loss, and a single model supports both lossless and lossy compression.

Variational image compression with a scale hyperprior

TLDR
It is demonstrated that this model leads to state-of-the-art image compression when measuring visual quality using the popular MS-SSIM index, and yields rate-distortion performance surpassing published ANN-based methods when evaluated using a more traditional metric based on squared error (PSNR).