# Universal Efficient Variable-Rate Neural Image Compression

```bibtex
@article{Yin2021UniversalEV,
  title={Universal Efficient Variable-Rate Neural Image Compression},
  author={Shan Yin and Chao Li and Youneng Bao and Yongshang Liang},
  journal={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2021},
  pages={2025-2029}
}
```
• Published 18 November 2021
• Computer Science
Recently, learning-based image compression has reached performance comparable to traditional image codecs (such as JPEG, BPG, and WebP). However, computational complexity and rate flexibility remain two major challenges for its practical deployment. To tackle these problems, this paper proposes two universal modules, named Energy-based Channel Gating (ECG) and Bit-rate Modulator (BM), which can be directly embedded into existing end-to-end image compression models. ECG uses dynamic pruning to…
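The abstract only names the two modules; the following minimal NumPy sketch illustrates the general ideas behind them (channel gating driven by feature energy, and per-channel gains that shift the rate-distortion operating point). The function names, the energy measure, the top-k selection rule, and the gain vector are assumptions for illustration, not the paper's actual design:

```python
import numpy as np

def energy_channel_gate(features, keep_ratio=0.5):
    """Zero out low-energy channels of a (C, H, W) feature map.

    Per-channel "energy" is taken here as the mean squared activation,
    and a fixed top-k rule stands in for the paper's learned gating
    criterion (the abstract does not specify it).
    """
    c = features.shape[0]
    energy = (features ** 2).mean(axis=(1, 2))   # per-channel energy
    k = max(1, int(round(keep_ratio * c)))       # number of channels to keep
    keep = np.argsort(energy)[-k:]               # top-k most energetic channels
    mask = np.zeros(c, dtype=features.dtype)
    mask[keep] = 1.0
    return features * mask[:, None, None]        # gate the feature map

def bitrate_modulate(latent, gains):
    """Scale a (C, H, W) latent by a per-channel gain vector.

    A stand-in for the Bit-rate Modulator idea: choosing a different
    gain vector at inference time moves the rate-distortion operating
    point without retraining the whole model.
    """
    return latent * np.asarray(gains)[:, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16)).astype(np.float32)
y = energy_channel_gate(x, keep_ratio=0.5)   # half of the channels are zeroed
z = bitrate_modulate(y, gains=np.full(8, 0.5, dtype=np.float32))
```

Gating reduces computation (pruned channels need not be processed downstream), while modulation gives variable-rate behavior from a single model; the paper's contribution is making both plug into existing end-to-end codecs.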
## 2 Citations


• 2022 IEEE International Conference on Image Processing (ICIP), 2022: This paper proposes a simple plug-in adaptive binary channel masking (ABCM) module to judge the importance of each convolution channel and introduce sparsity during training; results show that up to 7× computation reduction and 3× acceleration can be achieved with negligible performance drop.
• Frontiers in Signal Processing, 2022: This paper presents a learning-based image compression framework in which image denoising and compression are performed jointly, revealing considerable bitrate savings compared to a cascaded combination of a state-of-the-art codec and a state-of-the-art denoiser.
