Exploring Structural Sparsity in Neural Image Compression

Shan Yin, Fanyang Meng, Wen-Tao Tan, Chao Li, Youneng Bao, Yongsheng Liang, Wei Liu
Neural image compression has reached or surpassed traditional methods such as JPEG, BPG, and WebP. However, its sophisticated network structures with cascaded convolution layers impose a heavy computational burden for practical deployment. In this paper, we explore structural sparsity in neural image compression networks to obtain real-time acceleration without any specialized hardware design or algorithm. We propose a simple plug-in adaptive binary channel masking (ABCM) to judge the…
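The abstract is truncated, but the gist of the ABCM idea — score each channel, threshold the scores into a binary mask, and skip computation for masked-out channels — can be sketched as follows. The names and the mean-absolute-activation scoring rule are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of adaptive binary channel masking (ABCM):
# channels whose importance falls below a threshold are masked out,
# so subsequent convolutions can skip them.

def channel_importance(feature_maps):
    """Mean absolute activation per channel (one possible importance proxy)."""
    return [sum(abs(v) for v in ch) / len(ch) for ch in feature_maps]

def binary_channel_mask(feature_maps, threshold):
    """1 keeps a channel, 0 prunes it from subsequent computation."""
    return [1 if s >= threshold else 0 for s in channel_importance(feature_maps)]

def apply_mask(feature_maps, mask):
    """Zero out pruned channels; a real implementation would skip them entirely."""
    return [ch if m else [0.0] * len(ch) for ch, m in zip(feature_maps, mask)]

# Toy example: 3 channels of 4 activations each.
fmaps = [[0.9, -1.1, 0.8, 1.0],    # strong channel
         [0.01, 0.02, -0.01, 0.0], # near-zero channel
         [0.5, -0.4, 0.6, -0.5]]   # moderate channel
mask = binary_channel_mask(fmaps, threshold=0.1)
print(mask)  # [1, 0, 1]
```

In the paper's setting the mask would be learned jointly with the compression network; the fixed threshold here only illustrates the binary keep/prune decision.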

Computationally Efficient Neural Image Compression

This work applies automatic network optimization techniques to reduce the computational complexity of a popular neural image compression architecture, analyzes decoder complexity in terms of execution runtime, and explores the trade-offs among two distortion metrics, rate-distortion performance, and run-time performance to guide the design of more computationally efficient neural image compression.

Slimmable Compressive Autoencoders for Practical Neural Image Compression

This work proposes slimmable compressive autoencoders (SlimCAEs), where rate (R) and distortion (D) are jointly optimized for different capacities, and shows that a successful implementation of SlimCAEs requires suitable capacity-specific RD tradeoffs.

Learned Image Compression With Discretized Gaussian Mixture Likelihoods and Attention Modules

This paper proposes to use discretized Gaussian Mixture Likelihoods to parameterize the distributions of latent codes, yielding a more accurate and flexible entropy model that achieves state-of-the-art performance against existing learned compression methods.

Variational image compression with a scale hyperprior

It is demonstrated that this model leads to state-of-the-art image compression when measuring visual quality using the popular MS-SSIM index, and yields rate-distortion performance surpassing published ANN-based methods when evaluated using a more traditional metric based on squared error (PSNR).

Asymmetric Gained Deep Image Compression With Continuous Rate Adaptation

A continuously rate-adjustable learned image compression framework, the Asymmetric Gained Variational Autoencoder (AG-VAE), which utilizes a pair of gain units to achieve discrete rate adaptation in a single model with negligible additional computation, together with an asymmetric Gaussian entropy model for more accurate entropy estimation.

CBANet: Towards Complexity and Bitrate Adaptive Deep Image Compression using a Single Network

A new deep image compression framework called the Complexity and Bitrate Adaptive Network (CBANet), which aims to learn a single network supporting variable-bitrate coding under different computational complexity constraints, and which proposes a new multi-branch complexity adaptive module.

Channel-Level Variable Quantization Network for Deep Image Compression

A channel-level variable quantization network is proposed to dynamically allocate more bitrate to significant channels and withdraw bitrate from negligible channels; it achieves superior performance and produces much better visual reconstructions.

Learning Efficient Convolutional Networks through Network Slimming

The approach, called network slimming, takes wide and large networks as input models; during training, insignificant channels are automatically identified and then pruned, yielding thin and compact models with comparable accuracy.
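The channel-selection step of network slimming can be sketched as below, assuming the usual recipe: L1-regularize the batch-norm scale factors (gamma) during training, then prune the channels with the smallest |gamma|. The gamma values here are made up for illustration.

```python
# Sketch of network slimming's channel selection: after sparsity training,
# channels whose batch-norm scale factor gamma is smallest are pruned.

def prune_by_gamma(gammas, prune_ratio):
    """Return indices of channels kept after pruning the smallest |gamma|."""
    n_prune = int(len(gammas) * prune_ratio)
    order = sorted(range(len(gammas)), key=lambda i: abs(gammas[i]))
    pruned = set(order[:n_prune])
    return [i for i in range(len(gammas)) if i not in pruned]

# Eight BN scale factors after sparsity training; prune the smallest 50%.
gammas = [0.80, 0.02, 0.55, 0.01, 0.30, 0.03, 0.70, 0.05]
keep = prune_by_gamma(gammas, prune_ratio=0.5)
print(keep)  # [0, 2, 4, 6]
```

The kept indices would then define a thinner network that is fine-tuned to recover accuracy, per the slimming pipeline.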

CompressAI: a PyTorch library and evaluation platform for end-to-end compression research

CompressAI is presented, a platform that provides custom operations, layers, models, and tools to research, develop, and evaluate end-to-end image and video compression codecs, with a planned extension to the video compression domain.

Exploring Sparsity in Image Super-Resolution for Efficient Inference

A Sparse Mask SR (SMSR) network learns sparse masks to prune redundant computation and achieves state-of-the-art performance with 41%/33%/27% of FLOPs reduced for ×2/×3/×4 SR.
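The spatial side of SMSR's sparse-mask idea — mark "easy" locations whose computation can be skipped, so FLOPs scale with mask density — can be illustrated with a minimal sketch. The gradient-magnitude criterion below is an assumption for illustration; SMSR learns its masks end-to-end.

```python
# Rough sketch of a spatial sparsity mask: flat regions are 'easy' and can be
# skipped; only high-gradient positions receive full computation.

def spatial_mask(image, threshold):
    """Mark pixels whose horizontal gradient exceeds a threshold as 'hard' (1)."""
    h, w = len(image), len(image[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w):
            if abs(image[y][x] - image[y][x - 1]) > threshold:
                mask[y][x] = 1
    return mask

def flops_saved(mask):
    """Fraction of spatial positions whose computation could be skipped."""
    total = sum(len(row) for row in mask)
    active = sum(sum(row) for row in mask)
    return 1.0 - active / total

img = [[0, 0, 10, 10],
       [0, 0, 10, 10]]
m = spatial_mask(img, threshold=5)
print(flops_saved(m))  # 0.75: only the edge column is 'hard'
```

The reported 41%/33%/27% FLOP reductions correspond to how dense such masks end up being at each SR scale.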