Corpus ID: 204509383

Neural Image Compression via Non-Local Attention Optimization and Improved Context Modeling

@article{Chen2019NeuralIC,
  title={Neural Image Compression via Non-Local Attention Optimization and Improved Context Modeling},
  author={Tong Chen and Haojie Liu and Zhan Ma and Qiu Shen and Xun Cao and Yao Wang},
  journal={arXiv: Image and Video Processing},
  year={2019}
}
This paper proposes a novel Non-Local Attention optimization and Improved Context modeling-based image compression (NLAIC) algorithm, which is built on top of the deep neural network (DNN)-based variational auto-encoder (VAE) structure. Our NLAIC 1) embeds non-local network operations as non-linear transforms in the encoders and decoders for both the image and the latent representation probability information (known as hyperprior) to capture both local and global correlations, 2) applies…
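
A minimal PyTorch sketch of a generic non-local block of the kind NLAIC embeds is shown below; the class name, channel reduction, and residual wiring are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    """Generic embedded-Gaussian non-local block: every position attends to
    every other, capturing the global correlations that plain convolutions
    miss. Channel reduction and residual form are illustrative assumptions."""
    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = channels // reduction
        self.theta = nn.Conv2d(channels, inter, 1)  # query projection
        self.phi = nn.Conv2d(channels, inter, 1)    # key projection
        self.g = nn.Conv2d(channels, inter, 1)      # value projection
        self.out = nn.Conv2d(inter, channels, 1)    # restore channel count

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (b, hw, c')
        k = self.phi(x).flatten(2)                    # (b, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)      # (b, hw, c')
        attn = F.softmax(q @ k, dim=-1)               # pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                        # residual connection
```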

GOLLIC: Learning Global Context beyond Patches for Lossless High-Resolution Image Compression

A hierarchical latent variable model with a global context is proposed to capture the long-term dependencies of high-resolution images, improving compression ratio compared to engineered codecs and deep learning models on three benchmark high-resolution image datasets.

A Unified End-to-End Framework for Efficient Deep Image Compression

Experimental results demonstrate that the proposed approach outperforms the current state-of-the-art image compression methods and is more than 150 times faster in terms of decoding speed when compared with Minnen's method.

Learned Block-Based Hybrid Image Compression

This paper introduces explicit intra prediction into a learned image compression framework to exploit the relations among adjacent blocks, and proposes a contextual prediction module (CPM) that better captures long-range correlations by using strip pooling to extract the most relevant information from the neighboring latent space, thus achieving effective information prediction.
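
Strip pooling, the long-range primitive the CPM relies on, averages features over entire rows and columns so each position can see far-away context. A minimal sketch follows; the layer shapes and sigmoid gating are assumptions, and the actual CPM wiring in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StripPooling(nn.Module):
    """Minimal strip-pooling sketch (after Hou et al., 2020): each position
    aggregates context from its entire row and its entire column. Layer
    shapes and the gating form are assumptions for this sketch."""
    def __init__(self, channels):
        super().__init__()
        self.conv_h = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))
        self.conv_w = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Pool over the width: one value per row -> (b, c, h, 1).
        row = self.conv_h(x.mean(dim=3, keepdim=True)).expand(b, c, h, w)
        # Pool over the height: one value per column -> (b, c, 1, w).
        col = self.conv_w(x.mean(dim=2, keepdim=True)).expand(b, c, h, w)
        # Fuse row/column context and gate the input features.
        return x * torch.sigmoid(self.fuse(F.relu(row + col)))
```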

End-to-End Image Compression via Attention-Guided Information-Preserving Module

This work designs an information-preserving compression framework using the attention mechanism, where the dual-branch architecture is utilized to prevent changes in data distribution and a cross-channel progressive enhancement network is designed by taking advantage of the relations among different channels.

Causal Contextual Prediction for Learned Image Compression

A causal context model is proposed that separates the latents across channels and makes use of channel-wise relationships to generate highly informative adjacent contexts, along with a causal global prediction model that finds global reference points for accurate prediction of undecoded points.
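
A hedged sketch of the channel-wise part of such a causal context model: the latents are split into channel groups, and each group's entropy parameters are predicted from the groups decoded before it. Group count and layer sizes are assumptions, and the paper's global prediction model is not shown.

```python
import torch
import torch.nn as nn

class ChannelwiseContext(nn.Module):
    """Illustrative channel-wise causal context model: latent channels are
    split into groups, and the entropy parameters (mean, scale) of each
    group are predicted from the already-decoded earlier groups. Group
    count and layer sizes are assumptions for this sketch."""
    def __init__(self, channels, groups=4):
        super().__init__()
        self.groups = groups
        g = channels // groups
        # Unconditional parameters for the first group (nothing decoded yet).
        self.prior0 = nn.Parameter(torch.zeros(1, 2 * g, 1, 1))
        # One small predictor per later group, fed all earlier groups.
        self.nets = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(i * g, 2 * g, 3, padding=1), nn.ReLU(),
                nn.Conv2d(2 * g, 2 * g, 1),
            )
            for i in range(1, groups)
        )

    def forward(self, y_hat):
        b, _, h, w = y_hat.shape
        chunks = y_hat.chunk(self.groups, dim=1)
        params = [self.prior0.expand(b, -1, h, w)]
        for i, net in enumerate(self.nets, start=1):
            context = torch.cat(chunks[:i], dim=1)  # decoded groups so far
            params.append(net(context))
        # Per-group (mu, sigma) pairs for the conditional entropy model.
        return [p.chunk(2, dim=1) for p in params]
```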

Leveraging progressive model and overfitting for efficient learned image compression

A powerful and flexible LIC framework with a multi-scale progressive (MSP) probability model and a latent representation overfitting (LOF) technique is introduced, yielding a more than 20-fold speedup when decoding 2K images.

A Cross Channel Context Model for Latents in Deep Image Compression

This paper presents a cross-channel context model for latents in deep image compression that is combined with the joint autoregressive and hierarchical prior entropy model, and achieves BD-rate reductions when optimized for the MS-SSIM metric.

End-to-End Learning for Video Frame Compression with Self-Attention

This paper proposes an end-to-end learned system for compressing video frames that learns deep embeddings of frames and encodes their difference in latent space instead of relying on pixel-space motion.

Object-Based Image Coding: A Learning-Driven Revisit

This work proposes element-wise masking and compression, devising an object segmentation network for image-layer decomposition and parallel convolution-based neural image compression networks to process the masked foreground objects and the background scene separately.

Improved Deep Image Compression with Joint Optimization of Cross Channel Context Model And Generalized Loop Filter

The proposed cross-channel context model and generalized loop filter (CCCMGLF) are integrated into the deep image compression framework and jointly optimized to improve coding performance.

References


Practical Stacked Non-local Attention Modules for Image Compression

This paper uses a non-local module to effectively capture global correlations that traditional convolutional neural networks (CNNs) cannot, and jointly takes the hyperpriors and autoregressive priors for conditional probability estimation.

Conditional Probability Models for Deep Image Compression

This paper proposes a new technique to navigate the rate-distortion trade-off for an image compression auto-encoder by using a context model: a 3D-CNN which learns a conditional probability model of the latent distribution of the auto-encoder.

Learned Scalable Image Compression with Bidirectional Context Disentanglement Network

A learned scalable/progressive image compression scheme based on deep neural networks (DNN), named Bidirectional Context Disentanglement Network (BCD-Net), which outperforms the state-of-the-art DNN-based scalable image compression methods in both PSNR and MS-SSIM metrics.

Variational image compression with a scale hyperprior

It is demonstrated that this model leads to state-of-the-art image compression when measuring visual quality using the popular MS-SSIM index, and yields rate-distortion performance surpassing published ANN-based methods when evaluated using a more traditional metric based on squared error (PSNR).
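
Hyperprior-style codecs such as this one are trained on a rate-distortion Lagrangian, L = R + λD. A minimal sketch of that objective, with assumed argument names and an arbitrary placeholder λ:

```python
import math
import torch
import torch.nn.functional as F

def rate_distortion_loss(x, x_hat, likelihoods, lam=0.01):
    """Standard rate-distortion objective for hyperprior-style codecs:
    L = R + lambda * D. `likelihoods` holds the probability of each
    quantized latent (and hyper-latent) element under the learned entropy
    model; `lam` trades bitrate against distortion and its value here is
    an arbitrary placeholder."""
    num_pixels = x.size(0) * x.size(2) * x.size(3)
    # Rate: total self-information of the latents in bits, per pixel.
    bpp = -likelihoods.log().sum() / (math.log(2) * num_pixels)
    # Distortion: MSE in pixel space (a PSNR-oriented training choice).
    mse = F.mse_loss(x_hat, x)
    return bpp + lam * mse
```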

Layered Image Compression Using Scalable Auto-Encoder

A novel convolutional neural network (CNN)-based image compression framework via a scalable auto-encoder (SAE) that has rate-distortion performance in the low-to-medium rate range similar to that of the state-of-the-art CNN-based image codec over a standard public image dataset.

Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks

A method for lossy image compression based on recurrent convolutional neural networks that outperforms BPG, WebP, JPEG2000, and JPEG as measured by MS-SSIM is proposed, and it is shown that training with a pixel-wise loss weighted by SSIM increases reconstruction quality according to multiple metrics.

Non-Local Recurrent Network for Image Restoration

A non-local recurrent network (NLRN) is proposed as the first attempt to incorporate non-local operations into a recurrent neural network (RNN) for image restoration, and achieves superior results to state-of-the-art methods with much fewer parameters.

Joint Autoregressive and Hierarchical Priors for Learned Image Compression

It is found that in terms of compression performance, autoregressive and hierarchical priors are complementary and can be combined to exploit the probabilistic structure in the latents better than all previous learned models.
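
The autoregressive component of such joint priors is commonly realized as a PixelCNN-style masked convolution over the quantized latents, so each position conditions only on already-decoded neighbors. A hedged sketch, with the sizes in the usage line assumed:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """PixelCNN-style masked convolution, a common building block of the
    spatial autoregressive context model in joint-prior codecs: each latent
    position may only see positions already decoded in raster-scan order.
    Mask type 'A' shown; kernel size is up to the caller."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        _, _, kh, kw = self.weight.shape
        mask = torch.zeros_like(self.weight.data)
        mask[:, :, : kh // 2, :] = 1        # all rows above the center
        mask[:, :, kh // 2, : kw // 2] = 1  # left of center, same row
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)


# Example: a 5x5 causal context over 192 latent channels (sizes assumed).
ctx = MaskedConv2d(192, 384, kernel_size=5, padding=2)
```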

Deep Image Compression via End-to-End Learning

We present a lossy image compression method based on deep convolutional neural networks (CNNs), which outperforms the existing BPG, WebP, JPEG2000 and JPEG as measured via multi-scale structural similarity (MS-SSIM).

Full Resolution Image Compression with Recurrent Neural Networks

This is the first neural network architecture that is able to outperform JPEG at image compression across most bitrates on the rate-distortion curve on the Kodak dataset images, with and without the aid of entropy coding.