Online Meta Adaptation for Variable-Rate Learned Image Compression

@article{Jiang2022OnlineMA,
  title={Online Meta Adaptation for Variable-Rate Learned Image Compression},
  author={Wei Jiang and Wei Wang and Songnan Li and Shan Liu},
  journal={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year={2022},
  pages={497-505}
}
  • Wei Jiang, Wei Wang, Songnan Li, Shan Liu
  • Published 16 November 2021
  • Computer Science
  • 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
This work addresses two major issues of end-to-end learned image compression (LIC) based on deep neural networks: variable-rate learning, where separate networks are otherwise required to generate compressed images of varying quality, and the train-test mismatch between differentiable approximate quantization and true hard quantization. We introduce an online meta-learning (OML) setting for LIC, which combines ideas from meta-learning and online learning in the conditional variational auto-encoder… 
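
The abstract's core mechanism, per-image refinement of meta-learned conditional variables under the true rate-distortion objective, can be illustrated with a short PyTorch sketch. The codec interface (`init_cond`, `encode`, `decode`, `rate`) is a hypothetical stand-in, not the paper's actual API:

```python
import torch

# Sketch of the online adaptation idea: a small set of meta-learned
# conditional variables `cond` is refined per test image under the true
# rate-distortion objective while the shared codec weights stay frozen.
# `init_cond`, `encode`, `decode`, and `rate` are hypothetical interfaces.

def adapt_online(model, image, lam, steps=10, lr=1e-3):
    cond = model.init_cond(lam).clone().requires_grad_(True)
    opt = torch.optim.Adam([cond], lr=lr)
    for _ in range(steps):
        latent = model.encode(image, cond)
        recon = model.decode(latent, cond)
        loss = model.rate(latent, cond) + lam * torch.mean((recon - image) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return cond.detach()  # adapted conditional variables for this image
```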
1 Citation

A Secure and Efficient Multi-Object Grasping Detection Approach for Robotic Arms

This approach realizes arbitrary grasp planning for a robotic arm while taking grasping efficiency and information security into account; an encoder and decoder trained with a GAN encrypt the images while compressing them, which protects privacy.

References

Showing 1-10 of 35 references

Conditional Probability Models for Deep Image Compression

This paper proposes a new technique to navigate the rate-distortion trade-off for an image compression auto-encoder by using a context model: a 3D-CNN which learns a conditional probability model of the latent distribution of the auto-encoder.
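
A causal context model of this kind is typically built from masked convolutions; the following PyTorch sketch shows a masked 3D convolution over the latent volume in that spirit (the PixelCNN-style masking and the sizes are illustrative, not this paper's exact architecture):

```python
import torch
import torch.nn as nn

# Masked 3D convolution: the kernel sees only "past" positions in raster
# order over (depth, height, width), so each latent element is predicted
# from already-decoded neighbors.

class MaskedConv3d(nn.Conv3d):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        mask = torch.ones_like(self.weight)
        _, _, kd, kh, kw = self.weight.shape
        mask[:, :, kd // 2, kh // 2, kw // 2:] = 0  # current and future cols
        mask[:, :, kd // 2, kh // 2 + 1:, :] = 0    # future rows
        mask[:, :, kd // 2 + 1:, :, :] = 0          # future depth slices
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask  # enforce causality at every step
        return super().forward(x)
```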

Variable Rate Deep Image Compression With a Conditional Autoencoder

The proposed scheme provides a better rate-distortion trade-off than traditional variable-rate image compression codecs such as JPEG2000 and BPG, and shows comparable and sometimes better performance than state-of-the-art learned image compression models that deploy multiple networks trained for varying rates.
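
Conditioning a single network on a discrete rate label is one common way to realize this; below is a hedged PyTorch sketch of a conditional convolution whose per-channel scale and shift come from a learned embedding of the rate index (the embedding scheme and sizes are assumptions, not the paper's exact layers):

```python
import torch
import torch.nn as nn

# Conditional convolution sketch: one shared conv, with rate-dependent
# channel-wise scale and shift so a single network covers several rates.

class CondConv(nn.Module):
    def __init__(self, cin, cout, n_rates=8):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, 3, padding=1)
        self.scale = nn.Embedding(n_rates, cout)
        self.shift = nn.Embedding(n_rates, cout)

    def forward(self, x, rate_idx):
        y = self.conv(x)
        s = self.scale(rate_idx)[:, :, None, None]
        b = self.shift(rate_idx)[:, :, None, None]
        return nn.functional.softplus(s) * y + b  # positive scaling
```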

Variable Rate Deep Image Compression With Modulated Autoencoder

Modulated autoencoders (MAEs) are proposed, in which the representations of a shared autoencoder are adapted to a specific R-D tradeoff via a modulation network; they achieve almost the same R-D performance as independent models with significantly fewer parameters.
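
Where the previous sketch conditions on a discrete rate index, a modulation network can instead map a continuous tradeoff value to channel-wise scales, roughly as follows (layer sizes are illustrative assumptions, not the MAE paper's configuration):

```python
import torch
import torch.nn as nn

# Modulation sketch: a tiny MLP maps the R-D tradeoff lambda to positive
# per-channel scale factors applied to a shared encoder's feature maps.

class Modulator(nn.Module):
    def __init__(self, channels=192, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, channels), nn.Softplus(),  # positive scales
        )

    def forward(self, features, lam):
        scale = self.net(lam.view(-1, 1))                # (B, C)
        return features * scale[:, :, None, None]        # modulate channels
```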

Variational image compression with a scale hyperprior

It is demonstrated that this model leads to state-of-the-art image compression when measuring visual quality using the popular MS-SSIM index, and yields rate-distortion performance surpassing published ANN-based methods when evaluated using a more traditional metric based on squared error (PSNR).
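
The rate term in such models comes from a Gaussian entropy model whose scales are predicted by the hyperprior; here is a minimal sketch of the per-element bit estimate, assuming the standard integration of the Gaussian density over a unit quantization bin (variable names are illustrative):

```python
import torch
from torch.distributions import Normal

# Scale-hyperprior rate sketch: the cost of a (noisy) latent y is the
# negative log probability mass of its quantization bin [y-0.5, y+0.5]
# under a zero-mean Gaussian with hyperprior-predicted scale sigma.

def gaussian_rate_bits(y, sigma):
    d = Normal(torch.zeros_like(y), sigma.clamp(min=1e-6))
    p = d.cdf(y + 0.5) - d.cdf(y - 0.5)        # probability mass of the bin
    return -torch.log2(p.clamp(min=1e-9)).sum()
```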

Joint Autoregressive and Hierarchical Priors for Learned Image Compression

It is found that in terms of compression performance, autoregressive and hierarchical priors are complementary and can be combined to exploit the probabilistic structure in the latents better than all previous learned models.
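
A hedged sketch of the fusion step: features from the autoregressive context model and the hyperprior decoder are concatenated and mapped to a Gaussian mean and scale per latent element (channel count and layer shapes are illustrative, not the paper's exact network):

```python
import torch
import torch.nn as nn

# Joint-prior sketch: combine context-model features (over already-decoded
# latents) with hyperprior features to predict entropy-model parameters.

class EntropyParameters(nn.Module):
    def __init__(self, c=192):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * c, 2 * c, 1), nn.ReLU(),
            nn.Conv2d(2 * c, 2 * c, 1),
        )

    def forward(self, ctx_feat, hyper_feat):
        mean, scale = self.fuse(torch.cat([ctx_feat, hyper_feat], 1)).chunk(2, 1)
        return mean, nn.functional.softplus(scale)  # per-element Gaussian
```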

Soft then Hard: Rethinking the Quantization in Neural Image Compression

This work proposes a novel soft-then-hard quantization strategy for neural image compression that first learns an expressive latent space softly, then eliminates the train-test mismatch with hard quantization.
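
The two phases can be contrasted in a few lines: a differentiable surrogate (here additive uniform noise, one common choice) during the soft phase, then hard rounding with a straight-through estimator. This is a sketch of the general pattern, not the paper's exact schedule:

```python
import torch

# Soft-then-hard quantization sketch: phase 1 uses a noisy relaxation so
# gradients flow; phase 2 uses true rounding, with a straight-through
# estimator (identity gradient) to keep training possible.

def quantize(y, phase="soft"):
    if phase == "soft":
        return y + torch.empty_like(y).uniform_(-0.5, 0.5)  # noisy surrogate
    return y + (torch.round(y) - y).detach()                # hard + STE
```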

End-to-end Optimized Image Compression

Across an independent set of test images, it is found that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods, and a dramatic improvement in visual quality is observed, supported by objective quality estimates using MS-SSIM.
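
All of these models train against the same kind of objective, L = R + λ·D; a minimal PyTorch version follows, with the bits-per-pixel normalization as an illustrative choice:

```python
import torch

# Rate-distortion loss sketch: `likelihoods` holds the probability mass
# assigned to each (approximately) quantized latent element, so the rate
# is its negative log2, normalized here to bits per pixel.

def rd_loss(recon, image, likelihoods, lam=0.01):
    num_pixels = image.size(0) * image.size(2) * image.size(3)
    rate = -torch.log2(likelihoods.clamp(min=1e-9)).sum() / num_pixels
    dist = torch.mean((recon - image) ** 2)  # MSE distortion
    return rate + lam * dist                 # L = R + lambda * D
```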

MetaHDR: Model-Agnostic Meta-Learning for HDR Image Reconstruction

This work proposes "Model-Agnostic Meta-Learning for HDR Image Reconstruction" (MetaHDR), which applies meta-learning to the LDR-to-HDR conversion problem using existing HDR datasets; it uses a meta-learning framework that learns a set of meta-parameters capturing the common structure consistent across all LDR-to-HDR conversion tasks.
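
MAML's inner/outer loop is the reusable core here; below is a generic PyTorch sketch (not MetaHDR's code), where `loss_fn(params, batch)` is a hypothetical task loss and `tasks` yields (support, query) pairs:

```python
import torch

# Generic MAML step: adapt a differentiable copy of the parameters on each
# task's support set, then update the meta-parameters from the query loss
# of the adapted copy, averaged over tasks.

def maml_step(params, tasks, loss_fn, inner_lr=0.01, outer_lr=1e-3):
    meta_grads = [torch.zeros_like(p) for p in params]
    for support, query in tasks:
        fast = [p.clone() for p in params]           # keeps graph to params
        g = torch.autograd.grad(loss_fn(fast, support), fast, create_graph=True)
        fast = [p - inner_lr * gi for p, gi in zip(fast, g)]  # inner update
        outer = torch.autograd.grad(loss_fn(fast, query), params)
        meta_grads = [m + o for m, o in zip(meta_grads, outer)]
    with torch.no_grad():
        for p, m in zip(params, meta_grads):
            p -= outer_lr * m / len(tasks)           # outer (meta) update
    return params
```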

Meta-Transfer Learning for Zero-Shot Super-Resolution

Meta-Transfer Learning for Zero-Shot Super-Resolution (MZSR) is presented, which leverages ZSSR and can exploit both external and internal information; a single gradient update can already yield considerable results.
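
The test-time step can be sketched as a single self-supervised update on a pair built from the test image itself; `model` and `downscale` are assumed interfaces, not MZSR's released code:

```python
import torch

# Zero-shot adaptation sketch: downscale the test image to make an
# internal (input, target) pair, then take one gradient step from the
# meta-learned initialization.

def zero_shot_adapt(model, lr_image, downscale, lr=0.01):
    son = downscale(lr_image)                        # image-specific input
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss = torch.mean((model(son) - lr_image) ** 2)  # self-supervised SR loss
    opt.zero_grad()
    loss.backward()
    opt.step()                                       # one update can suffice
    return model
```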

Full Resolution Image Compression with Recurrent Neural Networks

This is the first neural network architecture that is able to outperform JPEG at image compression across most bitrates on the rate-distortion curve on the Kodak dataset images, with and without the aid of entropy coding.
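
The progressive behavior comes from iterating on residuals; here is a simplified sketch that omits the recurrent state for brevity (`encoder` and `decoder` are assumed modules, not the paper's architecture):

```python
import torch

# Progressive residual coding sketch: each pass encodes the remaining
# residual and refines the reconstruction, so the bitstream can be
# truncated after any iteration for a coarser image.

def progressive_code(encoder, decoder, image, iters=8):
    recon = torch.zeros_like(image)
    codes = []
    for _ in range(iters):
        residual = image - recon
        bits = encoder(residual)       # code for this pass
        recon = recon + decoder(bits)  # refine the reconstruction
        codes.append(bits)
    return codes, recon
```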