Denial-of-Service Attacks on Learned Image Compression

@article{Liu2022DenialofServiceAO,
  title={Denial-of-Service Attacks on Learned Image Compression},
  author={Kang Liu and Di Wu and Yiru Wang and Dan Feng and Benjamin Tan and Siddharth Garg},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.13253}
}
Deep learning techniques have shown promising results in image compression, with competitive bitrate and image reconstruction quality from compressed latents. However, while image compression has progressed towards higher peak signal-to-noise ratio (PSNR) and fewer bits per pixel (bpp), the robustness of these systems to corner-case images has never received deliberation. In this work, we, for the first time, investigate the robustness of image compression systems where imperceptible perturbation of input…
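The attack idea the abstract describes — an imperceptible input perturbation that inflates the coding cost of a learned codec — can be illustrated with a minimal FGSM-style sketch. This is not the paper's method: the linear encoder, the `rate_proxy` stand-in for entropy coding cost, and all function names here are illustrative assumptions, chosen only to show the gradient-ascent-on-rate mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256)) / 16.0  # toy analysis transform (assumption)

def rate_proxy(x):
    """Smooth stand-in for coding cost of the latent y = Wx: sum log2(1 + |y|)."""
    y = W @ x
    return np.sum(np.log2(1.0 + np.abs(y)))

def rate_grad(x):
    """Analytic gradient of rate_proxy with respect to the input x."""
    y = W @ x
    g = np.sign(y) / ((1.0 + np.abs(y)) * np.log(2.0))
    return W.T @ g

def fgsm_rate_attack(x, eps=0.01):
    """One-step L_inf-bounded perturbation that pushes the rate proxy upward."""
    return np.clip(x + eps * np.sign(rate_grad(x)), 0.0, 1.0)

x = rng.uniform(0.2, 0.8, size=256)   # toy "image" with pixel values in [0, 1]
x_adv = fgsm_rate_attack(x, eps=0.01)

print("perturbation L_inf:", np.max(np.abs(x_adv - x)))
print("rate before:", rate_proxy(x))
print("rate after: ", rate_proxy(x_adv))
```

Against a real learned codec the same loop would use autodiff through the encoder and entropy model (and typically many projected-gradient steps rather than one), but the objective is the same: maximize estimated bits per pixel under a small perturbation budget.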
