• Corpus ID: 12189906

A study of the effect of JPG compression on adversarial images

@article{Dziugaite2016ASO,
  title={A study of the effect of JPG compression on adversarial images},
  author={Gintare Karolina Dziugaite and Zoubin Ghahramani and Daniel M. Roy},
  journal={ArXiv},
  year={2016},
  volume={abs/1608.00853}
}
Neural network image classifiers are known to be vulnerable to adversarial images, i.e., natural images which have been modified by an adversarial perturbation specifically designed to be imperceptible to humans yet fool the classifier. This paper studies whether re-encoding such images as JPGs can undo the perturbation, using adversarial examples generated with the fast gradient sign method, and finds that for small perturbations JPG compression often reverses the drop in classification accuracy to a large extent, though not always. Key Result: as the magnitude of the perturbations increases, JPG recompression alone is insufficient to reverse the effect.
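The defense examined in the paper is simply re-encoding the (possibly adversarial) input as a JPG before classification. Below is a minimal sketch of that preprocessing step using Pillow; the quality setting and the `classifier` stand-in are illustrative choices, not values taken from the paper.

```python
from io import BytesIO

import numpy as np
from PIL import Image


def jpg_recompress(image: np.ndarray, quality: int = 75) -> np.ndarray:
    """Round-trip an RGB uint8 image through in-memory JPG compression.

    The idea studied in the paper: small adversarial perturbations often do
    not survive the lossy re-encoding, so classifying the recompressed image
    can partially restore accuracy.
    """
    buffer = BytesIO()
    Image.fromarray(image).save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return np.array(Image.open(buffer).convert("RGB"))


# Usage sketch (classifier is a hypothetical stand-in, not from the paper):
# logits_adv     = classifier(adversarial_image)                   # often fooled
# logits_defended = classifier(jpg_recompress(adversarial_image))  # often corrected
```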


Detecting Adversarial Image Examples in Deep Neural Networks with Adaptive Noise Reduction
TLDR
This paper proposes a straightforward method for detecting adversarial image examples, which can be directly deployed into unmodified off-the-shelf DNN models and raises the bar for defense-aware attacks.
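As summarized, the detector compares the model's prediction on the original input with its prediction on a noise-reduced copy and flags a mismatch as adversarial. A rough sketch of that idea follows; the particular noise-reduction step (scalar quantization plus local smoothing) and all names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def reduce_noise(image: np.ndarray, levels: int = 16, window: int = 3) -> np.ndarray:
    """Crude noise reduction for a uint8 HxWx3 image: quantize then smooth."""
    quantized = np.round(image / 255.0 * (levels - 1)) / (levels - 1) * 255.0
    return uniform_filter(quantized, size=(window, window, 1))


def looks_adversarial(classifier, image: np.ndarray) -> bool:
    """Flag the input if the predicted label changes after noise reduction."""
    label_original = int(np.argmax(classifier(image)))
    label_denoised = int(np.argmax(classifier(reduce_noise(image))))
    return label_original != label_denoised
```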
Countermeasures Against L0 Adversarial Examples Using Image Processing and Siamese Networks
TLDR
It is observed that, while L0 corruptions modify as few pixels as possible, they tend to cause large-amplitude perturbations to the modified pixels; this is considered an inherent limitation of L0 AEs, and a novel AE detector is proposed.
Research on the Influence of Kmeans Cluster Preprocessing on Adversarial Images
TLDR
The experimental results show that, for images with small-amplitude perturbations, the use of smaller clustering values can largely reverse the decline in neural network accuracy; however, as the magnitude of the perturbations increases, the defensive effect of simple clustering becomes weaker.
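The defense described in this summary replaces pixel values with their cluster centers, so small perturbations are absorbed into a coarser palette. A minimal sketch of k-means color quantization as a preprocessing step; the cluster count, random seed, and use of scikit-learn are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans


def kmeans_quantize(image: np.ndarray, n_clusters: int = 8, seed: int = 0) -> np.ndarray:
    """Replace every pixel with the center of its color cluster.

    Smaller n_clusters gives a coarser palette, which (per the summary)
    absorbs small adversarial perturbations at the cost of image detail.
    """
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(np.float64)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(pixels)
    quantized = km.cluster_centers_[km.labels_]
    return quantized.reshape(h, w, c).astype(image.dtype)
```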
Countering Adversarial Examples: Combining Input Transformation and Noisy Training
  • Cheng Zhang, Pan Gao
  • Computer Science
    2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
  • 2021
TLDR
This paper modifies the traditional JPEG compression algorithm to make it more favorable for neural networks, designing an NN-favored quantization table based on an analysis of the frequency coefficients, which improves defense efficiency while maintaining the original accuracy.
AEPecker: L0 Adversarial Examples are not Strong Enough
TLDR
The main novelty of the proposed detector is that it converts the AE detection problem into an image comparison problem by exploiting the inherent characteristics of L0 AEs; the paper argues that L0 attacks are not strong enough.
Exploiting the Inherent Limitation of L0 Adversarial Examples
TLDR
This system demonstrates not only high AE detection accuracies, but also a notable capability to correct classification results, and shows that inpainting, the pre-processing technique used for detection, can also work as an effective defense with a high probability of removing the adversarial influence of L0 perturbations.
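The summary above describes masking the suspect pixels and filling them in by inpainting. Here is a rough sketch of that idea using OpenCV's generic inpainting routine; the way the mask is built (flagging pixels that deviate sharply from a median-filtered copy) and the threshold are assumptions for illustration, not the paper's detector.

```python
import cv2
import numpy as np


def inpaint_suspect_pixels(image: np.ndarray, threshold: int = 60) -> np.ndarray:
    """Remove isolated large-amplitude pixels, typical of L0 perturbations.

    Pixels that deviate strongly from a median-filtered copy of the uint8
    image are masked and reconstructed from their neighborhood.
    """
    smoothed = cv2.medianBlur(image, 3)
    deviation = np.abs(image.astype(np.int32) - smoothed.astype(np.int32)).max(axis=2)
    mask = (deviation > threshold).astype(np.uint8)
    return cv2.inpaint(image, mask, 3, cv2.INPAINT_TELEA)
```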
A general metric for identifying adversarial images
  • S. Kumar
  • Computer Science, Environmental Science
    ArXiv
  • 2018
TLDR
This study attempts to overcome the generalization limitation by deriving a metric which reliably identifies adversarial images even when the approach taken by the adversary is unknown.
Adversarial Image Attacks Using Multi-Sample and Most-Likely Ensemble Methods
TLDR
This paper proposes the multi-sample ensemble method (MSEM) and the most-likely ensemble method (MLEM) to generate adversarial attacks that successfully fool the classifier for images in both the digital and real worlds, and shows that these methods not only achieve higher success rates but also survive multi-model defense tests.
Benchmarking Adversarial Robustness on Image Classification
  • Yinpeng Dong, Qi-An Fu, Jun Zhu
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR
A comprehensive, rigorous, and coherent benchmark to evaluate adversarial robustness on image classification tasks is established and several important findings are drawn that can provide insights for future research.
Image Super-Resolution as a Defense Against Adversarial Attacks
TLDR
It is shown that deep image restoration networks learn mapping functions that can bring off-the-manifold adversarial samples onto the natural image manifold, thus restoring classification towards correct classes.
...

References

SHOWING 1-10 OF 14 REFERENCES
Exploring the space of adversarial images
  • Pedro Tabacof, E. Valle
  • Computer Science
    2016 International Joint Conference on Neural Networks (IJCNN)
  • 2016
TLDR
This work formalizes the problem of adversarial images given a pretrained classifier, showing that even in the linear case the resulting optimization problem is nonconvex, and that a shallow classifier seems more robust to adversarial images than a deep convolutional network.
Towards Deep Neural Network Architectures Robust to Adversarial Examples
TLDR
Deep Contractive Network is proposed, a model with a new end-to-end training procedure that includes a smoothness penalty inspired by the contractive autoencoder (CAE) to increase the network robustness to adversarial examples, without a significant performance penalty.
Adversarial examples in the physical world
TLDR
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera, which shows that even in physical world scenarios, machine learning systems are vulnerable to adversarial examples.
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
TLDR
The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
Hitting Depth: Investigating Robustness to Adversarial Examples in Deep Convolutional Neural Networks
TLDR
This work first validates assumptions about the generalization of gradient-based and pattern-based adversarial examples using VGGNet, and shows a process for visualizing and identifying changes in activations between adversarial images and their regular counterparts.
Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples
TLDR
This work introduces the first practical demonstration that the cross-model transfer phenomenon enables attackers to control a remotely hosted DNN with no access to the model, its parameters, or its training data, and introduces the attack strategy of fitting a substitute model to input-output pairs obtained by querying the target, then crafting adversarial examples based on this auxiliary model.
Explaining and Harnessing Adversarial Examples
TLDR
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
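The fast gradient sign method introduced in this reference is the attack used to generate the adversarial images studied in the main paper: x_adv = x + epsilon * sign(grad_x J(theta, x, y)). A minimal PyTorch sketch of the one-step attack; the epsilon value and the assumption of inputs in [0, 1] are illustrative.

```python
import torch
import torch.nn.functional as F


def fgsm(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
         epsilon: float = 8 / 255) -> torch.Tensor:
    """One-step fast gradient sign attack on a batch of images in [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```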
Analysis of classifiers’ robustness to adversarial perturbations
TLDR
A general upper bound on the robustness of classifiers to adversarial perturbations is established, and the phenomenon of adversarial instability is suggested to be due to the low flexibility of classifiers, compared to the difficulty of the classification task (captured mathematically by the distinguishability measure).
Intriguing properties of neural networks
TLDR
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
...