Improving Transferability of Adversarial Examples With Input Diversity
@article{Xie2019ImprovingTO, title={Improving Transferability of Adversarial Examples With Input Diversity}, author={Cihang Xie and Zhishuai Zhang and Jianyu Wang and Yuyin Zhou and Zhou Ren and Alan Loddon Yuille}, journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2019}, pages={2725-2734} }
Though CNNs have achieved state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples, which are crafted by adding human-imperceptible perturbations to clean images. Key Method: Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration.
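As a rough illustration of this input-diversity idea, the sketch below applies a random resize-and-pad transformation with some probability before each gradient step of an iterative FGSM attack. The function names, image sizes, and hyperparameters are illustrative assumptions rather than the paper's exact configuration, and the classifier is assumed to accept the padded input size.

```python
# Minimal PyTorch-style sketch of iterative FGSM with diverse inputs.
# All names, sizes, and hyperparameters here are assumptions.
import torch
import torch.nn.functional as F

def diverse_input(x, out_size=330, prob=0.5):
    """With probability `prob`, resize the batch to a random size and zero-pad it."""
    if torch.rand(1).item() > prob:
        return x
    h = x.shape[-1]
    rnd = torch.randint(h, out_size, (1,)).item()          # random resize target
    x = F.interpolate(x, size=rnd, mode="nearest")
    pad = out_size - rnd
    left = torch.randint(0, pad + 1, (1,)).item()
    top = torch.randint(0, pad + 1, (1,)).item()
    return F.pad(x, (left, pad - left, top, pad - top), value=0.0)

def di_fgsm(model, x, y, eps=16 / 255, steps=10):
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Gradient is taken w.r.t. the randomly transformed input at each step.
        loss = F.cross_entropy(model(diverse_input(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # stay in the L_inf ball
    return x_adv
```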
377 Citations
Enhancing the Transferability of Adversarial Attacks with Input Transformation
- Computer Science
- 2021
This work found that random transformation of image size can eliminate overfitting in the generation of adversarial examples and improve their transferability, and proposes an adversarial example generation method which can be integrated with Fast Gradient Sign Method-related methods to build a stronger gradient-based attack.
Improving Adversarial Transferability with Gradient Refining
- Computer Science, ArXiv
- 2021
This paper proposes a method named Gradient Refining, which can further improve the adversarial transferability by correcting negative gradients introduced by input diversity through multiple transformations, and is generally applicable to many gradient-based attack methods combined with input diversity.
Improving the Transferability of Adversarial Examples with the Adam Optimizer
- Computer Science, ArXiv
- 2020
This study combines an improved Adam gradient descent algorithm with the iterative gradient-based attack method and the resulting Adam Iterative Fast Gradient Method is used to improve the transferability of adversarial examples.
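To make the idea concrete, here is a rough sketch that swaps the raw gradient update for an Adam-style update on the perturbation inside an iterative FGSM loop; the cited paper's exact update rule may differ, and all names and hyperparameters below are assumptions.

```python
# Rough sketch: Adam-style first/second-moment estimates drive the
# perturbation update instead of the raw gradient. Details are assumptions.
import torch
import torch.nn.functional as F

def adam_iter_fgm(model, x, y, eps=16 / 255, steps=10,
                  beta1=0.9, beta2=0.999, tiny=1e-8):
    alpha = eps / steps
    m = torch.zeros_like(x)                  # first-moment estimate
    v = torch.zeros_like(x)                  # second-moment estimate
    x_adv = x.clone().detach()
    for t in range(1, steps + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad * grad
        step = (m / (1 - beta1 ** t)) / ((v / (1 - beta2 ** t)).sqrt() + tiny)
        x_adv = (x_adv + alpha * step.sign()).detach()   # keep a fixed L_inf step size
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv
```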
Random Transformation of Image Brightness for Adversarial Attack
- Computer Science, J. Intell. Fuzzy Syst.
- 2022
This paper proposes an adversarial example generation method based on random transformation of image brightness, which can be integrated with Fast Gradient Sign Method (FGSM)-related methods to build a stronger gradient-based attack and generate adversarial examples with better transferability.
Enhancing transferability of adversarial examples via rotation-invariant attacks
- Computer Science, IET Comput. Vis.
- 2022
A rotation-invariant attack method is proposed that maximizes the loss function w.r.t. a randomly rotated image instead of the original input at each iteration, thus mitigating the high correlation between the adversarial examples and the source model and making the adversarial examples more transferable.
Boosting the Transferability of Adversarial Examples with More Efficient Data Augmentation
- Computer Science, Journal of Physics: Conference Series
- 2022
This article proposes a CAM (class activation map)-guided data augmentation attack method, which can improve the transferability of adversarial examples, and shows that the proposed method generates more transferable adversarial examples.
Defense-guided Transferable Adversarial Attacks
- Computer Science, ArXiv
- 2020
A max-min framework inspired by input transformations is designed, which is beneficial to both adversarial attack and defense and is expected to serve as a benchmark for assessing the robustness of deep models.
Boosting Adversarial Transferability through Enhanced Momentum
- Computer Science, BMVC
- 2021
This work proposes an enhanced momentum iterative gradient-based method that further accumulates the gradient in order to stabilize the update direction and escape from the poor local maxima that limit existing momentum-based methods.
Boosting Adversarial Attacks on Neural Networks with Better Optimizer
- Computer Science, Secur. Commun. Networks
- 2021
A modified Adam gradient descent algorithm is combined with the iterative gradient-based attack method to improve the transferability of adversarial examples; experiments show that the proposed method offers a higher attack success rate than existing iterative methods.
Enhancing Adversarial Examples Transferability via Ensemble Feature Manifolds
- Computer Science, AdvM @ ACM Multimedia
- 2021
This work proposes a novel feature attack method called Features-Ensemble Generative Adversarial Network (FEGAN), which ensembles multiple feature manifolds to capture intrinsic adversarial information that is most likely to cause misclassification of many models, thereby improving the transferability of adversarial examples.
References
Showing 1–10 of 41 references
Mitigating adversarial effects through randomization
- Computer Science, ICLR
- 2018
This paper proposes to utilize randomization at inference time to mitigate adversarial effects, and uses two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input image in a random manner.
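For concreteness, a minimal sketch of such an inference-time randomization wrapper is given below; it mirrors the resize-and-pad transformation sketched earlier for the attack. The canvas size, framework, and class name are assumptions.

```python
# Minimal sketch of an inference-time randomization defense: random resize,
# then random zero-padding, applied before the classifier. Sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomizationDefense(nn.Module):
    def __init__(self, model, canvas=331):
        super().__init__()
        self.model = model
        self.canvas = canvas

    def forward(self, x):
        h = x.shape[-1]
        rnd = torch.randint(h, self.canvas, (1,)).item()   # random resize target
        x = F.interpolate(x, size=rnd, mode="nearest")
        pad = self.canvas - rnd
        left = torch.randint(0, pad + 1, (1,)).item()
        top = torch.randint(0, pad + 1, (1,)).item()
        x = F.pad(x, (left, pad - left, top, pad - top), value=0.0)
        return self.model(x)
```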
Boosting Adversarial Attacks with Momentum
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
A broad class of momentum-based iterative algorithms to boost adversarial attacks by integrating the momentum term into the iterative process for attacks, which can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples.
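A minimal sketch of the momentum accumulation described above, inside a standard iterative FGSM loop, is shown below; the L1 normalization and hyperparameter values are typical choices rather than values taken from this page.

```python
# Sketch of a momentum-based iterative attack: the normalized gradient is
# accumulated into a momentum term that drives the sign update.
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0):
    alpha = eps / steps
    g = torch.zeros_like(x)                  # accumulated (momentum) gradient
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Normalize by the per-image L1 norm, then accumulate with decay mu.
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        x_adv = (x_adv + alpha * g.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv
```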
PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples
- Computer Science, ICLR
- 2018
PixelDefend uses a PixelCNN generative model to show that adversarial examples mainly lie in low-probability regions of the training distribution, and purifies perturbed images by moving them back towards the training distribution before classification.
Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
High-level representation guided denoiser (HGD) is proposed as a defense for image classification, using a loss function defined as the difference between the target model's outputs on the clean image and on the denoised image.
Ensemble Adversarial Training: Attacks and Defenses
- Computer Science, ICLR
- 2018
This work finds that adversarial training remains vulnerable to black-box attacks, where perturbations computed on undefended models are transferred, and proposes ensemble adversarial training, which augments training data with perturbations transferred from other models; it also introduces a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step.
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
- Computer Science, ICLR
- 2018
The proposed Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against adversarial perturbations, is empirically shown to be consistently effective against different attack methods and improves on existing defense strategies.
MagNet: A Two-Pronged Defense against Adversarial Examples
- Computer Science, CCS
- 2017
MagNet, a framework for defending neural network classifiers against adversarial examples, is proposed and shown empirically to be effective against the most advanced state-of-the-art attacks in black-box and gray-box scenarios without sacrificing the false positive rate on normal examples.
Deflecting Adversarial Attacks with Pixel Deflection
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This paper presents an algorithm to process an image so that classification accuracy is significantly preserved in the presence of adversarial manipulations, and demonstrates experimentally that the combination of these techniques enables the effective recovery of the true class, against a variety of robust attacks.
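As a rough sketch of the pixel-deflection step, each deflected pixel is replaced by a randomly chosen pixel from a small local window; the cited paper additionally uses class-activation-map-weighted sampling and wavelet denoising, which are omitted here, and the counts and window size below are assumptions.

```python
# Simplified pixel deflection: replace randomly chosen pixels with a random
# neighbor from a local window. img is an HxWxC array; parameters are assumptions.
import numpy as np

def pixel_deflection(img, num_deflections=200, window=10):
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(num_deflections):
        r, c = np.random.randint(h), np.random.randint(w)
        nr = np.clip(r + np.random.randint(-window, window + 1), 0, h - 1)
        nc = np.clip(c + np.random.randint(-window, window + 1), 0, w - 1)
        out[r, c] = img[nr, nc]              # copy all channels of the neighbor
    return out
```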
Towards Evaluating the Robustness of Neural Networks
- Computer Science, 2017 IEEE Symposium on Security and Privacy (SP)
- 2017
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Stochastic Activation Pruning for Robust Adversarial Defense
- Computer Science, ICLR
- 2018
Stochastic Activation Pruning (SAP) is proposed, a mixed strategy for adversarial defense that prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate.
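A simplified sketch of this pruning applied to a single activation tensor is given below: units are sampled with probability proportional to their magnitude, and survivors are rescaled by the inverse probability of having been kept; the keep fraction and flattening scheme are assumptions.

```python
# Simplified Stochastic Activation Pruning for one activation tensor `a`.
# The keep fraction and per-sample flattening are assumptions.
import torch

def sap_layer(a, keep_frac=0.5):
    flat = a.reshape(a.shape[0], -1)                                   # (batch, units)
    probs = flat.abs() / flat.abs().sum(dim=1, keepdim=True).clamp_min(1e-12)
    n_keep = max(1, int(keep_frac * flat.shape[1]))
    # Sample units (with replacement) with probability proportional to magnitude.
    idx = torch.multinomial(probs, n_keep, replacement=True)
    mask = torch.zeros_like(flat).scatter_(1, idx, 1.0)
    # Rescale survivors by the inverse probability of being sampled at least once,
    # so the pruned activations stay unbiased in expectation.
    keep_prob = (1 - (1 - probs) ** n_keep).clamp_min(1e-12)
    return (flat * mask / keep_prob).reshape(a.shape)
```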