One Pixel Attack for Fooling Deep Neural Networks
@article{Su2019OnePA,
  title   = {One Pixel Attack for Fooling Deep Neural Networks},
  author  = {Jiawei Su and Danilo Vasconcellos Vargas and K. Sakurai},
  journal = {IEEE Transactions on Evolutionary Computation},
  year    = {2019},
  volume  = {23},
  pages   = {828--841}
}
Recent research has revealed that the output of deep neural networks (DNNs) can be easily altered by adding relatively small perturbations to the input vector. [...] The proposed attack generates one-pixel adversarial perturbations with differential evolution (DE); it requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE. The results show that 67.97% of the natural images in the Kaggle CIFAR-10 test dataset and 16.04% of the ImageNet (ILSVRC 2012) test images can be perturbed to at least one target class by modifying just one pixel.
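The search the abstract describes can be sketched with an off-the-shelf DE optimizer. Below is a minimal illustration using SciPy's differential_evolution in place of the authors' own DE loop; `model` is a hypothetical black-box classifier (standing in for the evaluated CIFAR-10 networks) that maps a batch of HxWx3 float images in [0, 255] to class probabilities. Hyperparameter values here are illustrative, not the paper's exact settings.

```python
# A minimal sketch of a one-pixel attack via differential evolution,
# assuming a hypothetical `model` that returns class probabilities.
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(model, image, target_class, maxiter=100, popsize=400):
    """Search for a single pixel (x, y, r, g, b) whose modification
    maximizes the probability of `target_class` (targeted attack)."""
    h, w, _ = image.shape

    def perturb(z, img):
        # Each DE candidate encodes a pixel position and an RGB value.
        x, y = int(z[0]), int(z[1])
        out = img.copy()
        out[y, x, :] = z[2:5]
        return out

    def fitness(z):
        # DE minimizes, so return the negative target-class probability.
        # Only output probabilities are used: no gradients, hence black-box.
        probs = model(perturb(z, image)[np.newaxis])[0]
        return -probs[target_class]

    bounds = [(0, w - 1), (0, h - 1), (0, 255), (0, 255), (0, 255)]
    result = differential_evolution(
        fitness,
        bounds,
        maxiter=maxiter,
        # SciPy sizes the population as popsize * len(bounds).
        popsize=max(1, popsize // len(bounds)),
        mutation=0.5,
        recombination=1.0,
        polish=False,  # gradient-based polishing would defeat the black-box setting
        seed=0,
    )
    return perturb(result.x, image)
```

An untargeted variant minimizes the probability of the true class instead; everything else stays the same. Note that the fitness function touches only the model's output probabilities, which is why the attack qualifies as black-box.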
Supplemental Content
- GitHub repo (via Papers with Code): PyTorch reimplementation of "One pixel attack for fooling deep neural networks"
874 Citations
- One Sparse Perturbation to Fool them All, almost Always! arXiv, 2020.
- Evaluating and Improving Adversarial Attacks on DNN-Based Modulation Recognition. GLOBECOM 2020 (IEEE Global Communications Conference), 2020.
- Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
- Towards Imperceptible Adversarial Image Patches Based on Network Explanations. arXiv, 2020.
- Testing Convolutional Neural Network using Adversarial Attacks on Potential Critical Pixels. 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC).
- Empirical Evaluation on Robustness of Deep Convolutional Neural Networks Activation Functions Against Adversarial Perturbation. 2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW).
- DLA: Dense-Layer-Analysis for Adversarial Example Detection. 2020 IEEE European Symposium on Security and Privacy (EuroS&P).