One Pixel Attack for Fooling Deep Neural Networks

@article{Su2019OnePA,
  title={One Pixel Attack for Fooling Deep Neural Networks},
  author={Jiawei Su and Danilo Vasconcellos Vargas and Kouichi Sakurai},
  journal={IEEE Transactions on Evolutionary Computation},
  year={2019},
  volume={23},
  pages={828-841}
}
Recent research has revealed that the output of deep neural networks (DNNs) can be easily altered by adding relatively small perturbations to the input vector. [...] The proposed attack requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of differential evolution (DE). The results show that 67.97% of the natural images in the Kaggle CIFAR-10 test dataset and 16.04% of the ImageNet (ILSVRC 2012) test images can be perturbed to at least one target class by modifying just one pixel.
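To make the method concrete: the attack encodes a candidate perturbation as a five-element vector (the x-y coordinates and RGB values of a single pixel) and evolves a population of such candidates with differential evolution, querying only the classifier's output probabilities. The sketch below illustrates that loop under stated assumptions: SciPy's differential_evolution stands in for the authors' own DE implementation, predict_probs is a hypothetical classifier interface mapping an HxWx3 uint8 image to class probabilities, and the population and iteration settings are illustrative rather than the paper's exact configuration.

import numpy as np
from scipy.optimize import differential_evolution

def perturb(image, candidate):
    # A candidate solution encodes one pixel as (x, y, r, g, b).
    x, y, r, g, b = candidate
    adv = image.copy()
    adv[int(y), int(x)] = np.clip([r, g, b], 0, 255)
    return adv

def one_pixel_attack(image, true_label, predict_probs):
    h, w, _ = image.shape
    bounds = [(0, w - 1), (0, h - 1), (0, 255), (0, 255), (0, 255)]

    # Fitness is the confidence assigned to the true class; DE minimizes
    # it, so a successful (untargeted) attack drives the true-class
    # probability down until another class wins.
    def fitness(candidate):
        return predict_probs(perturb(image, candidate))[true_label]

    # recombination=1.0 and a negative tol keep DE running for the full
    # budget; polish=False disables the gradient-based refinement step,
    # keeping the attack strictly black-box.
    result = differential_evolution(
        fitness, bounds, maxiter=75, popsize=80,
        recombination=1.0, tol=-1.0, seed=0, polish=False)

    adv = perturb(image, result.x)
    success = int(np.argmax(predict_probs(adv))) != true_label
    return adv, success

The targeted variant described in the paper would instead drive a chosen class's probability up; in this sketch that amounts to returning -predict_probs(perturb(image, candidate))[target_label] from fitness.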
874 Citations
One Sparse Perturbation to Fool them All, almost Always!
XGAN: adversarial attacks with GAN
Evaluating and Improving Adversarial Attacks on DNN-Based Modulation Recognition
Exploring and Expanding the One-Pixel Attack
Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet
Towards Imperceptible Adversarial Image Patches Based on Network Explanations
Testing Convolutional Neural Network using Adversarial Attacks on Potential Critical Pixels
Empirical Evaluation on Robustness of Deep Convolutional Neural Networks Activation Functions Against Adversarial Perturbation
DLA: Dense-Layer-Analysis for Adversarial Example Detection