Explaining and Harnessing Adversarial Examples
@article{Goodfellow2015ExplainingAH,
  title   = {Explaining and Harnessing Adversarial Examples},
  author  = {Ian J. Goodfellow and Jonathon Shlens and Christian Szegedy},
  journal = {CoRR},
  volume  = {abs/1412.6572},
  year    = {2014}
}
Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results, and it gives the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples.
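The "simple and fast method" the abstract alludes to is the fast gradient sign method (FGSM): perturb the input in the direction of the sign of the loss gradient, x_adv = x + ε·sign(∇_x J(θ, x, y)). A minimal sketch in plain Python, using a toy logistic-regression loss; the weights, input, and ε below are illustrative values, not taken from the paper:

```python
import math

def sign(g):
    """Elementwise sign of a scalar: -1, 0, or +1."""
    return (g > 0) - (g < 0)

def fgsm(x, grad, eps):
    """Fast gradient sign method: x_adv = x + eps * sign(grad_x J)."""
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x):
    """Logistic loss -log sigma(w.x) for a true label of 1."""
    return -math.log(sigmoid(sum(wi * xi for wi, xi in zip(w, x))))

# Toy linear classifier (illustrative values).
w = [1.0, -2.0, 0.5]
x = [0.3, 0.1, -0.4]

# Gradient of the loss w.r.t. the INPUT (not the weights):
# dJ/dx = (sigma(w.x) - 1) * w for a true label of 1.
p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
grad = [(p - 1.0) * wi for wi in w]

# Each coordinate moves by at most eps, yet the loss strictly increases.
x_adv = fgsm(x, grad, eps=0.25)
print(loss(w, x))      # loss on the clean input
print(loss(w, x_adv))  # larger loss on the perturbed input
```

In the paper the same ε-scaled sign step is applied to the gradient of a deep network's training loss with respect to the input image, and the resulting examples are folded back into training (adversarial training).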
Supplemental Content
- GitHub repo (via Papers with Code): [TensorFlow.js] AdVis: exploring real-time adversarial attacks in the browser with the Fast Gradient Sign Method.
6,220 Citations
Predicting Adversarial Examples with High Confidence
- Computer Science, Mathematics · ArXiv · 2018 · 6 citations · Highly Influenced

Hitting Depth: Investigating Robustness to Adversarial Examples in Deep Convolutional Neural Networks
- 2016 · 6 citations · Highly Influenced

Vulnerability of classifiers to evolutionary generated adversarial examples
- Computer Science, Medicine · Neural Networks · 2020 · 2 citations

Adversarial Examples on Object Recognition: A Comprehensive Survey
- Computer Science · ArXiv · 2020 · 5 citations · Highly Influenced

Are Accuracy and Robustness Correlated?
- Computer Science · 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA) · 2016 · 38 citations

Principal Component Adversarial Example
- Computer Science, Medicine · IEEE Transactions on Image Processing · 2020 · 1 citation · Highly Influenced

Learning Universal Adversarial Perturbations with Generative Models
- Computer Science · 2018 IEEE Security and Privacy Workshops (SPW) · 2018 · 50 citations
References
Showing 1–10 of 22 references
Towards Deep Neural Network Architectures Robust to Adversarial Examples
- Computer Science, Mathematics · ICLR · 2015 · 486 citations

Dropout: a simple way to prevent neural networks from overfitting
- Computer Science · J. Mach. Learn. Res. · 2014 · 20,868 citations · Highly Influential

Deep neural networks are easily fooled: High confidence predictions for unrecognizable images
- Computer Science · 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) · 2015 · 1,806 citations · Highly Influential

Going deeper with convolutions
- Computer Science · 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) · 2015 · 21,787 citations