Towards Deep Learning Models Resistant to Adversarial Attacks
@article{Madry2018TowardsDL,
  title   = {Towards Deep Learning Models Resistant to Adversarial Attacks},
  author  = {Aleksander Madry and Aleksandar Makelov and Ludwig Schmidt and Dimitris Tsipras and Adrian Vladu},
  journal = {ArXiv},
  year    = {2018},
  volume  = {abs/1706.06083}
}
Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. [...] Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary.
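The "reliable and, in a certain sense, universal" attack the abstract alludes to is projected gradient descent (PGD) under an L-infinity constraint: repeatedly step in the sign of the input gradient of the loss, then project back into an eps-ball around the clean input. A minimal NumPy sketch on a toy logistic-regression classifier follows; the model, step size `alpha`, budget `eps`, and step count are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def pgd_linf(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Illustrative PGD attack on p(y=1|x) = sigmoid(w.x + b),
    constrained to an L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        # Gradient of the cross-entropy loss w.r.t. the input:
        # dL/dx = (p - y) * w for a linear logit z = w.x + b.
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
        grad = (p - y) * w
        # Ascend the loss with a signed step, then project back
        # into the eps-ball around the original input.
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Example: perturb a point the classifier gets right (label y=1).
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1      # clean logit = 1.5 > 0
x_adv = pgd_linf(x, y, w, b)
print(w @ x + b, w @ x_adv + b)     # adversarial logit is lower
```

Adversarial training, the paper's defense, then simply minimizes the loss on `x_adv` instead of `x` at each training step, yielding the min-max "robust optimization" objective the abstract refers to.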
Supplemental Code
- GitHub repo (via Papers with Code): a challenge to explore the adversarial robustness of neural networks on CIFAR10.
- GitHub repo (via Papers with Code): a challenge to explore the adversarial robustness of neural networks on MNIST.
2,905 Citations
- Adversarial Robustness Against the Union of Multiple Perturbation Models (ICML, 2020)
- Hardening Deep Neural Networks via Adversarial Model Cascades (IJCNN, 2019)
- Towards Natural Robustness Against Adversarial Examples (ArXiv, 2020)
- On the Connection between Differential Privacy and Adversarial Robustness in Machine Learning (ArXiv, 2018)
- Learning to Disentangle Robust and Vulnerable Features for Adversarial Detection (ArXiv, 2019)
- Defending Against Adversarial Attacks Using Random Forest (CVPR Workshops, 2019)
- Adversarial Example Detection and Classification With Asymmetrical Adversarial Training (ICLR, 2020)
- Defending Against Adversarial Samples Without Security through Obscurity (ICDM, 2018)