Accurate, reliable and fast robustness evaluation
@inproceedings{Brendel2019AccurateRA,
  title     = {Accurate, reliable and fast robustness evaluation},
  author    = {W. Brendel and Jonas Rauber and Matthias K{\"u}mmerer and Ivan Ustyuzhaninov and M. Bethge},
  booktitle = {NeurIPS},
  year      = {2019}
}
Throughout the past five years, the susceptibility of neural networks to minimal adversarial perturbations has moved from a peculiar phenomenon to a core issue in Deep Learning. Despite much attention, however, progress towards more robust models is significantly impaired by the difficulty of evaluating the robustness of neural network models. Today's methods are either fast but brittle (gradient-based attacks), or they are fairly reliable but slow (score- and decision-based attacks). We here…
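To make the "fast but brittle" category concrete: gradient-based attacks like PGD need only a handful of backward passes, but they fail silently when gradients are masked, which is the brittleness this paper targets. Below is a minimal PGD-style L-infinity sketch (illustrative only, not the attack proposed in this paper), assuming a PyTorch classifier `model` and a batch `x`, `y` with pixels in [0, 1]:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iteratively ascend the cross-entropy loss, projecting back into
    the L-infinity ball of radius eps around the clean inputs x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # one signed-gradient step, then projection onto the eps-ball
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        # keep pixels in the valid input range
        x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv
```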
Supplemental Code
- GitHub repo (via Papers with Code)
- Notes on using the Brendel & Bethge attack with Foolbox and CleverHans (see the usage sketch below)
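As a starting point for those notes, the sketch below shows one plausible way to run the attack through Foolbox 3 ("Foolbox Native"), which ships it as `L2BrendelBethgeAttack`. The ResNet-18 model, batch size, and epsilon budgets here are illustrative placeholders, not settings from the paper:

```python
import torchvision.models as models
import foolbox as fb

# Illustrative setup: any PyTorch classifier works.
net = models.resnet18(pretrained=True).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(net, bounds=(0, 1), preprocessing=preprocessing)

# Sample images shipped with Foolbox (placeholder for your own data).
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=8)

# The B&B attack starts from an adversarial found by an init attack
# (Foolbox supplies a default) and then walks along the decision
# boundary to minimize the L2 distance to the clean input.
attack = fb.attacks.L2BrendelBethgeAttack(steps=1000)
epsilons = [0.5, 1.0, 2.0]  # illustrative L2 budgets
raw, clipped, success = attack(fmodel, images, labels, epsilons=epsilons)

# success has shape (len(epsilons), batch); robust accuracy per budget:
print(1 - success.float().mean(axis=-1))
```

Because the attack minimizes the perturbation norm directly, a single run can be evaluated against many epsilon budgets at once, which is part of what makes it efficient for robustness curves.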
17 Citations
- Sampled Nonlocal Gradients for Stronger Adversarial Attacks. arXiv, 2020. Highly Influenced. PDF available.
- Improving Adversarial Robustness Through Progressive Hardening. arXiv, 2020. 8 citations. PDF available.
- How to compare adversarial robustness of classifiers from a global perspective. 2020. PDF available.
- Towards Defending Multiple Adversarial Perturbations via Gated Batch Normalization. arXiv, 2020. Highly Influenced. PDF available.
- Voting based ensemble improves robustness of defensive models. arXiv, 2020. PDF available.
- Adaptive iterative attack towards explainable adversarial robustness. Pattern Recognition, 2020. 4 citations.
- Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX. Journal of Open Source Software, 2020. 6 citations. PDF available.
- Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness. arXiv, 2020. PDF available.
- Using Learning Dynamics to Explore the Role of Implicit Regularization in Adversarial Examples. arXiv, 2020. PDF available.
- An Alternative Surrogate Loss for PGD-based Adversarial Testing. arXiv, 2019. 27 citations. PDF available.
References
Showing 1–10 of 26 references
- Towards Evaluating the Robustness of Neural Networks. 2017 IEEE Symposium on Security and Privacy (SP), 2017. 2,949 citations. Highly Influential. PDF available.
- MixTrain: Scalable Training of Formally Robust Neural Networks. arXiv, 2018. 68 citations. Highly Influential.
- Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR, 2018. 2,781 citations. Highly Influential. PDF available.
- Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 79 citations. PDF available.
- Boosting Adversarial Attacks with Momentum. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018. 610 citations. PDF available.
- Decision-Based Adversarial Attacks: Reliable Attacks Against Black-box Machine Learning Models. 2017. 114 citations. Highly Influential.
- EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples. AAAI, 2018. 264 citations. Highly Influential. PDF available.
- Provable defenses against adversarial examples via the convex outer adversarial polytope. ICML, 2018. 662 citations. Highly Influential. PDF available.
- The Limitations of Deep Learning in Adversarial Settings. 2016 IEEE European Symposium on Security and Privacy (EuroS&P), 2016. 1,858 citations. Highly Influential. PDF available.