Corpus ID: 195776342

Accurate, reliable and fast robustness evaluation

@article{Brendel2019AccurateRA,
  title={Accurate, reliable and fast robustness evaluation},
  author={W. Brendel and Jonas Rauber and Matthias K{\"u}mmerer and Ivan Ustyuzhaninov and M. Bethge},
  journal={ArXiv},
  year={2019},
  volume={abs/1907.01003}
}
  • W. Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, M. Bethge
  • Published 2019
  • Computer Science, Mathematics
  • ArXiv
  • Throughout the past five years, the susceptibility of neural networks to minimal adversarial perturbations has moved from a peculiar phenomenon to a core issue in Deep Learning. Despite much attention, however, progress towards more robust models is significantly impaired by the difficulty of evaluating the robustness of neural network models. Today's methods are either fast but brittle (gradient-based attacks), or they are fairly reliable but slow (score- and decision-based attacks). We here…
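
To make the abstract's contrast between attack families concrete, here is a short, generic sketch of a gradient-based attack: projected gradient descent (PGD) under an L-infinity budget. This is an illustrative example only, not the method proposed in the paper; the function `pgd_linf`, the classifier `model`, and the parameters `eps` and `step` are assumptions made for the sketch.

```python
# Minimal sketch of a gradient-based attack (textbook L-infinity PGD), included
# only to illustrate the "fast but brittle" family mentioned in the abstract.
# It is NOT the attack proposed in this paper. `model` is assumed to be any
# differentiable PyTorch classifier mapping images in [0, 1] to logits.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8 / 255, step=2 / 255, iters=10):
    """Signed-gradient ascent steps, projected back into the L-inf ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)       # loss the attack maximizes
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()        # ascent step on the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball around x
            x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid image
    return x_adv.detach()
```

Score- and decision-based attacks, by contrast, rely only on the model's predicted scores or final labels rather than its gradients, which tends to make them slower per adversarial example but more robust to gradient masking.
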
    13 Citations

    Sampled Nonlocal Gradients for Stronger Adversarial Attacks
    Improving Adversarial Robustness Through Progressive Hardening
    • 6 citations
    An Alternative Surrogate Loss for PGD-based Adversarial Testing
    • 24 citations
    Are Perceptually-Aligned Gradients a General Property of Robust Classifiers?
    • 12 citations

    References

    Showing 1-10 of 26 references
    Towards Evaluating the Robustness of Neural Networks
    • 2,723 citations
    • Highly Influential
    MixTrain: Scalable Training of Formally Robust Neural Networks
    • 65 citations
    • Highly Influential
    Towards Deep Learning Models Resistant to Adversarial Attacks
    • 2,513 citations
    • Highly Influential
    Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses
    • 71 citations
    Boosting Adversarial Attacks with Momentum
    • Y. Dong, Fangzhou Liao, +4 authors J. Li
    • Computer Science, Mathematics
    • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
    • 2018
    • 540 citations
    Decision-Based Adversarial Attacks: Reliable Attacks Against Black-box Machine Learning Models
    • Huichen Li
    • 2017
    • 115 citations
    • Highly Influential
    EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
    • 246 citations
    • Highly Influential
    Provable defenses against adversarial examples via the convex outer adversarial polytope
    • 620 citations
    • Highly Influential
    The Limitations of Deep Learning in Adversarial Settings
    • 1,729 citations
    • Highly Influential