Adversarial Robustness Curves

@inproceedings{Gopfert2019AdversarialRC,
  title={Adversarial Robustness Curves},
  author={Christina G{\"o}pfert and Jan Philip G{\"o}pfert and Barbara Hammer},
  booktitle={PKDD/ECML Workshops},
  year={2019}
}
The existence of adversarial examples has led to considerable uncertainty regarding the trust one can justifiably put in predictions produced by automated systems. This uncertainty has, in turn, led to a substantial research effort in understanding adversarial robustness. In this work, we take first steps towards separating robustness analysis from the choice of robustness threshold and norm. We propose robustness curves as a more general view of the robustness behavior of a model and…
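To make the curve concrete, here is a minimal sketch (not the authors' code) of how an empirical robustness curve can be computed once the size of the smallest prediction-changing perturbation is known for every test point, e.g. from an attack or a verification procedure; the function name and the example distances are hypothetical.

import numpy as np

def robustness_curve(min_perturbation_sizes, thresholds):
    # For each threshold, report the fraction of points whose minimal
    # adversarial perturbation fits inside that budget; evaluating at a
    # single threshold recovers the usual fixed-epsilon robustness number.
    d = np.asarray(min_perturbation_sizes)
    return np.array([(d <= t).mean() for t in thresholds])

# Hypothetical per-point minimal L2 perturbation sizes and a grid of thresholds.
distances = [0.05, 0.3, 0.7, 1.2, 2.5]
thresholds = np.linspace(0.0, 3.0, 50)
curve = robustness_curve(distances, thresholds)

Plotting curve against thresholds shows how robustness degrades as the allowed perturbation size grows under the chosen norm, which is the view the paper advocates instead of reporting robustness at a single threshold.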

Citations

Adversarial examples and where to find them
TLDR
This work analyzes where adversarial examples occur, in which ways they are peculiar, and how they are processed by robust models to provide concrete recommendations for anyone looking to train a robust model or to estimate how much robustness they should require for their operation.
A general framework for defining and optimizing robustness
TLDR
This paper proposes a rigorous and flexible framework for defining different types of robustness that also help to explain the interplay between adversarial robustness and generalization, and shows effective ways to minimize the corresponding loss functions.
How to Compare Adversarial Robustness of Classifiers from a Global Perspective
TLDR
It is shown that point-wise measures fail to capture important global properties that are essential to reliably compare the robustness of different classifiers, and new ways in which robustness curves can be used to systematically uncover these properties are introduced.
Improving Global Adversarial Robustness Generalization With Adversarially Trained GAN
TLDR
Experimental results on the MNIST, SVHN, and CIFAR-10 datasets show that the proposed ATGAN method does not rely on obfuscated gradients and achieves better global adversarial robustness generalization than adversarially trained state-of-the-art CNNs.

References

SHOWING 1-10 OF 21 REFERENCES
Learning with a Strong Adversary
TLDR
A new and simple way of finding adversarial examples is presented and experimentally shown to be efficient and to greatly improve the robustness of the classification models produced.
Robustness May Be at Odds with Accuracy
TLDR
It is shown that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization, and it is argued that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers.
Adversarial Robustness May Be at Odds With Simplicity
TLDR
The hypothesis that robust classification may require more complex classifiers than standard classification is highlighted, and several theoretical examples of classification tasks and sets of "simple" classifiers show that this hypothesis is indeed plausible.
Measuring Neural Net Robustness with Constraints
TLDR
This work proposes metrics for measuring the robustness of a neural net and devises a novel algorithm for approximating these metrics based on an encoding of robustness as a linear program, generating more informative estimates of robustness compared to estimates based on existing algorithms.
Adversarial Attacks Hidden in Plain Sight
TLDR
A technique is presented that hides adversarial perturbations in image regions of high complexity, such that they are imperceptible to human visual perception even for an astute observer.
Towards Evaluating the Robustness of Neural Networks
TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Explaining and Harnessing Adversarial Examples
TLDR
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature; this view is supported by new quantitative results and gives the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
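As an illustration of the single-step attack motivated by this linearity argument, the following is a minimal sketch of the fast gradient sign method written with PyTorch; model, loss_fn, and the chosen eps are assumed placeholders, not code from the cited paper.

import torch

def fgsm(model, loss_fn, x, y, eps):
    # Move each input by eps along the sign of the loss gradient, the
    # direction in which a locally linear model's loss grows fastest
    # under an L-infinity budget.
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()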
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
TLDR
The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
Towards Deep Neural Network Architectures Robust to Adversarial Examples
TLDR
Deep Contractive Network is proposed, a model with a new end-to-end training procedure that includes a smoothness penalty inspired by the contractive autoencoder (CAE) to increase the network robustness to adversarial examples, without a significant performance penalty.
Adversarial Machine Learning at Scale
TLDR
This research applies adversarial training to ImageNet, finds that single-step attacks are the best for mounting black-box attacks, and reports the resolution of a "label leaking" effect that causes adversarially trained models to perform better on adversarial examples than on clean examples.