On Detecting Adversarial Perturbations

Jan Hendrik Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff
Machine learning, and deep learning in particular, has advanced tremendously on perceptual tasks in recent years. However, it remains vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to a human. In this work, we propose to augment deep neural networks with a small "detector" subnetwork which is trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations.
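The abstract describes the core idea: a small detector trained to tell genuine inputs from adversarially perturbed ones. The sketch below is a hedged illustration of that idea, not the paper's architecture: it uses a toy logistic-regression "base network" on synthetic 2-D data, crafts FGSM-style perturbations, and stands in for the detector subnetwork with a simple threshold on the base model's logit magnitude. All data, parameters, and the thresholding detector are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: two well-separated 2-D Gaussian blobs.
n = 200
X = np.vstack([rng.normal(-2.0, 0.5, size=(n, 2)),
               rng.normal(+2.0, 0.5, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# "Base network": logistic regression fit by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * (p - y).mean()

# FGSM-style perturbation: step along the sign of the input gradient
# of the logistic loss, where d(loss)/dx = (p - y) * w.
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad_x = (p - y)[:, None] * w[None, :]
eps = 1.5
X_adv = X + eps * np.sign(grad_x)

# Stand-in "detector" (not the paper's subnetwork): FGSM pushes inputs
# toward the decision boundary, so a threshold on the logit magnitude
# separates clean from perturbed inputs on this toy problem.
margin_clean = np.abs(X @ w + b)
margin_adv = np.abs(X_adv @ w + b)
tau = 0.5 * (margin_clean.mean() + margin_adv.mean())
detector_acc = 0.5 * ((margin_clean > tau).mean() + (margin_adv < tau).mean())
print(f"detector accuracy on clean vs. adversarial: {detector_acc:.2f}")
```

In the paper itself the detector is a trained subnetwork attached to intermediate feature maps of the classifier; the margin threshold here merely makes the clean/adversarial separation visible on a problem small enough to run in a few lines.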
Highly Influential: this paper has highly influenced 13 other papers.
Highly Cited: this paper has 235 citations.
Related Discussions: this paper has been referenced on Twitter 40 times.


Publications citing this paper.
Selected from 156 extracted citations:

MagNet: A Two-Pronged Defense against Adversarial Examples. ACM Conference on Computer and Communications Security, 2017. (Highly Influenced)

SafetyNet: Detecting and Rejecting Adversarial Examples Robustly. IEEE International Conference on Computer Vision (ICCV), 2017. (Highly Influenced)

Semantic Scholar estimates that this publication has 235 citations based on the available data.


Publications referenced by this paper.
Selected from 22 references:

Universal Adversarial Perturbations. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. (Highly Influenced)

Intriguing properties of neural networks. arXiv, 2013. (Highly Influenced)

Deep Residual Learning for Image Recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. (Highly Influenced)

DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. (Highly Influenced)

Adam: A Method for Stochastic Optimization. (Highly Influenced)

Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy (SP), 2017.
