A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations

Logan Engstrom, Dimitris Tsipras, Ludwig Schmidt, Aleksander Madry
Recent work has shown that neural network-based vision classifiers are highly vulnerable to misclassifications caused by imperceptible but adversarial perturbations of their inputs. These perturbations, however, are purely pixel-wise and are constructed from loss-function gradients of either the attacked model or a surrogate. As a result, they tend to be contrived and appear artificial. This might suggest that vulnerability to misclassification under slight input perturbations can…
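The spatial attack the title alludes to needs no gradients at all: a grid search over a handful of rotation angles and pixel translations is enough to probe a classifier. A minimal sketch of that idea, where the grid ranges and the toy classifier are illustrative assumptions rather than the paper's actual setup:

```python
import numpy as np
from scipy.ndimage import rotate, shift

def spatial_attack(image, classify, true_label,
                   angles=range(-30, 31, 10),
                   shifts=range(-3, 4, 3)):
    """Grid search over rotations (degrees) and pixel translations.

    Returns the first transformed image that is misclassified, together
    with the (angle, dx, dy) that produced it, or None if every
    candidate in the grid is still classified correctly.
    """
    for angle in angles:
        for dx in shifts:
            for dy in shifts:
                # Rotate about the image center, then translate;
                # empty regions are filled with zeros.
                candidate = rotate(image, angle, reshape=False, order=1)
                candidate = shift(candidate, (dy, dx), order=1)
                if classify(candidate) != true_label:
                    return candidate, (angle, dx, dy)
    return None

# Toy stand-in "classifier": predicts 1 iff the mass of the image
# is concentrated in the top half. (Purely illustrative.)
def toy_classify(img):
    h = img.shape[0] // 2
    return int(img[:h].sum() > img[h:].sum())

img = np.zeros((8, 8))
img[1, 3:5] = 1.0          # bright patch in the top half -> label 1
result = spatial_attack(img, toy_classify, true_label=1)
```

Because the toy classifier only looks at where the mass sits, translating the patch across the midline (e.g. `dy=3`) is guaranteed to flip its prediction, so the search always finds an adversarial transform here; against a real CNN the same loop simply reports whichever grid point, if any, changes the predicted class.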
This paper has 46 citations.
