Towards Deep Learning Models Resistant to Adversarial Attacks

@article{Madry2017TowardsDL,
  title={Towards Deep Learning Models Resistant to Adversarial Attacks},
  author={Aleksander Madry and Aleksandar Makelov and Ludwig Schmidt and Dimitris Tsipras and Adrian Vladu},
  journal={CoRR},
  year={2017},
  volume={abs/1706.06083}
}
Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic.
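
The robust-optimization view the abstract describes is usually written as the saddle-point problem min_θ E_(x,y)~D [ max_{||δ||_∞ ≤ ε} L(θ, x + δ, y) ], where the inner maximization is approximated with projected gradient descent (PGD). Below is a minimal PyTorch sketch of that inner PGD adversary; the function name, hyperparameter defaults, and the [0, 1] pixel clamp are illustrative assumptions rather than a definitive reproduction of the paper's setup.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Approximate max_{||delta||_inf <= eps} L(theta, x + delta, y) with PGD."""
    # Start from a random point inside the eps-ball; random restarts
    # make the first-order adversary stronger.
    delta = torch.empty_like(x).uniform_(-eps, eps)
    delta.requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            # Ascend the loss along the sign of the gradient, then project
            # back onto the eps-ball and keep inputs in the valid [0, 1] range
            # (assumes image-like data normalized to [0, 1]).
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_(torch.clamp(x + delta, 0.0, 1.0) - x)
        delta.grad.zero_()
    return (x + delta).detach()

Adversarial training then handles the outer minimization: at each training step the loss is evaluated and back-propagated at these PGD points, e.g. F.cross_entropy(model(pgd_attack(model, x, y)), y), instead of at the clean inputs.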
Highly Influential: this paper has highly influenced 113 other papers.
Highly Cited: this paper has 467 citations.

Citations

Publications citing this paper: Semantic Scholar estimates that this publication has 467 citations based on the available data, 332 of them with extracted citation contexts.

[Figure: Citations per Year, 2016-2019]

