Towards Deep Learning Models Resistant to Adversarial Attacks

@article{Madry2017TowardsDL,
  title={Towards Deep Learning Models Resistant to Adversarial Attacks},
  author={Aleksander Madry and Aleksandar Makelov and Ludwig Schmidt and Dimitris Tsipras and Adrian Vladu},
  journal={CoRR},
  year={2017},
  volume={abs/1706.06083}
}
Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a…
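For context, the robust-optimization view the abstract refers to is the saddle-point problem the paper makes its central object of study: the network parameters θ are trained against a worst-case perturbation δ drawn from an allowed set S (an ℓ∞-ball of radius ε in the paper's experiments),

min_θ ρ(θ),   where   ρ(θ) = E_{(x,y)∼D} [ max_{δ∈S} L(θ, x + δ, y) ].

The paper approximates the inner maximization with projected gradient descent (PGD), which it argues acts as a "universal" first-order adversary. The following is a minimal PGD sketch; the PyTorch framing and the function name pgd_attack are illustrative assumptions, with defaults chosen to match the ℓ∞ budget the paper uses on MNIST (ε = 0.3).

import torch

def pgd_attack(model, loss_fn, x, y, eps=0.3, alpha=0.01, steps=40):
    # Random start inside the L-infinity ball of radius eps around x
    # (the paper restarts PGD from random points in this ball).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss along the gradient sign, then project back onto
        # the eps-ball around the clean input and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

Adversarial training then minimizes the loss on pgd_attack(model, loss_fn, x, y) in place of the clean x at each training step, which is how the paper approximately solves the outer minimization.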
Highly Influential: this paper has highly influenced 115 other papers.
Highly Cited: this paper has 464 citations.

Citations

Publications citing this paper.

464 Citations

[Chart: Citations per Year, 2016–2018, y-axis 0–400]
Semantic Scholar estimates that this publication has 464 citations based on the available data.


References

Publications referenced by this paper.
Showing 1 of 26 references

  • Abraham Wald. Contributions to the theory of statistical estimation and testing hypotheses. The Annals of Mathematical Statistics, 1939. (Highly Influential, 2 excerpts)
