Towards Deep Learning Models Resistant to Adversarial Attacks

@article{Madry2018TowardsDL,
  title={Towards Deep Learning Models Resistant to Adversarial Attacks},
  author={Aleksander Madry and Aleksandar Makelov and Ludwig Schmidt and Dimitris Tsipras and Adrian Vladu},
  journal={CoRR},
  year={2017},
  volume={abs/1706.06083}
}
Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a…
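
The robust optimization view the abstract refers to is the saddle-point problem min_θ E_{(x,y)} [ max_{‖δ‖_∞ ≤ ε} L(θ, x + δ, y) ], whose inner maximization the paper approximates with projected gradient descent (PGD). Below is a minimal PyTorch sketch of that scheme; the function names, the hyperparameters (eps, alpha, steps), and the assumption that inputs lie in [0, 1] are illustrative choices, not details taken from this page.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
        # Approximate the inner maximization
        #   max_{||delta||_inf <= eps} L(theta, x + delta, y)
        # by ascending on the sign of the gradient and projecting back.
        # Random start inside the l_inf ball (random restarts help the
        # inner maximization, as the paper observes).
        delta = torch.empty_like(x).uniform_(-eps, eps)
        delta.requires_grad_(True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad = torch.autograd.grad(loss, delta)[0]
            with torch.no_grad():
                delta += alpha * grad.sign()   # ascent step
                delta.clamp_(-eps, eps)        # project onto the l_inf ball
                # Keep x + delta inside the assumed [0, 1] pixel range.
                delta.copy_(torch.clamp(x + delta, 0.0, 1.0) - x)
        return (x + delta).detach()

    def adversarial_training_step(model, optimizer, x, y):
        # Outer minimization: take a gradient step on the loss evaluated
        # at the adversarial examples found by the inner PGD attack.
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

Replacing each clean training batch with its PGD counterpart in this way is the adversarial-training recipe the paper studies; the specific step sizes and iteration counts above are placeholder values, not the paper's reported settings.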

Citations

Publications citing this paper: 502 in total (estimated 39% coverage).

Citation Statistics

  • 201 highly influenced citations

  • An average of 115 citations per year over the last 3 years

  • An 89% increase in citations per year in 2018 over 2017

