Defensive Distillation is Not Robust to Adversarial Examples

Nicholas Carlini and David A. Wagner
We show that defensive distillation is not secure: it is no more resistant to targeted misclassification attacks than unprotected neural networks.
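The targeted misclassification attacks referred to above can be illustrated with a minimal targeted FGSM-style step, a standard attack in this literature rather than the paper's exact method. The toy linear classifier, the seed, and the step size `eps` below are all hypothetical stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))  # toy linear classifier: 3 classes, 4 features

def logits(x):
    return W @ x

def target_loss_grad(x, target):
    # Gradient w.r.t. x of the cross-entropy loss toward the *target* class,
    # i.e. -log softmax(W x)[target].
    z = logits(x)
    p = np.exp(z - z.max())
    p /= p.sum()
    onehot = np.zeros(W.shape[0])
    onehot[target] = 1.0
    return W.T @ (p - onehot)

x = rng.normal(size=4)   # clean input
target = 2               # attacker-chosen label
eps = 0.25               # L-infinity perturbation budget

# Step *against* the gradient of the target-class loss, pushing the
# input toward being classified as `target` (targeted FGSM).
x_adv = x - eps * np.sign(target_loss_grad(x, target))
```

The perturbation is bounded coordinate-wise by `eps`, and the step direction decreases the target-class loss to first order; the paper's point is that distilled networks offer no extra resistance to attacks of this kind.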