Defensive Distillation is Not Robust to Adversarial Examples

@article{Carlini2016DefensiveDI,
  title={Defensive Distillation is Not Robust to Adversarial Examples},
  author={Nicholas Carlini and David A. Wagner},
  journal={CoRR},
  year={2016},
  volume={abs/1607.04311}
}
We show that defensive distillation is not secure: it is no more resistant to targeted misclassification attacks than unprotected neural networks. 
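The abstract's claim is about *targeted* misclassification: an adversary perturbs an input so the model outputs a class of the attacker's choosing. As a hedged illustration only (this is not the paper's attack; the toy linear model, weights, and step size below are all invented for demonstration), a generic iterative signed-gradient targeted attack on a small softmax classifier might look like:

```python
import math

# Toy linear softmax classifier: logits = W x + b.
# All weights here are illustrative assumptions, not from the paper.
W = [[2.0, -1.0, 0.5],
     [-1.5, 2.5, 0.0],
     [0.5, -0.5, 1.0]]   # 3 classes, 3 input features
b = [0.1, -0.2, 0.0]

def logits(x):
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def predict(x):
    z = logits(x)
    return max(range(len(z)), key=z.__getitem__)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def targeted_attack(x, target, eps=0.5, steps=20):
    """Iterative signed-gradient steps that increase the target-class
    probability (a generic targeted-attack sketch, not the authors' method)."""
    x = list(x)
    for _ in range(steps):
        p = softmax(logits(x))
        # d(log p_target)/dx_j = sum_k (1{k==target} - p_k) * W[k][j]
        grad = [sum(((1.0 if k == target else 0.0) - p[k]) * W[k][j]
                    for k in range(len(W)))
                for j in range(len(x))]
        # Move each feature one signed step up the target-class gradient.
        x = [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
             for xi, g in zip(x, grad)]
        if predict(x) == target:
            break
    return x

x0 = [1.0, 0.0, 0.0]            # this toy model classifies x0 as class 0
adv = targeted_attack(x0, target=1)
```

After a few steps the perturbed input `adv` is classified as the attacker-chosen class 1, even though the clean input `x0` was class 0. The paper's point is that distillation-hardened networks remain just as vulnerable to attacks of this targeted kind as undefended ones.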