Defensive Distillation is Not Robust to Adversarial Examples

@article{Carlini2016DefensiveDI,
  title={Defensive Distillation is Not Robust to Adversarial Examples},
  author={Nicholas Carlini and David A. Wagner},
  journal={CoRR},
  year={2016},
  volume={abs/1607.04311}
}
Abstract

We show that defensive distillation is not secure: it is no more resistant to targeted misclassification attacks than unprotected neural networks.
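The claim concerns targeted misclassification: the attacker perturbs an input so the model predicts an attacker-chosen label, not merely any wrong label. As a hedged illustration only (a generic one-step targeted FGSM sketch, not the specific attack evaluated in the paper; the names model, x, target_label and the 0.03 perturbation budget are illustrative assumptions), a minimal sketch in PyTorch:

import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, target_label, eps=0.03):
    # One-step targeted attack: move x in the direction that *decreases*
    # the loss for the attacker-chosen target class, pushing the model
    # toward predicting target_label.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target_label)
    loss.backward()
    x_adv = x_adv - eps * x_adv.grad.sign()  # descend on the target-class loss
    return x_adv.clamp(0, 1).detach()

The attack succeeds if model(x_adv).argmax(dim=1) equals target_label; the paper's finding is that distilled networks resist such targeted attacks no better than undefended ones.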