Explaining and Harnessing Adversarial Examples

Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy
Several machine learning models, including neural networks, consistently misclassify adversarial examples—inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature.
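The linearity argument above motivates the paper's fast gradient sign method (FGSM), which perturbs each input coordinate by a small step in the direction of the sign of the loss gradient. Below is a minimal sketch in plain Python, using a toy logistic-regression model whose weights and inputs are made-up illustrative values, not anything from the paper:

```python
import math

def fgsm_perturb(x, grad, epsilon):
    # Fast gradient sign method: shift every coordinate by epsilon
    # in the direction that increases the loss.
    sign = lambda g: 1.0 if g > 0 else (-1.0 if g < 0 else 0.0)
    return [xi + epsilon * sign(g) for xi, g in zip(x, grad)]

# Toy logistic-regression "model" (weights chosen for illustration only).
w = [1.0, -2.0, 0.5]
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logistic_loss(x, y):
    # Cross-entropy loss of the logistic model on input x with label y.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def loss_grad_wrt_input(x, y):
    # For a logistic model the input gradient is (p - y) * w.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [(p - y) * wi for wi in w]

x, y = [0.2, 0.4, -0.1], 1.0
x_adv = fgsm_perturb(x, loss_grad_wrt_input(x, y), epsilon=0.1)
```

Because the model is linear in its input, even this tiny epsilon-ball step reliably increases the loss, which is exactly the effect the paper attributes to linearity in high dimensions.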
This paper has highly influenced 452 other papers. Semantic Scholar estimates that this publication has 1,710 citations based on the available data.
