Explaining and Harnessing Adversarial Examples

@article{Goodfellow2015ExplainingAH,
  title={Explaining and Harnessing Adversarial Examples},
  author={Ian J. Goodfellow and Jonathon Shlens and Christian Szegedy},
  journal={CoRR},
  year={2015},
  volume={abs/1412.6572}
}
Several machine learning models, including neural networks, consistently misclassify adversarial examples—inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks’ vulnerability to adversarial…
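The abstract describes crafting "small but intentionally worst-case perturbations" of an input. A minimal NumPy sketch of that idea is a one-step gradient-sign perturbation, x_adv = x + ε·sign(∂J/∂x), applied to a toy logistic-regression model. The model, weights, and ε = 0.25 below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_sign_perturbation(x, y, w, b, eps):
    """One-step worst-case perturbation of the input (a sketch).

    For the logistic loss J = -log p(y|x) with p = sigmoid(w.x + b),
    the input gradient is dJ/dx = (p - y) * w, so the sign of the
    gradient is exact and cheap to compute for this toy model.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w               # exact input gradient here
    return x + eps * np.sign(grad_x)   # move each coordinate by +/- eps

# Toy data: assumed for demonstration only.
rng = np.random.default_rng(0)
w = rng.normal(size=10)
b = 0.0
x = rng.normal(size=10)
y = 1.0                                # assume the true label is 1

x_adv = gradient_sign_perturbation(x, y, w, b, eps=0.25)

p_clean = sigmoid(w @ x + b)           # confidence on the clean input
p_adv = sigmoid(w @ x_adv + b)         # confidence after perturbation
print(p_clean, p_adv)
```

Although no coordinate of the input moves by more than ε, the model's confidence in the true class drops, illustrating why a small max-norm perturbation can be "worst-case" for a linear-enough model.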


Citations


[Chart: Citations per Year, 2015–2019]
Semantic Scholar estimates that this publication has 1,657 citations based on the available data.


References

Publications referenced by this paper.
SHOWING 1 OF 18 REFERENCES

Intriguing properties of neural networks

  • Szegedy, Christian, et al.
  • ICLR, abs/1312.6199
  • 2014
Highly Influential
4 Excerpts
