Analyzing the Robustness of Nearest Neighbors to Adversarial Examples

@inproceedings{Wang2018AnalyzingTR,
  title={Analyzing the Robustness of Nearest Neighbors to Adversarial Examples},
  author={Yizhen Wang and Somesh Jha and Kamalika Chaudhuri},
  booktitle={ICML},
  year={2018}
}
Motivated by safety-critical applications, test-time attacks on classifiers via adversarial examples have recently received a great deal of attention. However, there is a general lack of understanding on why adversarial examples arise; whether they stem from inherent properties of the data or from a lack of training samples remains ill-understood. In this work, we introduce a theoretical framework analogous to bias-variance theory for understanding these effects. We use our framework to…
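
As an illustration of the setting the abstract describes (and not the paper's own algorithm), the following minimal Python/NumPy sketch shows how a plain 1-nearest-neighbor classifier can be attacked: moving a test point along the line toward its closest oppositely-labeled training point must eventually flip the prediction, and the distance traveled gives the size of the adversarial perturbation. The data, function names (nn_predict, flip_1nn), and parameters are all hypothetical.

# Minimal sketch (assumption: Euclidean 1-NN on toy 2-D Gaussian data);
# this is an illustration of test-time adversarial examples, not the
# paper's robustness analysis or defense.
import numpy as np

def nn_predict(X_train, y_train, x):
    """1-NN prediction under the Euclidean metric."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

def flip_1nn(X_train, y_train, x, steps=1000):
    """Search along one line for the smallest perturbation of x that
    changes its 1-NN label: step toward the nearest training point
    whose label differs from the current prediction."""
    y0 = nn_predict(X_train, y_train, x)
    other = X_train[y_train != y0]
    target = other[np.argmin(np.linalg.norm(other - x, axis=1))]
    for t in np.linspace(0.0, 1.0, steps):
        x_adv = (1 - t) * x + t * target
        if nn_predict(X_train, y_train, x_adv) != y0:
            return x_adv, np.linalg.norm(x_adv - x)
    return None, np.inf  # unreachable for t = 1, where x_adv = target

# Toy data: two slightly overlapping Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(2.5, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
x_adv, eps = flip_1nn(X, y, np.array([0.0, 0.0]))
print(f"label flipped with L2 perturbation {eps:.3f}")

Because only a single search direction is tried, the perturbation found this way is an upper bound on the minimal adversarial perturbation for 1-NN; smaller perturbations may exist in other directions.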

