Corpus ID: 212717703

When are Non-Parametric Methods Robust?

@inproceedings{Bhattacharjee2020WhenAN,
  title={When are Non-Parametric Methods Robust?},
  author={Robi Bhattacharjee and Kamalika Chaudhuri},
  booktitle={ICML},
  year={2020}
}
A growing body of research has shown that many classifiers are susceptible to adversarial examples -- small strategic modifications to test inputs that lead to misclassification. In this work, we study general non-parametric methods, with a view towards understanding when they are robust to these modifications. We establish general conditions under which non-parametric methods are r-consistent -- in the sense that they converge to optimally robust and accurate classifiers in the large sample limit.
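
The abstract's central object is r-consistency: convergence to classifiers that are both accurate and robust to perturbations of radius r. As a loose illustration of the underlying robustness notion only (not the paper's method or analysis), the sketch below computes, for a 1-nearest-neighbor classifier, a radius within which its prediction provably cannot change; the bound is just the triangle inequality, and all names in the code are hypothetical.

    # Illustrative sketch: a sufficient robustness-radius check for a
    # 1-nearest-neighbor classifier under L2 perturbations. Not taken from
    # the paper; the certified radius follows from the triangle inequality.
    import numpy as np

    def nn_prediction_and_robust_radius(x, train_X, train_y):
        """Return the 1-NN label at x and a radius r such that any
        perturbation of x with L2 norm < r keeps the same prediction."""
        dists = np.linalg.norm(train_X - x, axis=1)
        pred = train_y[np.argmin(dists)]
        if np.all(train_y == pred):           # only one class present
            return pred, np.inf
        d_same = dists[train_y == pred].min()   # nearest point with the predicted label
        d_other = dists[train_y != pred].min()  # nearest point with any other label
        # If ||delta|| <= r, every distance shifts by at most r, so the
        # prediction is unchanged whenever d_same + r < d_other - r,
        # i.e. r < (d_other - d_same) / 2.
        return pred, max(0.0, (d_other - d_same) / 2.0)

    # Tiny usage example on synthetic, well-separated data.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        train_X = np.vstack([rng.normal(0, 0.1, (20, 2)),
                             rng.normal(3, 0.1, (20, 2))])
        train_y = np.array([0] * 20 + [1] * 20)
        label, radius = nn_prediction_and_robust_radius(
            np.array([0.1, 0.0]), train_X, train_y)
        print(label, radius)

When the data is well separated, the gap d_other - d_same stays bounded away from zero and so does this certified radius; the sketch is only meant to make the notion of a robust prediction concrete.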