Adversarial examples from computational constraints

Sébastien Bubeck, Eric Price, Ilya P. Razenshteyn
Why are classifiers in high dimension vulnerable to “adversarial” perturbations? We show that this is likely not due to information-theoretic limitations, but may instead stem from computational constraints. First, we prove that for a broad set of classification tasks, the mere existence of a robust classifier implies that it can be found by a possibly exponential-time algorithm with relatively few training examples. Then we give a particular classification task where learning a robust…
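As a toy illustration of the high-dimensional vulnerability the abstract refers to (this sketch is not from the paper itself), consider a linear classifier sign(w · x): a perturbation that is tiny in every coordinate can still flip the prediction, because its cumulative effect on w · x grows with the dimension d. All names below are illustrative.

```python
import numpy as np

# Toy example: flipping a linear classifier with a per-coordinate-tiny perturbation.
rng = np.random.default_rng(0)
d = 10_000                       # high dimension
w = rng.choice([-1.0, 1.0], d)   # classifier weights, sign(w . x) is the label
x = rng.normal(0.0, 1.0, d)      # a sample point

margin = w @ x                   # |margin| is typically ~ sqrt(d)
eps = 2.0 * abs(margin) / d      # per-coordinate budget, ~ 2/sqrt(d)
delta = -np.sign(margin) * eps * np.sign(w)  # worst-case direction for this w

# w @ delta = -2 * margin, so the sign of w @ (x + delta) is reversed
assert np.sign(w @ x) != np.sign(w @ (x + delta))
```

Each coordinate of `delta` has magnitude only about 2/√d (here ≈ 0.02), yet the label changes; this is the information-theoretic-versus-computational puzzle the paper then examines: such robust-vs-vulnerable behavior may persist not because robust classifiers fail to exist, but because finding them efficiently is hard.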


