Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation

@inproceedings{Hein2017FormalGO,
  title={Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation},
  author={Matthias Hein and Maksym Andriushchenko},
  booktitle={NIPS},
  year={2017}
}
Recent work has shown that state-of-the-art classifiers are quite brittle, in the sense that a small adversarial change to an input that was originally classified correctly with high confidence leads to a wrong classification, again with high confidence. This raises concerns that such classifiers are vulnerable to attacks and calls into question their usage in safety-critical systems. We show in this paper for the first time formal guarantees on the robustness of a classifier by giving instance-specific…
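The instance-specific guarantee mentioned in the abstract is a lower bound on how large an adversarial perturbation must be before the predicted class can change. A minimal sketch of such a bound, for the special case of a linear multiclass classifier f(x) = Wx + b, where the bound is exact and given in closed form. The function name and test setup are illustrative, not from the paper; the paper's contribution is extending this idea to nonlinear classifiers via local Lipschitz bounds.

```python
import numpy as np

def linear_robustness_lower_bound(W, b, x, p=2):
    """Instance-specific robustness radius for a linear classifier f(x) = W x + b.

    For a linear classifier, the smallest l_p perturbation (p in (1, inf])
    that changes the predicted class c is exactly
        min_{j != c} (f_c(x) - f_j(x)) / ||w_c - w_j||_q,
    where 1/p + 1/q = 1 (the dual norm). This is a sketch of the kind of
    instance-specific certificate the paper generalizes, not the paper's code.
    """
    scores = W @ x + b
    c = int(np.argmax(scores))
    # Dual norm exponent: q = p/(p-1); for p = inf the dual norm is l_1.
    q = 1.0 if p == np.inf else p / (p - 1)
    radii = []
    for j in range(len(scores)):
        if j == c:
            continue
        diff = W[c] - W[j]
        radii.append((scores[c] - scores[j]) / np.linalg.norm(diff, ord=q))
    return min(radii)
```

Any perturbation with l_p norm strictly below the returned radius provably cannot flip the prediction, which is exactly the "formal guarantee" flavor of result the abstract refers to.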