Model Uncertainty for Adversarial Examples using Dropouts

Abstract

An image can undergo a visually imperceptible change and yet be confidently misclassified by a trained neural network. Puzzled by this counter-intuitive behaviour, researchers have undertaken considerable work in search of an explanation for the phenomenon and, more importantly, of ways to impart robustness against adversarial misclassification. This…
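The dropout-based uncertainty the title refers to is typically obtained via Monte Carlo dropout (Gal and Ghahramani, 2016): keep dropout active at test time, run several stochastic forward passes, and treat the spread of the predictions as a measure of model uncertainty. Below is a minimal sketch of this procedure in PyTorch; SmallNet, mc_dropout_predict, and all hyperparameters are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    # A hypothetical classifier with dropout, standing in for the trained network.
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.drop = nn.Dropout(p=0.5)
        self.fc2 = nn.Linear(256, num_classes)

    def forward(self, x):
        return self.fc2(self.drop(F.relu(self.fc1(x))))

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    # Run n_samples stochastic forward passes with dropout active and return
    # the mean predictive distribution together with its predictive entropy.
    model.train()  # keep dropout stochastic at test time (Monte Carlo dropout)
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)  # averaged softmax over the dropout samples
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

model = SmallNet()
x = torch.randn(8, 784)  # a batch of flattened images
mean_probs, uncertainty = mc_dropout_predict(model, x)
print(uncertainty)

Under this view, adversarial inputs are expected to yield a higher predictive entropy than clean inputs, so thresholding the uncertainty gives one possible detection criterion.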

Cite this paper

@inproceedings{Rawat2016ModelUF,
  title={Model Uncertainty for Adversarial Examples using Dropouts},
  author={Ambrish Rawat},
  year={2016}
}