Model Uncertainty for Adversarial Examples using Dropouts

@inproceedings{Rawat2016ModelUF,
  title={Model Uncertainty for Adversarial Examples using Dropouts},
  author={Ambrish Rawat},
  year={2016}
}
An image can undergo a visually imperceptible change and yet be confidently misclassified by a trained neural network. Puzzled by this counter-intuitive behaviour, researchers have undertaken considerable work in search of an explanation for this phenomenon and, more importantly, of ways to impart robustness against adversarial misclassification. This thesis is a first step towards investigating the effect of adversarial misclassification on Bayesian neural networks. With…
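The "Dropouts" in the title refer to the standard Monte Carlo dropout technique for estimating model uncertainty: keep dropout active at test time and average many stochastic forward passes, using the spread across passes as an uncertainty signal. The sketch below illustrates the idea on a hypothetical two-layer network with random (untrained) weights; the network, weights, and drop rate are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer classifier with fixed random weights,
# standing in for a trained network (4 inputs, 16 hidden units, 3 classes).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def forward(x, drop_rate=0.5):
    """One stochastic forward pass with dropout kept active at test time."""
    h = np.maximum(x @ W1, 0.0)             # ReLU hidden layer
    mask = rng.random(h.shape) > drop_rate  # Bernoulli dropout mask
    h = h * mask / (1.0 - drop_rate)        # inverted-dropout scaling
    logits = h @ W2
    e = np.exp(logits - logits.max())       # numerically stable softmax
    return e / e.sum()

def mc_dropout_predict(x, T=100):
    """Average T stochastic passes; the per-class standard deviation
    across passes is a simple measure of model uncertainty."""
    probs = np.stack([forward(x) for _ in range(T)])
    return probs.mean(axis=0), probs.std(axis=0)

mean, std = mc_dropout_predict(np.ones(4))
```

An adversarially perturbed input would be fed through `mc_dropout_predict` in the same way; the hypothesis explored in this line of work is that the predictive uncertainty (`std`, or the entropy of `mean`) behaves differently on adversarial inputs than on clean ones.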
