Adversarial attacks hidden in plain sight

@article{Gpfert2019AdversarialAH,
  title={Adversarial attacks hidden in plain sight},
  author={Jan Philip G{\"o}pfert and Heiko Wersing and Barbara Hammer},
  journal={ArXiv},
  year={2019},
  volume={abs/1902.09286}
}
Convolutional neural networks have achieved a string of successes in recent years, but their lack of interpretability remains a serious issue. Adversarial examples are designed to deliberately fool neural networks into making any desired incorrect classification, potentially with very high certainty. Several defensive approaches increase robustness against adversarial attacks, demanding attacks of greater magnitude, which lead to visible artifacts. By considering human visual…
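
The abstract points at a trade-off: attacks of greater magnitude defeat more robust models but produce visible artifacts. As a generic illustration of the kind of gradient-based attack being discussed (not the authors' own technique, which builds on human visual perception), below is a minimal Fast Gradient Sign Method sketch (Goodfellow et al., 2015) in PyTorch, where epsilon directly sets the perturbation magnitude. The names model, x, and y are hypothetical placeholders.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # Perturb each input by +/- epsilon in the direction that increases
    # the classification loss for the true labels y.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Clamp back to valid image range [0, 1]. A larger epsilon fools more
    # robust models but makes the perturbation visibly apparent.
    return x_adv.clamp(0.0, 1.0).detach()

# Usage sketch (hypothetical classifier and data):
# x_adv = fgsm_attack(model, images, labels, epsilon=0.03)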