Adversarial attacks hidden in plain sight

@article{Gpfert2019AdversarialAH,
  title={Adversarial attacks hidden in plain sight},
  author={Jan Philip G{\"o}pfert and Heiko Wersing and Barbara Hammer},
  journal={ArXiv},
  year={2019},
  volume={abs/1902.09286}
}
  • Jan Philip Göpfert, Heiko Wersing, Barbara Hammer
  • Published 2019
  • Mathematics, Computer Science
  • ArXiv
  • Convolutional neural networks have been used to achieve a string of successes during recent years, but their lack of interpretability remains a serious issue. Adversarial examples are designed to deliberately fool neural networks into making any desired incorrect classification, potentially with very high certainty. Several defensive approaches increase robustness against adversarial attacks, demanding attacks of greater magnitude, which lead to visible artifacts. By considering human visual…
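
  The abstract describes targeted adversarial examples: inputs perturbed within a small norm bound so that a network confidently predicts an attacker-chosen class. As a point of reference, the sketch below shows a one-step targeted attack in the style of the fast gradient sign method (FGSM). It is a generic baseline, not the localized attack the paper itself studies, and the model, inputs, and epsilon value are illustrative placeholders.

    import torch
    import torch.nn.functional as F

    def targeted_fgsm(model, x, target, epsilon=0.03):
        # One-step targeted FGSM: step in the direction that decreases the
        # loss of the attacker-chosen target class, keeping the perturbation
        # within an epsilon-ball in the L-infinity norm.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        loss.backward()
        x_adv = x_adv - epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    # Illustrative usage with a toy linear classifier on random "images" in [0, 1].
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(4, 3, 32, 32)
    target = torch.randint(0, 10, (4,))   # attacker-chosen labels
    x_adv = targeted_fgsm(model, x, target)
    print((x_adv - x).abs().max())        # perturbation size, at most epsilon

  Stronger attacks iterate this step with a projection back onto the epsilon-ball (projected gradient descent). Defenses that force the attacker to use a larger epsilon are what produce the visible artifacts the abstract refers to.
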
    5 Citations

    • Recovering Localized Adversarial Attacks
    • Adversarial examples and where to find them
    • Adversarial Robustness of Quantum Machine Learning Models
    • Adversarial Robustness Curves (3 citations)
    • Interpreting Fine-Grained Dermatological Classification by Deep Learning (8 citations)
