Enhancing Resilience of Deep Learning Networks By Means of Transferable Adversaries

@article{Seiler2020EnhancingRO,
  title={Enhancing Resilience of Deep Learning Networks By Means of Transferable Adversaries},
  author={M. Seiler and H. Trautmann and P. Kerschke},
  journal={2020 International Joint Conference on Neural Networks (IJCNN)},
  year={2020},
  pages={1-8}
}
Artificial neural networks in general, and deep learning networks in particular, have established themselves as popular and powerful machine learning algorithms. While the often tremendous size of these networks is beneficial when solving complex tasks, the sheer number of parameters also renders such networks vulnerable to malicious behavior such as adversarial perturbations, which can change a model's classification decision. Moreover, while single-step adversaries can…
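The abstract breaks off at "single-step adversaries"; as context, the canonical single-step adversary is the fast gradient sign method (FGSM). Below is a minimal sketch of an FGSM-style attack, assuming a PyTorch classifier; the function name fgsm_attack and its signature are illustrative and not the paper's own implementation.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Single-step adversary (FGSM-style): move each input one step of
    size epsilon in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed inputs inside the valid pixel range [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()
```

Such perturbations are often transferable: an adversarial example crafted against one (surrogate) network frequently fools a differently trained network as well, which is the property the paper's title refers to.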
