Wild patterns: Ten years after the rise of adversarial machine learning

@article{Biggio2018WildPT,
  title={Wild patterns: Ten years after the rise of adversarial machine learning},
  author={B. Biggio and F. Roli},
  journal={Pattern Recognit.},
  year={2018},
  volume={84},
  pages={317-331}
}
Abstract

Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations, carefully crafted either at training or at test time, can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures…
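The abstract distinguishes perturbations crafted at training time (poisoning) from those crafted at test time (evasion). As a concrete illustration of the test-time case, below is a minimal sketch of one canonical evasion attack, the fast gradient sign method (FGSM); this is not the paper's own code, and the classifier, inputs, and epsilon are illustrative placeholders assuming a differentiable PyTorch model with inputs in [0, 1].

# Minimal FGSM-style evasion sketch (illustrative; not from the surveyed paper).
# Assumes `model` is a differentiable PyTorch classifier over inputs in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x shifted by epsilon along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # A single ascent step on the input maximizes the loss locally,
    # which is often enough to flip the predicted class.
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0)

Training-time (poisoning) attacks instead tamper with the training data; that setting is covered by references listed below, e.g. "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization".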
90 Citations

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning (417 citations)
The security of machine learning in an adversarial setting: A survey (35 citations)
Adversarial Security Attacks and Perturbations on Machine Learning and Deep Learning Methods (4 citations)
FADER: Fast Adversarial Example Rejection (1 citation)
Ensemble adversarial black-box attacks against deep learning systems (5 citations)
Fuzzy classification boundaries against adversarial network attacks (2 citations)
Vulnerability of classifiers to evolutionary generated adversarial examples (3 citations)
Boosting the Transferability of Adversarial Samples via Attention. W. Wu, Yuxin Su, et al. (incl. Yu-Wing Tai). CVPR 2020 (5 citations)

References

Showing 1-10 of 143 references

Towards Deep Learning Models Resistant to Adversarial Attacks (3,092 citations)
The Limitations of Deep Learning in Adversarial Settings (1,967 citations)
On Security and Sparsity of Linear Classifiers for Adversarial Settings (22 citations)
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks (1,608 citations)
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization (213 citations)
Practical Black-Box Attacks against Machine Learning (1,623 citations; highly influential)
Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics. Xin Li and F. Li. ICCV 2017 (208 citations)
Towards Evaluating the Robustness of Neural Networks (3,166 citations)
Secure Kernel Machines against Evasion Attacks (51 citations)