Wild patterns: Ten years after the rise of adversarial machine learning
@article{Biggio2018WildPT,
  title   = {Wild patterns: Ten years after the rise of adversarial machine learning},
  author  = {B. Biggio and F. Roli},
  journal = {Pattern Recognit.},
  year    = {2018},
  volume  = {84},
  pages   = {317--331}
}
Abstract: Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures…
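The test-time perturbations the abstract refers to can be illustrated with a minimal gradient-sign sketch in the spirit of FGSM. The linear model, weights, and epsilon below are hypothetical values chosen only for illustration, not taken from the paper:

```python
import numpy as np

# Toy linear classifier: score = w @ x + b, predict class 1 if score > 0.
# For a linear model the gradient of the score w.r.t. the input is just w,
# so stepping against sign(w) is the cheapest way to lower the score.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

x = np.array([0.4, -0.3, 0.2])   # clean input
clean_score = w @ x + b          # 1.2 -> class 1

eps = 0.6                        # perturbation budget (hypothetical)
x_adv = x - eps * np.sign(w)     # FGSM-style step against the gradient sign
adv_score = w @ x_adv + b        # -0.9 -> prediction flipped to class 0
```

The same sign-of-gradient idea carries over to deep networks, where the input gradient is obtained by backpropagation rather than read off the weights.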