Wild patterns: Ten years after the rise of adversarial machine learning

@article{Biggio2018WildPT,
  title={Wild patterns: Ten years after the rise of adversarial machine learning},
  author={Battista Biggio and Fabio Roli},
  journal={Pattern Recognit.},
  year={2018},
  volume={84},
  pages={317-331}
}
Abstract
Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures…
TLDR: This tutorial introduces the fundamentals of adversarial machine learning to the security community, and presents novel techniques that have been recently proposed to assess the performance of pattern classifiers and deep learning algorithms under attack, evaluate their vulnerabilities, and implement defense strategies that make learning algorithms more robust to attacks.
Citations

The security of machine learning in an adversarial setting: A survey
TLDR: This work presents a comprehensive overview of the investigation of the security properties of ML algorithms under adversarial settings, and analyzes the ML security model to develop a blueprint for this interdisciplinary research area.
Adversarial Security Attacks and Perturbations on Machine Learning and Deep Learning Methods
TLDR: This review paper targets new researchers in the cybersecurity domain who may seek to acquire basic knowledge of machine learning and deep learning models and algorithms, as well as of the relevant adversarial security attacks and perturbations.
Adversarial Machine Learning for Cybersecurity and Computer Vision: Current Developments and Challenges
  • B. Xi
  • Computer Science
  • ArXiv
  • 2021
TLDR: A comprehensive overview of adversarial machine learning focusing on two application domains, cybersecurity and computer vision, that discusses three main categories of attacks against machine learning techniques: poisoning attacks, evasion attacks, and privacy attacks.
Analysis of Security of Machine Learning and a proposition of assessment pattern to deal with adversarial attacks
TLDR: A taxonomy that helps to understand and analyze the security of machine learning models is presented, and common methods proposed to protect systems built on machine learning models from adversaries are analyzed.
FADER: Fast Adversarial Example Rejection
TLDR: FADER, a novel technique for speeding up detection-based methods, is introduced; by employing RBF networks as detectors and fixing the number of required prototypes, the runtime complexity of adversarial example detectors can be controlled.
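A minimal sketch of the detector architecture this summary describes, assuming the prototypes and mixing weights have already been learned offline; rbf_detector_scores, prototypes, weights, and gamma are illustrative names, not FADER's actual API.

import numpy as np

def rbf_detector_scores(x, prototypes, weights, gamma=1.0):
    # x: (n, d) inputs; prototypes: (k, d) fixed RBF centers; weights: (k,).
    # Fixing k in advance is what bounds the detector's runtime cost.
    d2 = ((x[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)  # (n, k) squared distances
    phi = np.exp(-gamma * d2)                                          # RBF activations
    return phi @ weights  # higher score = flagged as adversarial, by this sketch's convention
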
Ensemble adversarial black-box attacks against deep learning systems
TLDR: Experimental results show that the proposed ensemble adversarial black-box attack strategies can successfully attack DL systems protected by defense mechanisms such as adversarial training and ensemble adversarial training, and that greater diversity in the substitute ensemble enables stronger transferability.
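A rough sketch of the ensemble transfer idea behind such attacks, assuming a list of substitute models that each expose a hypothetical grad_loss(x, y) returning the loss gradient with respect to the input; the paper's actual construction may differ.

import numpy as np

def ensemble_fgsm(x, y, substitute_grads, eps=0.03):
    # Average the input-loss gradients of several diverse substitute models,
    # then take one FGSM-style signed step; the resulting perturbation is
    # transferred to the black-box target model.
    g = np.mean([grad(x, y) for grad in substitute_grads], axis=0)
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)
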
Fuzzy classification boundaries against adversarial network attacks
TLDR: This paper proposes to blur classification boundaries in order to enhance machine learning robustness and improve the detection of adversarial samples that exploit learning weaknesses.
Vulnerability of classifiers to evolutionary generated adversarial examples
TLDR: An evolutionary algorithm is proposed that can generate adversarial examples for any machine learning model in the black-box attack scenario; the examples are found without access to the model's parameters, only by querying the model at hand.
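A toy (1+λ)-style evolutionary search in the spirit of this summary, assuming only query access to a hypothetical predict_proba(x) function; the population size, mutation scale, and L-infinity budget are illustrative choices, not the paper's settings.

import numpy as np

def evolve_adversarial(x, true_label, predict_proba, eps=0.1,
                       pop_size=20, generations=200, sigma=0.02, seed=0):
    rng = np.random.default_rng(seed)
    best = x.copy()
    best_fit = predict_proba(best)[true_label]               # fitness: confidence in the true class
    for _ in range(generations):
        noise = rng.normal(0.0, sigma, size=(pop_size,) + x.shape)
        children = np.clip(best + noise, x - eps, x + eps)   # stay inside the perturbation budget
        children = np.clip(children, 0.0, 1.0)
        fits = np.array([predict_proba(c)[true_label] for c in children])
        if fits.min() < best_fit:                            # keep the child that lowers the
            best, best_fit = children[fits.argmin()], fits.min()  # true-class confidence most
        if predict_proba(best).argmax() != true_label:       # evasion achieved
            break
    return best
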
Theoretical Investigation of Generalization Bounds for Adversarial Learning of Deep Neural Networks
Recent studies have shown that many machine learning models are vulnerable to adversarial attacks. Much remains unknown concerning the generalization error of deep neural networks (DNNs) for…

References

Showing 1-10 of 143 references
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR: This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
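A compact sketch of the projected gradient descent (PGD) attack used in this robust-optimization view, assuming a hypothetical grad_loss(x, y) that returns the gradient of the training loss with respect to the input; the step size and budget are illustrative.

import numpy as np

def pgd_linf(x, y, grad_loss, eps=8/255, alpha=2/255, steps=40, seed=0):
    rng = np.random.default_rng(seed)
    x_adv = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)  # random start
    for _ in range(steps):
        g = grad_loss(x_adv, y)                   # ascend the loss on the current iterate
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the L-infinity ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv

Adversarial training in this framework then minimizes the loss on such worst-case iterates rather than on the clean inputs.
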
On Security and Sparsity of Linear Classifiers for Adversarial Settings
TLDR: This work focuses on the vulnerability of linear classifiers to evasion attacks, and proposes a novel octagonal regularizer that can improve classifier security and sparsity in real-world application examples including spam and malware detection.
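The octagonal regularizer can be read as a weighted mix of the l-infinity and l1 norms of the weight vector (its unit ball is octagon-shaped), trading evasion robustness against sparsity; the sketch below is an assumption-level illustration, with rho as a hypothetical trade-off parameter rather than the paper's exact formulation.

import numpy as np

def octagonal_reg(w, rho=0.5):
    # Weighted combination of the l-infinity norm (spreads weights evenly,
    # which hardens evasion) and the l1 norm (which promotes sparsity).
    return rho * np.abs(w).max() + (1.0 - rho) * np.abs(w).sum()
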
The Limitations of Deep Learning in Adversarial Settings
TLDR: This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
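The saliency-map construction at the heart of this attack family can be sketched as below, assuming the forward Jacobian of the class outputs with respect to the input features is available as an (n_classes, n_features) array; the names are illustrative.

import numpy as np

def saliency_map(jacobian, target):
    # A feature is salient for the target class if increasing it raises the
    # target output while lowering the summed outputs of all other classes.
    dt = jacobian[target]
    d_other = jacobian.sum(axis=0) - dt
    salient = (dt > 0) & (d_other < 0)
    return np.where(salient, dt * np.abs(d_other), 0.0)  # perturb the top-scoring features first
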
ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
TLDR: An effective black-box attack that only has access to the input (images) and the output (confidence scores) of a targeted DNN is proposed, sparing the need for training substitute models and avoiding the loss in attack transferability.
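A simplified sketch of the core zeroth-order trick: estimate input gradients of an attack loss by symmetric finite differences over confidence-score queries, here on a random subset of coordinates; loss_fn and the coordinate batch size are illustrative assumptions.

import numpy as np

def zoo_gradient(loss_fn, x, h=1e-4, n_coords=128, seed=0):
    # loss_fn(x) queries the black-box model and returns a scalar attack loss
    # built from its confidence scores; no model internals are touched.
    rng = np.random.default_rng(seed)
    grad = np.zeros(x.size)
    flat = x.reshape(-1).astype(float)
    coords = rng.choice(x.size, size=min(n_coords, x.size), replace=False)
    for i in coords:
        e = np.zeros_like(flat)
        e[i] = h
        grad[i] = (loss_fn((flat + e).reshape(x.shape)) -
                   loss_fn((flat - e).reshape(x.shape))) / (2.0 * h)
    return grad.reshape(x.shape)  # plug into any first-order attack (e.g. signed steps)
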
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
TLDR: The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
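The mechanism boils down to training a second network on labels softened by a high softmax temperature T; below is a minimal sketch of the temperature-softened labels, with the teacher/student training loop left out as an assumption.

import numpy as np

def softmax_with_temperature(logits, T=20.0):
    # T > 1 flattens the output distribution; the teacher's soft labels at
    # temperature T are used to train the distilled student, which is then
    # deployed back at T = 1.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
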
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
TLDR: This work proposes a novel poisoning algorithm based on the idea of back-gradient optimization, able to target a wider class of learning algorithms, trained with gradient-based procedures, including neural networks and deep learning architectures, and empirically evaluates its effectiveness on several application examples.
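At the outer level the attack is gradient ascent on the poisoning points; the sketch below abstracts the hard part, the hyper-gradient that back-gradient optimization computes by reversing the learner's training updates, into a hypothetical grad_outer(x_p, y_p) callback.

import numpy as np

def refine_poison_point(x_p, y_p, grad_outer, steps=100, lr=0.1):
    # grad_outer is assumed to return d(attacker objective)/d(x_p), e.g. the
    # validation loss of the model retrained with (x_p, y_p) included;
    # the paper obtains this gradient efficiently via back-gradient optimization.
    for _ in range(steps):
        x_p = np.clip(x_p + lr * grad_outer(x_p, y_p), 0.0, 1.0)  # ascend the outer objective
    return x_p
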
Practical Black-Box Attacks against Machine Learning
TLDR: This work introduces the first practical demonstration of an attacker controlling a remotely hosted DNN with no knowledge of its internals or training data, and finds that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
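A rough sketch of one round of the substitute-training loop this attack relies on: label a small dataset with the remote oracle, then grow it along the substitute's input gradients (Jacobian-based augmentation); oracle_label and substitute_grad are hypothetical callbacks, and lam is an illustrative step size.

import numpy as np

def augment_substitute_data(X, oracle_label, substitute_grad, lam=0.1):
    # Label the current points with the remote model, then add one new point per
    # sample by stepping along the sign of the substitute's gradient for that
    # label; the enlarged set better traces the oracle's decision boundary.
    y = np.array([oracle_label(x) for x in X])
    X_aug = np.clip(
        np.stack([x + lam * np.sign(substitute_grad(x, yi)) for x, yi in zip(X, y)]),
        0.0, 1.0)
    y_aug = np.array([oracle_label(x) for x in X_aug])
    return np.concatenate([X, X_aug]), np.concatenate([y, y_aug])

Adversarial examples crafted with white-box attacks on the trained substitute are then transferred to the remote model.
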
Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics
  • Xin Li, Fuxin Li
  • Computer Science
  • 2017 IEEE International Conference on Computer Vision (ICCV)
  • 2017
TLDR: After detecting adversarial examples, it is shown that many of them can be recovered by simply performing a small average filter on the image, which should lead to more insights about the classification mechanisms in deep convolutional neural networks.
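The recovery step mentioned in this summary is just local average (box) filtering; a minimal sketch using SciPy, assuming an H x W x C image with values in [0, 1].

import numpy as np
from scipy.ndimage import uniform_filter

def average_filter_recover(x_adv, size=3):
    # Smooth each channel with a small box filter (spatial dimensions only);
    # the paper reports that this often removes the adversarial perturbation
    # and restores the original prediction.
    return uniform_filter(x_adv.astype(float), size=(size, size, 1))
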
Towards Evaluating the Robustness of Neural Networks
TLDR: It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability are introduced.
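The attacks in this paper minimize a distance-plus-margin objective over the perturbed input; below is a sketch of the L2 variant's objective, with logits(x) as a hypothetical function returning the model's pre-softmax outputs (the actual attack also uses a change of variables and optimizes this objective with gradient descent).

import numpy as np

def cw_l2_objective(x_adv, x, logits, target, c=1.0, kappa=0.0):
    # Squared L2 distance plus a hinge on the logit margin: the margin term
    # goes negative (and stops pushing) once the target class beats every
    # other class by at least kappa.
    dist = np.sum((x_adv - x) ** 2)
    z = logits(x_adv)
    best_other = np.max(np.delete(z, target))
    margin = max(best_other - z[target], -kappa)
    return dist + c * margin
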
Secure Kernel Machines against Evasion Attacks
TLDR: This work aims to develop secure kernel machines against evasion attacks that are not computationally more demanding than their non-secure counterparts, discusses the security of nonlinear kernel machines, and shows that a proper choice of the kernel function is crucial.