Corpus ID: 235899333

GGT: Graph-Guided Testing for Adversarial Sample Detection of Deep Neural Network

@article{Chen2021GGTGT,
  title={GGT: Graph-Guided Testing for Adversarial Sample Detection of Deep Neural Network},
  author={Zuohui Chen and Ren Wang and Jingyang Xiang and Yue Yu and Xin Xia and Shouling Ji and Qi Xuan and Xiaoniu Yang},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.07043}
}
Deep Neural Networks (DNNs) are known to be vulnerable to adversarial samples, whose detection is crucial for the wide application of these models. Recently, a number of deep testing methods from software engineering were proposed to find vulnerabilities of DNN systems, and one of them, Model Mutation Testing (MMT), was successfully used to detect various adversarial samples generated by different kinds of adversarial attacks. However, the mutated models in MMT are always huge…


References

Showing 1-10 of 48 references
Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing
This work proposes a measure of 'sensitivity' and shows empirically that normal samples and adversarial samples have distinguishable sensitivity, and integrates statistical hypothesis testing and model mutation testing to check whether an input sample is likely to be normal or adversarial at runtime by measuring its sensitivity.
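As a rough illustration (not the authors' implementation) of the sensitivity measure described above, the sketch below computes a label change rate over a set of mutated models and flags inputs whose labels flip unusually often. Here original_model, mutated_models, and the 0.05 threshold are placeholders, and the paper itself calibrates the decision with sequential hypothesis testing (SPRT) rather than a fixed cutoff.

import torch

def label_change_rate(x, original_model, mutated_models):
    # Fraction of mutated models whose prediction differs from the original model's.
    # x is assumed to be a single-sample batch, e.g. shape (1, C, H, W).
    with torch.no_grad():
        base_label = original_model(x).argmax(dim=1).item()
        flips = sum(
            int(m(x).argmax(dim=1).item() != base_label)
            for m in mutated_models
        )
    return flips / len(mutated_models)

def looks_adversarial(x, original_model, mutated_models, threshold=0.05):
    # Threshold is illustrative; the paper derives it from the sensitivity
    # of known-normal samples and applies SPRT at runtime.
    return label_change_rate(x, original_model, mutated_models) > threshold

The intuition is that adversarial inputs sit close to the decision boundary, so small random mutations of the model flip their labels far more often than they flip the labels of normal inputs.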
NIC: Detecting Adversarial Samples with Neural Network Invariant Checking
This paper analyzes the internals of DNN models under various attacks and identifies two common exploitation channels: the provenance channel and the activation value distribution channel, and proposes a novel technique to extract DNN invariants and use them to perform runtime adversarial sample detection.
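The activation value distribution channel lends itself to a simple sketch: fit statistics of hidden-layer activations on benign data and flag test inputs whose activations deviate strongly. NIC itself trains one-class classifiers per layer, so the per-unit mean/std check below is only a stand-in for illustration, and the activation-extraction plumbing and 4-sigma bound are assumptions.

import numpy as np

def fit_value_invariants(benign_activations):
    # benign_activations: dict layer_name -> array of shape (n_samples, n_units)
    return {
        layer: (acts.mean(axis=0), acts.std(axis=0) + 1e-8)
        for layer, acts in benign_activations.items()
    }

def violates_invariants(test_activations, invariants, n_sigma=4.0):
    # Return True if any layer's activation falls far outside the benign range.
    for layer, acts in test_activations.items():
        mean, std = invariants[layer]
        if np.any(np.abs(acts - mean) / std > n_sigma):
            return True
    return False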
Adversarial Example Detection and Classification With Asymmetrical Adversarial Training
This paper presents an adversarial example detection method that provides a performance guarantee against norm-constrained adversaries, and uses learned class-conditional generative models to define generative detection/classification models that are both robust and more interpretable.
Adversarial Attacks on Deep-learning Models in Natural Language Processing
A systematic survey that presents preliminary knowledge of NLP and related seminal works in computer vision, collects all related academic works since the topic's first appearance in 2017, and analyzes 40 representative works in a comprehensive way.
One Pixel Attack for Fooling Deep Neural Networks
This paper proposes a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE), which requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE.
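A condensed sketch of how a one-pixel attack can be driven by differential evolution, assuming a black-box predict_proba function that returns class probabilities for a single HxWx3 image with values in [0, 1]; the population size, iteration count, and (x, y, r, g, b) candidate encoding below are illustrative choices, not the paper's exact setup.

import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(image, true_label, predict_proba, maxiter=30, popsize=20):
    h, w, _ = image.shape
    # Candidate: pixel coordinates plus an RGB value in [0, 1].
    bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)]

    def apply(candidate):
        x, y, r, g, b = candidate
        perturbed = image.copy()
        perturbed[int(x), int(y)] = (r, g, b)
        return perturbed

    def fitness(candidate):
        # Lower confidence in the true class means a better attack candidate.
        return predict_proba(apply(candidate))[true_label]

    result = differential_evolution(
        fitness, bounds, maxiter=maxiter, popsize=popsize, seed=0
    )
    return apply(result.x)

Because the fitness only needs output probabilities, the whole search is black-box: no gradients or internal activations of the target network are used.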
The Limitations of Deep Learning in Adversarial Settings
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty
A large-scale study of the capability of multiple uncertainty metrics to differentiate benign examples (BEs) from adversarial examples (AEs), which makes it possible to characterize the uncertainty patterns of input data; it also proposes an automated testing technique to generate multiple types of uncommon AEs and BEs that are largely missed by existing techniques.
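Two uncertainty metrics of the kind examined in this study, sketched under the assumption of a PyTorch classifier containing dropout layers: softmax prediction entropy and a Monte Carlo dropout variation ratio. The paper's metric set and hyperparameters differ; this is only to make the idea concrete.

import torch
import torch.nn.functional as F

def prediction_entropy(logits):
    # Entropy of the softmax distribution, per input in the batch.
    probs = F.softmax(logits, dim=1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

def mc_dropout_variation_ratio(model, x, n_samples=30):
    # Keep dropout active at inference time and measure how often the
    # predicted label disagrees with the most frequent (mode) label.
    model.train()
    with torch.no_grad():
        labels = torch.stack(
            [model(x).argmax(dim=1) for _ in range(n_samples)]
        )  # shape (n_samples, batch)
    mode_count = labels.mode(dim=0).values.eq(labels).sum(dim=0)
    return 1.0 - mode_count.float() / n_samples

Inputs with high entropy or high variation ratio are the ones such studies treat as uncertain, and adversarial examples tend to concentrate in that region.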
Simple Black-Box Adversarial Attacks on Deep Neural Networks
This work focuses on deep convolutional neural networks and demonstrates that adversaries can easily craft adversarial examples even without any internal knowledge of the target network, and proposes schemes that could serve as a litmus test for designing robust networks.
Towards Evaluating the Robustness of Neural Networks
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that are successful on both distilled and undistilled neural networks with 100% probability.
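The untargeted L2 attack from this paper minimizes the squared perturbation norm plus a margin term that pushes the true-class logit below the best other-class logit, using a tanh change of variables to keep pixels in [0, 1]. The sketch below follows that formulation but with illustrative constants (c, kappa, step count, learning rate) and without the binary search over c used in the paper; x is assumed to be a single image batch of shape (1, C, H, W) with values in [0, 1].

import torch

def cw_l2_attack(model, x, true_label, c=1.0, kappa=0.0, steps=200, lr=0.01):
    # Change of variables: x_adv = 0.5 * (tanh(w) + 1) always lies in [0, 1].
    y = x.clamp(1e-6, 1 - 1e-6) * 2 - 1          # map [0, 1] -> (-1, 1)
    w = (0.5 * torch.log((1 + y) / (1 - y)))      # atanh
    w = w.detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        logits = model(x_adv)                     # shape (1, num_classes)
        mask = torch.ones_like(logits[0], dtype=torch.bool)
        mask[true_label] = False
        best_other = logits[0][mask].max()
        # Margin term: drive the true-class logit below the best other class.
        margin = torch.clamp(logits[0, true_label] - best_other, min=-kappa)
        loss = ((x_adv - x) ** 2).sum() + c * margin
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (0.5 * (torch.tanh(w) + 1)).detach()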
Towards Understanding Adversarial Examples Systematically: Exploring Data Size, Task and Model Factors
This paper shows that adversarial generalization for standard training requires more data than standard generalization, and uncovers the global relationship between generalization and robustness with respect to data size, especially when data is augmented by generative models.