Anomaly detection of adversarial examples using class-conditional generative adversarial networks
@article{Wang2021AnomalyDO,
  title   = {Anomaly detection of adversarial examples using class-conditional generative adversarial networks},
  author  = {Hang Wang and David J. Miller and George Kesidis},
  journal = {Comput. Secur.},
  year    = {2021},
  volume  = {124},
  pages   = {102956}
}
One Citation
Improving Adversarial Robustness with Hypersphere Embedding and Angular-based Regularizations
- Computer Science, ArXiv
- 2023
This paper adds regularization terms to adversarial training (AT) that explicitly enforce weight-feature compactness and inter-class separation, all expressed in terms of angular features, and shows that this angular-AT further improves adversarial robustness.
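A minimal sketch of what "angular features" can mean in this setting, assuming features and classifier weights are L2-normalized onto the unit hypersphere so that logits reduce to scaled cosine similarities; the function name and scale value are illustrative, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def angular_logits(features: torch.Tensor, class_weights: torch.Tensor, scale: float = 16.0) -> torch.Tensor:
    """Cosine-similarity logits from unit-norm features and class weight vectors."""
    f = F.normalize(features, dim=-1)       # project features onto the unit hypersphere
    w = F.normalize(class_weights, dim=-1)  # project per-class weight vectors likewise
    return scale * f @ w.t()                # logits depend only on angles, not magnitudes
```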
References
Deep Residual Learning for Image Recognition
- Computer Science, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
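A minimal sketch of the central idea, an identity shortcut around a small stack of layers, written as a hypothetical PyTorch module (the class name and layer sizes are illustrative):

```python
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Illustrative residual block: output = F(x) + x, where F is two conv layers."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut eases optimization of very deep nets
```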
Explaining and Harnessing Adversarial Examples
- Computer Science, ICLR
- 2015
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature; this view is supported by new quantitative results and gives the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
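The linearity argument motivates the fast gradient sign method (FGSM) introduced in that paper; a minimal sketch, assuming a differentiable PyTorch classifier trained with cross-entropy and inputs scaled to [0, 1] (model and epsilon are placeholders):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """One-step FGSM: move x by eps along the sign of the input gradient of the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Under a locally linear model, a step along sign(grad) maximally increases the loss.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```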
ImageNet: A large-scale hierarchical image database
- Computer Science, 2009 IEEE Conference on Computer Vision and Pattern Recognition
- 2009
A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-Transform Domain
- Computer Science, AAAI
- 2021
This work reveals that normal examples are insensitive to fluctuations occurring at highly curved regions of the decision boundary, whereas adversarial examples (AEs), typically designed over a single domain, exhibit exorbitant sensitivity to such fluctuations; a second classifier with a transformed decision boundary is then designed to detect AEs.
Detecting Adversarial Samples from Artifacts
- Computer Science, ArXiv
- 2017
This paper investigates model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model; the results yield a method for implicit adversarial detection that is oblivious to the attack algorithm.
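A minimal sketch of the uncertainty half of this detector, assuming a network with dropout layers kept active at prediction time (Monte Carlo dropout); the density-estimation component is omitted and all names are illustrative:

```python
import torch

@torch.no_grad()
def mc_dropout_uncertainty(model, x, n_samples=30):
    """Predictive mean and variance under dropout, usable as an anomaly score."""
    model.train()  # keep dropout active; in practice only dropout modules would be switched
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    # Higher variance across dropout samples suggests the input lies off the data manifold.
    uncertainty = probs.var(dim=0).sum(dim=-1)
    return mean_probs, uncertainty
```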
When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time
- Computer Science, Neural Computation
- 2019
A purely unsupervised anomaly detector is proposed that models the joint density of a deep layer using highly suitable null-hypothesis density models, exploits multiple DNN layers, and leverages a source and destination class concept, source-class uncertainty, the class confusion matrix, and DNN weight information in constructing a novel decision statistic grounded in the Kullback-Leibler divergence.
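A minimal sketch of the kind of Kullback-Leibler decision statistic described, assuming class-conditional null density models have already been fit to a deep layer's features; the uniform class prior and all names are assumptions for illustration:

```python
import numpy as np

def kl_detection_statistic(dnn_posterior: np.ndarray, null_log_densities: np.ndarray) -> float:
    """KL divergence between the DNN's class posterior and a null-model posterior."""
    # Posterior implied by the class-conditional null densities under a uniform prior.
    null_posterior = np.exp(null_log_densities - null_log_densities.max())
    null_posterior /= null_posterior.sum()
    eps = 1e-12
    # A large divergence between the two posteriors flags a suspected test-time attack.
    return float(np.sum(dnn_posterior * np.log((dnn_posterior + eps) / (null_posterior + eps))))
```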
Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training
- Computer Science, NeurIPS
- 2021
This paper identifies that gradient exploding in the classifier can cause an undesirable collapse in early training, shows that projecting input vectors onto a unit hypersphere can resolve the problem, and proposes the Data-to-Data Cross-Entropy loss (D2D-CE) to exploit relational information in the class-labeled dataset.
f‐AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks
- Computer Science, Medical Image Anal.
- 2019
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
- Computer Science, Mathematics, ICML
- 2019
This work investigates conditions under which test statistics exist that can reliably detect examples that have been adversarially manipulated in a white-box attack, and shows that it is even possible to correct test-time predictions for adversarial attacks with high accuracy.
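A minimal sketch of the flavor of statistic involved, measuring how pairwise logit differences shift under random input noise; the noise scale, sample count, and function name are illustrative, and thresholding against class-conditional statistics is omitted:

```python
import torch

@torch.no_grad()
def noise_logit_shift(model, x, noise_std=0.05, n_samples=32):
    """Average shift of (logit_y - logit_pred) when Gaussian noise is added to the input."""
    clean_logits = model(x)
    pred = clean_logits.argmax(dim=-1, keepdim=True)
    shifts = []
    for _ in range(n_samples):
        noisy_logits = model(x + noise_std * torch.randn_like(x))
        delta = (noisy_logits - noisy_logits.gather(-1, pred)) \
              - (clean_logits - clean_logits.gather(-1, pred))
        shifts.append(delta)
    # Adversarially manipulated inputs tend to show larger, systematic shifts than clean ones.
    return torch.stack(shifts).mean(dim=0)
```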
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
- Computer Science, AISec@CCS
- 2017
It is concluded that adversarial examples are significantly harder to detect than previously appreciated, and that the properties believed to be intrinsic to adversarial examples are in fact not.