Anomaly detection of adversarial examples using class-conditional generative adversarial networks

@article{Wang2021AnomalyDO,
  title={Anomaly detection of adversarial examples using class-conditional generative adversarial networks},
  author={Hang Wang and David J. Miller and George Kesidis},
  journal={Comput. Secur.},
  year={2021},
  volume={124},
  pages={102956}
}

Improving Adversarial Robustness with Hypersphere Embedding and Angular-based Regularizations

This paper adds regularization terms to adversarial training (AT) that explicitly enforce weight-feature compactness and inter-class separation, both expressed in terms of angular features, and shows that this angular AT further improves adversarial robustness.
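
A minimal sketch of how such angular regularizers might be added to the AT objective is given below; the function names and hyperparameters are hypothetical, and the exact terms used in the paper may differ.

```python
# Hypothetical sketch of angular regularizers for adversarial training (AT):
# one term pulls each normalized feature toward its class weight vector
# (weight-feature compactness), the other pushes distinct class weights apart
# (inter-class separation). The paper's exact formulation may differ.
import torch
import torch.nn.functional as F

def angular_regularizers(features, labels, class_weights):
    # features: (B, D) penultimate-layer features of adversarial examples
    # labels: (B,) ground-truth classes; class_weights: (C, D) last-layer weights
    f = F.normalize(features, dim=1)       # project features onto the unit hypersphere
    w = F.normalize(class_weights, dim=1)  # normalize class weight vectors as well
    # Weight-feature compactness: minimize 1 - cos(angle to own class weight).
    compactness = (1.0 - (f * w[labels]).sum(dim=1)).mean()
    # Inter-class separation: penalize positive cosine similarity between
    # different class weight vectors.
    cos_ww = w @ w.t()
    cos_ww.fill_diagonal_(0.0)
    separation = cos_ww.clamp(min=0).sum() / (w.size(0) * (w.size(0) - 1))
    return compactness, separation

# Assumed total loss: AT cross-entropy + lam1 * compactness + lam2 * separation.
```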

References

SHOWING 1-10 OF 68 REFERENCES

Deep Residual Learning for Image Recognition

This work presents a residual learning framework that eases the training of networks substantially deeper than those used previously, and provides comprehensive empirical evidence that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.
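
For reference, a minimal residual block in PyTorch illustrating the identity shortcut at the heart of the framework; the paper's actual architectures use specific channel counts, strides, and downsampling projections not shown here.

```python
import torch.nn as nn

class BasicBlock(nn.Module):
    """Minimal residual block: the stacked layers learn a residual F(x),
    and the block outputs F(x) + x via an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut: add the input back
```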

Explaining and Harnessing Adversarial Examples

It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature; this view is supported by new quantitative results and gives the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
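
The linearity argument directly motivates the paper's fast gradient sign method (FGSM), which perturbs the input one step in the direction of the sign of the loss gradient. A minimal PyTorch sketch, assuming image inputs normalized to [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step fast gradient sign attack: x_adv = x + eps * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Clamping to [0, 1] assumes normalized image inputs.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```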

ImageNet: A large-scale hierarchical image database

A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity, and much more accurate, than existing image datasets.

Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-Transform Domain

This work reveals that normal examples are insensitive to fluctuations occurring in highly curved regions of the decision boundary, while adversarial examples (AEs), typically designed over a single domain, exhibit exorbitant sensitivity to such fluctuations; it therefore designs another classifier with a transformed decision boundary to detect AEs.
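
A loose sketch of the detection idea, assuming a primal classifier and a hypothetical dual classifier with a transformed decision boundary; the paper's actual dual construction (in a spatial-transform domain) and its decision statistic are more specific than the simple posterior-disagreement score below.

```python
import torch
import torch.nn.functional as F

def inconsistency_score(primal, dual, x, eps=1e-12):
    """Score inputs by the disagreement between the two classifiers' posteriors;
    large values suggest an adversarial example."""
    with torch.no_grad():
        p = F.softmax(primal(x), dim=1)
        q = F.softmax(dual(x), dim=1)
    kl_pq = (p * (p.clamp_min(eps).log() - q.clamp_min(eps).log())).sum(dim=1)
    kl_qp = (q * (q.clamp_min(eps).log() - p.clamp_min(eps).log())).sum(dim=1)
    return 0.5 * (kl_pq + kl_qp)  # symmetric KL between the two posteriors
```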

Detecting Adversarial Samples from Artifacts

This paper investigates model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model; the results yield a method for implicit adversarial detection that is oblivious to the attack algorithm.
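
A hedged sketch of the two cues described, Monte Carlo dropout uncertainty and kernel density estimation on deep features; the bandwidth, sample count, and the rule for combining the two scores are assumptions rather than the paper's exact choices.

```python
import numpy as np
import torch
from sklearn.neighbors import KernelDensity

def mc_dropout_uncertainty(model, x, n_samples=20):
    """Predictive variance from stochastic forward passes with dropout left on."""
    model.train()  # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    return probs.var(dim=0).sum(dim=1)

def fit_class_kdes(train_features, train_labels, bandwidth=1.0):
    """One kernel density estimate per class, fit on deep features of clean data."""
    return {c: KernelDensity(bandwidth=bandwidth).fit(train_features[train_labels == c])
            for c in np.unique(train_labels)}

def density_score(kdes, features, predicted_labels):
    """Low log-density under the predicted class's KDE suggests an adversarial input."""
    return np.array([kdes[c].score_samples(f[None, :])[0]
                     for f, c in zip(features, predicted_labels)])
```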

When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time

A purely unsupervised anomaly detector is proposed that models the joint density of a deep layer using highly suitable null-hypothesis density models, exploits multiple DNN layers, and leverages a source- and destination-class concept, source-class uncertainty, the class confusion matrix, and DNN weight information to construct a novel decision statistic grounded in the Kullback-Leibler divergence.
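
A loose sketch of the density-modeling starting point, assuming class-conditional Gaussian mixture null models on a single deep layer; the actual ADA detector aggregates multiple layers and builds a KL-divergence-based statistic from source/destination classes, the confusion matrix, and DNN weights, none of which is reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_null_models(layer_features, labels, n_components=5):
    """Fit one Gaussian mixture per class on clean deep-layer features."""
    return {c: GaussianMixture(n_components=n_components).fit(layer_features[labels == c])
            for c in np.unique(labels)}

def anomaly_score(null_models, feature, predicted_class):
    """Suspicion rises when the sample fits some other (candidate source) class
    better than the predicted (destination) class."""
    dest_ll = null_models[predicted_class].score(feature[None, :])
    src_ll = max(m.score(feature[None, :])
                 for c, m in null_models.items() if c != predicted_class)
    return src_ll - dest_ll
```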

Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training

This paper identifies that gradient exploding in the classifier can cause an undesirable collapse in early training, shows that projecting input vectors onto a unit hypersphere can resolve the problem, and proposes the Data-to-Data Cross-Entropy loss (D2D-CE) to exploit relational information in the class-labeled dataset.
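
A hedged sketch of a D2D-CE-style loss: embeddings and class proxies are projected onto the unit hypersphere, and negatives are drawn from other samples in the batch with different labels; the margin and temperature values below are assumptions, and the paper should be consulted for the exact formulation.

```python
import torch
import torch.nn.functional as F

def d2d_ce(embeddings, labels, class_proxies, temp=0.1, m_p=0.98, m_n=0.02):
    """Data-to-Data Cross-Entropy-style loss with unit-hypersphere projection."""
    f = F.normalize(embeddings, dim=1)     # project embeddings onto the unit hypersphere
    w = F.normalize(class_proxies, dim=1)
    pos = (f * w[labels]).sum(dim=1)       # sample-to-proxy (positive) similarity
    sim = f @ f.t()                        # sample-to-sample similarities
    neg_mask = (labels[:, None] != labels[None, :]).float()
    pos_term = torch.exp(torch.clamp(pos - m_p, max=0.0) / temp)
    neg_term = (torch.exp(torch.clamp(sim + m_n, min=0.0) / temp) * neg_mask).sum(dim=1)
    return -torch.log(pos_term / (pos_term + neg_term)).mean()
```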

The Odds are Odd: A Statistical Test for Detecting Adversarial Examples

This work investigates conditions under which test statistics exist that can reliably detect examples that have been adversarially manipulated in a white-box attack, and shows that it is even possible to correct test-time predictions for adversarial attacks with high accuracy.
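
A simplified sketch in the spirit of the test: perturb the input with random noise and track how the pairwise logit differences shift; the paper's actual statistic is calibrated per class pair on clean data, which is omitted here.

```python
import torch

def logit_shift_statistic(model, x, noise_std=0.05, n_samples=32):
    """Mean shift of logit gaps (relative to the predicted class) under random noise;
    large positive shifts are taken as evidence of adversarial manipulation."""
    with torch.no_grad():
        logits = model(x)                                    # (B, C)
        pred = logits.argmax(dim=1)
        base_gap = logits - logits.gather(1, pred[:, None])  # gaps vs predicted class
        shifts = []
        for _ in range(n_samples):
            noisy = model(x + noise_std * torch.randn_like(x))
            shifts.append((noisy - noisy.gather(1, pred[:, None])) - base_gap)
        return torch.stack(shifts).mean(dim=0).max(dim=1).values
```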

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods

It is concluded that adversarial examples are significantly harder to detect than previously appreciated, and that the properties believed to be intrinsic to adversarial examples are in fact not intrinsic.
...