Generalizing Universal Adversarial Attacks Beyond Additive Perturbations

@inproceedings{Zhang2020GeneralizingUA,
  title={Generalizing Universal Adversarial Attacks Beyond Additive Perturbations},
  author={Yanghao Zhang and Wenjie Ruan and Fu Lee Wang and Xiaowei Huang},
  booktitle={2020 IEEE International Conference on Data Mining (ICDM)},
  year={2020},
  pages={1412--1417}
}
Previous studies have shown that universal adversarial attacks can fool deep neural networks over a large set of input images with a single, human-invisible perturbation. However, current methods for universal adversarial attacks are based on additive perturbation, which causes misclassification when the perturbation is directly added to the input images. In this paper, for the first time, we show that a universal adversarial attack can also be achieved via non-additive perturbation (e.g…
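As a concrete illustration of the distinction drawn in the abstract, the sketch below contrasts an additive universal perturbation (a single delta added to every image) with a non-additive, spatially transformed one (a single flow field that re-samples pixel locations). This is a minimal sketch, not the paper's algorithm; the function names, the L-infinity budget, and the nearest-neighbour warping are simplifying assumptions.

# Illustrative sketch (not the paper's method): additive vs. non-additive
# universal perturbations applied to a batch of images in NumPy.
import numpy as np

def apply_additive(images, delta, eps=8 / 255):
    """Add one universal perturbation delta (H, W, C) to every image."""
    delta = np.clip(delta, -eps, eps)          # enforce an L-infinity budget
    return np.clip(images + delta, 0.0, 1.0)   # keep pixels in the valid range

def apply_spatial(images, flow):
    """Warp every image with one universal flow field (H, W, 2).

    Nearest-neighbour resampling keeps the example dependency-free; a real
    attack would use differentiable (e.g. bilinear) sampling instead.
    """
    n, h, w, c = images.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + flow[..., 0]), 0, h - 1).astype(int)
    src_x = np.clip(np.round(xs + flow[..., 1]), 0, w - 1).astype(int)
    return images[:, src_y, src_x, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    batch = rng.random((4, 32, 32, 3), dtype=np.float32)      # toy image batch
    delta = rng.uniform(-8 / 255, 8 / 255, size=(32, 32, 3))  # one shared additive perturbation
    flow = rng.uniform(-1.5, 1.5, size=(32, 32, 2))           # one shared flow field
    print(apply_additive(batch, delta).shape)  # (4, 32, 32, 3)
    print(apply_spatial(batch, flow).shape)    # (4, 32, 32, 3)

In both cases the defining property of a universal attack is that the same delta or flow field is reused for every input rather than being optimized per image.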
Citations

Fooling Object Detectors: Adversarial Attacks by Half-Neighbor Masks
TLDR: This paper proposes a Half-Neighbor Masked Projected Gradient Descent based attack, which can generate strong perturbations to fool different kinds of detectors under strict constraints.
Towards the Quantification of Safety Risks in Deep Neural Networks
TLDR: An algorithm, inspired by derivative-free optimization techniques and accelerated by tensor-based parallelization on GPUs, is developed to support efficient computation of safety-risk metrics; it achieves competitive performance in terms of both the tightness of the quantification and the efficiency of computation.
Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications
TLDR: This tutorial aims to provide a comprehensive picture of this emerging direction and to make the community aware of the urgency and importance of designing robust deep learning models for safety-critical data analytics, ultimately enabling end-users to trust deep learning classifiers.
Tutorials on Testing Neural Networks
TLDR: This tutorial walks through the major functionalities of the tools with a few running examples, showing how the developed techniques work, what the results are, and how to interpret them.

References

Showing 1-10 of 70 references.
NAG: Network for Adversary Generation
TLDR: Perturbations crafted by the proposed generative approach, which models the distribution of adversarial perturbations, achieve state-of-the-art fooling rates, exhibit wide variety, and deliver excellent cross-model generalizability.
Generation of Low Distortion Adversarial Attacks via Convex Programming
TLDR: This paper presents a method that generates adversarial examples via convex programming and can produce adversarial examples with lower distortion and higher transferability than the C&W attack, the current state-of-the-art adversarial attack method for DNNs.
Generalizable Data-Free Objective for Crafting Universal Adversarial Perturbations
TLDR: This paper presents a novel, generalizable, and data-free approach for crafting universal adversarial perturbations, and shows that current deep learning models are at increased risk, since the objective generalizes across multiple tasks without requiring training data to craft the perturbation.
Generating Adversarial Examples with Adversarial Networks
TLDR: AdvGAN is proposed to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances, and it achieves a high attack success rate under state-of-the-art defenses compared to other attacks.
Fast Feature Fool: A data independent approach to universal adversarial perturbations
TLDR: This paper proposes a novel data-independent approach to generate image-agnostic perturbations for a range of CNNs trained for object recognition and shows that these perturbations are transferable across multiple network architectures trained on either the same or different data.
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR: This work studies the adversarial robustness of neural networks through the lens of robust optimization and suggests the notion of security against a first-order adversary as a natural and broad security guarantee (a minimal PGD sketch is given after this reference list).
Generative Adversarial Perturbations
TLDR: Novel generative models are proposed for creating adversarial examples, slightly perturbed images that resemble natural images but are maliciously crafted to fool pre-trained models, obviating the need to hand-craft attack methods for each task.
Universal Adversarial Perturbations
TLDR: The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundaries of classifiers and outlines potential security breaches: single directions in the input space that adversaries can exploit to break a classifier on most natural images.
Learning Universal Adversarial Perturbations with Generative Models
TLDR: This work introduces universal adversarial networks, a generative network capable of fooling a target classifier when its generated output is added to a clean sample from a dataset.
Spatially Transformed Adversarial Examples
TLDR: Perturbations generated through spatial transformation can result in large $\mathcal{L}_p$ distances, but extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems.
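Referring back to the entry "Towards Deep Learning Models Resistant to Adversarial Attacks" above, the following is a minimal sketch of the first-order adversary it describes: projected gradient descent (PGD) under an L-infinity budget. To keep the sketch self-contained it attacks a toy logistic-regression model whose gradient is written by hand; the function name, step size, and budget are illustrative assumptions rather than the settings used in that paper.

# Illustrative PGD sketch on a hand-differentiated logistic-regression model.
import numpy as np

def pgd_linf(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Maximize the logistic loss of w.x + b subject to ||x' - x||_inf <= eps."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))   # sigmoid probability
        grad = (p - y) * w                           # d(loss)/dx for binary cross-entropy
        x_adv = x_adv + alpha * np.sign(grad)        # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)     # project back into the L_inf ball
    return x_adv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=8), 0.0
    x, y = rng.normal(size=8), 1.0
    x_adv = pgd_linf(x, y, w, b)
    print(np.max(np.abs(x_adv - x)) <= 0.1 + 1e-9)   # perturbation stays inside the budget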