Corpus ID: 236447879

Adversarial Attacks with Time-Scale Representations

@article{SantamaraPang2021AdversarialAW,
  title={Adversarial Attacks with Time-Scale Representations},
  author={Alberto Santamar{\'i}a-Pang and Jianwei Qiu and Aritra Chowdhury and James R. Kubricht and Peter H. Tu and Naresh Iyer and Nurali Virani},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.12473}
}
We propose a novel framework for real-time black-box universal attacks that disrupt activations of early convolutional layers in deep learning models. Our hypothesis is that perturbations produced in the wavelet space disrupt early convolutional layers more effectively than perturbations performed in the time domain. The main challenge in adversarial attacks is to preserve low-frequency image content while minimally changing the most meaningful high-frequency content. To address this, we…
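The abstract does not spell out the attack itself, but the core idea of perturbing a wavelet (time-scale) representation rather than the pixel domain can be illustrated with a minimal sketch. The example below is not the paper's method: it assumes a grayscale image in [0, 1] and the PyWavelets package, and the perturbation budget `eps`, wavelet choice, and decomposition level are illustrative placeholders.

```python
# Minimal sketch (not the paper's algorithm): add bounded noise to the
# high-frequency detail sub-bands of a 2-D wavelet decomposition, keep the
# low-frequency approximation untouched, and reconstruct the image.
import numpy as np
import pywt


def wavelet_domain_perturbation(image, eps=0.03, wavelet="db1", level=2, seed=0):
    rng = np.random.default_rng(seed)
    # Multi-level 2-D DWT: [approx, (cH, cV, cD) per level, coarsest first]
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    perturbed = [coeffs[0]]  # preserve the low-frequency approximation
    for detail in coeffs[1:]:
        # Perturb only the horizontal/vertical/diagonal detail coefficients
        perturbed.append(tuple(c + eps * rng.standard_normal(c.shape) for c in detail))
    adv = pywt.waverec2(perturbed, wavelet)
    return np.clip(adv, 0.0, 1.0)


if __name__ == "__main__":
    x = np.random.default_rng(1).random((224, 224), dtype=np.float32)
    x_adv = wavelet_domain_perturbation(x)
    print("max pixel change:", float(np.abs(x_adv - x).max()))
```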


References

Showing 1–10 of 52 references
Generative Adversarial Perturbations
TLDR: Novel generative models for creating adversarial examples are proposed: slightly perturbed images that resemble natural images but are maliciously crafted to fool pre-trained models, obviating the need to hand-craft attack methods for each task.
One Pixel Attack for Fooling Deep Neural Networks
TLDR: This paper proposes a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE), which requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE.
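As a rough illustration of that idea (not the cited paper's code), the sketch below drives a single-pixel modification with SciPy's differential evolution. `predict_proba` is a hypothetical stand-in for any classifier that maps an HxWx3 image in [0, 1] to a vector of class probabilities, and the population and iteration settings are arbitrary.

```python
# Illustrative one-pixel attack driven by SciPy's differential_evolution.
import numpy as np
from scipy.optimize import differential_evolution


def one_pixel_attack(image, true_label, predict_proba, maxiter=30, popsize=20):
    h, w, _ = image.shape
    # Candidate solution: pixel row, pixel column, and its new RGB value
    bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)]

    def apply(candidate):
        x, y, r, g, b = candidate
        perturbed = image.copy()
        perturbed[int(x), int(y)] = (r, g, b)
        return perturbed

    def fitness(candidate):
        # Minimize the model's confidence in the true class
        return predict_proba(apply(candidate))[true_label]

    result = differential_evolution(
        fitness, bounds, maxiter=maxiter, popsize=popsize, tol=1e-5, seed=0
    )
    return apply(result.x)
```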
Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers
TLDR: This paper proposes a novel approach to generate semantic adversarial examples by optimizing a particular adversarial loss over the range-space of a parametric conditional generative model, and demonstrates implementations of this approach on binary classifiers trained on face images.
ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples
TLDR: An end-to-end image compression model to defend against adversarial examples, ComDefend, which outperforms state-of-the-art defense methods and is consistently effective at protecting classifiers against adversarial attacks.
Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses
TLDR: It is shown that a simple universal perturbation can fool a series of state-of-the-art defenses, and it is verified that regionally homogeneous perturbations transfer well across different vision tasks.
Universal Adversarial Training
TLDR: This work proposes universal adversarial training, which models the problem of robust classifier generation as a two-player min-max game, and produces robust models with only 2X the cost of natural training.
Universal Adversarial Perturbations
TLDR: The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers and outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.
ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
TLDR: An effective black-box attack that only has access to the input (images) and the output (confidence scores) of a targeted DNN is proposed, sparing the need for training substitute models and avoiding the loss in attack transferability.
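The sketch below shows the general zeroth-order idea behind such score-only attacks, not ZOO's specific algorithm (which uses coordinate-wise updates with further optimizations): estimate a gradient of an attack loss from model confidences alone via symmetric finite differences, then take a small signed step. `loss_fn`, the coordinate budget, and the step size are hypothetical placeholders.

```python
# Hedged sketch of a zeroth-order (gradient-free) attack update.
import numpy as np


def zeroth_order_step(image, loss_fn, n_coords=128, h=1e-3, step=0.01, seed=0):
    """One illustrative update. loss_fn maps an image to a scalar attack loss
    computed only from the target model's output confidences."""
    rng = np.random.default_rng(seed)
    flat = image.astype(np.float64).ravel().copy()
    grad = np.zeros_like(flat)
    # Estimate the gradient on a random subset of coordinates to limit queries.
    for idx in rng.choice(flat.size, size=min(n_coords, flat.size), replace=False):
        e = np.zeros_like(flat)
        e[idx] = h
        plus = loss_fn((flat + e).reshape(image.shape))
        minus = loss_fn((flat - e).reshape(image.shape))
        grad[idx] = (plus - minus) / (2.0 * h)  # symmetric finite difference
    updated = flat - step * np.sign(grad)
    return np.clip(updated.reshape(image.shape), 0.0, 1.0)
```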
Fast Feature Fool: A data independent approach to universal adversarial perturbations
TLDR: This paper proposes a novel data-independent approach to generate image-agnostic perturbations for a range of CNNs trained for object recognition, and shows that these perturbations are transferable across multiple network architectures trained on either the same or different data.
Deflecting Adversarial Attacks with Pixel Deflection
TLDR: This paper presents an algorithm to process an image so that classification accuracy is significantly preserved in the presence of adversarial manipulations, and demonstrates experimentally that the combination of these techniques enables effective recovery of the true class against a variety of robust attacks.