Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs

@article{Hu2022AdversarialZL,
  title={Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs},
  author={Chen-Hao Hu and Weiwen Shi},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.12251}
}
Although deep neural networks (DNNs) are known to be fragile, the effect of zooming in and out of physical-world images on DNN performance has not been studied. In this paper, we demonstrate a novel physical adversarial attack technique called Adversarial Zoom Lens (AdvZL), which uses a zoom lens to zoom in and out of pictures of the physical world, fooling DNNs without changing the characteristics of the target object. The proposed method is so far the only adversarial attack…
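
As a rough illustration of the idea described in the abstract (not the authors' code), the sketch below simulates the zoom-in effect digitally by center-cropping an image at several zoom factors and resizing it back, then checking whether a pretrained ImageNet classifier changes its prediction. The image path and the choice of ResNet-50 are illustrative assumptions.

# Minimal sketch: digital stand-in for an optical zoom-in, checked against a
# pretrained classifier. Assumes torchvision >= 0.13 and a local "example.jpg".
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def zoom(img, factor):
    # Crop the central 1/factor region and resize it back to the original size.
    w, h = img.size
    cw, ch = int(w / factor), int(h / factor)
    left, top = (w - cw) // 2, (h - ch) // 2
    return img.crop((left, top, left + cw, top + ch)).resize((w, h), Image.BICUBIC)

model = models.resnet50(weights="IMAGENET1K_V1").eval()
img = Image.open("example.jpg").convert("RGB")  # hypothetical input image

with torch.no_grad():
    for factor in (1.0, 1.2, 1.5, 2.0, 3.0):
        logits = model(preprocess(zoom(img, factor)).unsqueeze(0))
        print(f"zoom x{factor}: predicted ImageNet class index {logits.argmax(dim=1).item()}")

If the predicted class index changes as the factor grows while the pictured object stays the same, that is the kind of zoom-induced misclassification the paper targets.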

References

SHOWING 1-10 OF 48 REFERENCES

Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink

TLDR
This work shows that DNNs are easily fooled by simply using a laser beam, and proposes a novel attack method called Adversarial Laser Beam (AdvLB), which manipulates the laser beam's physical parameters to perform adversarial attacks.

Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon

TLDR
This work studies a new type of optical adversarial example in which the perturbations are generated by a very common natural phenomenon, the shadow, to achieve naturalistic and stealthy physical-world adversarial attacks under the black-box setting.

Adversarial camera stickers: A physical camera-based attack on deep learning systems

TLDR
This work shows that by placing a carefully crafted and mainly-translucent sticker over the lens of a camera, one can create universal perturbations of the observed images that are inconspicuous, yet misclassify target objects as a different (targeted) class.

Adversarial Attacks Beyond the Image Space

TLDR
Though image-space adversaries can be interpreted as per-pixel albedo changes, this work verifies that they cannot be well explained along physically meaningful dimensions, which often have a non-local effect.

Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles

TLDR
Experimental evaluation shows that, in both digital and physical-world scenarios, adversarial examples crafted by Adversarial Camouflage are well camouflaged and highly stealthy, while remaining effective in fooling state-of-the-art DNN image classifiers.

Physical Adversarial Examples for Object Detectors

TLDR
This work improves upon a previous physical attack on image classifiers, and creates perturbed physical objects that are either ignored or mislabeled by object detection models, and implements a Disappearance Attack, which causes a Stop sign to "disappear" according to the detector.

Adversarial T-Shirt! Evading Person Detectors in a Physical World

TLDR
This is the first work that models the effect of deformation for designing physical adversarial examples with respect to non-rigid objects such as T-shirts, and it shows that the proposed method achieves 74% and 57% attack success rates in the digital and physical worlds, respectively, against YOLOv2 and Faster R-CNN.

One Pixel Attack for Fooling Deep Neural Networks

TLDR
This paper proposes a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE), which requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE.
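
As a rough illustration of that idea (not the authors' implementation), the sketch below searches for a single pixel of a 32x32 RGB image with SciPy's differential evolution so that a black-box classifier's confidence in the true class drops. The classify function and the image shape are assumptions.

# Minimal one-pixel-attack sketch using differential evolution (black-box).
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(img, true_label, classify, max_iter=75):
    # `img` is an HxWx3 uint8 array; `classify` is an assumed black-box that
    # maps such an array to a vector of class probabilities.
    h, w, _ = img.shape
    bounds = [(0, h - 1), (0, w - 1), (0, 255), (0, 255), (0, 255)]

    def apply_pixel(candidate):
        x, y, r, g, b = (int(v) for v in candidate)
        perturbed = img.copy()
        perturbed[x, y] = (r, g, b)
        return perturbed

    def fitness(candidate):
        # Probability assigned to the true class; DE minimizes this value.
        return classify(apply_pixel(candidate))[true_label]

    result = differential_evolution(fitness, bounds, maxiter=max_iter,
                                    popsize=16, recombination=1.0,
                                    seed=0, polish=False)
    return apply_pixel(result.x)

The returned image differs from the original in exactly one pixel; the attack succeeds if the classifier no longer assigns the highest probability to the true class.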

Robust Physical-World Attacks on Deep Learning Models

TLDR
This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions, and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.

Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World

TLDR
The Dual Attention Suppression (DAS) attack is proposed to generate visually-natural physical adversarial camouflages with strong transferability by suppressing both model and human attention and outperforms state-of-the-art methods.