The Translucent Patch: A Physical and Universal Attack on Object Detectors
@article{Zolfi2020TheTP, title={The Translucent Patch: A Physical and Universal Attack on Object Detectors}, author={Alon Zolfi and Moshe Kravchik and Yuval Elovici and Asaf Shabtai}, journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2021}, pages={15227-15236} }
Physical adversarial attacks against object detectors have seen increasing success in recent years. However, these attacks require direct access to the object of interest in order to apply a physical patch. Furthermore, to hide multiple objects, an adversarial patch must be applied to each object. In this paper, we propose a contactless translucent physical patch containing a carefully constructed pattern, which is placed on the camera’s lens, to fool state-of-the-art object detectors. The…
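The attack described above optimizes a single translucent pattern that, once placed on the camera's lens, suppresses detections of a target class in any scene. As a rough illustration only, the sketch below (not the authors' code) approximates such a lens patch digitally by alpha-blending an optimizable pattern over the whole frame and minimizing the detector's confidence scores; the `detector` placeholder, the blending model, and all parameter values are assumptions.

```python
# Minimal sketch (not the paper's implementation): approximate a translucent
# lens patch as a full-frame alpha blend and optimize its pattern so that the
# detector's confidence for the target class drops. `detector` is a placeholder
# that is assumed to return per-box confidence scores for the target class.
import torch

def apply_translucent_patch(images, pattern, alpha=0.3):
    """Blend an optimizable pattern over the frame, mimicking a film on the lens.

    images: (B, 3, H, W) tensor in [0, 1]; pattern: (1, 3, H, W) unconstrained logits.
    """
    pattern = torch.sigmoid(pattern)          # keep blended colors in [0, 1]
    return (1 - alpha) * images + alpha * pattern

def attack_step(detector, images, pattern, optimizer, alpha=0.3):
    optimizer.zero_grad()
    patched = apply_translucent_patch(images, pattern, alpha)
    scores = detector(patched)                # assumed: target-class confidence scores
    loss = scores.mean()                      # push confidences toward zero
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch: one shared pattern across all images makes the attack universal.
# pattern = torch.zeros(1, 3, 416, 416, requires_grad=True)
# optimizer = torch.optim.Adam([pattern], lr=0.01)
# for images in dataloader:
#     attack_step(detector, images, pattern, optimizer)
```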
30 Citations
Attacking Object Detector Using A Universal Targeted Label-Switch Patch
- Computer Science, ArXiv
- 2022
This study proposes a novel, universal, targeted, label-switch attack against the state-of-the-art object detector, YOLO, which uses a tailored projection function to enable the placement of the adversarial patch on multiple target objects in the image.
IPatch: A Remote Adversarial Patch
- Computer Science, ArXiv
- 2021
It is found that the patch can change the classification of a remote target region with a success rate of up to 93% on average, and can be extended to object recognition models with preliminary results on the popular YOLOv3 model.
Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors
- Computer Science
- 2022
This work presents a universal adversarial perturbation (UAP) that targets non-maximum suppression (NMS), a widely used technique integrated into many object detector pipelines.
Denial-of-Service Attack on Object Detection Model Using Universal Adversarial Perturbation
- Computer Science, ArXiv
- 2022
NMS-Sponge is proposed, a novel approach that negatively affects the decision latency of YOLO, a state-of-the-art object detector, and compromises the model's availability by applying a universal adversarial perturbation (UAP).
Can Optical Trojans Assist Adversarial Perturbations?
- Computer Science, 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
- 2021
This work simulates a physically realizable Trojaned lens attached to a camera, which causes the neural network vision pipeline to produce incorrect classifications only when a specific adversarial patch is present in the scene.
Adversarial Mask: Real-World Adversarial Attack Against Face Recognition Models
- Computer Science, ArXiv
- 2021
Adversarial Mask is proposed, a physical universal adversarial perturbation (UAP) against state-of-the-art FR models that is applied to face masks in the form of a carefully crafted pattern.
Adversarial Mask: Real-World Universal Adversarial Attack on Face Recognition Model
- Computer Science
- 2021
Adversarial Mask is proposed, a physical universal adversarial perturbation (UAP) against state-of-the-art FR models that is applied to face masks in the form of a carefully crafted pattern; the attack is validated in real-world experiments by printing the adversarial pattern on a fabric face mask.
Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose?
- Computer Science, Mathematics, 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
- 2021
A new metric called mean Attack Success over Transformations (mAST) is developed to evaluate patch attack robustness and invariance, and new insights are provided into a fundamental cutoff limit in patch attack effectiveness that depends on the extent of out-of-plane rotation angles.
Adversarial Sticker: A Stealthy Attack Method in the Physical World
- Computer Science, IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2022
This paper proposes the Meaningful Adversarial Sticker, a physically feasible and stealthy attack method that uses real stickers found in everyday life and manipulates the sticker's pasting position and rotation angle on the target objects to perform physical attacks.
Research on Adversarial Attack Technology for Object Detection in Physical World Based on Vision
- Computer Science, 2022 Asia Conference on Algorithms, Computing and Machine Learning (CACML)
- 2022
This paper surveys physical-world attack methods against object detection along three lines: background, research progress, and future directions, in order to provide a reference for subsequent research.
References
Showing 1-10 of 35 references
Universal Physical Camouflage Attacks on Object Detectors
- Computer Science, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
This paper proposes the Universal Physical Camouflage Attack (UPC), which learns an adversarial pattern to effectively attack all instances belonging to the same object category, crafting camouflage by jointly fooling the region proposal network and misleading the classifier and the regressor into outputting errors.
Adversarial camera stickers: A physical camera-based attack on deep learning systems
- Computer Science, ICML
- 2019
This work shows that by placing a carefully crafted and mainly translucent sticker over the lens of a camera, one can create universal perturbations of the observed images that are inconspicuous, yet misclassify target objects as a different (targeted) class.
Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection
- Computer Science, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
- 2019
The goal is to generate a patch that is able to successfully hide a person from a person detector, and this work is the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.
Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors
- Computer Science, ECCV
- 2020
A systematic study of adversarial attacks on state-of-the-art object detection frameworks, and a detailed study of physical world attacks using printed posters and wearable clothes, to quantify the performance of such attacks with different metrics.
Robust Physical Adversarial Attack on Faster R-CNN Object Detector
- Computer Science, ECML/PKDD
- 2018
This work can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.
Adversarial T-Shirt! Evading Person Detectors in a Physical World
- Computer Science, ECCV
- 2020
This is the first work that models the effect of deformation for designing physical adversarial examples with respect to non-rigid objects such as T-shirts, and shows that the proposed method achieves 74% and 57% attack success rates in the digital and physical worlds, respectively, against YOLOv2 and Faster R-CNN.
Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles
- Computer Science, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
Experimental evaluation shows that, in both digital and physical-world scenarios, adversarial examples crafted by Adversarial Camouflage are well camouflaged and highly stealthy, while remaining effective in fooling state-of-the-art DNN image classifiers.
Robust Physical-World Attacks on Deep Learning Visual Classification
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.
One Pixel Attack for Fooling Deep Neural Networks
- Computer Science, IEEE Transactions on Evolutionary Computation
- 2019
This paper proposes a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE), which requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE.
Simple Black-Box Adversarial Perturbations for Deep Networks
- Computer Science, ArXiv
- 2016
This work focuses on deep convolutional neural networks and demonstrates that adversaries can easily craft adversarial examples even without any internal knowledge of the target network.