Corpus ID: 204852237

Evading Real-Time Person Detectors by Adversarial T-shirt

Kaidi Xu, Gaoyuan Zhang, Sijia Liu, Quanfu Fan, Mengshu Sun, Hongge Chen, Pin-Yu Chen, Yanzhi Wang, Xue Lin
It is known that deep neural networks (DNNs) can be vulnerable to adversarial attacks. So-called physical adversarial examples deceive DNN-based decision makers by attaching adversarial patches to real objects. However, most existing work on physical adversarial attacks focuses on static objects such as glass frames, stop signs, and images attached to cardboard. In this work, we propose the Adversarial T-shirt, a robust physical adversarial example for evading person detectors even if it… 

Adversarial Pixel Masking: A Defense against Physical Attacks for Pre-trained Object Detectors

This paper proposes adversarial pixel masking (APM), a defense against physical attacks designed specifically for pre-trained object detectors, and shows that APM can substantially improve model robustness without significantly degrading clean performance.

Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks

The proposed approach can convert a single-step black-box adversarial defense into an iterative defense, and three novel privacy-preserving knowledge distillation approaches are proposed that use prior meta-information from various datasets to mimic the performance of the black-box classifier.

Adversarial Pixel Masking: Supplementary Materials

APM can be used to defend against most existing physical attacks, provided that the attacks are covered by the threat model T in Algorithm 1 of the main paper and that a unified objective for generating patch content is defined.

Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors

This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes; the patches are shown to fool state-of-the-art deep object detectors robustly under varying views, potentially leading to an attack scheme that is persistently strong in the physical world.

Distributed Adversarial Training to Robustify Deep Neural Networks at Scale

DAT is general: it supports training over labeled and unlabeled data, multiple types of attack-generation methods, and gradient-compression operations favored in distributed optimization. It is demonstrated that DAT either matches or outperforms state-of-the-art robust accuracies while achieving a graceful training speedup.

Surreptitious Adversarial Examples through Functioning QR Code

A novel method of adversarial attack is developed that conceals its intent from human intuition by using a modified QR code that can be consistently scanned by a reader while retaining adversarial efficacy against image classification models.

Physical Passive Patch Adversarial Attacks on Visual Odometry Systems

It is shown, to the best of the authors' knowledge for the first time, that the error margin of a visual odometry model can be significantly increased by deploying patch adversarial attacks in the scene, and a comparable vulnerability is demonstrated on real data.

Can 3D Adversarial Logos Cloak Humans?

This paper presents a new 3D adversarial logo attack, which is shown to fool state-of-the-art deep object detectors robustly under model rotations, taking realistic attacks in the physical world one step further.

Clipped BagNet: Defending Against Sticker Attacks with Clipped Bag-of-features

This work examines the adversarial sticker attack, where the attacker places a sticker somewhere on an image to induce it to be misclassified, and takes a first step towards defending against such attacks using clipped BagNet, which bounds the influence that any limited-size sticker can have on the final classification.
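
The clipping idea can be illustrated in a few lines: a BagNet-style model scores each local patch separately, and clipping each patch's class evidence before averaging caps how much any small sticker-covered region can shift the result. The array shapes and clipping range below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def clipped_bagnet_logits(patch_logits, clip_lo=-1.0, clip_hi=1.0):
    """Aggregate per-patch class evidence with clipping.

    patch_logits: shape (num_patches, num_classes), the local class
    evidence a BagNet-style backbone produces (hypothetical shape).
    Clipping each patch's contribution to [clip_lo, clip_hi] bounds the
    influence any limited-size sticker can have on the final score.
    """
    clipped = np.clip(patch_logits, clip_lo, clip_hi)
    return clipped.mean(axis=0)

# A single hijacked patch with huge evidence for class 1 barely moves
# the clipped aggregate: its contribution is capped at clip_hi = 1.
logits = np.zeros((100, 2))
logits[0, 1] = 1000.0  # the sticker's local evidence
scores = clipped_bagnet_logits(logits)
```

With clipping, a sticker covering one patch shifts a class score by at most clip_hi / num_patches; without it, the same patch would dominate the average outright.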


This work formalizes the RED problem and identifies a set of principles crucial to the RED approach design, and finds that prediction alignment and proper data augmentation are two criteria to achieve a generalizable RED approach.

Robust Physical Adversarial Attack on Faster R-CNN Object Detector

This work can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.

Robust Physical-World Attacks on Deep Learning Visual Classification

This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.
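
The core idea behind RP2, optimizing a perturbation over a distribution of transformations so it survives changing physical conditions, can be sketched with a toy model. Everything below is an illustrative assumption: a fixed linear scorer stands in for the classifier, and a circular shift stands in for a physical condition such as viewpoint; none of the names come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear scorer for the attacker's target class (illustrative only);
# a circular shift of the input stands in for a varying physical condition.
w = np.linspace(-1.0, 1.0, 32) + 0.5
x = rng.normal(size=32)

def target_score(x_adv, shift):
    return float(w @ np.roll(x_adv, shift))

def rp2_style(steps=300, lr=0.02, samples=8, eps=0.5):
    """Gradient ascent on the *expected* target score over sampled
    transformations, so the perturbation works under many conditions
    rather than one fixed view (the expectation-over-transformations
    structure used by RP2-style attacks)."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        shifts = rng.integers(0, 32, size=samples)
        # d/d(delta) of w @ roll(x + delta, s) is roll(w, -s)
        grad = np.mean([np.roll(w, -s) for s in shifts], axis=0)
        delta = np.clip(delta + lr * grad, -eps, eps)  # keep it small
    return delta

delta = rp2_style()
```

Averaging the gradient over sampled transformations is what distinguishes this from a single-view attack: the resulting delta raises the target score on average across all shifts, not just the one it was optimized at.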

Adversarial camera stickers: A physical camera-based attack on deep learning systems

This work shows that by placing a carefully crafted and mainly-translucent sticker over the lens of a camera, one can create universal perturbations of the observed images that are inconspicuous, yet misclassify target objects as a different (targeted) class.

Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection

The goal is to generate a patch that successfully hides a person from a person detector; this work is the first to attempt such an attack on targets with a high level of intra-class variety, such as persons.
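
The optimization at the heart of such patch attacks, pushing a printable patch's pixels in the direction that lowers the detector's person or objectness score, can be sketched with a stand-in scorer. The linear `objectness` below is a hypothetical placeholder for a real detector's confidence; only the loop structure mirrors the attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a detector's objectness score on the patched
# image; in the real attack this is the person detector's confidence.
w = rng.normal(size=64)

def objectness(patch):
    return float(w @ patch)

def attack_patch(steps=200, lr=0.1):
    """Gradient descent that lowers the objectness score.

    For this linear stand-in the gradient of the score w.r.t. the patch
    is simply w, so each step moves the patch against w; clipping to
    [0, 1] keeps the patch a valid (printable) image.
    """
    patch = rng.uniform(0.0, 1.0, size=64)
    for _ in range(steps):
        patch = np.clip(patch - lr * w, 0.0, 1.0)
    return patch

patch = attack_patch()
```

A real attack would also average the loss over patch placements, scales, and lighting so the printed patch transfers to the physical world.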

Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos

This work proposes a new real-world attack against the computer vision based systems of autonomous vehicles (AVs) that exploits the concept of adversarial examples to modify innocuous signs and advertisements in the environment such that they are classified as the adversary's desired traffic sign with high confidence.

Structured Adversarial Attack: Towards General Implementation and Better Interpretability

This work develops a more general attack model, i.e., the structured attack (StrAttack), which explores group sparsity in adversarial perturbations by sliding a mask through images aiming for extracting key spatial structures through adversarial saliency map and class activation map.

Adversarial Objects Against LiDAR-Based Autonomous Driving Systems

The potential vulnerabilities of LiDAR-based autonomous driving detection systems are revealed by proposing an optimization-based approach, LiDAR-Adv, to generate adversarial objects that can evade LiDAR-based detection under various conditions.

Adversarial Examples that Fool Detectors

This paper demonstrates a construction that successfully fools two standard detectors, Faster R-CNN and YOLO, and produces adversarial examples that generalize well across sequences digitally, even though large perturbations are needed.

Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer

This work proposes a novel evaluation measure, parametric norm-balls, obtained by directly perturbing the physical parameters that underlie image formation, and presents a physically-based differentiable renderer that allows pixel gradients to be propagated to the parametric space of lighting and geometry.

Beyond Adversarial Training: Min-Max Optimization in Adversarial Attack and Defense

This work shows that attacking model ensembles, devising universal perturbations to input samples or data transformations, and generalizing adversarial training (AT) over multiple norm-ball threat models can all be solved under a unified and theoretically principled min-max optimization framework.
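
One instance of this framework, attacking an ensemble by letting domain weights chase the hardest model while the perturbation descends against the weighted ensemble, can be sketched as alternating projected gradient steps. The linear models, dimensions, and constants below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = idx[u - css / idx > 0][-1]
    return np.maximum(v - css[rho - 1] / rho, 0.0)

rng = np.random.default_rng(0)
K, d = 3, 16
W = rng.normal(size=(K, d))   # toy ensemble: model k's margin is W[k] @ (x + delta)
x = rng.normal(size=d)

def ensemble_attack(steps=500, lr=0.05, eps=2.0):
    """Alternating min-max: the weights p ascend toward the model that is
    currently hardest to fool (inner max over the simplex), while delta
    descends against the p-weighted ensemble (outer min over the eps-ball)."""
    delta = np.zeros(d)
    p = np.full(K, 1.0 / K)
    for _ in range(steps):
        margins = W @ (x + delta)
        p = project_simplex(p + lr * margins)   # inner maximization step
        delta = delta - lr * (p @ W)            # outer minimization step
        n = np.linalg.norm(delta)
        if n > eps:
            delta *= eps / n                    # project back into the eps-ball
    return delta, p

delta, p = ensemble_attack()
```

Solving the inner maximization over simplex weights, rather than fixing uniform weights, is what makes the perturbation target the worst-case ensemble member; the same min-max structure extends to universal perturbations and generalized AT.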