APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection

@article{Braunegg2020APRICOTAD,
  title={APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection},
  author={A. Braunegg and Amartya Chakraborty and Michael Krumdick and Nicole Lape and Sara Leary and Keith Manville and Elizabeth M. Merkhofer and Laura Strickhart and Matthew Walmer},
  journal={ArXiv},
  year={2020},
  volume={abs/1912.08166}
}
Physical adversarial attacks threaten to fool object detection systems, but reproducible research on the real-world effectiveness of physical patches, and on how to defend against them, requires a publicly available benchmark dataset. We present APRICOT, a collection of over 1,000 annotated photographs of printed adversarial patches in public locations. The patches target several object categories for three COCO-trained detection models, and the photos represent natural variation in position…

Citations

Meta Adversarial Training
TLDR
Meta adversarial training (MAT) is proposed, a novel combination of adversarial training with meta-learning, which overcomes this challenge by meta-learning universal perturbations along with model training and considerably increases robustness against universal patch attacks.
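
The idea in that summary is simple to prototype. Below is a minimal sketch, assuming a standard PyTorch image classifier; the names apply_patch, mat_step, and patch_pool are illustrative assumptions, not the authors' implementation. A pool of universal perturbations persists across training steps: each step refines one sampled perturbation by gradient ascent, then trains the model on the perturbed batch.

# Minimal sketch of meta adversarial training, assuming a PyTorch image
# classifier. apply_patch, mat_step, and patch_pool are illustrative names,
# not the authors' implementation.
import torch
import torch.nn.functional as F

def apply_patch(images, patch, corner=(0, 0)):
    """Paste a (C, h, w) patch onto a batch of (B, C, H, W) images."""
    out = images.clone()
    r, c = corner
    h, w = patch.shape[-2:]
    out[:, :, r:r + h, c:c + w] = patch
    return out

def mat_step(model, optimizer, images, labels, patch_pool,
             inner_steps=2, patch_lr=0.05):
    # 1) Meta-learn: refine one sampled universal patch by gradient ascent.
    idx = torch.randint(len(patch_pool), (1,)).item()
    patch = patch_pool[idx].detach().requires_grad_(True)
    for _ in range(inner_steps):
        loss = F.cross_entropy(model(apply_patch(images, patch)), labels)
        grad, = torch.autograd.grad(loss, patch)
        patch = (patch + patch_lr * grad.sign()).clamp(0, 1)
        patch = patch.detach().requires_grad_(True)
    patch_pool[idx] = patch.detach()  # patches persist across steps

    # 2) Adversarial training: fit the model on the patched batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(apply_patch(images, patch.detach())), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Persisting the patch pool across steps is what separates this from plain adversarial training: the perturbations stay near-optimal for the current model without being re-derived from scratch on every batch.
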
Empirical Upper Bound, Error Diagnosis and Invariance Analysis of Modern Object Detectors
  • A. Borji
  • Computer Science, Engineering
  • ArXiv
  • 2020
TLDR
This work taps into the tight relationship between object detection and object recognition to offer insights for building better models; it finds that models generate many boxes on empty regions and that context is more important for detecting small objects than large ones.
Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXI
TLDR
An automatic video inpainting algorithm is presented that can remove traffic agents from videos and synthesize the missing regions under the guidance of depth/point-cloud data; it can also fuse multiple videos through 3D point-cloud registration, making it possible to inpaint a target video using multiple source videos.
Advances in adversarial attacks and defenses in computer vision: A survey
TLDR
A literature review of the contributions made by the computer vision community to adversarial attacks on deep learning, extending an earlier survey that covered work up to 2018 by focusing on advances in the area since then.
Assistive Signals for Deep Neural Network Classifiers
TLDR
Experimental evaluations show that the assistive signals generated by the optimization method increase the accuracy and confidence of deep models more than those generated by conventional methods that work in 2D space.
Meta Adversarial Training against Universal Patches
TLDR
Meta adversarial training (MAT) is proposed, a novel combination of adversarial training with meta-learning, which overcomes this challenge by meta-learning universal patches along with model training and considerably increases robustness against universal patch attacks on image classification and traffic-light detection.
Physical world assistive signals for deep neural network classifiers - neither defense nor attack
TLDR
Assistive Signals are introduced, which are optimized to improve a model's confidence score regardless of whether it is under attack, and which increase the accuracy and confidence of deep models more than signals generated by conventional methods that work in 2D space.
The vulnerability of UAVs: an adversarial machine learning perspective
TLDR
This work describes a methodology for understanding the vulnerability of UAVs to such attacks by threat-modeling each potential state and mode of the UAV, from power-on through its various mission modes, and examines one potential threat vector.
Threat of Adversarial Attacks on Deep Learning in Computer Vision: Survey II
TLDR
A literature review of the contributions made by the computer vision community to adversarial attacks on deep learning, extending an earlier survey that covered work up to 2018 by focusing on advances in the area since then.

References

Showing 1–10 of 38 references
Robust Physical Adversarial Attack on Faster R-CNN Object Detector
TLDR
This work can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.
Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection
TLDR
The goal is to generate a patch that is able to successfully hide a person from a person detector, and this work is the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.
PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples
Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of…
Robust Physical-World Attacks on Deep Learning Visual Classification
TLDR
This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.
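
As a rough illustration of the RP2 recipe, one can optimize a masked perturbation over many photos of the same sign so that the targeted loss is low in expectation over physical conditions. The sketch below assumes a PyTorch classifier; model, photos, and mask are hypothetical inputs, not the paper's code.

# Rough sketch of RP2-style optimization: a masked perturbation trained
# over many photos of one sign, so it survives varied physical conditions.
import torch
import torch.nn.functional as F

def rp2(model, photos, mask, target_class, steps=500, lr=0.01, lam=1e-3):
    """photos: (N, C, H, W) images of one sign; mask: (1, C, H, W) in {0, 1}."""
    delta = torch.zeros_like(photos[:1], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.full((photos.shape[0],), target_class, dtype=torch.long)
    for _ in range(steps):
        adv = (photos + mask * delta).clamp(0, 1)
        # Averaging the targeted loss over the photos approximates the
        # expectation over physical variation; the norm term keeps the
        # perturbation small.
        loss = F.cross_entropy(model(adv), target) + lam * (mask * delta).norm()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (mask * delta).detach()
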
NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles
It has been shown that most machine learning algorithms are susceptible to adversarial perturbations. Slightly perturbing an image in a carefully chosen direction in the image space may cause a…
Detecting Adversarial Samples from Artifacts
TLDR
This paper investigates model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model; the results yield a method for implicit adversarial detection that is oblivious to the attack algorithm.
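
Both detection signals named in that summary are straightforward to prototype. The sketch below, with illustrative names, assumes a PyTorch classifier containing dropout layers and a separate extractor that yields deep features: uncertainty is estimated via Monte Carlo dropout, and a kernel density model is fit on clean features; high uncertainty or low density flags a candidate adversarial input.

# Sketch of the two detection signals: Monte Carlo dropout uncertainty and
# kernel density in deep-feature space. All names are illustrative.
import torch
from sklearn.neighbors import KernelDensity

def mc_dropout_uncertainty(model, x, passes=20):
    model.train()  # keep dropout active at test time (Monte Carlo dropout)
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(passes)])
    return probs.var(dim=0).sum(dim=-1)  # high variance suggests an attack

def fit_feature_density(clean_features):
    # clean_features: (N, D) array of deep features from clean training data.
    return KernelDensity(kernel="gaussian", bandwidth=1.0).fit(clean_features)

def density_score(kde, features):
    return kde.score_samples(features)  # low log-density suggests an attack
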
On Detecting Adversarial Perturbations
TLDR
It is shown empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans.
Adversarial examples in the physical world
TLDR
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera, which shows that machine learning systems are vulnerable to adversarial examples even in physical-world scenarios.
Adversarial Patch
TLDR
A method to create universal, robust, targeted adversarial image patches in the real world, which can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class.
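
A minimal sketch of that patch optimization, assuming a PyTorch classifier and dataloader (all names are illustrative, not the paper's code): the patch is pasted at random positions so that, in expectation, it drives any scene toward the chosen target class.

# Minimal sketch of universal-patch training: maximize the target-class
# score while pasting the patch at random positions, so the result works
# in any scene. model and loader are assumed, illustrative inputs.
import torch
import torch.nn.functional as F

def train_patch(model, loader, target_class, size=50, steps=1000, lr=0.1):
    patch = torch.rand(3, size, size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _, (images, _) in zip(range(steps), loader):
        # Random placement stands in for the random transforms used to make
        # the patch robust to location, scale, and rotation.
        r = torch.randint(0, images.shape[-2] - size, (1,)).item()
        c = torch.randint(0, images.shape[-1] - size, (1,)).item()
        adv = images.clone()
        adv[:, :, r:r + size, c:c + size] = patch.clamp(0, 1)
        target = torch.full((images.shape[0],), target_class, dtype=torch.long)
        loss = F.cross_entropy(model(adv), target)  # pull toward target class
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
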
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
TLDR
The proposed Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against adversarial perturbations, is empirically shown to be consistently effective against different attack methods and improves on existing defense strategies.
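
The core of Defense-GAN is a projection step: before classification, the input is replaced with the closest image the generator can produce. A minimal sketch, assuming a pretrained generator G mapping latent vectors to images (names illustrative):

# Sketch of the Defense-GAN projection step: search the generator's latent
# space for the output closest to the input, then classify that clean
# reconstruction instead. G is an assumed pretrained generator z -> image.
import torch

def defense_gan_project(G, x, z_dim=100, restarts=10, steps=200, lr=0.05):
    best_z, best_err = None, float("inf")
    for _ in range(restarts):  # random restarts help escape poor local minima
        z = torch.randn(x.shape[0], z_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            err = ((G(z) - x) ** 2).mean()
            opt.zero_grad()
            err.backward()
            opt.step()
        if err.item() < best_err:
            best_z, best_err = z.detach(), err.item()
    return G(best_z)  # adversarial noise off the GAN manifold is discarded
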