Corpus ID: 56895505

Practical Adversarial Attack Against Object Detector

@article{Zhao2018PracticalAA,
  title={Practical Adversarial Attack Against Object Detector},
  author={Yue Zhao and Hong Zhu and Qintao Shen and Ruigang Liang and Kai Chen and Shengzhi Zhang},
  journal={ArXiv},
  year={2018},
  volume={abs/1812.10217}
}
In this paper, we propose the first practical adversarial attacks against object detectors in realistic situations: the adversarial examples are placed at different angles and distances, especially at long distance (over 20 m) and wide angles (120 degrees). [...] Key Method: Two kinds of attacks were implemented on YOLO V3, a state-of-the-art real-time object detector: a hiding attack, which renders the detector unable to recognize the object, and an appearing attack, which fools the detector into recognizing a non-existent…
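A minimal sketch of the hiding-attack idea described in the abstract: pixels inside a patch region are optimized by gradient descent so that the detector's objectness scores fall below the detection threshold. The tiny convolutional "detector" and the names `toy_detector` and `patch_mask` are illustrative stand-ins, not the authors' attack on YOLO V3.

```python
# Hiding-attack sketch: suppress the strongest objectness score of a
# stand-in detector by optimizing the pixels of a patch region.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def toy_detector(img):
    """Stand-in for a real detector: returns per-cell objectness logits."""
    kernel = torch.ones(1, 3, 8, 8) / (3 * 8 * 8)
    return F.conv2d(img, kernel, stride=8).flatten(1)  # (N, cells)

image = torch.rand(1, 3, 64, 64)                 # normalized scene photo
patch_mask = torch.zeros_like(image)
patch_mask[..., 16:48, 16:48] = 1.0              # region covered by the patch
patch = torch.rand_like(image, requires_grad=True)

opt = torch.optim.Adam([patch], lr=0.05)
for step in range(200):
    adv = image * (1 - patch_mask) + patch.clamp(0, 1) * patch_mask
    objectness = torch.sigmoid(toy_detector(adv))
    loss = objectness.max()                      # push down the strongest detection
    opt.zero_grad()
    loss.backward()
    opt.step()

adv = image * (1 - patch_mask) + patch.detach().clamp(0, 1) * patch_mask
print("max objectness after attack:", float(torch.sigmoid(toy_detector(adv)).max()))
```

An appearing attack would instead maximize the objectness (and target-class score) of a cell that originally contained no object; robustness to distance and angle is what the paper's physical experiments add on top of this basic objective.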
Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking
TLDR
This paper is the first to study adversarial machine learning attacks against the complete visual perception pipeline in autonomous driving, and discovers a novel attack technique, tracker hijacking, that can effectively fool MOT using AEs on object detection.
Extended Spatially Localized Perturbation GAN (eSLP-GAN) for Robust Adversarial Camouflage Patches
TLDR
The eSLP-GAN method is extended to deceive both classifiers and object detection systems, and its loss function is modified for greater compatibility with object-detection attacks and to increase robustness in the real world.
Hijacking Tracker: A Powerful Adversarial Attack on Visual Tracking
TLDR
This paper proposes to add slight adversarial perturbations to the input image using an inconspicuous but powerful attack strategy, a hijacking algorithm that misleads trackers in two ways: shape hijacking, which changes the shape of the model output, and position hijacking, which gradually pushes the output to any position in the image frame.
Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study
TLDR
This work investigates the feasibility of conducting real-time physical attacks on face recognition systems using adversarial light projections, and demonstrates the vulnerability of face recognition systems to light projection attacks in both white-box and black-box attack settings.
Robust Few-Shot Learning with Adversarially Queried Meta-Learners
TLDR
This work adapts adversarial training for meta-learning, adapts robust architectural features to small networks for meta-learning, tests pre-processing defenses as an alternative to adversarial training for meta-learning, and investigates the advantages of robust meta-learning over robust transfer learning for few-shot tasks.
Contrastive Learning with Adversarial Examples
TLDR
A new family of adversarial examples for contrastive learning is introduced and used to define a new adversarial training algorithm for SSL, denoted CLAE, which improves the performance of several existing CL baselines on multiple datasets.
Trust but Verify: An Information-Theoretic Explanation for the Adversarial Fragility of Machine Learning Systems, and a General Defense against Adversarial Attacks
TLDR
This paper proposes a general class of defenses for detecting classifier errors caused by abnormally small input perturbations and presents theoretical arguments to show that this hypothesis successfully accounts for the observed fragility of AI classifiers to small adversarial perturbations.
Robust Assessment of Real-World Adversarial Examples
TLDR
A score is put forth that attempts to address the above issues in a straightforward exemplar application for multiple generated adversarial examples, underscoring the need for either a more complete report or a score that incorporates scene changes and baseline performance for the models and environments tested by adversarial developers.
Robustness Metrics for Real-World Adversarial Examples
We explore metrics to evaluate the robustness of real-world adversarial attacks, in particular adversarial patches, to changes in environmental conditions. We demonstrate how these metrics can be…

References

Showing 1-10 of 41 references
Robust Physical Adversarial Attack on Faster R-CNN Object Detector
TLDR
This work can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.
Physical Adversarial Examples for Object Detectors
TLDR
This work improves upon a previous physical attack on image classifiers, creates perturbed physical objects that are either ignored or mislabeled by object detection models, and implements a Disappearance Attack, which causes a stop sign to "disappear" according to the detector.
NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles
It has been shown that most machine learning algorithms are susceptible to adversarial perturbations. Slightly perturbing an image in a carefully chosen direction in the image space may cause a…
Adversarial examples in the physical world
TLDR
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera, which shows that even in physical-world scenarios, machine learning systems are vulnerable to adversarial examples.
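For concreteness, a minimal sketch of the basic iterative FGSM-style attack evaluated in that paper, applied to a toy linear classifier (a stand-in assumption so the snippet runs on its own):

```python
# Basic iterative attack: repeatedly step along the sign of the loss gradient,
# projecting back into an L-infinity ball of radius eps around the clean input.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(3 * 32 * 32, 10)         # stand-in classifier
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])                            # assumed true label
eps, alpha, steps = 8 / 255, 2 / 255, 10

x_adv = x.clone()
for _ in range(steps):
    x_adv.requires_grad_(True)
    loss = F.cross_entropy(model(x_adv.flatten(1)), y)
    grad, = torch.autograd.grad(loss, x_adv)
    with torch.no_grad():
        x_adv = x_adv + alpha * grad.sign()                 # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)            # stay within eps of x
        x_adv = x_adv.clamp(0, 1)                           # stay a valid image

print("prediction on x:", model(x.flatten(1)).argmax().item(),
      "on x_adv:", model(x_adv.flatten(1)).argmax().item())
```

The physical-world experiments in the paper then print such examples and photograph them to test whether the misclassification survives the camera pipeline.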
The Limitations of Deep Learning in Adversarial Settings
TLDR
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
Robust Physical-World Attacks on Machine Learning Models
TLDR
This paper proposes a new attack algorithm, Robust Physical Perturbations (RP2), that generates perturbations by taking images under different conditions into account, and can create spatially constrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer.
Adversarial Examples for Generative Models
TLDR
This work explores methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN, and presents three classes of attacks, motivating why an attacker might be interested in deploying such techniques against a target generative network.
Towards Evaluating the Robustness of Neural Networks
TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that are successful on both distilled and undistilled neural networks with 100% probability.
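A minimal sketch of the margin-style objective at the core of these attacks: drive the true-class logit below the best competing logit while keeping the distortion small. The linear model, the constant c, and the confidence margin kappa are illustrative assumptions, not the paper's full optimization (which also uses a change of variables and binary search over c).

```python
# Margin-loss attack sketch: minimize ||delta||^2 + c * max(z_true - max_other, -kappa).
import torch

torch.manual_seed(0)
model = torch.nn.Linear(20, 10)                  # stand-in classifier on 20-d inputs
x = torch.rand(1, 20)
y = 3                                            # assumed true class
c, kappa = 1.0, 0.0

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)
for _ in range(500):
    logits = model(x + delta)
    true_logit = logits[0, y]
    other_best = torch.cat([logits[0, :y], logits[0, y + 1:]]).max()
    margin = torch.clamp(true_logit - other_best, min=-kappa)   # attack succeeds when <= -kappa
    loss = delta.pow(2).sum() + c * margin
    opt.zero_grad()
    loss.backward()
    opt.step()

print("new prediction:", model(x + delta).argmax().item(), "(true class:", y, ")")
```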
Synthesizing Robust Adversarial Examples
TLDR
The existence of robust 3D adversarial objects is demonstrated, and the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations is presented; it also synthesizes two-dimensional adversarial images that are robust to noise, distortion, and affine transformation.
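A minimal sketch of the expectation-over-transformation idea behind that algorithm: optimize the perturbation against the average loss over a distribution of transformations, so the example survives real-world distortion. The toy classifier and the transformation distribution here (brightness jitter plus noise, rather than the paper's 3D rendering pipeline) are simplifying assumptions.

```python
# EOT sketch: Monte-Carlo estimate of E_t[loss(model(t(x + delta)), target)],
# minimized over the perturbation delta.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(3 * 32 * 32, 10)         # stand-in classifier
x = torch.rand(1, 3, 32, 32)
target = torch.tensor([7])                       # class we want to force

def random_transform(img):
    """Cheap surrogate for camera/pose variation: brightness jitter + noise."""
    scale = 0.8 + 0.4 * torch.rand(1)
    return (img * scale + 0.03 * torch.randn_like(img)).clamp(0, 1)

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)
for _ in range(300):
    adv = (x + delta).clamp(0, 1)
    loss = torch.stack([
        F.cross_entropy(model(random_transform(adv).flatten(1)), target)
        for _ in range(8)                        # 8 sampled transformations per step
    ]).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

adv = (x + delta).detach().clamp(0, 1)
print("prediction on a fresh transform:",
      model(random_transform(adv).flatten(1)).argmax().item())
```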
Realistic Adversarial Examples in 3D Meshes
TLDR
This paper projects optimized "adversarial meshes" to 2D with a photorealistic renderer and shows they are still able to mislead different machine learning models; it proposes to synthesize a realistic 3D mesh, place it in a scene mimicking similar rendering conditions, and thereby attack different machine learning models.