Corpus ID: 195886578

# Adversarial Objects Against LiDAR-Based Autonomous Driving Systems

@article{Cao2019AdversarialOA,
title={Adversarial Objects Against LiDAR-Based Autonomous Driving Systems},
author={Yulong Cao and Chaowei Xiao and Dawei Yang and Jin Fang and Ruigang Yang and Mingyan D. Liu and Bo Li},
journal={ArXiv},
year={2019},
volume={abs/1907.05418}
}
Deep neural networks (DNNs) are found to be vulnerable to adversarial examples, which are carefully crafted inputs with a small magnitude of perturbation aiming to induce arbitrarily incorrect predictions. Recent studies show that adversarial examples can pose a threat to real-world security-critical applications: a "physical adversarial Stop Sign" can be synthesized such that autonomous driving cars will misrecognize it as another sign (e.g., a speed limit sign). However, these image-space…
#### 54 Citations

Physically Realizable Adversarial Examples for LiDAR Object Detection
• J. Tu, +5 authors R. Urtasun
• Computer Science
• 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2020
This paper presents a method to generate universal 3D adversarial objects to fool LiDAR detectors, and demonstrates that placing an adversarial object on the rooftop of any target vehicle can hide the vehicle entirely from LiDAR detectors with a success rate of 80%.
• Computer Science
• Int. J. Intell. Syst.
• 2021
Cdar‐adv is introduced, a method for generating image adversarial examples on three‐dimensional (3D) objects, which could potentially launch a multi-sensor attack toward autonomous driving platforms.
Fooling LiDAR Perception via Adversarial Trajectory Perturbation
• Computer Science
• ArXiv
• 2021
Adversarial spoofing of a self-driving car's trajectory with only small perturbations is enough to make safety-critical objects undetectable or detected with incorrect positions; a polynomial trajectory perturbation is developed to achieve a temporally smooth and highly imperceptible attack.
Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving
This paper showcases practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle, and shows that a single universal adversary can hide different host vehicles from state-of-the-art multi-modal detectors.
Multi-Source Adversarial Sample Attack on Autonomous Vehicles
• Zuobin Xiong, Honghui Xu, Wei Li
• Computer Science
• IEEE Transactions on Vehicular Technology
• 2021
Two multi-source adversarial sample attack models are proposed, a parallel attack model and a fusion attack model, to simultaneously attack the image and LiDAR perception systems in autonomous vehicles.
Adversarial Attacks on Camera-LiDAR Models for 3D Car Detection
A single adversarial object with a specific shape and texture is placed on top of a car with the objective of making the car evade detection; the fusion model is found to be relatively more robust to adversarial attacks than the cascaded model.
Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures
• Computer Science
• USENIX Security Symposium
• 2020
This work discovers that the ignored occlusion patterns in LiDAR point clouds make self-driving cars vulnerable to spoofing attacks, constructs the first black-box spoofing attack based on this vulnerability, and proposes SVF, which embeds the neglected physical features into end-to-end learning.
• Computer Science
• ArXiv
• 2021
Detailed adversarial attacks are applied on a diverse multi-task visual perception deep network across distance estimation, semantic segmentation, motion detection, and object detection.
Towards Universal Physical Attacks On Cascaded Camera-Lidar 3D Object Detection Models
• Computer Science, Engineering
• ArXiv
• 2021
The proposed universal multi-modal attack was successful in reducing the model’s ability to detect a car by nearly 73%, and can aid in the understanding of what the cascaded RGB-point cloud DNN learns and its vulnerability to adversarial attacks.
• Computer Science
• ArXiv
• 2021
Meta adversarial training (MAT) is proposed, a novel combination of adversarial training with meta-learning, which overcomes this challenge by meta-learning universal perturbations along with model training and considerably increases robustness against universal patch attacks.

#### References

Showing 1–10 of 24 references
Spatially Transformed Adversarial Examples
• Computer Science, Mathematics
• ICLR
• 2018
Perturbations generated through spatial transformation could result in large $\mathcal{L}_p$ distance measures, but extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems.
Robust Physical-World Attacks on Deep Learning Models
This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions, and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.
Generating 3D Adversarial Point Clouds
• Chong Xiang, Bo Li
• Computer Science
• 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2019
This work proposes several novel algorithms to craft adversarial point clouds against PointNet, a widely used deep neural network for point cloud processing, and formulates six perturbation measurement metrics tailored to attacks in point clouds.
Generating Adversarial Examples with Adversarial Networks
• Computer Science, Mathematics
• IJCAI
• 2018
AdvGAN is proposed to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances, and achieves a high attack success rate under state-of-the-art defenses compared to other attacks.
Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation
• Computer Science
• ECCV
• 2018
It is observed that spatial consistency information can be potentially leveraged to detect adversarial examples robustly even when a strong adaptive attacker has access to the model and detection strategies.
Adversarial examples in the physical world
• Computer Science, Mathematics
• ICLR
• 2017
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through a camera, which shows that even in physical-world scenarios, machine learning systems are vulnerable to adversarial examples.
The Limitations of Deep Learning in Adversarial Settings
• Computer Science, Mathematics
• 2016 IEEE European Symposium on Security and Privacy (EuroS&P)
• 2016
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. Expand
Synthesizing Robust Adversarial Examples
• Computer Science
• ICML
• 2018
The existence of robust 3D adversarial objects is demonstrated, and the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations is presented; it also synthesizes two-dimensional adversarial images that are robust to noise, distortion, and affine transformation.