Corpus ID: 195886578

Adversarial Objects Against LiDAR-Based Autonomous Driving Systems

@article{Cao2019AdversarialOA,
  title={Adversarial Objects Against LiDAR-Based Autonomous Driving Systems},
  author={Yulong Cao and Chaowei Xiao and Dawei Yang and Jin Fang and Ruigang Yang and Mingyan D. Liu and Bo Li},
  journal={ArXiv},
  year={2019},
  volume={abs/1907.05418}
}
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples: carefully crafted inputs with a small magnitude of perturbation that induce arbitrarily incorrect predictions. Recent studies show that adversarial examples can pose a threat to real-world security-critical applications: a "physical adversarial Stop Sign" can be synthesized such that autonomous driving cars misrecognize it as another sign (e.g., a speed limit sign). However, these image-space…
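As a minimal, hedged sketch of what the abstract means by a small-magnitude perturbation that induces incorrect predictions, the snippet below runs a generic untargeted L-infinity PGD attack on a point-cloud classifier. The function name, the assumption of a differentiable `model` mapping point clouds to class logits, and the budgets are illustrative placeholders, not the LiDAR attack pipeline proposed in this paper.

```python
# Illustrative only: a generic untargeted L_inf PGD attack on a point-cloud
# classifier. `model`, the hyperparameters, and the (B, N, 3) input layout
# are assumptions; this is NOT the paper's attack against LiDAR perception.
import torch
import torch.nn.functional as F

def pgd_point_cloud(model, points, labels, eps=0.05, alpha=0.01, steps=40):
    """Shift every coordinate by at most `eps` to maximize the model's loss."""
    adv = points.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                  # ascend the loss
            adv = points + (adv - points).clamp(-eps, eps)   # project back into the eps-ball
        adv = adv.detach()
    return adv
```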
Physically Realizable Adversarial Examples for LiDAR Object Detection
  • J. Tu, Mengye Ren, +5 authors R. Urtasun
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR
This paper presents a method to generate universal 3D adversarial objects to fool LiDAR detectors and demonstrates that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
Camdar‐adv: Generating adversarial patches on 3D object
TLDR
Camdar‐adv is introduced, a method for generating image adversarial examples on three‐dimensional (3D) objects, which could potentially launch a multi-sensor attack on autonomous driving platforms.
Fooling LiDAR Perception via Adversarial Trajectory Perturbation
TLDR
Adversarial spoofing of a self-driving car's trajectory with small perturbations alone is enough to make safety-critical objects undetectable or detected at incorrect positions; a polynomial trajectory perturbation is developed to achieve a temporally smooth and highly imperceptible attack.
Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving
TLDR
This paper showcases practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle, and shows that a single universal adversary can hide different host vehicles from state-of-the-art multi-modal detectors.
Multi-Source Adversarial Sample Attack on Autonomous Vehicles
  • Zuobin Xiong, Honghui Xu, Wei Li, Z. Cai
  • Computer Science
  • IEEE Transactions on Vehicular Technology
  • 2021
TLDR
Two multi-source adversarial sample attack models, a parallel attack model and a fusion attack model, are proposed to simultaneously attack the image and LiDAR perception systems of autonomous vehicles.
Adversarial Attacks on Camera-LiDAR Models for 3D Car Detection
TLDR
A single adversarial object with a specific shape and texture is placed on top of a car with the objective of making the car evade detection; the fusion model is found to be relatively more robust to adversarial attacks than the cascaded model.
Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures
TLDR
This work discovers that ignored occlusion patterns in LiDAR point clouds make self-driving cars vulnerable to spoofing attacks, constructs the first black-box spoofing attack based on this vulnerability, and proposes SVF, which embeds the neglected physical features into end-to-end learning.
Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving
TLDR
Detailed adversarial attacks are applied to a diverse multi-task visual perception deep network across distance estimation, semantic segmentation, motion detection, and object detection.
Towards Universal Physical Attacks On Cascaded Camera-Lidar 3D Object Detection Models
TLDR
The proposed universal multi-modal attack reduces the model's ability to detect a car by nearly 73% and can aid in understanding what the cascaded RGB-point cloud DNN learns and its vulnerability to adversarial attacks.
Meta Adversarial Training
TLDR
Meta adversarial training (MAT), a novel combination of adversarial training with meta-learning, is proposed; it meta-learns universal perturbations along with model training and considerably increases robustness against universal patch attacks.

References

SHOWING 1-10 OF 24 REFERENCES
Spatially Transformed Adversarial Examples
TLDR
Perturbations generated through spatial transformation could result in large $\mathcal{L}_p$ distance measures, but extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems.
Robust Physical-World Attacks on Deep Learning Models
TLDR
This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including different viewpoints.
Generating 3D Adversarial Point Clouds
  • Chong Xiang, C. Qi, Bo Li
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR
This work proposes several novel algorithms to craft adversarial point clouds against PointNet, a widely used deep neural network for point cloud processing, and formulates six perturbation measurement metrics tailored to attacks on point clouds; one such metric is sketched below.
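As a hedged illustration of a point-cloud perturbation metric (the entry above does not spell out its six metrics), the sketch below shows the widely used Chamfer distance, one common choice for measuring how far a perturbed point set has drifted from the original.

```python
# Hypothetical illustration, not necessarily one of the cited paper's metrics:
# Chamfer distance between an original and a perturbed point cloud.
import torch

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """a: (N, 3) original points, b: (M, 3) perturbed points."""
    d = torch.cdist(a, b) ** 2                              # pairwise squared distances, (N, M)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```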
Generating Adversarial Examples with Adversarial Networks
TLDR
AdvGAN is proposed to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances, and achieves a high attack success rate under state-of-the-art defenses compared to other attacks.
Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation
TLDR
It is observed that spatial consistency information can be potentially leveraged to detect adversarial examples robustly even when a strong adaptive attacker has access to the model and detection strategies.
Adversarial examples in the physical world
TLDR
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through a camera, which shows that machine learning systems are vulnerable to adversarial examples even in physical-world scenarios.
The Limitations of Deep Learning in Adversarial Settings
TLDR
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
Synthesizing Robust Adversarial Examples
TLDR
The existence of robust 3D adversarial objects is demonstrated, and the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations is presented; it also synthesizes two-dimensional adversarial images that are robust to noise, distortion, and affine transformation. A minimal sketch of this expectation-over-transformation idea follows.
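The sketch below is a rough, hedged illustration of the expectation-over-transformation idea: optimize a bounded perturbation so it remains adversarial under randomly sampled transformations. `model`, `sample_transform`, and all hyperparameters are placeholder assumptions, not the cited implementation.

```python
# Rough sketch of expectation over transformation (EOT): average the targeted
# attack loss over sampled transformations so the perturbation survives them.
import torch
import torch.nn.functional as F

def eot_attack(model, x, target, sample_transform,
               eps=0.03, lr=0.01, steps=100, samples=10):
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Expected targeted loss over a batch of random transformations.
        loss = sum(F.cross_entropy(model(sample_transform(x + delta)), target)
                   for _ in range(samples)) / samples
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)            # keep the perturbation small
    return (x + delta).detach()
```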
MeshAdv: Adversarial Meshes for Visual Recognition
  • Dawei Yang, Chaowei Xiao, Bo Li, Jia Deng, Mingyan D. Liu
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR
This paper proposes meshAdv to generate "adversarial 3D meshes" from objects that have rich shape features but minimal textural variation, and designs a pipeline to perform a black-box attack on a photorealistic renderer with unknown rendering parameters.
Adversarial Geometry and Lighting using a Differentiable Renderer
TLDR
This work proposes novel adversarial attacks that directly alter the geometry of 3D objects and/or manipulate the lighting in a virtual scene, and leverages a novel differentiable renderer that is efficient to evaluate and analytically differentiate.