Fooling LiDAR Perception via Adversarial Trajectory Perturbation

@article{Li2021FoolingLP,
  title={Fooling LiDAR Perception via Adversarial Trajectory Perturbation},
  author={Yiming Li and Congcong Wen and Felix Juefei-Xu and Chen Feng},
  journal={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={7878-7887}
}
  • Yiming Li, Congcong Wen, Felix Juefei-Xu, Chen Feng
  • Published 29 March 2021
  • Computer Science
  • 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
LiDAR point clouds collected from a moving vehicle are functions of its trajectories, because the sensor motion needs to be compensated to avoid distortions. When autonomous vehicles are sending LiDAR point clouds to deep networks for perception and planning, could the motion compensation consequently become a wide-open backdoor in those networks, due to both the adversarial vulnerability of deep learning and GPS-based vehicle trajectory estimation that is susceptible to wireless spoofing? We… 
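
To make the threat model concrete, the sketch below (a minimal 2D toy, assuming per-point capture timestamps and an SE(2) pose trajectory; all names are illustrative and not from the paper) shows how motion compensation folds the estimated trajectory into the point cloud, so that spoofing the trajectory directly distorts the network's input:

import numpy as np

def se2(x, y, yaw):
    # homogeneous 2D pose matrix for position (x, y) and heading yaw
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0., 0., 1.]])

def compensate(points, times, traj):
    # Transform each raw return, captured in the sensor frame at its own
    # timestamp, into the frame of the sweep's first pose, using the
    # (GPS-estimated) trajectory traj: t -> (x, y, yaw).
    T0_inv = np.linalg.inv(se2(*traj(0.0)))
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, times)):
        q = T0_inv @ se2(*traj(t)) @ np.array([p[0], p[1], 1.0])
        out[i] = q[:2]
    return out

# Straight 10 m/s trajectory vs. the same trajectory with a small
# hypothetical spoofed lateral drift (30 cm over one sweep):
clean   = lambda t: (10.0 * t, 0.0,     0.0)
spoofed = lambda t: (10.0 * t, 0.3 * t, 0.0)

pts   = np.random.randn(1000, 2) * 20.0   # synthetic returns, sensor frame
times = np.linspace(0.0, 1.0, 1000)       # per-point timestamps in the sweep
delta = compensate(pts, times, spoofed) - compensate(pts, times, clean)
print(np.ptp(delta, axis=0))   # nonzero: the compensated cloud is distorted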

V2XP-ASG: Generating Adversarial Scenes for Vehicle-to-Everything Perception

The first open adversarial scene generator, V2XP-ASG, is proposed; it can produce realistic, challenging scenes for modern LiDAR-based multi-agent perception systems, and it learns to construct an adversarial collaboration graph while simultaneously perturbing multiple agents' poses in an adversarial yet plausible manner.

SoK: Rethinking Sensor Spoofing Attacks against Robotic Vehicles from a Systematic View

This paper comprehensively systematizes the knowledge of sensor spoofing attacks against RVs and proposes a novel action flow model to systematically describe robotic function executions and sensor spoofing vulnerabilities.

Can You Spot the Chameleon? Adversarially Camouflaging Images from Co-Salient Object Detection

  • Ruijun Gao, Qing Guo, Song Wang
  • Computer Science
    2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2022
The very first black-box joint adversarial exposure and noise attack (Jadena), which jointly and locally tunes the exposure and additive perturbations of the image according to a newly designed high-feature-level contrast-sensitive loss function, leads to significant performance degradation on various co-saliency detection datasets and makes the co-salient objects undetectable.

AdvBokeh: Learning to Adversarially Defocus Blur

A Depth-guided Bokeh Synthesis Network (DebsNet) that can flexibly synthesize, refocus, and adjust the level of bokeh of an image with a one-stage training procedure, together with a depth-guided gradient-based attack that regularizes the gradient to improve the realism of the adversarial bokeh.

AVA: Adversarial Vignetting Attack against Visual Recognition

This work proposes the radial-anisotropic adversarial vignetting attack (RA-AVA) and a geometry-aware level-set optimization method to solve for the adversarial vignetting regions and physical parameters jointly.

Generating Useful Accident-Prone Driving Scenarios via a Learned Traffic Prior

This work introduces STRIVE, a method to automatically generate challenging scenarios that cause a given planner to produce undesirable behavior, such as collisions, with traffic modeled in the form of a graph-based conditional VAE.

Research Landscape on Robust Perception

My research broadly focuses on a fuller understanding of deep learning: I am actively exploring new methods that are statistically efficient and adversarially robust, and studying the conditions under which deep learning starts to fail.

SoK: On the Semantic AI Security in Autonomous Driving

This paper takes the initiative to develop an open-source, uniform, and extensible system-driven evaluation platform, named PASS, for the semantic AD AI security research community, and uses the implemented platform prototype to showcase its capabilities and benefits with representative semantic AD AI attacks.

Benchmarking Shadow Removal for Facial Landmark Detection and Beyond

A novel detection-aware shadow removal framework is designed, which empowers shadow removal to achieve higher restoration quality and enhance the shadow robustness of deployed facial landmark detectors.

WIP: Infrastructure-Aided Defense for Autonomous Driving Systems: Opportunities and Challenges

This paper is the first to systematically explore such a new AD security design space leveraging emerging infrastructure-side support, which is called Infrastructure-Aided Autonomous Driving Defense (I-A2D2).

References


Physically Realizable Adversarial Examples for LiDAR Object Detection

  • J. Tu, Mengye Ren, R. Urtasun
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
This paper presents a method to generate universal 3D adversarial objects to fool LiDAR detectors, and demonstrates that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.

Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

This work performs the first security study of LiDAR-based perception in AV settings, and designs an algorithm that combines optimization and global sampling, which improves the attack success rates to around 75%.

Adversarial Objects Against LiDAR-Based Autonomous Driving Systems

The potential vulnerabilities of LiDAR-based autonomous driving detection systems are revealed by proposing an optimization-based approach, LiDAR-Adv, to generate adversarial objects that can evade the LiDAR-based detection system under various conditions.

Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures

This work discovers that the ignored occlusion patterns in LiDAR point clouds make self-driving cars vulnerable to spoofing attacks, constructs the first black-box spoofing attack based on this vulnerability, and proposes SVF, which embeds the neglected physical features into end-to-end learning.

Geometric Adversarial Attacks and Defenses on 3D Point Clouds

This work is the first to consider the problem of adversarial examples at a geometric level, and demonstrates the attack's robustness to defenses: remnant characteristics of the target shape are still present in the output even after a defense is applied to the adversarial input.

Generating 3D Adversarial Point Clouds

  • Chong Xiang, C. Qi, Bo Li
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
This work proposes several novel algorithms to craft adversarial point clouds against PointNet, a widely used deep neural network for point cloud processing, and formulates six perturbation measurement metrics tailored to attacks on point clouds.
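
As a rough illustration of this family of attacks, here is a hedged sketch of a gradient-based point-shifting attack (assumptions: model is any differentiable classifier mapping a batched (B, N, 3) cloud to logits; this is not the authors' exact algorithm, and their dedicated metrics, such as Hausdorff- or Chamfer-style distances, would replace the crude per-point L2 projection used here):

import torch
import torch.nn.functional as F

def point_perturbation_attack(model, points, label, eps=0.05, steps=50, lr=0.01):
    # Shift each point to increase the classification loss, projecting every
    # per-point displacement back onto an L2 ball of radius eps after each step.
    delta = torch.zeros_like(points, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = -F.cross_entropy(model(points + delta), label)  # ascend true-label loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            norms = delta.norm(dim=-1, keepdim=True).clamp(min=1e-12)
            delta.mul_(norms.clamp(max=eps) / norms)  # keep each shift within eps
    return (points + delta).detach()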

Illusion and Dazzle: Adversarial Optical Channel Exploits Against Lidars for Automotive Applications

A spoofing-by-relaying attack is presented, which can not only induce illusions in the lidar output but also cause the illusions to appear closer than the location of the spoofing device.

AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds

A new point cloud attack (dubbed AdvPC) is developed that exploits the input data distribution by adding an adversarial loss, after auto-encoder reconstruction, to the objective it optimizes, leading to perturbations that are resilient against current defenses while remaining highly transferable compared to state-of-the-art attacks.
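
A minimal sketch of such an objective (the function name, gamma, and the weighting are assumptions; autoencoder stands for any pretrained point-cloud auto-encoder):

import torch.nn.functional as F

def advpc_style_loss(model, autoencoder, x_adv, label, gamma=0.5):
    # Penalize correct classification of both the adversarial cloud and its
    # auto-encoder reconstruction, so the perturbation keeps fooling the
    # victim after reconstruction and transfers better across networks.
    loss_net  = -F.cross_entropy(model(x_adv), label)
    loss_data = -F.cross_entropy(model(autoencoder(x_adv)), label)
    return (1.0 - gamma) * loss_net + gamma * loss_data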

Adversarial Attack and Defense on Point Sets

An attack and defense scheme for preventing 3D point clouds from being manipulated and for pursuing a noise-tolerable 3D representation, together with a momentum-enhanced pointwise gradient to improve attack transferability.
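
The momentum-enhanced pointwise gradient reads like MI-FGSM applied per point; a hedged sketch under that assumption (names illustrative, not the paper's code):

import torch

def momentum_pointwise_step(points_adv, grad, velocity, alpha=0.01, mu=1.0):
    # Accumulate the L1-normalized gradient into a running velocity, then step
    # each point along the sign of the accumulated direction; the momentum term
    # stabilizes updates across iterations and improves transferability.
    velocity = mu * velocity + grad / grad.abs().mean().clamp(min=1e-12)
    points_adv = points_adv + alpha * velocity.sign()
    return points_adv, velocity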

SPARK: Spatial-Aware Online Incremental Attack Against Visual Tracking

The spatial-aware online incremental attack (a.k.a. SPARK) is proposed, which performs spatio-temporally sparse incremental perturbations online, making the adversarial attack less perceptible and much more efficient than basic attacks.
...