On the Adversarial Robustness of Camera-based 3D Object Detection

@article{Xie2023OnTA,
  title={On the Adversarial Robustness of Camera-based 3D Object Detection},
  author={Shaoyuan Xie and Zichao Li and Zeyu Wang and Cihang Xie},
  journal={arXiv preprint arXiv:2301.10766},
  year={2023}
}
In recent years, camera-based 3D object detection has gained widespread attention for its ability to achieve high performance with low computational cost. However, the robustness of these methods to adversarial attacks has not been thoroughly examined. In this study, we conduct the first comprehensive investigation of the robustness of leading camera-based 3D object detection methods under various adversarial conditions. Our experiments reveal five interesting findings: (a) the use of accurate… 

RoboBEV: Towards Robust Bird's Eye View Perception under Corruptions

RoboBEV is introduced, a comprehensive benchmark suite that encompasses eight distinct corruptions, namely Bright, Dark, Fog, Snow, Motion Blur, Color Quant, Camera Crash, and Frame Lost, and provides valuable insights for designing future BEV models that can achieve both accuracy and robustness in real-world deployments.

Robo3D: Towards Robust and Reliable 3D Perception against Corruptions

Robo3D is presented, the first comprehensive benchmark for probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios and natural corruptions that occur in real-world environments; it also proposes a density-insensitive training framework along with a simple, flexible voxelization strategy to enhance model resiliency.

On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving

An extensive evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches, including digital, simulated, and physical ones, reveals that a patch's impact is often spatially confined to the areas of the image around the patch.

DPATCH: An Adversarial Patch Attack on Object Detectors

Extensive evaluations imply that DPatch can perform effective attacks under a black-box setup, i.e., even without knowledge of the attacked network's architecture and parameters, making it very practical for implementing real-world attacks.

Physically Realizable Adversarial Examples for LiDAR Object Detection

  • J. Tu, Mengye Ren, R. Urtasun
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020
This paper presents a method to generate universal 3D adversarial objects to fool LiDAR detectors and demonstrates that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.

Adversarial Robustness under Long-Tailed Distribution

The negative impacts induced by imbalanced data on both recognition performance and adversarial robustness are revealed, and a clean yet effective framework, RoBal, is proposed, consisting of two dedicated modules: a scale-invariant classifier and data re-balancing via both margin engineering at the training stage and boundary adjustment during inference.

Adversarial Examples for Semantic Segmentation and Object Detection

This paper proposes a novel algorithm named Dense Adversary Generation (DAG), which applies to the state-of-the-art networks for segmentation and detection, and finds that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks.

Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks

This paper presents the first study of security issues of MSF-based perception in AD systems by exploring the possibility of attacking all fusion sources simultaneously, and formulates the attack as an optimization problem to generate a physically realizable, adversarial 3D-printed object that misleads an AD system into failing to detect it and thus crashing into it.

BEVDepth: Acquisition of Reliable Depth for Multi-view 3D Object Detection

A new 3D object detector with trustworthy depth estimation, dubbed BEVDepth, is proposed for camera-based Bird's-Eye-View (BEV) 3D object detection; it achieves a new state-of-the-art 60.9% NDS on the challenging nuScenes test set while maintaining high efficiency.

PETRv2: A Unified Framework for 3D Perception from Multi-Camera Images

The PETRv2 framework is proposed, a unified framework for 3D perception from multi-view images that builds on PETR and explores the effectiveness of temporal modeling, utilizing temporal information from previous frames to boost 3D object detection.

Universal Adversarial Perturbations

The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers and outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.
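The core idea, a single input-agnostic perturbation that flips predictions on many inputs at once, can be illustrated with a toy model. The sketch below is not the paper's DeepFool-based algorithm: it uses a hypothetical linear classifier sign(w.x), restricts to inputs of one class (for a linear model a single direction can only fool one side of the boundary), and accumulates FGSM-style steps projected onto an L-infinity ball.

```python
import numpy as np

# Illustrative sketch only: a toy linear "classifier" sign(w . x), not a
# deep network, and a simplified accumulate-and-project procedure rather
# than the paper's DeepFool-based updates.
rng = np.random.default_rng(0)
w = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(400, 3))
X = X[X @ w > 0]            # keep only inputs the model labels +1

def fooling_rate(v):
    """Fraction of inputs whose prediction flips under x -> x + v."""
    return float(np.mean((X + v) @ w <= 0))

eps, step = 0.5, 0.1        # L-infinity budget and per-example step size
v = np.zeros(3)
for _ in range(5):                          # a few passes over the data
    for x in X:
        if (x + v) @ w > 0:                 # still classified correctly
            v -= step * np.sign(w)          # FGSM-style step against +1
            v = np.clip(v, -eps, eps)       # project back onto eps-ball

print(fooling_rate(np.zeros(3)))   # 0.0 before the attack
print(fooling_rate(v))             # one shared v flips many inputs
```

The single direction `v` here plays the role of the universal perturbation: it is computed once, stays within a fixed norm budget, and then fools a large fraction of inputs it was never tuned to individually.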

On the Adversarial Robustness of Vision Transformers

This work provides a comprehensive study of the robustness of vision transformers (ViTs) against adversarial perturbations, showing that adversarial training is applicable to ViTs for training robust models, that sharpness-aware minimization can further improve robustness, and that pre-training with clean images on larger datasets does not significantly improve adversarial robustness.