Minimal Adversarial Examples for Deep Learning on 3D Point Clouds

Jaeyeon Kim, Binh-Son Hua, Duc Thanh Nguyen, Sai-Kit Yeung. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
With recent developments of convolutional neural networks, deep learning for 3D point clouds has shown significant progress in various 3D scene understanding tasks, e.g., object recognition and semantic segmentation. In safety-critical environments, however, it is not well understood how vulnerable such deep learning models are to adversarial examples. In this work, we explore adversarial attacks on point cloud-based neural networks. We propose a unified formulation for adversarial point cloud…
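
The attack setting the abstract describes can be illustrated with a minimal, self-contained sketch: a one-step gradient-sign perturbation applied to every point of a toy linear "classifier". The model, weights, and function names here are illustrative stand-ins, not the paper's method.

```python
import numpy as np

def toy_classifier(points, W):
    """Toy permutation-invariant classifier: class scores from the
    mean point feature. A stand-in for a real point cloud network."""
    return W @ points.mean(axis=0)             # (num_classes,)

def fgsm_point_attack(points, W, true_class, eps=0.05):
    """One-step gradient-sign perturbation of every point. For this
    linear toy model the gradient of the true-class score w.r.t. each
    point is W[true_class] / N, so we move every point in the direction
    that lowers that score (an untargeted attack step)."""
    n = points.shape[0]
    grad = np.tile(W[true_class] / n, (n, 1))  # d(score)/d(points)
    return points - eps * np.sign(grad)

rng = np.random.default_rng(0)
points = rng.normal(size=(64, 3))   # a random 64-point cloud
W = rng.normal(size=(4, 3))         # 4-class toy weights
before = toy_classifier(points, W)[2]
after = toy_classifier(fgsm_point_attack(points, W, true_class=2), W)[2]
print(before > after)  # True: the true-class score drops
```

A real attack would obtain the gradient via automatic differentiation through the victim network rather than in closed form.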

Papers citing this work

Explainability-Aware One Point Attack for Point Cloud Neural Networks

This work proposes two new attack methods, the one-point attack (OPA) and the critical traverse attack (CTA), which restrict the perturbation to a human-cognizable number of dimensions with the help of explainability methods, so that the working principle or decision boundary of the model becomes comprehensible through the observable perturbation magnitude.

Passive Defense Against 3D Adversarial Point Clouds Through the Lens of 3D Steganalysis

This work designs a 3D adversarial point cloud detector that is double-blind, that is, it does not rely on exact knowledge of the adversarial attack method or the victim model, and it achieves good detection performance on multiple types of 3D adversarial point clouds.

PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples

It is concluded that existing completion models are severely vulnerable to adversarial examples, and that state-of-the-art defenses for point cloud classification are partially ineffective when applied to incomplete and uneven point cloud data.

Isometric 3D Adversarial Examples in the Physical World

Experiments on typical point cloud recognition models validate that the proposed novel ε-isometric attack improves the attack success rate and the naturalness of the generated 3D adversarial examples compared with state-of-the-art attack methods.

Model-Free Prediction of Adversarial Drop Points in 3D Point Clouds

This paper aims to provide a novel viewpoint on this problem, in which adversarial points can be predicted independently of the model, and provides further insight into DNNs for point cloud analysis, by showing which features play key roles in their decision-making process.

NormalAttack: Curvature-Aware Shape Deformation along Normals for Imperceptible Point Cloud Attack

A novel NormalAttack framework for imperceptible adversarial attacks on point clouds is proposed; it concentrates the perturbation along point normals to deform the underlying surface of the 3D point cloud, so that tiny perturbations produce shape deformations with better attack performance.

Shape Prior Guided Attack: Sparser Perturbations on 3D Point Clouds

This paper proposes SPGA (Shape Prior Guided Attack), a novel method that generates adversarial point cloud examples with a higher attack success rate, a smaller perturbation budget, and stronger transferability.

OccAM's Laser: Occlusion-based Attribution Maps for 3D Object Detectors on LiDAR Data

This paper proposes a method to generate attribution maps for the detected objects in order to better understand the behavior of black-box models and shows a detailed evaluation of the attribution maps, demonstrating that they are interpretable and highly informative.

3DeformRS: Certifying Spatial Deformations on Point Clouds

3DeformRS, a method to certify the robustness of point cloud Deep Neural Networks (DNNs) against real-world deformations, is proposed; it is fast, scales well with point cloud size, and provides comparable-to-better certificates.

Robust Structured Declarative Classifiers for 3D Point Clouds: Defending Adversarial Attacks with Implicit Gradients

This work proposes the Lattice Point Classifier (LPC), an effective and efficient instantiation of the proposed family of robust structured declarative classifiers for point cloud classification, based on structured sparse coding in the permutohedral lattice and 2D convolutional neural networks, and trainable end-to-end.



References

Adversarial point perturbations on 3D objects

This work proposes adversarial attacks based on solving different optimization problems, such as minimizing the perceptibility of the generated adversarial examples or maintaining a uniform density distribution of points across the adversarial object surfaces.

Robustness of 3D Deep Learning in an Adversarial Setting

This work develops an algorithm for analysis of pointwise robustness of neural networks that operate on 3D data and uses it to evaluate an array of state-of-the-art models in order to demonstrate their vulnerability to occlusion attacks.

PointCloud Saliency Maps

A novel way of characterizing critical points and segments to build point-cloud saliency maps is proposed; each saliency score can be efficiently measured by the corresponding gradient of the loss w.r.t. the point under spherical coordinates.
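
The gradient-based scoring idea can be sketched with a toy example; `toy_loss` and the finite-difference scoring below are illustrative stand-ins for the paper's classification loss and spherical-coordinate gradients, not its actual algorithm.

```python
import numpy as np

def toy_loss(points):
    """Stand-in loss: distance of the mean feature from a fixed target
    (a real saliency map would use the network's classification loss)."""
    target = np.array([1.0, 0.0, 0.0])
    return np.sum((points.mean(axis=0) - target) ** 2)

def saliency_scores(points, step=1e-3):
    """Score each point by the loss change when it is nudged slightly
    toward the cloud centroid -- a finite-difference stand-in for the
    radial loss gradient in the spherical-coordinate formulation."""
    center = points.mean(axis=0)
    base = toy_loss(points)
    scores = np.empty(len(points))
    for i, p in enumerate(points):
        moved = points.copy()
        moved[i] = p + step * (center - p)           # small move inward
        scores[i] = (base - toy_loss(moved)) / step  # positive = salient
    return scores

rng = np.random.default_rng(1)
cloud = rng.normal(size=(32, 3))
s = saliency_scores(cloud)
drop_order = np.argsort(-s)   # drop the highest-scoring points first
```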

Generating 3D Adversarial Point Clouds

  • Chong Xiang, C. Qi, Bo Li
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
This work proposes several novel algorithms to craft adversarial point clouds against PointNet, a widely used deep neural network for point cloud processing, and formulates six perturbation measurement metrics tailored to attacks on point clouds.

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
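
The permutation-invariance property can be demonstrated with a minimal numpy sketch: a shared per-point transform followed by max pooling yields the same global feature for any ordering of the input points. The weights and sizes here are arbitrary illustrations, not PointNet's actual architecture.

```python
import numpy as np

def shared_mlp(points, W, b):
    """Apply the same small transform to every point independently
    (the 'shared MLP' idea; these weights are illustrative only)."""
    return np.maximum(points @ W + b, 0.0)   # per-point ReLU features

def global_feature(points, W, b):
    """Max pooling over points is a symmetric function, so the output
    is unchanged by any reordering of the input points."""
    return shared_mlp(points, W, b).max(axis=0)

rng = np.random.default_rng(2)
W, b = rng.normal(size=(3, 8)), rng.normal(size=8)
cloud = rng.normal(size=(16, 3))
perm = rng.permutation(16)
f1 = global_feature(cloud, W, b)
f2 = global_feature(cloud[perm], W, b)
print(np.allclose(f1, f2))  # True: permutation-invariant
```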

Extending Adversarial Attacks and Defenses to Deep 3D Point Cloud Classifiers

Overall, it is found that 3D point cloud classifiers are vulnerable to adversarial attacks, but they are also more easily defended than 2D image classifiers.

3D ShapeNets: A deep representation for volumetric shapes

This work proposes to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network, and shows that this 3D deep representation enables significant performance improvements over the state of the art in a variety of tasks.

Revisiting Point Cloud Classification: A New Benchmark Dataset and Classification Model on Real-World Data

This paper introduces ScanObjectNN, a new real-world point cloud object dataset based on scanned indoor scene data, and proposes new point cloud classification neural networks that achieve state-of-the-art performance on classifying objects with cluttered background.

IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration

The experimental results show that IF-Defense achieves the state-of-the-art defense performance against all existing adversarial attacks on PointNet, PointNet++, DGCNN and PointConv.

Self-Robust 3D Point Recognition via Gather-Vector Guidance

In this paper, we look into the problem of 3D adversarial attack, and propose to leverage the internal properties of the point clouds and the adversarial examples to design a new self-robust deep…