Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks
@inproceedings{Nesti2021EvaluatingTR, title={Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks}, author={Federico Nesti and Giulio Rossolini and Saasha Nair and Alessandro Biondi and Giorgio C. Buttazzo}, booktitle={2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)}, year={2022}, pages={2826-2835} }
Deep learning and convolutional neural networks achieve impressive performance in computer vision tasks such as object detection and semantic segmentation (SS). However, recent studies have shown evident weaknesses of such models against adversarial perturbations. In real-world scenarios such as autonomous driving, more attention should instead be devoted to real-world adversarial examples (RWAEs), which are physical objects (e.g., billboards and printable patches) optimized to be…
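To make the threat model concrete, the sketch below shows the general shape of a patch-optimization loop against a semantic segmentation network. It is a minimal illustration, not the authors' implementation: `model`, `loader`, the patch size and location, and the loss are all assumptions, and a physical attack would additionally sample random transformations (scale, rotation, lighting) when pasting the patch.

```python
import torch
import torch.nn.functional as F

# Hedged sketch (not the paper's code): optimize a printable patch so that a
# segmentation model mispredicts pixels. `model` and `loader` are assumed to
# be a PyTorch segmentation network and a street-scene DataLoader.

def apply_patch(images, patch, top=100, left=100):
    """Paste `patch` into every image at a fixed location. A physical-world
    attack would also sample random scale, rotation, and brightness here."""
    patched = images.clone()
    _, h, w = patch.shape
    patched[:, :, top:top + h, left:left + w] = patch
    return patched

patch = torch.rand(3, 200, 200, requires_grad=True)    # patch pixels in [0, 1]
optimizer = torch.optim.Adam([patch], lr=0.01)

for images, labels in loader:                          # labels: (B, H, W) class ids
    out = model(apply_patch(images, patch.clamp(0, 1)))
    # Gradient *ascent* on per-pixel cross-entropy degrades the segmentation.
    loss = -F.cross_entropy(out, labels, ignore_index=255)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```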
20 Citations
Detecting Adversarial Perturbations in Multi-Task Perception
- Computer Science · 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
- 2022
A novel adversarial perturbation detection scheme based on multi-task perception of complex vision tasks, which detects inconsistencies between the extracted edges of the input image, the depth output, and the segmentation output via a novel edge-consistency loss defined across all three modalities.
Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation
- Computer Science · ArXiv
- 2022
Demasked Smoothing is presented, the first approach to certify the robustness of semantic segmentation models against this threat model; it can on average certify 64% of the pixel predictions against a 1% patch in the detection task and 48% against a 0.5% patch in the recovery task on the ADE20K dataset.
A Comparative Study of Adversarial Attacks against Point Cloud Semantic Segmentation
- Computer Science
- 2021
All of the PCSS models are vulnerable to both targeted and non-targeted attacks, and attacks against point features such as color are more effective; the research community is called on to develop new approaches to harden PCSS models against adversarial attacks.
SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness
- Computer Science · ECCV
- 2022
A convergence analysis is provided to show that the proposed SegPGD can create more effective adversarial examples than PGD under the same number of attack iterations, and SegPGD is applied as the underlying attack method for segmentation adversarial training.
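For intuition, a plain PGD attack on per-pixel cross-entropy looks like the sketch below; SegPGD's contribution is to reweight correctly versus wrongly classified pixels with a schedule over the iterations, which is omitted here. `model` is an assumed PyTorch segmentation network returning (B, C, H, W) logits.

```python
import torch
import torch.nn.functional as F

def pgd_segmentation(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Plain L-inf PGD on per-pixel cross-entropy. SegPGD additionally
    reweights correctly vs. wrongly classified pixels over the iterations;
    that schedule is omitted in this sketch."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y, ignore_index=255)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the segmentation loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)                 # keep pixels valid
    return x_adv.detach()
```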
CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models
- Computer Science · ArXiv
- 2022
CARLA-GeAR is presented, a tool for the automatic generation of photo-realistic synthetic datasets that can be used for a systematic evaluation of the adversarial robustness of neural models against physical adversarial patches, as well as for comparing the performance of different adversarial defense/detection methods.
Adversarial Examples on Segmentation Models Can be Easy to Transfer
- Computer Science · ArXiv
- 2021
The high transferability achieved by the method shows that, in contrast to the observations in previous work, adversarial examples on a segmentation model can be easy to transfer to other segmentation models.
A Survey on Physical Adversarial Attack in Computer Vision
- Computer Science · ArXiv
- 2022
This paper reviews the development of physical adversarial attacks in DNN-based computer vision tasks, including image recognition, object detection, and semantic segmentation, and presents a categorization scheme to summarize the current physical adversarial attacks.
Physical Adversarial Attack meets Computer Vision: A Decade Survey
- Computer Science · ArXiv
- 2022
This paper defines the adversarial medium, essential to performing attacks in the physical world, presents physical adversarial attack methods in task order (classification, detection, and re-identification), and discusses their performance in solving the trilemma of effectiveness, stealthiness, and robustness.
Suppress with a Patch: Revisiting Universal Adversarial Patch Attacks against Object Detection
- Computer Science · 2022 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME)
- 2022
An in-depth analysis of different patch-generation parameters, including initialization, patch size, and especially the positioning of the patch in the image during training, shows that inserting the patch inside a window of increasing size during training leads to a significant increase in attack strength compared to a fixed position.
Adversarial Patch Attacks and Defences in Vision-Based Tasks: A Survey
- Computer Science · ArXiv
- 2022
An overview of existing adversarial patch attack techniques is provided to help interested researchers quickly catch up with the progress, and existing detection and defence techniques against adversarial patches are discussed to help the community better understand this type of attack and its real-world applications.
References
Showing 1-10 of 50 references
The Vulnerability of Semantic Segmentation Networks to Adversarial Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing
- Computer Science · IEEE Signal Processing Magazine
- 2021
The goal of this article is to illuminate the vulnerability aspects of CNNs used for semantic segmentation with respect to adversarial attacks, and to share insights into some of the known adversarial defense strategies.
Adversarial Examples for Semantic Segmentation and Object Detection
- Computer Science · 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017
This paper proposes a novel algorithm named Dense Adversary Generation (DAG), which applies to state-of-the-art networks for segmentation and detection, and finds that the adversarial perturbations can be transferred across networks trained on different data, based on different architectures, and even intended for different recognition tasks.
On the Robustness of Semantic Segmentation Models to Adversarial Attacks
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This paper presents what is, to the authors' knowledge, the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets, and shows how mean-field inference in deep structured models and multi-scale processing naturally implement recently proposed adversarial defenses.
Universal Adversarial Perturbations Against Semantic Image Segmentation
- Computer Science · 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017
This work presents an approach for generating (universal) adversarial perturbations that make the network yield a desired target segmentation as output and shows empirically that there exist barely perceptible universal noise patterns which result in nearly the same predicted segmentation for arbitrary inputs.
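The idea can be illustrated with a short sketch: a single perturbation `xi` is optimized across many images so the network outputs a fixed target segmentation, then clipped to stay barely perceptible. All names (`model`, `loader`, the target class id) are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of a universal segmentation perturbation: one noise
# tensor `xi` is trained over many images toward a fixed target map.
# `model` and `loader` are assumed; class id 8 is a placeholder target.
xi = torch.zeros(3, 512, 1024, requires_grad=True)
opt = torch.optim.Adam([xi], lr=0.005)
eps = 10 / 255                                          # perceptibility budget
target_map = torch.full((512, 1024), 8, dtype=torch.long)

for images, _ in loader:                                # images: (B, 3, 512, 1024)
    logits = model((images + xi).clamp(0, 1))
    # Push every pixel of every image toward the same target segmentation.
    loss = F.cross_entropy(logits, target_map.expand(images.size(0), -1, -1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        xi.clamp_(-eps, eps)                            # keep noise barely perceptible
```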
AdvSPADE: Realistic Unrestricted Attacks for Semantic Segmentation
- Computer Science
- 2019
This paper demonstrates a simple and effective method to generate unrestricted adversarial examples using conditional generative adversarial networks (CGANs) without any hand-crafted metric, and leverages the SPADE (spatially-adaptive denormalization) structure with an additional loss term to generate effective adversarial attacks in a single step.
Robust Physical-World Attacks on Deep Learning Visual Classification
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions, and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road-sign classifiers in the physical world under various environmental conditions, including different viewpoints.
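A hedged sketch of an RP2-style loop is shown below: a masked sticker perturbation is optimized over sampled physical transformations toward a target label, with an L1 term to keep it small and printable. `model`, `stop_sign`, `mask`, `transforms`, and the target class id are placeholders, not the authors' setup.

```python
import random
import torch
import torch.nn.functional as F

# Hedged RP2-style sketch: optimize a masked sticker perturbation on a
# stop-sign image over sampled physical transformations toward a target
# label. All names below are illustrative placeholders.
delta = torch.zeros(3, 224, 224, requires_grad=True)   # sticker perturbation
opt = torch.optim.Adam([delta], lr=0.1)
target = torch.tensor([42])                            # placeholder target class id

for _ in range(1000):
    t = random.choice(transforms)                      # e.g., viewpoint/lighting warp
    x = t((stop_sign + mask * delta).clamp(0, 1))      # perturb the masked region only
    loss = F.cross_entropy(model(x.unsqueeze(0)), target)  # targeted misclassification
    loss += 0.05 * (mask * delta).abs().sum()          # L1 term keeps the sticker small
    opt.zero_grad()
    loss.backward()
    opt.step()
```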
Adversarial Patch Attacks on Monocular Depth Estimation Networks
- Computer Science · IEEE Access
- 2020
This work generates artificial patterns that can fool the target methods into estimating an incorrect depth for the regions where the patterns are placed, and analyzes the behavior of monocular depth estimation under attacks by visualizing the activation levels of the intermediate layers and the regions potentially affected by the adversarial attack.
Physically Realizable Adversarial Examples for LiDAR Object Detection
- Computer Science · 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
This paper presents a method to generate universal 3D adversarial objects to fool LiDAR detectors and demonstrates that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
Attacking Optical Flow
- Computer Science · 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
This paper extends adversarial patch attacks to optical flow networks and shows that such attacks can compromise their performance, and finds that networks using a spatial pyramid architecture are less affected than networks using an encoder-decoder architecture.
Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting
- Computer Science · IEEE Transactions on Neural Networks and Learning Systems
- 2021
This paper extensively explores the detection of adversarial examples via image transformations and proposes a novel methodology, called defense perturbation, to detect robust adversarial examples with the same input transformations that the adversarial examples are robust to.