Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis
@article{Rossolini2022DefendingFP, title={Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis}, author={Giulio Rossolini and Federico Nesti and Fabio Brau and Alessandro Biondi and Giorgio C. Buttazzo}, journal={ArXiv}, year={2022}, volume={abs/2203.07341} }
This work presents Z-Mask, an effective and deterministic strategy to improve the adversarial robustness of convolutional networks against physically-realizable adversarial attacks. The presented defense relies on a specific Z-score analysis performed on the internal network features to detect and mask the pixels corresponding to adversarial objects in the input image. To this end, spatially contiguous activations are examined in shallow and deep layers to suggest potential adversarial regions…
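To make the idea concrete, the following is a minimal sketch of the over-activation analysis on a single feature map, assuming per-channel statistics (`mu`, `sigma`) estimated offline on clean images; the function names, the threshold `z_thresh=3.0`, and the single-layer simplification are illustrative, not the authors' implementation, which combines shallow and deep layers.

```python
import torch
import torch.nn.functional as F

def z_mask_heatmap(feats, mu, sigma, z_thresh=3.0):
    """Z-score the activations of one layer against clean-data statistics
    and flag spatially contiguous over-activations.
    feats: (B, C, H, W) intermediate activations; mu, sigma: (C,)."""
    z = (feats - mu.view(1, -1, 1, 1)) / (sigma.view(1, -1, 1, 1) + 1e-6)
    heat = z.clamp(min=0).mean(dim=1, keepdim=True)    # aggregate over channels
    heat = F.avg_pool2d(heat, 3, stride=1, padding=1)  # favor contiguous regions
    return (heat > z_thresh).float()

def apply_z_mask(image, feats, mu, sigma):
    """Upsample the binary heatmap to input resolution and mask the
    suspected adversarial pixels before (re-)running the network."""
    mask = z_mask_heatmap(feats, mu, sigma)
    mask = F.interpolate(mask, size=image.shape[-2:], mode="nearest")
    return image * (1.0 - mask)
```

The average-pooling step is one simple way to reward spatial contiguity, which the abstract identifies as the signature of adversarial objects in the internal features.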
One Citation
CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models
- Computer Science · ArXiv
- 2022
CARLA-GeAR is presented, a tool for the automatic generation of photo-realistic synthetic datasets that can be used for a systematic evaluation of the adversarial robustness of neural models against physical adversarial patches, as well as for comparing the performance of different adversarial defense/detection methods.
References
Showing 1–10 of 52 references
Deep Dual-resolution Networks for Real-time and Accurate Semantic Segmentation of Road Scenes
- Computer Science · ArXiv
- 2021
Novel deep dual-resolution networks (DDRNets) are proposed for real-time semantic segmentation of road scenes and a new contextual information extractor named Deep Aggregation Pyramid Pooling Module (DAPPM) is designed to enlarge effective receptive fields and fuse multi-scale context.
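As a rough illustration of the pyramid-pooling idea behind DAPPM, the sketch below pools the input at several scales, projects each branch, and fuses the upsampled results. The branch structure, pooling sizes, and channel widths are placeholders; the real DAPPM aggregates branches hierarchically rather than by simple concatenation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplePyramidPooling(nn.Module):
    """Simplified pyramid-pooling sketch in the spirit of DAPPM:
    multi-scale average pooling enlarges the effective receptive field,
    and a 1x1 fusion conv merges the multi-scale context."""
    def __init__(self, in_ch, mid_ch, out_ch, pool_sizes=(5, 9, 17)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.AvgPool2d(kernel_size=k, stride=k // 2, padding=k // 2),
                nn.Conv2d(in_ch, mid_ch, 1, bias=False),
                nn.BatchNorm2d(mid_ch),
                nn.ReLU(inplace=True),
            )
            for k in pool_sizes
        )
        self.identity = nn.Conv2d(in_ch, mid_ch, 1, bias=False)
        self.fuse = nn.Conv2d(mid_ch * (len(pool_sizes) + 1), out_ch, 1, bias=False)

    def forward(self, x):
        h, w = x.shape[-2:]
        outs = [self.identity(x)]
        for branch in self.branches:
            y = branch(x)
            outs.append(F.interpolate(y, size=(h, w), mode="bilinear",
                                      align_corners=False))
        return self.fuse(torch.cat(outs, dim=1))
```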
APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection
- Computer Science · ECCV
- 2020
The results suggest that adversarial patches can be effectively flagged, both in a high-knowledge, attack-specific scenario, and in an unsupervised setting where patches are detected as anomalies in natural images.
Local Gradients Smoothing: Defense Against Localized Adversarial Attacks
- Computer Science · 2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
- 2019
This work develops an effective method to estimate the location of adversarial noise in the gradient domain and to smooth the corresponding high-activation regions in the image domain, while having minimal effect on the salient objects that matter for correct classification.
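A rough sketch of that idea, assuming images in [0, 1]: estimate where image gradients are abnormally strong and attenuate those regions before classification. The finite-difference kernels, the threshold, and the smoothing factor below are illustrative placeholders, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def local_gradients_smoothing(img, smooth=2.3, thresh=0.1):
    """Suppress regions with abnormally high local image gradients,
    which localized adversarial noise tends to produce."""
    gray = img.mean(dim=1, keepdim=True)               # (B, 1, H, W)
    kx = torch.tensor([[[[-1., 0., 1.]]]])             # simple finite differences
    ky = kx.transpose(-1, -2)
    gx = F.conv2d(gray, kx, padding=(0, 1))
    gy = F.conv2d(gray, ky, padding=(1, 0))
    mag = (gx ** 2 + gy ** 2).sqrt()
    mag = mag / (mag.amax(dim=(-1, -2), keepdim=True) + 1e-8)
    mask = (mag > thresh).float() * mag                # keep only strong gradients
    return img * (1.0 - smooth * mask).clamp(min=0.0)  # attenuate noisy regions
```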
On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving
- Computer Science
- 2022
An extensive evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches, including digital, simulated, and physical ones, reveals that their impact is often spatially confined to areas of the image around the patch.
Adversarial Pixel Masking: A Defense against Physical Attacks for Pre-trained Object Detectors
- Computer Science · ACM Multimedia
- 2021
This paper proposes adversarial pixel masking (APM), a defense against physical attacks, which is designed specifically for pre-trained object detectors, and shows that APM can significantly improve model robustness without significantly degrading clean performance.
Real-time Detection of Practical Universal Adversarial Perturbations
- Computer Science · ArXiv
- 2021
HyperNeuron is able to simultaneously detect both adversarial mask and patch UAPs with comparable or better performance than existing UAP defenses, while introducing a significantly reduced latency of only 0.86 milliseconds per image. This suggests that many realistic and practical universal attacks can be reliably mitigated in real time, which shows promise for the robust deployment of machine learning systems.
Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors
- Computer Science · ECCV
- 2020
A systematic study of adversarial attacks on state-of-the-art object detection frameworks is presented, together with a detailed study of physical-world attacks using printed posters and wearable clothes, quantifying the performance of such attacks with different metrics.
Defending Against Physically Realizable Attacks on Image Classification
- Computer Science, Mathematics · ICLR
- 2020
A new abstract adversarial model, rectangular occlusion attacks, is proposed, in which an adversary places a small adversarially crafted rectangle in an image, and two approaches for efficiently computing the resulting adversarial examples are developed.
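As a hedged sketch of how such an attack can be computed, the loop below exhaustively tries rectangle positions on a stride grid and optimizes the rectangle's pixels by gradient ascent on the classification loss. All hyperparameters (rectangle size, stride, step count, step size) are placeholders, and the paper's own search strategies may differ.

```python
import torch

def rectangular_occlusion_attack(model, x, y, rect_h=20, rect_w=20,
                                 stride=10, steps=30, lr=0.1):
    """Try candidate positions for a small rectangle; at each position,
    optimize the rectangle contents to maximize the loss. Assumes inputs
    in [0, 1] and a classifier returning logits."""
    loss_fn = torch.nn.CrossEntropyLoss()
    best_x, best_loss = x, -float("inf")
    _, _, H, W = x.shape
    for top in range(0, H - rect_h + 1, stride):
        for left in range(0, W - rect_w + 1, stride):
            patch = torch.rand(x.size(0), x.size(1), rect_h, rect_w,
                               device=x.device, requires_grad=True)
            for _ in range(steps):
                adv = x.clone()
                adv[:, :, top:top + rect_h, left:left + rect_w] = patch
                loss = loss_fn(model(adv), y)
                grad, = torch.autograd.grad(loss, patch)
                patch = (patch + lr * grad.sign()).clamp(0, 1) \
                            .detach().requires_grad_(True)
            if loss.item() > best_loss:                # keep worst-case position
                best_loss, best_x = loss.item(), adv.detach()
    return best_x
```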
Focal Loss for Dense Object Detection
- Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2020
This paper addresses the extreme foreground-background class imbalance encountered during training of dense detectors by reshaping the standard cross-entropy loss so that it down-weights the loss assigned to well-classified examples. The resulting novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training.
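The reshaped loss is the focal loss, FL(p_t) = -α_t (1 − p_t)^γ log(p_t), where the (1 − p_t)^γ factor down-weights well-classified examples. A minimal binary version is sketched below with the paper's commonly used defaults α = 0.25 and γ = 2.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).
    targets are 0/1 labels with the same shape as logits."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)          # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```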
Synthesizing Robust Adversarial Examples
- Computer Science · ICML
- 2018
The existence of robust 3D adversarial objects is demonstrated, and the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations is presented; the algorithm also synthesizes two-dimensional adversarial images that are robust to noise, distortion, and affine transformation.
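The algorithm in question is Expectation over Transformation (EOT): optimize a perturbation against the expected loss under randomly sampled transformations. A minimal single-sample sketch follows, assuming a user-supplied `sample_transform` that returns a random differentiable transform (e.g. rotation, scaling, noise); the step size and perturbation bound are placeholders.

```python
import torch

def eot_attack(model, x, y, sample_transform, steps=100, lr=0.01, eps=0.1):
    """Optimize a bounded perturbation so the input remains adversarial
    under randomly sampled transformations (one Monte Carlo sample per step)."""
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        t = sample_transform()                       # random differentiable transform
        loss = loss_fn(model(t(x + delta)), y)       # estimate of the expected loss
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr * grad.sign()
            delta.clamp_(-eps, eps)                  # keep the perturbation bounded
    return (x + delta).detach()
```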