Adversarial Examples for Semantic Segmentation and Object Detection

@article{Xie2017AdversarialEF,
  title={Adversarial Examples for Semantic Segmentation and Object Detection},
  author={Cihang Xie and Jianyu Wang and Zhishuai Zhang and Yuyin Zhou and Lingxi Xie and Alan Loddon Yuille},
  journal={2017 IEEE International Conference on Computer Vision (ICCV)},
  year={2017},
  pages={1378-1387}
}
It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, cause deep networks to fail on image classification. […] Based on this, we propose a novel algorithm named Dense Adversary Generation (DAG), which applies to state-of-the-art networks for segmentation and detection.
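Conceptually, DAG treats every pixel (for segmentation) or every proposal (for detection) as its own classification target and accumulates normalized gradient steps until all targets are misclassified as attacker-chosen labels. Below is a minimal PyTorch-style sketch of the segmentation case; the function name dag_attack, the step size gamma, and the iteration cap are illustrative assumptions in the spirit of the paper, not its exact implementation or settings.

```python
import torch

def dag_attack(model, x, orig_labels, adv_labels, gamma=0.5, max_iter=200):
    """Minimal Dense Adversary Generation (DAG) sketch for segmentation.

    model       : callable returning per-pixel logits of shape (1, C, H, W)
    x           : input image tensor of shape (1, 3, H, W)
    orig_labels : correct per-pixel labels, shape (H, W), dtype long
    adv_labels  : attacker-chosen target labels, shape (H, W), dtype long
    """
    perturbation = torch.zeros_like(x)
    for _ in range(max_iter):
        x_adv = (x + perturbation).detach().requires_grad_(True)
        logits = model(x_adv)                       # (1, C, H, W)
        pred = logits.argmax(dim=1)[0]              # (H, W)
        # Active target set: pixels the network still classifies correctly.
        active = pred == orig_labels
        if not active.any():
            break                                   # every target fooled
        scores = logits[0].permute(1, 2, 0)[active]  # (n_active, C)
        # Push the adversarial-class score above the original-class score
        # for all still-correct targets at once (the "dense" loss).
        loss = (scores.gather(1, adv_labels[active].unsqueeze(1)) -
                scores.gather(1, orig_labels[active].unsqueeze(1))).sum()
        (grad,) = torch.autograd.grad(loss, x_adv)
        # Accumulate an L-infinity-normalized gradient step.
        perturbation = perturbation + gamma * grad / grad.abs().max().clamp_min(1e-12)
    return (x + perturbation).detach(), perturbation.detach()
```

In practice, orig_labels would typically come from the model's own clean prediction so that the active set shrinks monotonically; for detection, the same loop would run over densely sampled region proposals instead of pixels.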

Citations

Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation
TLDR
It is observed that spatial consistency information can be potentially leveraged to detect adversarial examples robustly even when a strong adaptive attacker has access to the model and detection strategies.
On the Robustness of Semantic Segmentation Models to Adversarial Attacks
TLDR
This paper presents what is, to the authors' knowledge, the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets, and shows how mean-field inference in deep structured models, multiscale processing, and, more generally, input transformations naturally implement recently proposed adversarial defenses.
Universal Adversarial Perturbations Against Semantic Image Segmentation
TLDR
This work presents an approach for generating (universal) adversarial perturbations that make the network yield a desired target segmentation as output and shows empirically that there exist barely perceptible universal noise patterns which result in nearly the same predicted segmentation for arbitrary inputs.
Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks
TLDR
Exhaustive experiments reveal that the proposed attack formulations outperform previous work at crafting both digital and real-world adversarial patches for semantic segmentation (SS), thereby questioning the practical relevance of adversarial attacks to SS models for autonomous/assisted driving.
Adversarial Examples on Segmentation Models Can be Easy to Transfer
TLDR
The high transferability achieved by the method shows that, in contrast to the observations in previous work, adversarial examples on a segmentation model can be easy to transfer to other segmentation models.
On the Robustness of Deep Learning Models to Universal Adversarial Attack
TLDR
This work presents a rigorous evaluation of adversarial attacks on recent deep learning models for two different high-level tasks (image classification and semantic segmentation), proposes a model- and dataset-independent approach to generating adversarial perturbations, and studies the transferability of perturbations across different datasets and tasks.
Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation
TLDR
A dynamic divide-and-conquer adversarial training (DDC-AT) strategy to enhance the defense effect by setting additional branches in the target model during training and dealing with pixels with diverse properties towards adversarial perturbation.
Transferable Adversarial Attacks for Image and Video Object Detection
TLDR
The proposed method is based on the Generative Adversarial Network (GAN) framework and combines a high-level class loss with a low-level feature loss to jointly train the adversarial example generator; it can efficiently generate image and video adversarial examples with better transferability.
...

References

Showing 1-10 of 47 references
Universal Adversarial Perturbations Against Semantic Image Segmentation
TLDR
This work presents an approach for generating (universal) adversarial perturbations that make the network yield a desired target segmentation as output and shows empirically that there exist barely perceptible universal noise patterns which result in nearly the same predicted segmentation for arbitrary inputs.
Adversarial Examples for Semantic Image Segmentation
TLDR
It is shown how existing adversarial attacks can be transferred to the task of semantic segmentation and that it is possible to create imperceptible adversarial perturbations that lead a deep network to misclassify almost all pixels of a chosen class while leaving the network's predictions nearly unchanged outside this class.
Foveation-based Mechanisms Alleviate Adversarial Examples
TLDR
It is shown that adversarial examples, i.e., visually imperceptible perturbations that cause Convolutional Neural Networks (CNNs) to fail, can be alleviated with a foveation-based mechanism that applies the CNN to different image regions, and it is corroborated that when the neural responses are linear, applying the foveation mechanism to the adversarial example tends to significantly reduce the effect of the perturbation.
On Detecting Adversarial Perturbations
TLDR
It is shown empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans.
Adversarial Transformation Networks: Learning to Generate Adversarial Examples
TLDR
This work efficiently trains feed-forward neural networks in a self-supervised manner to generate adversarial examples against a target network or set of networks, calling such a network an Adversarial Transformation Network (ATN).
R-FCN: Object Detection via Region-based Fully Convolutional Networks
TLDR
This work presents region-based, fully convolutional networks for accurate and efficient object detection, and proposes position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection.
DeepContour: A deep convolutional feature learned by positive-sharing loss for contour detection
TLDR
This work shows that contour detection accuracy can be improved by making use of deep features learned from convolutional neural networks (CNNs); rather than using the networks as a black-box feature extractor, it customizes the training strategy by partitioning contour (positive) data into subclasses and fitting each subclass with different model parameters.
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
TLDR
This work addresses the task of semantic image segmentation with deep learning, proposes atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales, and improves the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models.
Universal Adversarial Perturbations
TLDR
The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers and outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation
TLDR
This paper proposes a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%.
...