Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation

@article{Xu2020DynamicDA,
  title={Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation},
  author={Xiaogang Xu and Hengshuang Zhao and Jiaya Jia},
  journal={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={7466-7475}
}
Adversarial training is promising for improving the robustness of deep neural networks against adversarial perturbations, especially on the classification task. Its effect on semantic segmentation, by contrast, has only begun to be explored. We make the initial attempt to explore a defense strategy for semantic segmentation by formulating a general adversarial training procedure that can perform decently on both adversarial and clean samples. We propose a dynamic divide-and-conquer…
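The abstract is truncated, but the backbone such defenses build on is PGD-style adversarial training: craft a worst-case perturbation inside an L∞ ball around the input, then train on it. A minimal, model-agnostic sketch of the inner attack loop, assuming a black-box `loss_fn` and a finite-difference gradient for illustration (the paper's actual procedure uses backpropagated gradients and its divide-and-conquer branching, which is not shown here):

```python
import numpy as np

def numerical_grad(loss_fn, x, h=1e-4):
    """Central-difference gradient of loss_fn at x (illustrative stand-in
    for backprop; O(n) loss evaluations, so only viable for tiny inputs)."""
    g = np.zeros_like(x).ravel()
    xf = x.ravel().copy()
    for i in range(xf.size):
        e = np.zeros_like(xf)
        e[i] = h
        g[i] = (loss_fn((xf + e).reshape(x.shape))
                - loss_fn((xf - e).reshape(x.shape))) / (2 * h)
    return g.reshape(x.shape)

def pgd_attack(loss_fn, x, eps, alpha, steps, rng):
    """L-inf PGD: random start in the eps-ball, signed gradient-ascent
    steps of size alpha, projection back into the ball after each step."""
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        g = numerical_grad(loss_fn, x_adv)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within the eps-ball
    return x_adv
```

Adversarial training then simply feeds `pgd_attack(...)` outputs (optionally mixed with clean samples, as this paper advocates) into the usual training step.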

SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness

A convergence analysis shows that the proposed SegPGD can create more effective adversarial examples than PGD under the same number of attack iterations, motivating its use as the underlying attack method for segmentation adversarial training.
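SegPGD's key change to PGD is a per-pixel reweighting of the cross-entropy: correctly classified pixels dominate early attack iterations, and already-misclassified pixels gain weight as the attack proceeds, via a schedule λ = t/2T. A NumPy sketch of that loss (array shapes and schedule are my reading of the paper, not reference code):

```python
import numpy as np

def segpgd_loss(logits, labels, t, T):
    """SegPGD-style loss: weight correctly classified pixels by (1 - lam)
    and misclassified pixels by lam, with lam = t / (2T) growing over
    attack iterations t. logits: (H, W, C), labels: (H, W) ints."""
    # Numerically stable per-pixel log-softmax cross-entropy.
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    H, W, _ = logits.shape
    ce = -log_probs[np.arange(H)[:, None], np.arange(W)[None, :], labels]
    pred = logits.argmax(axis=-1)
    lam = t / (2.0 * T)  # schedule from the SegPGD paper
    weights = np.where(pred == labels, 1.0 - lam, lam)
    return (weights * ce).mean()
```

At t = 0 the loss ignores pixels the model already gets wrong, which is the intuition behind the paper's efficiency claim: attack budget is spent where it still changes predictions.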

Adversarial Examples on Segmentation Models Can be Easy to Transfer

The high transferability achieved by the method shows that, in contrast to the observations in previous work, adversarial examples on a segmentation model can be easy to transfer to other segmentation models.

Proximal Splitting Adversarial Attacks for Semantic Segmentation

The attack can handle large numbers of constraints within a nonconvex minimization framework via an Augmented Lagrangian approach, coupled with adaptive constraint scaling and masking strategies, and push current limits concerning robustness evaluations in segmentation tasks.

Adversarially Robust Prototypical Few-Shot Segmentation with Neural-ODEs

A novel robust few-shot segmentation framework, Prototypical Neural Ordinary Differential Equation (PNODE), that provides defense against gradient-based adversarial attacks and is more robust than traditional adversarial defense mechanisms such as adversarial training.

General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments

This paper uses Deep Generative Networks with a novel training mechanism to eliminate the distribution gap between adversarial and clean samples in feature space of the target DNNs, and proposes a more effective pixel-level training constraint to make this achievable, thus enhancing robustness on adversarial samples.

Boosting Adversarial Training with Hypersphere Embedding

This work advocates incorporating the hypersphere embedding (HE) mechanism into the AT procedure by regularizing the features onto compact manifolds, which constitutes a lightweight yet effective module to blend in the strength of representation learning.
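The core of hypersphere embedding is to l2-normalize both features and classifier weights, so that logits become scaled cosine similarities on a compact manifold. A toy sketch of that logit computation (the scale `s` and shapes are hypothetical choices, not the paper's settings):

```python
import numpy as np

def he_logits(features, weights, s=15.0):
    """Hypersphere-embedding logits: l2-normalize features (rows) and
    class weight vectors (columns), then return scaled cosine similarities.
    features: (N, d), weights: (d, C) -> logits: (N, C) in [-s, s]."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    return s * f @ w
```

Because the normalization removes feature magnitude, the loss depends only on angles, which is what regularizes representations onto the compact manifold the summary describes.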

Robust Prototypical Few-Shot Organ Segmentation with Regularized Neural-ODEs

Regularized Prototypical Neural Ordinary Differential Equation (R-PNODE), a method that leverages intrinsic properties of Neural-ODEs, assisted and enhanced by additional cluster and consistency losses to perform Few-Shot Segmentation (FSS) of organs, is proposed.

Unsupervised Adversarial Defense through Tandem Deep Image Priors

The unsupervised image restoration framework, deep image prior, can effectively eliminate the influence of adversarial perturbations and achieves higher classification accuracy on ImageNet than previous state-of-the-art defense methods.

Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks

This work proposes the first collective robustness certificate which computes the number of predictions that are simultaneously guaranteed to remain stable under perturbation, i.e. cannot be attacked.

Performance Prediction for Semantic Segmentation by a Self-Supervised Image Reconstruction Decoder

This paper proposes a novel per-image performance prediction for semantic segmentation with no need for additional sensors or additional training data, and demonstrates its effectiveness with a new state-of-the-art benchmark on both KITTI and Cityscapes for image-only input methods.

References

Showing 1–10 of 51 references

On the Robustness of Semantic Segmentation Models to Adversarial Attacks

This paper presents what to their knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets and shows how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses.

Adversarial Examples for Semantic Segmentation and Object Detection

This paper proposes a novel algorithm named Dense Adversary Generation (DAG), which applies to the state-of-the-art networks for segmentation and detection, and finds that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks.

Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation

It is observed that spatial consistency information can be potentially leveraged to detect adversarial examples robustly even when a strong adaptive attacker has access to the model and detection strategies.

Improved Noise and Attack Robustness for Semantic Segmentation by Using Multi-Task Training with Self-Supervised Depth Estimation

This paper proposes to improve robustness by multi-task training, extending supervised semantic segmentation with self-supervised monocular depth estimation on unlabeled videos, and shows the effectiveness of the method on the Cityscapes dataset, where it consistently outperforms the single-task semantic segmentation baseline.

Adversarial Learning for Semi-supervised Semantic Segmentation

It is shown that the proposed discriminator can be used to improve semantic segmentation accuracy by coupling the adversarial loss with the standard cross entropy loss of the proposed model.

Semantic Segmentation using Adversarial Networks

An adversarial training approach to train semantic segmentation models that can detect and correct higher-order inconsistencies between ground truth segmentation maps and the ones produced by the segmentation net.

Robust Semantic Segmentation by Redundant Networks With a Layer-Specific Loss Contribution and Majority Vote

This work proposes a novel error detection and correction scheme for semantic segmentation that obtains its robustness from an online-adapted, and therefore hard-to-attack, student DNN during vehicle operation, built upon a novel layer-dependent inverse feature matching (IFM) loss.

Stochastic Activation Pruning for Robust Adversarial Defense

Stochastic Activation Pruning (SAP) is proposed, a mixed strategy for adversarial defense that prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate.
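The SAP idea is concrete enough to sketch: sample activations with replacement, with probability proportional to magnitude, zero out everything never sampled, and rescale survivors by their inverse keep probability so the layer's expected output is preserved. A small NumPy version (the all-zero input case and batching are not handled; this is my reading of the procedure, not the authors' code):

```python
import numpy as np

def stochastic_activation_prune(a, n_samples, rng):
    """SAP: draw n_samples activation indices with replacement,
    p_i proportional to |a_i|; keep sampled units, rescale each by
    1 / P(kept at least once) so the expected output matches a."""
    p = np.abs(a).ravel()
    p = p / p.sum()  # sampling distribution over activations
    idx = rng.choice(a.size, size=n_samples, replace=True, p=p)
    keep = np.zeros(a.size, dtype=bool)
    keep[idx] = True
    # Probability each unit survives at least one of n_samples draws.
    p_keep = 1.0 - (1.0 - p) ** n_samples
    out = np.where(keep, a.ravel() / np.maximum(p_keep, 1e-12), 0.0)
    return out.reshape(a.shape)
```

With many samples SAP approaches the identity (every nonzero unit is kept with probability near 1); with few samples it behaves like a stochastic, magnitude-biased dropout, which is the source of its defensive randomness.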

Towards Deep Neural Network Architectures Robust to Adversarial Examples

Deep Contractive Network is proposed, a model with a new end-to-end training procedure that includes a smoothness penalty inspired by the contractive autoencoder (CAE) to increase the network robustness to adversarial examples, without a significant performance penalty.
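The contractive-autoencoder-style smoothness penalty here is the squared Frobenius norm of the input-output Jacobian, added to the training loss so small input perturbations produce small output changes. A sketch using central differences (a generic approximation for illustration, not the paper's layer-wise formulation):

```python
import numpy as np

def contractive_penalty(f, x, h=1e-4):
    """Squared Frobenius norm of the Jacobian df/dx at x, estimated
    column-by-column with central differences. f maps an array to an
    array; x is the input point."""
    total = 0.0
    xf = x.ravel()
    for i in range(xf.size):
        e = np.zeros_like(xf)
        e[i] = h
        col = (np.atleast_1d(f((xf + e).reshape(x.shape)))
               - np.atleast_1d(f((xf - e).reshape(x.shape)))) / (2 * h)
        total += float((col ** 2).sum())  # add this Jacobian column's norm
    return total
```

In training one would add `lambda_c * contractive_penalty(...)` to the task loss, trading a little accuracy for a smoother, harder-to-perturb input-output map.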

PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples

Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of
...