Multitask Learning Strengthens Adversarial Robustness

@inproceedings{Mao2020MultitaskLS,
  title={Multitask Learning Strengthens Adversarial Robustness},
  author={Chengzhi Mao and Amogh Gupta and Vikram Nitin and Baishakhi Ray and Shuran Song and Junfeng Yang and Carl Vondrick},
  booktitle={ECCV},
  year={2020}
}
Although deep networks achieve strong accuracy on a range of computer vision benchmarks, they remain vulnerable to adversarial attacks, where imperceptible input perturbations fool the network. We present both theoretical and empirical analyses that connect the adversarial robustness of a model to the number of tasks that it is trained on. Experiments on two datasets show that attack difficulty increases as the number of target tasks increases. Moreover, our results suggest that when models are…
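To make the setting concrete, the sketch below shows a shared backbone with two task heads attacked jointly by maximizing the sum of the per-task losses with projected gradient steps. This is an illustrative reconstruction, not the authors' code; the architecture, loss weighting, and attack budget are assumptions.

```python
# Illustrative sketch (not the paper's code): a multi-head model attacked
# jointly by maximizing the sum of per-task losses with an L-infinity PGD loop.
import torch
import torch.nn as nn


class MultiTaskNet(nn.Module):
    """Shared backbone with one head per task (architecture is assumed)."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.cls_head = nn.Linear(32 * 8 * 8, num_classes)  # classification task
        self.depth_head = nn.Linear(32 * 8 * 8, 1)           # regression task

    def forward(self, x):
        feat = self.backbone(x).flatten(1)
        return self.cls_head(feat), self.depth_head(feat)


def joint_pgd_attack(model, x, y_cls, y_depth, eps=8 / 255, alpha=2 / 255, steps=10):
    """Maximize the summed task losses within an L-infinity ball of radius eps."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        logits, depth = model(x_adv)
        loss = nn.functional.cross_entropy(logits, y_cls) + \
               nn.functional.mse_loss(depth.squeeze(1), y_depth)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

Attacking more heads at once constrains the perturbation to hurt several objectives simultaneously, which is the quantity the paper's analysis relates to robustness.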
Citations

Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation
Proposes a dynamic divide-and-conquer adversarial training (DDC-AT) strategy that strengthens the defense by adding auxiliary branches to the target model during training and handling pixels with differing sensitivity to adversarial perturbation separately.
Advances in adversarial attacks and defenses in computer vision: A survey
Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural …
Adversarial Attacks are Reversible with Natural Supervision
Finds that images contain intrinsic structure that enables the reversal of many adversarial attacks, suggesting deep networks are vulnerable to adversarial examples partly because their representations do not enforce the natural structure of images.
Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving
Applies detailed adversarial attacks to a diverse multi-task visual perception deep network across distance estimation, semantic segmentation, motion detection, and object detection.
Adversarial example generation with AdaBelief Optimizer and Crop Invariance
Proposes the AdaBelief Iterative Fast Gradient Method (ABI-FGM) and the Crop-Invariant attack Method (CIM) to improve the transferability of adversarial examples, achieving higher success rates than state-of-the-art gradient-based attack methods.
Boosting Adversarial Attacks on Neural Networks with Better Optimizer
Combines a modified Adam gradient descent algorithm with the iterative gradient-based attack method to improve the transferability of adversarial examples, and shows that the proposed method offers a higher attack success rate than existing iterative methods.
Enhancing Robustness Verification for Deep Neural Networks via Symbolic Propagation
Focuses on a variety of local robustness properties and a global robustness property of DNNs, and investigates novel strategies to combine constraint-solving and abstraction-based approaches to work with these properties.
Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving
Showcases practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle, and shows that a single universal adversary can hide different host vehicles from state-of-the-art multi-modal detectors.
How benign is benign overfitting?
Identifies label noise as one cause of adversarial vulnerability, provides theoretical and empirical evidence in support of this, and conjectures that the need for complex decision boundaries arises in part from sub-optimal representation learning.
Multi-Task Learning for Dense Prediction Tasks: A Survey.
Provides a well-rounded view of state-of-the-art deep learning approaches for MTL in computer vision, with explicit emphasis on dense prediction tasks.

References

Showing 1–10 of 64 references.
First-Order Adversarial Vulnerability of Neural Networks and Input Dimension
Shows that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs, and that this input-dimension dependence persists after either standard or robust training, though it is attenuated by stronger regularization.
Deep Defense: Training DNNs with Improved Adversarial Robustness
Proposes a training recipe named "deep defense", which integrates an adversarial-perturbation-based regularizer into the classification objective so that the obtained models learn to resist potential attacks directly and precisely.
Metric Learning for Adversarial Robustness
An empirical analysis of deep representations under the state-of-the-art PGD attack finds that the attack causes the internal representation to shift closer to the "false" class, and proposes to regularize the representation space under attack with metric learning to produce more robust classifiers.
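A minimal sketch of the idea, under assumed details rather than the paper's exact formulation: a triplet term pulls the representation of an adversarial example toward its clean counterpart and pushes it away from an example of another class. The `model.features`/`model.classifier` split, the weight 0.1, and the sampling of `x_neg` are assumptions for illustration.

```python
# Hedged sketch of a metric-learning regularizer on representations under attack.
import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=1.0)

def metric_regularized_loss(model, x_clean, x_adv, x_neg, y):
    """Cross-entropy on the adversarial input plus a triplet term on features."""
    feat_adv = model.features(x_adv)      # anchor: representation under attack
    feat_clean = model.features(x_clean)  # positive: clean example, same class
    feat_neg = model.features(x_neg)      # negative: example from another class
    ce = nn.functional.cross_entropy(model.classifier(feat_adv), y)
    return ce + 0.1 * triplet(feat_adv, feat_clean, feat_neg)
```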
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
Demonstrates that regularizing input gradients makes them more naturally interpretable as rationales for model predictions, and also yields robustness to transferred adversarial examples generated to fool all of the other models.
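A rough sketch of input-gradient regularization as described (double backpropagation); the penalty weight and the image-shaped input are assumptions.

```python
# Hedged sketch: penalize the squared norm of the loss gradient w.r.t. the input
# ("double backpropagation"); lambda_reg is an assumed hyperparameter.
import torch
import torch.nn.functional as F

def grad_regularized_loss(model, x, y, lambda_reg=0.1):
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad_x, = torch.autograd.grad(loss, x, create_graph=True)  # keep graph for second backward
    penalty = grad_x.pow(2).sum(dim=(1, 2, 3)).mean()          # assumes (N, C, H, W) input
    return loss + lambda_reg * penalty
```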
Boosting Adversarial Attacks with Momentum
Proposes a broad class of momentum-based iterative algorithms that boost adversarial attacks by integrating a momentum term into the iterative attack process, which stabilizes update directions and escapes poor local maxima during the iterations, resulting in more transferable adversarial examples.
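The momentum update can be sketched as follows; this is an illustrative L-infinity, MI-FGSM-style loop with assumed step sizes, not the paper's reference implementation.

```python
# Hedged sketch of a momentum iterative attack: accumulate an L1-normalized
# gradient into a momentum buffer, then step with its sign.
import torch
import torch.nn.functional as F

def momentum_attack(model, x, y, eps=8 / 255, steps=10, mu=1.0):
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        g = mu * g + grad / (grad.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```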
Improving Adversarial Robustness via Promoting Ensemble Diversity
Defines a new notion of ensemble diversity in the adversarial setting as the diversity among the non-maximal predictions of individual members, and presents an adaptive diversity promoting (ADP) regularizer to encourage this diversity, which leads to globally better robustness for the ensemble by making adversarial examples difficult to transfer among individual members.
Explaining and Harnessing Adversarial Examples
Argues that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supports this with new quantitative results, and gives the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
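The fast gradient sign method introduced there is a single step in the direction of the sign of the input gradient; a minimal sketch, with the perturbation budget eps assumed:

```python
# Minimal FGSM sketch: one signed-gradient step of size eps on the input.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```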
Towards Deep Learning Models Resistant to Adversarial Attacks
Studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
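The robust-optimization view trains on the worst-case loss inside a perturbation ball; a compact sketch of PGD adversarial training under assumed hyperparameters (not the authors' code):

```python
# Hedged sketch of PGD adversarial training: inner maximization with projected
# gradient steps, outer minimization on the resulting worst-case examples.
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

def train_step(model, optimizer, x, y):
    model.train()
    x_adv = pgd(model, x, y)                 # inner max: find a worst-case input
    loss = F.cross_entropy(model(x_adv), y)  # outer min: train on it
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```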
Feature Denoising for Improving Adversarial Robustness
Suggests that adversarial perturbations on images lead to noise in the features constructed by these networks, and develops new network architectures that increase adversarial robustness by performing feature denoising.
Universal Adversarial Perturbations Against Semantic Image Segmentation
Presents an approach for generating (universal) adversarial perturbations that make the network yield a desired target segmentation as output, and shows empirically that there exist barely perceptible universal noise patterns which result in nearly the same predicted segmentation for arbitrary inputs.