Corpus ID: 173188378

Are Labels Required for Improving Adversarial Robustness?

@inproceedings{Uesato2019AreLR,
  title={Are Labels Required for Improving Adversarial Robustness?},
  author={Jonathan Uesato and Jean-Baptiste Alayrac and Po-Sen Huang and Robert Stanforth and Alhussein Fawzi and Pushmeet Kohli},
  booktitle={NeurIPS},
  year={2019}
}
Recent work has uncovered the interesting (and somewhat surprising) finding that training models to be invariant to adversarial perturbations requires substantially larger datasets than those required for standard classification. [...] On standard datasets like CIFAR-10, a simple Unsupervised Adversarial Training (UAT) approach using unlabeled data improves robust accuracy by 21.7% over using 4K supervised examples alone, and captures over 95% of the improvement from the same number of labeled…
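
The UAT recipe described above is simple enough to sketch. Below is a minimal, hedged PyTorch sketch of the fixed-target variant (UAT-FT) suggested by the abstract: pseudo-label the unlabeled data with a standard classifier, then run ordinary adversarial training against those pseudo-labels. Function names, hyperparameters, and the PGD inner loop are illustrative assumptions, not the paper's released code; the same pseudo-labeling idea also underlies the self-training procedure in the "Unlabeled Data Improves Adversarial Robustness" entry below.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
    # Standard L-infinity PGD: ascend the cross-entropy loss inside an eps-ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def uat_ft_step(robust_model, base_model, x_unlabeled, optimizer):
    # One UAT-FT step: pseudo-label with the fixed base classifier,
    # attack the current model, and minimize the adversarial loss.
    with torch.no_grad():
        pseudo_y = base_model(x_unlabeled).argmax(dim=1)  # fixed targets
    x_adv = pgd_attack(robust_model, x_unlabeled, pseudo_y)
    loss = F.cross_entropy(robust_model(x_adv), pseudo_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The targets from the base classifier stay fixed throughout training; the paper's online variant (UAT-OT) instead penalizes divergence between the model's predictions on clean and adversarially perturbed unlabeled inputs, with no pseudo-labels at all.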

Citations

Adversarially Robust Generalization Just Requires More Unlabeled Data
TLDR: It is proved that, for the specific Gaussian mixture problem studied in [35], adversarially robust generalization can be almost as easy as standard generalization in supervised learning if a sufficiently large amount of unlabeled data is provided.
Unlabeled Data Improves Adversarial Robustness
TLDR: It is proved that unlabeled data bridges the complexity gap between standard and robust classification: a simple semi-supervised learning procedure (self-training) achieves high robust accuracy using the same number of labels required for achieving high standard accuracy.
Robustness to Adversarial Perturbations in Learning from Incomplete Data
TLDR: A generalization theory is developed for Semi-Supervised Learning and Distributionally Robust Learning based on a number of novel complexity measures, such as an adversarial extension of Rademacher complexity and its semi-supervised analogue.
ARMOURED: Adversarially Robust Models…
Adversarial attacks pose a major challenge for modern deep neural networks. Recent advances show that adversarially robust generalization requires a huge amount of labeled data for training. If…
Overfitting in adversarially robust deep learning
TLDR: It is found that overfitting to the training set does in fact harm robust performance to a very large degree in adversarially robust training across multiple datasets (SVHN, CIFAR-10, CIFAR-100, and ImageNet) and perturbation models.
Self-Supervised Adversarial Robustness for the Low-Label, High-Data Regime
Recent work discovered that training models to be invariant to adversarial perturbations requires substantially larger datasets than those required for standard classification. Perhaps more…
Robust Pre-Training by Adversarial Contrastive Learning
TLDR: This work improves robustness-aware self-supervised pre-training by learning representations that are consistent under both data augmentations and adversarial perturbations, and shows that ACL pre-training can improve semi-supervised adversarial training, even when only a few labeled examples are available.
Where is the Bottleneck of Adversarial Learning with Unlabeled Data?
TLDR: This paper argues that the quality of pseudo labels is the bottleneck of adversarial learning with unlabeled data, and proposes robust co-training (RCT), which trains two deep networks and keeps them diverged by having each network exploit its peer's adversarial examples.
A large number of attack methods for generating adversarial examples have been introduced in recent years (Carlini & Wagner, 2017a…
  • 2019
Previous work shows that adversarially robust generalization requires larger sample complexity, and the same dataset, e.g., CIFAR-10, that enables good standard accuracy may not suffice to train…
Improving Adversarial Robustness Requires Revisiting Misclassified Examples
TLDR: This paper proposes a new defense algorithm called MART, which explicitly differentiates the misclassified and correctly classified examples during training, and shows that MART and its variant can significantly improve state-of-the-art adversarial robustness.
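
For concreteness, here is a hedged sketch of a MART-style loss matching the description above: a boosted cross-entropy on adversarial examples, plus a KL consistency term that is up-weighted on examples whose clean prediction for the true class is weak, so misclassified examples matter more. The exact form and the constant beta are reconstructed from memory, not copied from the paper.

import torch
import torch.nn.functional as F

def mart_loss(logits_clean, logits_adv, y, beta=6.0):
    # Predicted distributions on clean and adversarial inputs.
    p_clean = F.softmax(logits_clean, dim=1)
    p_adv = F.softmax(logits_adv, dim=1)
    p_adv_true = p_adv.gather(1, y.unsqueeze(1)).squeeze(1)
    # Boosted cross-entropy: also push down the strongest wrong class.
    wrong = p_adv.clone()
    wrong.scatter_(1, y.unsqueeze(1), 0.0)
    max_wrong = wrong.max(dim=1).values
    bce = -torch.log(p_adv_true + 1e-12) - torch.log(1.0 - max_wrong + 1e-12)
    # KL(clean || adversarial), weighted by 1 - p_y(x): examples the model
    # misclassifies (low true-class probability) get larger regularization.
    kl = (p_clean * (torch.log(p_clean + 1e-12) - torch.log(p_adv + 1e-12))).sum(dim=1)
    p_clean_true = p_clean.gather(1, y.unsqueeze(1)).squeeze(1)
    return (bce + beta * kl * (1.0 - p_clean_true)).mean()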

References

Showing 1-10 of 58 references
Adversarially Robust Generalization Just Requires More Unlabeled Data
TLDR: It is proved that, for the specific Gaussian mixture problem studied in [35], adversarially robust generalization can be almost as easy as standard generalization in supervised learning if a sufficiently large amount of unlabeled data is provided.
Unlabeled Data Improves Adversarial Robustness
TLDR: It is proved that unlabeled data bridges the complexity gap between standard and robust classification: a simple semi-supervised learning procedure (self-training) achieves high robust accuracy using the same number of labels required for achieving high standard accuracy.
Robustness to Adversarial Perturbations in Learning from Incomplete Data
TLDR: A generalization theory is developed for Semi-Supervised Learning and Distributionally Robust Learning based on a number of novel complexity measures, such as an adversarial extension of Rademacher complexity and its semi-supervised analogue.
Rademacher Complexity for Adversarially Robust Generalization
TLDR: For binary linear classifiers, it is shown that the adversarial Rademacher complexity is never smaller than its natural counterpart, and that it has an unavoidable dimension dependence unless the weight vector has bounded $\ell_1$ norm.
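
A hedged sketch of the quantity at issue, with notation assumed rather than taken from the paper: for binary linear classifiers $f_w(x) = \langle w, x \rangle$ under $\ell_\infty$ perturbations of radius $\epsilon$, the worst-case margin has a closed form,

$$\min_{\|\delta\|_\infty \le \epsilon} y \langle w, x + \delta \rangle = y \langle w, x \rangle - \epsilon \|w\|_1,$$

so the adversarial Rademacher complexity becomes

$$\tilde{\mathfrak{R}}_S = \mathbb{E}_{\sigma} \left[ \sup_{w \in \mathcal{W}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i \left( y_i \langle w, x_i \rangle - \epsilon \|w\|_1 \right) \right].$$

The extra $\epsilon \|w\|_1$ term is what forces the dimension dependence unless $\|w\|_1$ is bounded.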

Adversarially Robust Generalization Requires More Data
TLDR: It is shown that, already in a simple natural data model, the sample complexity of robust learning can be significantly larger than that of "standard" learning.
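
The underlying data model is simple enough to state (a paraphrase with constants omitted, so treat the exact rates as hedged): labels $y \sim \mathrm{Unif}\{\pm 1\}$ and inputs $x \sim \mathcal{N}(y\,\theta^\star, \sigma^2 I)$ for some $\theta^\star \in \mathbb{R}^d$. There, a constant number of samples already gives high standard accuracy, while reaching comparable $\ell_\infty$-robust accuracy requires a number of samples growing polynomially in the dimension, on the order of $\sqrt{d}$.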

Improved generalization bounds for robust learning
TLDR: This paper considers a model of robust learning in an adversarial environment, where the learner receives uncorrupted training data together with access to the corruptions the adversary may apply at test time, and must build a robust classifier to be evaluated on future adversarial examples.
Scaling provable adversarial defenses
TLDR: This paper presents a technique for extending provable-defense training procedures to much more general networks, with skip connections and general nonlinearities, and shows how to further improve robust error through cascade models.
Ensemble Adversarial Training: Attacks and Defenses
TLDR: This work finds that adversarial training remains vulnerable to black-box attacks, where perturbations computed on undefended models are transferred to the defended model, and introduces a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step.
Adversarial Machine Learning at Scale
TLDR: This research applies adversarial training to ImageNet, finds that single-step attacks are the best for mounting black-box attacks, and resolves a "label leaking" effect that causes adversarially trained models to perform better on adversarial examples than on clean examples.
Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
TLDR: A new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given the input, which achieves state-of-the-art performance on semi-supervised learning tasks on SVHN and CIFAR-10.
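
Because the local-smoothness idea is easy to misread, here is a minimal, hedged PyTorch sketch of the VAT regularizer as summarized above: approximate the perturbation direction the predictions are most sensitive to with a single power-iteration step, then penalize the KL divergence it induces. No labels are used anywhere. Shapes assume image batches (B, C, H, W); xi and eps are illustrative constants, not the paper's settings.

import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=2.0):
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)  # current predictions act as "virtual" labels
    # Power iteration: start from a random direction, take one gradient step.
    d = torch.randn_like(x)
    d = (xi * d / d.flatten(1).norm(dim=1).view(-1, 1, 1, 1)).detach().requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(x + d), dim=1), p, reduction="batchmean")
    grad, = torch.autograd.grad(kl, d)
    # Scale the most sensitive direction to the eps-ball boundary.
    r_vadv = eps * grad / grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
    # Local distributional smoothness: KL between p(x) and p(x + r_vadv).
    return F.kl_div(F.log_softmax(model(x + r_vadv), dim=1), p, reduction="batchmean")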