Corpus ID: 231924774

Data Profiling for Adversarial Training: On the Ruin of Problematic Data

@article{Dong2021DataPF,
  title={Data Profiling for Adversarial Training: On the Ruin of Problematic Data},
  author={Chengyu Dong and Liyuan Liu and Jingbo Shang},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.07437}
}
Multiple intriguing problems hover in adversarial training, including the robustness-accuracy trade-off, robust overfitting, and gradient masking, posing great challenges to both reliable evaluation and practical deployment. Here, we show that these problems share one common cause: low-quality samples in the dataset. We first identify an intrinsic property of the data called the problematic score and then design controlled experiments to investigate its connections with these problems. Specifically, we…

References

Showing 1-10 of 83 references
Overfitting in adversarially robust deep learning
TLDR: It is found that overfitting to the training set does in fact harm robust performance to a very large degree in adversarially robust training across multiple datasets (SVHN, CIFAR-10, CIFAR-100, and ImageNet) and perturbation models.
Improving Adversarial Robustness Requires Revisiting Misclassified Examples
TLDR: This paper proposes a new defense algorithm called MART, which explicitly differentiates the misclassified and correctly classified examples during training, and shows that MART and its variant can significantly improve state-of-the-art adversarial robustness.
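The reweighting mechanism is concrete enough to sketch. Below is a hedged, PyTorch-style rendition of a misclassification-aware loss in the spirit of MART, not the authors' exact objective: plain cross-entropy stands in for MART's boosted cross-entropy, and `beta` is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F

def mart_style_loss(logits_adv, logits_nat, y, beta=6.0):
    """Misclassification-aware loss in the spirit of MART (hedged sketch).

    logits_adv: model outputs on adversarial examples, shape (N, C)
    logits_nat: model outputs on the natural examples,  shape (N, C)
    y:          ground-truth labels, shape (N,)
    beta:       weight of the misclassification-aware term (assumed value)
    """
    probs_nat = F.softmax(logits_nat, dim=1)
    # Standard cross-entropy on the adversarial examples.
    ce_adv = F.cross_entropy(logits_adv, y)
    # Per-sample KL divergence between natural and adversarial predictions.
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1), probs_nat,
                  reduction="none").sum(dim=1)
    # Weight each sample's regularizer by (1 - p_y): misclassified or
    # low-confidence examples (small p_y) receive larger weight.
    p_y = probs_nat.gather(1, y.unsqueeze(1)).squeeze(1)
    return ce_adv + beta * (kl * (1.0 - p_y)).mean()
```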
Adversarial Examples Are Not Bugs, They Are Features
TLDR: It is demonstrated that adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans.
On the Sensitivity of Adversarial Robustness to Input Data Distributions
TLDR: An intriguing phenomenon about the most popular robust training method, adversarial training, is demonstrated: adversarial robustness, unlike clean accuracy, is sensitive to the input data distribution.
Robustness May Be at Odds with Accuracy
TLDR: It is shown that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization, and it is argued that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers.
Theoretically Principled Trade-off between Robustness and Accuracy
TLDR: The prediction error on adversarial examples (robust error) is decomposed as the sum of the natural (classification) error and the boundary error, and a differentiable upper bound is provided using the theory of classification-calibrated losses; this bound is shown to be the tightest possible upper bound that is uniform over all probability distributions and measurable predictors.
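In symbols, the decomposition reads as below; the notation (binary labels, the epsilon-ball, and the decision boundary DB(f)) is reconstructed here in TRADES-style conventions rather than quoted from the paper.

```latex
% Binary labels Y in {-1,+1}; B(., eps) is the closed eps-ball and DB(f)
% the decision boundary of the classifier f.
\mathcal{R}_{\mathrm{rob}}(f)
  \;=\; \underbrace{\Pr\big[f(X)\,Y \le 0\big]}_{\mathcal{R}_{\mathrm{nat}}(f)}
  \;+\; \underbrace{\Pr\big[X \in \mathbb{B}\big(\mathrm{DB}(f),\epsilon\big),\; f(X)\,Y > 0\big]}_{\mathcal{R}_{\mathrm{bdy}}(f)}
```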
Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
TLDR: This paper motivates the use of adversarial risk as an objective, even though it cannot easily be computed exactly, and frames commonly used attacks and evaluation metrics as defining a tractable surrogate objective for the true adversarial risk.
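For orientation, the adversarial risk being approximated is typically written as below; the 0-1 loss and the perturbation-set notation are assumptions, not quoted from the paper.

```latex
% 0-1 adversarial risk over a perturbation set Delta (notation assumed):
\mathcal{R}_{\mathrm{adv}}(f)
  \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}
        \Big[\max_{\delta \in \Delta} \mathbf{1}\{f(x+\delta) \neq y\}\Big]
```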
Geometry-aware Instance-reweighted Adversarial Training
TLDR: This paper finds that even over-parameterized deep networks may still have insufficient model capacity because adversarial training has an overwhelming smoothing effect, and argues that adversarial data should have unequal importance: geometrically speaking, a natural data point closer to (farther from) the class boundary is less (more) robust, and the corresponding adversarial data point should be assigned a larger (smaller) weight.
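The weighting scheme can be sketched compactly. The snippet below is a hedged rendition of a geometry-aware weight assignment in the spirit of GAIRAT, where distance to the class boundary is proxied by kappa, the least number of PGD steps needed to flip the prediction; the tanh form and the `lam` value are illustrative assumptions.

```python
import torch

def gairat_style_weights(kappa, K, lam=-1.0):
    """Geometry-aware instance weights (hedged sketch of the GAIRAT idea).

    kappa: per-example least number of PGD steps needed to flip the model's
           prediction, shape (N,); small kappa = close to the class boundary.
    K:     total number of PGD steps in the attack.
    lam:   bias controlling how sharply weights concentrate (assumed value).
    """
    kappa = kappa.float()
    # Fewer steps to flip => closer to the boundary => larger weight.
    w = (1.0 + torch.tanh(lam + 5.0 * (1.0 - 2.0 * kappa / K))) / 2.0
    return w / w.sum()  # normalize over the batch

# Usage: weight per-example adversarial losses before summing, e.g.
# loss = (gairat_style_weights(kappa, K) * per_example_loss).sum()
```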
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
TLDR: A novel approach called friendly adversarial training (FAT) is proposed: rather than employing the most adversarial data that maximize the loss, it searches for the least adversarial data that minimize the loss, among the adversarial data that are confidently misclassified.
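The search for least adversarial data is commonly realized by early-stopped PGD, which is simple enough to sketch. The following is a hedged, PyTorch-style sketch rather than the authors' implementation; the hyperparameter values (`eps`, `step`, `max_steps`, `tau`) are assumed.

```python
import torch
import torch.nn.functional as F

def early_stopped_pgd(model, x, y, eps=8/255, step=2/255, max_steps=10, tau=0):
    """Early-stopped PGD in the spirit of FAT (hedged sketch).

    Each example stops being perturbed tau steps after it first becomes
    misclassified, yielding 'friendly' (least) adversarial data rather
    than the most adversarial data.
    """
    x_adv = x.clone().detach()
    # Remaining extra steps each example may take once misclassified.
    budget = torch.full((x.size(0),), tau, device=x.device)
    for _ in range(max_steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        misclassified = logits.argmax(dim=1) != y
        budget = torch.where(misclassified, budget - 1, budget)
        active = budget >= 0  # examples still being perturbed
        if not active.any():
            break
        grad = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)[0]
        with torch.no_grad():
            mask = active.view(-1, 1, 1, 1).float()   # assumes NCHW images
            x_adv = x_adv + mask * step * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep valid pixel range
    return x_adv.detach()
```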