Corpus ID: 231924774

Data Profiling for Adversarial Training: On the Ruin of Problematic Data

@article{Dong2021DataPF,
  title={Data Profiling for Adversarial Training: On the Ruin of Problematic Data},
  author={Chengyu Dong and Liyuan Liu and Jingbo Shang},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.07437}
}
Multiple intriguing problems hover in adversarial training, including the robustness-accuracy trade-off, robust overfitting, and gradient masking, posing great challenges to both reliable evaluation and practical deployment. Here, we show that these problems share one common cause: low-quality samples in the dataset. We first identify an intrinsic property of the data called the problematic score and then design controlled experiments to investigate its connections with these problems. Specifically, we…
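The paper's problematic-score profiling is not reproduced here; for context only, below is a minimal NumPy sketch of the generic adversarial-training loop the abstract refers to, using an FGSM-style inner step on toy logistic-regression data. All names, data, and constants are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data: two overlapping Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w, b = np.zeros(2), 0.0
eps, lr = 0.3, 0.1  # FGSM perturbation budget and learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # Inner step (FGSM): perturb each input along the sign of the
    # input-gradient of the logistic loss, within the eps budget.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]  # dL/dx for logistic loss
    X_adv = X + eps * np.sign(grad_x)

    # Outer step: gradient descent on the adversarial examples.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * np.mean(p_adv - y)

# Clean accuracy of the adversarially trained classifier.
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

On this toy problem the robustness-accuracy trade-off shows up directly: raising `eps` hardens the classifier against perturbed inputs but lowers its clean accuracy, which is the tension the paper attributes to low-quality samples.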
