On the Sensitivity of Adversarial Robustness to Input Data Distributions

@article{Ding2019OnTS,
  title={On the Sensitivity of Adversarial Robustness to Input Data Distributions},
  author={Gavin Weiguang Ding and Kry Yik-Chau Lui and Xiaomeng Jin and Luyu Wang and Ruitong Huang},
  journal={ArXiv},
  year={2019},
  volume={abs/1902.08336}
}
Neural networks are vulnerable to small adversarial perturbations. Existing literature has largely focused on understanding and mitigating the vulnerability of learned models. In this paper, we demonstrate an intriguing phenomenon about the most popular robust training method in the literature, adversarial training: adversarial robustness, unlike clean accuracy, is sensitive to the input data distribution. Even a semantics-preserving transformation of the input data distribution can cause a…
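To make the two ingredients in the abstract concrete, below is a minimal sketch (not the authors' code), assuming PyTorch: a semantics-preserving transformation of the input distribution, illustrated here by gamma correction of pixel intensities (an illustrative choice, not necessarily the transformation studied in the paper), and a standard PGD-based adversarial training step. Comparing robust accuracy of models trained on the original versus the transformed distribution is the kind of experiment the paper's claim is about.

import torch
import torch.nn.functional as F

def gamma_transform(x, gamma=2.0):
    # Monotone pixel-intensity remapping in [0, 1]; labels and image
    # semantics are unchanged, but the input distribution shifts.
    return x.clamp(0, 1) ** gamma

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    # Standard L-infinity PGD: ascend the loss, then project back
    # into the eps-ball around the clean input.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, transform=None):
    # One adversarial-training step, optionally on a transformed
    # (but semantically equivalent) input distribution.
    if transform is not None:
        x = transform(x)
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()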