Smooth Adversarial Training

@article{Xie2020SmoothAT,
  title={Smooth Adversarial Training},
  author={Cihang Xie and Mingxing Tan and Boqing Gong and Alan Yuille and Quoc V. Le},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.14536}
}
It is commonly believed that networks cannot be both accurate and robust, and that gaining robustness means losing accuracy. It is also generally believed that, short of making networks larger, architectural elements matter little for improving adversarial robustness. Here we present evidence challenging these common beliefs through a careful study of adversarial training. Our key observation is that the widely used ReLU activation function significantly weakens adversarial…
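The observation above hinges on a property of ReLU that a small numerical sketch can make concrete: ReLU's gradient is a step function, discontinuous at zero, whereas a smooth surrogate such as softplus has a continuous gradient everywhere. The snippet below (an illustrative sketch, not code from the paper; the function names are my own) compares the two gradients just either side of zero:

```python
import numpy as np

def relu_grad(x):
    # ReLU's derivative is a step function: 0 for x <= 0, 1 for x > 0,
    # so it jumps discontinuously across zero.
    return (x > 0).astype(float)

def softplus_grad(x):
    # Softplus log(1 + e^x) is a smooth approximation of ReLU;
    # its derivative is the sigmoid, continuous everywhere.
    return 1.0 / (1.0 + np.exp(-x))

xs = np.array([-1e-3, 0.0, 1e-3])
print(relu_grad(xs))      # jumps: [0. 0. 1.]
print(softplus_grad(xs))  # varies smoothly around 0.5
```

Since adversarial training relies on gradients of the network with respect to its inputs to craft perturbations, a discontinuous gradient can degrade the quality of those inner-maximization steps; smooth activations avoid that discontinuity.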
