Corpus ID: 6689934

A3T: Adversarially Augmented Adversarial Training

@article{Erraqabi2018A3TAA,
  title={A3T: Adversarially Augmented Adversarial Training},
  author={Akram Erraqabi and A. Baratin and Yoshua Bengio and S. Lacoste-Julien},
  journal={ArXiv},
  year={2018},
  volume={abs/1801.04055}
}
  • Akram Erraqabi, A. Baratin, Yoshua Bengio, S. Lacoste-Julien
  • Published 2018
  • Computer Science, Mathematics
  • ArXiv
  • Recent research has shown that deep neural networks are highly sensitive to so-called adversarial perturbations: tiny perturbations of the input data purposely designed to fool a machine learning classifier. Most classification models, including deep learning models, are highly vulnerable to such attacks. In this work, we investigate a procedure to improve the adversarial robustness of deep neural networks by enforcing representation invariance. The idea is to train the…
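
    For concreteness, the sketch below shows one standard way such perturbations are generated and folded into training: the fast gradient sign method (FGSM) from "Explaining and Harnessing Adversarial Examples" (listed under References). The toy linear model, random data, and epsilon value are illustrative assumptions; this is a minimal sketch of generic adversarial training, not the A3T procedure itself.

    import torch
    import torch.nn as nn

    def fgsm_perturb(model, x, y, epsilon=0.03):
        """Return an adversarial copy of x inside an L-infinity ball of radius epsilon.

        Illustrative FGSM step: nudge each input in the direction that
        maximally increases the classification loss (assumed setup, not A3T).
        """
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

    # Toy usage: a linear classifier on random "images" (hypothetical data).
    model = nn.Linear(28 * 28, 10)
    x = torch.rand(8, 28 * 28)        # batch of 8 flattened inputs in [0, 1]
    y = torch.randint(0, 10, (8,))    # random labels
    x_adv = fgsm_perturb(model, x, y)

    # One adversarial-training step: fit the model on the perturbed batch.
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    opt.zero_grad()                   # clear gradients left over from fgsm_perturb
    nn.functional.cross_entropy(model(x_adv), y).backward()
    opt.step()

    Per the abstract, A3T goes beyond this kind of input-space training by additionally enforcing invariance of the learned hidden representations.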
    8 Citations

    On the Sensitivity of Adversarial Robustness to Input Data Distributions (21 citations)
    Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations (28 citations)
    Adversarial Example Games
    Defense-PointNet: Protecting PointNet Against Adversarial Attacks (3 citations; highly influenced by this paper)
    A+D Net: Training a Shadow Detector with Adversarial Shadow Attenuation (30 citations; highly influenced by this paper)
    DNNGuard: An Elastic Heterogeneous DNN Accelerator Architecture against Adversarial Attacks (3 citations)
    Classification of Noisy Epileptic EEG Signals Using Fortified Long Short-term Memory Network

    References

    Showing 9 of 16 references
    On Detecting Adversarial Perturbations (497 citations)
    mixup: Beyond Empirical Risk Minimization (1,295 citations)
    Ensemble Adversarial Training: Attacks and Defenses (1,101 citations)
    Explaining and Harnessing Adversarial Examples (6,236 citations; highly influential)
    Universal Adversarial Perturbations (1,055 citations)
    DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks (1,961 citations)
    Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks (1,510 citations)
    Domain-Adversarial Training of Neural Networks (2,884 citations)
    Adversarial examples in the physical world (2,186 citations)