A3T: Adversarially Augmented Adversarial Training
@article{Erraqabi2018A3TAA, title={A3T: Adversarially Augmented Adversarial Training}, author={Akram Erraqabi and A. Baratin and Yoshua Bengio and S. Lacoste-Julien}, journal={ArXiv}, year={2018}, volume={abs/1801.04055} }
Recent research has shown that deep neural networks are highly sensitive to so-called adversarial perturbations: tiny changes to the input, purposely designed to fool a machine learning classifier. Most classification models, including deep learning models, are vulnerable to such attacks. In this work, we investigate a procedure for improving the adversarial robustness of deep neural networks by enforcing representation invariance. The idea is to train the…
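The abstract is truncated, but the stated goal of enforcing representation invariance during adversarial training can be made concrete. The PyTorch sketch below is an illustrative assumption, not necessarily the paper's exact A3T procedure: it crafts adversarial examples with FGSM (Goodfellow et al., "Explaining and Harnessing Adversarial Examples", ICLR 2015) and adds a penalty that pulls the hidden representations of clean and perturbed inputs together. The model `SmallNet`, the penalty weight `lam`, and the choice of an L2 invariance term are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy classifier that exposes its hidden representation (hypothetical model)."""
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        h = self.encoder(x)           # hidden representation
        return self.head(h), h

def fgsm(model, x, y, eps=0.1):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    logits, _ = model(x)
    loss = F.cross_entropy(logits, y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def train_step(model, opt, x, y, eps=0.1, lam=1.0):
    """One adversarial-training step with an invariance penalty (lam is assumed)."""
    x_adv = fgsm(model, x, y, eps)
    logits_clean, h_clean = model(x)
    logits_adv, h_adv = model(x_adv)
    # Classify both clean and adversarial inputs correctly...
    loss = F.cross_entropy(logits_clean, y) + F.cross_entropy(logits_adv, y)
    # ...while pulling their hidden representations together (invariance term).
    loss = loss + lam * F.mse_loss(h_adv, h_clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

An L2 penalty is the simplest possible invariance term. Given that the paper's reference list includes Domain-Adversarial Training of Neural Networks, a learned discriminator trained to distinguish clean from adversarial representations would be a natural alternative instantiation of the same idea.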
8 Citations
- On the Sensitivity of Adversarial Robustness to Input Data Distributions. ICLR, 2019.
- Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations. ArXiv, 2018.
- Defense-PointNet: Protecting PointNet Against Adversarial Attacks. 2019 IEEE International Conference on Big Data (Big Data), 2019.
- A+D Net: Training a Shadow Detector with Adversarial Shadow Attenuation. ECCV, 2018.
- DNNGuard: An Elastic Heterogeneous DNN Accelerator Architecture against Adversarial Attacks. ASPLOS, 2020.