# Robust Sensible Adversarial Learning of Deep Neural Networks for Image Classification

```bibtex
@article{Kim2022RobustSA,
  title   = {Robust Sensible Adversarial Learning of Deep Neural Networks for Image Classification},
  author  = {Jungeum Kim and Xiao Wang},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2205.10457}
}
```
Published 20 May 2022 · Computer Science · arXiv
The idea of robustness is central to modern statistical analysis. However, despite recent advances in deep neural networks (DNNs), many studies have shown that DNNs are vulnerable to adversarial attacks: making imperceptible changes to an image can cause a DNN to make a wrong classification with high confidence, such as classifying a benign mole as a malignant tumor or a stop sign as a speed limit sign. The trade-off between robustness and standard accuracy is common for…
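To make the threat model concrete, below is a minimal sketch of a one-step gradient-sign attack (FGSM-style) under an $\ell_\infty$ budget. The PyTorch model, the $[0,1]$ pixel range, and the value of `epsilon` are illustrative assumptions, not details from the paper.

```python
import torch

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM sketch: perturb x in the direction that increases the loss.

    Assumptions: `model` is a trained classifier returning logits,
    `x` is an image batch in [0, 1], `y` holds integer class labels.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Imperceptible L-infinity perturbation of size epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```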

## References

Showing 1-10 of 45 references.
Towards Deep Learning Models Resistant to Adversarial Attacks (ICLR, 2018)
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
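The robust-optimization view in that work is the saddle-point objective below, whose inner maximization is approximated with projected gradient descent (PGD); this restates the paper's formulation in standard notation:

```latex
\min_{\theta}\ \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[\max_{\|\delta\|_{\infty}\le\epsilon} L\big(f_{\theta}(x+\delta),\,y\big)\Big],
\qquad
x^{t+1} = \Pi_{B_{\epsilon}(x)}\!\Big(x^{t} + \alpha\,\operatorname{sign}\!\big(\nabla_{x} L(f_{\theta}(x^{t}),\,y)\big)\Big).
```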
Improving Adversarial Robustness Requires Revisiting Misclassified Examples (ICLR, 2020)
This paper proposes a new defense algorithm called MART, which explicitly differentiates between misclassified and correctly classified examples during training, and shows that MART and its variant significantly improve state-of-the-art adversarial robustness.
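As a sketch of how that differentiation enters the objective, the MART loss is usually stated as below, where $\hat{x}$ is the adversarial example, $p(\cdot,\theta)$ the predicted probabilities, and $\mathrm{BCE}$ a boosted cross-entropy term (this notation is assumed here, not quoted from the paper):

```latex
\mathcal{L}_{\mathrm{MART}}
 = \mathrm{BCE}\big(p(\hat{x},\theta),\,y\big)
 \;+\; \lambda\,\mathrm{KL}\big(p(x,\theta)\,\big\|\,p(\hat{x},\theta)\big)
   \cdot\big(1 - p_{y}(x,\theta)\big).
```

The weight $1 - p_{y}(x,\theta)$ is large exactly when the clean example is misclassified, which is the explicit differentiation the summary refers to.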
Rademacher Complexity for Adversarially Robust Generalization (ICML, 2019)
For binary linear classifiers, it is shown that the adversarial Rademacher complexity is never smaller than its natural counterpart, and that it has an unavoidable dimension dependence unless the weight vector has bounded $\ell_1$ norm.
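The role of the $\ell_1$ norm can be seen from a one-line worst-case computation: for a linear classifier $x \mapsto \langle w, x\rangle$ under an $\ell_\infty$ perturbation of size $\epsilon$, Hölder's inequality gives

```latex
\min_{\|\delta\|_{\infty}\le\epsilon} y\,\langle w,\,x+\delta\rangle
 \;=\; y\,\langle w,\,x\rangle \;-\; \epsilon\,\|w\|_{1},
```

so the adversary's effect on the margin scales with $\|w\|_1$, and bounding that norm is what removes the dimension dependence.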
Are Labels Required for Improving Adversarial Robustness? (NeurIPS, 2019)
It is shown that, in a simple statistical setting, the sample complexity of learning an adversarially robust model from unlabeled data matches the fully supervised case up to constant factors; this finding extends to the more realistic case where the unlabeled data is uncurated, opening a new avenue for improving adversarial training.
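One common way this idea is operationalized is robust self-training: pseudo-label the unlabeled pool with a standard model, then run adversarial training on the union. The sketch below is illustrative (the paper's contribution is theoretical), and all names in it are assumptions:

```python
import torch

def robust_self_training(model, opt, labeled, unlabeled, attack):
    """Illustrative robust self-training loop, not the paper's exact algorithm.

    Assumptions: `labeled` yields (x, y) batches, `unlabeled` yields image
    batches x, and `attack(model, x, y)` returns adversarial examples
    (e.g. the FGSM sketch above).
    """
    # Step 1: pseudo-label the unlabeled batches with the current model.
    with torch.no_grad():
        pseudo = [(x, model(x).argmax(dim=1)) for x in unlabeled]
    # Step 2: adversarial training on labeled plus pseudo-labeled data.
    for x, y in list(labeled) + pseudo:
        x_adv = attack(model, x, y)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```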
Convergence of Adversarial Training in Overparametrized Neural Networks (NeurIPS, 2019)
This paper provides a partial explanation for the success of adversarial training, showing that it converges to a network whose surrogate loss with respect to the attack algorithm is within $\epsilon$ of the optimal robust loss.
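Schematically, and with notation assumed here rather than taken from the paper, the guarantee says that the parameters $\theta_T$ found by adversarial training against an attack algorithm $\mathcal{A}$ satisfy

```latex
\frac{1}{n}\sum_{i=1}^{n} \ell\big(f_{\theta_T}(\mathcal{A}(x_i)),\,y_i\big)
 \;\le\; \min_{\theta}\,\frac{1}{n}\sum_{i=1}^{n}
   \max_{\|\delta_i\|\le\epsilon_{\mathrm{adv}}}
   \ell\big(f_{\theta}(x_i+\delta_i),\,y_i\big) \;+\; \epsilon,
```

where $n$ is the number of training points and $\ell$ the surrogate loss.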
Interpolated Adversarial Training: Achieving Robust Neural Networks Without Sacrificing Too Much Accuracy (AISec@CCS, 2019)
This work proposes Interpolated Adversarial Training, which employs recently proposed interpolation-based training methods within the framework of adversarial training, retaining adversarial robustness while achieving a standard test error of only 6.45%.
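A sketch of the interpolation idea, mixup-style: train on convex combinations of both clean and adversarial examples. The 50/50 weighting, the $\mathrm{Beta}(\alpha,\alpha)$ coefficient, and all names here are assumptions for illustration, not the paper's exact recipe.

```python
import torch

def interpolated_adv_step(model, opt, x, y, attack, alpha=1.0):
    """One training step applying mixup to clean and adversarial batches."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    ce = torch.nn.functional.cross_entropy

    def mixup_loss(inputs):
        # Convex combination of the batch with a shuffled copy of itself.
        mixed = lam * inputs + (1 - lam) * inputs[idx]
        logits = model(mixed)
        return lam * ce(logits, y) + (1 - lam) * ce(logits, y[idx])

    x_adv = attack(model, x, y)  # e.g. the FGSM sketch above
    loss = 0.5 * (mixup_loss(x) + mixup_loss(x_adv))
    opt.zero_grad()
    loss.backward()
    opt.step()
```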
Adversarially Robust Generalization Requires More Data (NeurIPS, 2018)
It is shown that, already in a simple natural data model, the sample complexity of robust learning can be significantly larger than that of "standard" learning.
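In the paper's Gaussian data model, with input dimension $d$, the gap is explicit (stated here up to constants and logarithmic factors): a constant number of samples suffices for good standard accuracy, while any robust learner needs a number growing with the dimension,

```latex
n_{\text{standard}} \;=\; O(1)
\qquad\text{vs.}\qquad
n_{\text{robust}} \;=\; \Omega\!\big(\sqrt{d}\,\big).
```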