Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks

@article{Mustafa2019AdversarialDB,
  title={Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks},
  author={Aamir Mustafa and S. Khan and Munawar Hayat and Roland G{\"o}cke and J. Shen and L. Shao},
  journal={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={3384-3393}
}
Deep neural networks are vulnerable to adversarial attacks, which can fool them by adding minuscule perturbations to the input images. [...] Specifically, we force the features for each class to lie inside a convex polytope that is maximally separated from the polytopes of other classes. In this manner, the network is forced to learn distinct and distant decision regions for each class. We observe that this simple constraint on the features greatly enhances the robustness of learned models, even…
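To make the separation constraint concrete, below is a minimal PyTorch sketch of the idea the abstract describes: pull each feature toward a learnable prototype for its class and push it at least a margin away from every other class's prototype. This is not the authors' released code; the class name PrototypeSeparationLoss, the learnable prototype tensor, and the margin value are assumptions for illustration, and the paper's actual prototype conformity loss differs in its exact terms.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeSeparationLoss(nn.Module):
    """Hedged sketch of the prototype-separation idea: features are pulled
    toward their own class prototype (polytope centre) and pushed at least
    `margin` away from all other prototypes. Illustrative only; not the
    paper's exact formulation."""

    def __init__(self, num_classes: int, feat_dim: int, margin: float = 10.0):
        super().__init__()
        # One learnable prototype per class (assumed, for illustration).
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # (batch, num_classes) squared Euclidean distances to all prototypes.
        d = torch.cdist(feats, self.prototypes) ** 2
        own = F.one_hot(labels, num_classes=d.size(1)).bool()
        pull = d[own]  # distance to the true-class prototype, one per sample
        # Hinge term: every other prototype should be at least `margin`
        # farther away than the true-class prototype.
        push = F.relu(self.margin + pull.unsqueeze(1) - d)[~own]
        return pull.mean() + push.mean()

# Usage (assumed training setup): applied to penultimate-layer features,
# alongside the usual cross-entropy on the logits.
#   feats = backbone(x)
#   loss = F.cross_entropy(head(feats), y) + pcl(feats, y)
```

A margin-based hinge is used here as a simple stand-in for "maximally separated" polytopes; the paper also applies such constraints at multiple depths of the network, which this sketch omits.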
Citations

Deeply Supervised Discriminative Learning for Adversarial Defense
Stylized Adversarial Defense
LAFEAT: Piercing Through Adversarial Defenses with Latent Features (Yunrui Yu, Xitong Gao, Cheng-Zhong Xu; arXiv, 2021)
Robust Defense Against Lp-Norm-Based Attacks by Learning Robust Representations
A Useful Taxonomy for Adversarial Robustness of Neural Networks
Attack Agnostic Adversarial Defense via Visual Imperceptible Bound
Optimal Transport as a Defense Against Adversarial Attacks
Mitigating the Impact of Adversarial Attacks in Very Deep Networks
