Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder

@article{Li2020EnhancingIA,
  title={Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder},
  author={Guanlin Li and Shuya Ding and J. Luo and C. Liu},
  journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={797-805}
}
  • Guanlin Li, Shuya Ding, J. Luo, C. Liu
  • Published 2020
  • Computer Science, Mathematics
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Adversarial training is the main defence strategy against specific adversarial samples, but it has limited generalization capability and incurs excessive time complexity. In this paper, we propose an attack-agnostic defence framework that enhances the intrinsic robustness of neural networks without jeopardizing their ability to generalize on clean samples. Our Feature Pyramid Decoder (FPD) framework applies to all block-based convolutional neural networks (CNNs). It implants…
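
The abstract describes implanting denoising and image-restoration modules into a block-based CNN. As a rough illustration only, the PyTorch sketch below wraps an arbitrary stack of CNN stages with residual denoising blocks and an auxiliary restoration head; the module names, channel sizes, and wiring (DenoiseBlock, FPDWrappedCNN, the restoration head) are assumptions for illustration, not the authors' exact FPD architecture.

# Illustrative sketch only; see the hedging note above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoiseBlock(nn.Module):
    """Hypothetical lightweight denoising block inserted after a CNN stage."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Residual connection keeps clean-sample behaviour close to the original stage.
        return x + self.body(x)

class FPDWrappedCNN(nn.Module):
    """Wraps a block-based CNN: a denoising block follows each stage, and a small
    restoration head maps early features back to image space as an auxiliary output."""
    def __init__(self, stages, classifier, feat_channels):
        super().__init__()
        self.stages = nn.ModuleList(stages)
        self.denoisers = nn.ModuleList(DenoiseBlock(c) for c in feat_channels)
        self.classifier = classifier
        self.restore = nn.Sequential(
            nn.Conv2d(feat_channels[0], 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        feats, out = [], x
        for stage, denoise in zip(self.stages, self.denoisers):
            out = denoise(stage(out))
            feats.append(out)
        # Auxiliary image-restoration output, resized to the input resolution.
        restored = F.interpolate(self.restore(feats[0]), size=x.shape[-2:],
                                 mode="bilinear", align_corners=False)
        return self.classifier(out), restored

# Toy usage: two conv stages with arbitrary channel counts and a 10-way classifier.
stages = [
    nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True)),
    nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True)),
]
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10))
model = FPDWrappedCNN(stages, classifier, feat_channels=[16, 32])
logits, restored = model(torch.randn(2, 3, 32, 32))
print(logits.shape, restored.shape)  # torch.Size([2, 10]) torch.Size([2, 3, 32, 32])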
