Corpus ID: 224792875

PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking

@article{Xiang2020PatchGuardAP,
  title={PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking},
  author={Chong Xiang and Arjun Nitin Bhagoji and Vikash Sehwag and Prateek Mittal},
  journal={arXiv preprint arXiv:2005.10884},
  year={2020}
}
Localized adversarial patches aim to induce misclassification in machine learning models by arbitrarily modifying pixels within a restricted region of an image. Such attacks can be realized in the physical world by attaching the adversarial patch to the object to be misclassified, and defending against such attacks remains an open problem. In this paper, we propose a general defense framework called PatchGuard that can achieve high provable robustness while maintaining high clean accuracy…
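The abstract only sketches the mechanism, so here is a minimal Python/NumPy sketch of the idea it names: a CNN with small receptive fields bounds the feature-map region a patch can corrupt, and a masking-based aggregation then discards the highest-evidence window before classifying. All names here (robust_masking_predict, local_logits, window) and the specific masking heuristic are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def robust_masking_predict(local_logits: np.ndarray, window: int) -> int:
    """Illustrative masking-based secure aggregation (an assumption,
    not PatchGuard's precise procedure).

    local_logits: (H, W, C) per-location class evidence from a CNN
        with small receptive fields, so a bounded patch can only
        corrupt features inside one window x window region.
    window: upper bound on the side length of the corrupted region
        in the feature map.
    """
    H, W, C = local_logits.shape
    assert window <= min(H, W), "window must fit inside the feature map"
    masked_scores = np.empty(C)
    for c in range(C):
        evidence = local_logits[:, :, c]
        total = evidence.sum()
        # Mask (subtract) the window x window region with the highest
        # evidence for class c: if a patch is inflating this class,
        # its contribution is confined to one such window.
        best_window = max(
            evidence[i:i + window, j:j + window].sum()
            for i in range(H - window + 1)
            for j in range(W - window + 1)
        )
        masked_scores[c] = total - best_window
    return int(np.argmax(masked_scores))

# Toy usage: a 6x6 feature map over 3 classes, patch window of size 2.
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 6, 3))
print(robust_masking_predict(feats, window=2))

The intuition behind the robustness claim follows from this structure: because the small receptive fields confine any bounded patch to a single window of the feature map, removing the most suspicious window per class bounds how much an attacker can inflate a wrong class's score.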
