Towards Practical Certifiable Patch Defense with Vision Transformer

@inproceedings{chen2022towards,
  title={Towards Practical Certifiable Patch Defense with Vision Transformer},
  author={Zhaoyu Chen and Bo Li and Jianghe Xu and Shuang Wu and Shouhong Ding and Wenqiang Zhang},
  booktitle={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
  • Published 16 March 2022
Patch attacks, among the most threatening forms of physical attack via adversarial examples, can induce misclassification by arbitrarily modifying pixels within a contiguous region. Certifiable patch defenses guarantee that the classifier's prediction is not affected by patch attacks. Existing certifiable patch defenses sacrifice the clean accuracy of classifiers and obtain only low certified accuracy on toy datasets. Furthermore, the clean and certified accuracy of these…


Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation

Demasked Smoothing is presented, the first approach to certify the robustness of semantic segmentation models against this threat model; on the ADE20K dataset it can on average certify 64% of the pixel predictions for a 1% patch in the detection task and 48% against a 0.5% patch in the recovery task.

Towards Efficient Data Free Black-box Adversarial Attack

  • J. Zhang, Bo Li, Chao Wu
  • Computer Science
    2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2022
By rethinking the collaborative relationship between the generator and the substitute model, this paper designs a novel black-box attack framework that can efficiently imitate the target model with a small number of queries and achieve a high attack success rate.

Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning

This paper provides a new perspective on how to deal with imbalanced data: adjust the biased decision boundary by training with Guiding Adversarial Examples (GAEs), and demonstrate that the proposed method is comparable to state-of-the-art methods.

Generative Domain Adaptation for Face Anti-Spoofing

This work proposes a novel perspective on UDA FAS that directly fits the target data to the models: it feeds the stylized target data into the well-trained source model for classification and combines two carefully designed consistency constraints.

Federated Learning with Label Distribution Skew via Logits Calibration

The label distribution skew in FL is investigated from a statistical view, and FedLC (Federated learning via Logits Calibration) is proposed, which calibrates the logits before the softmax cross-entropy according to the probability of occurrence of each class.
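The idea of calibrating logits by class frequency can be sketched as follows. This is a hypothetical stand-in using the generic log-prior adjustment z_c + tau * log p_c, not FedLC's exact pairwise calibration term; `calibrated_cross_entropy` and its parameters are illustrative names:

```python
import numpy as np

def calibrated_cross_entropy(logits, label, class_counts, tau=1.0):
    """Logit-calibration sketch in the spirit of FedLC.

    Before the softmax cross-entropy, each logit is offset by a term
    that depends on how often its class occurs locally, so rare classes
    are not crowded out.  The exact FedLC offset differs; here we use
    the generic log-prior adjustment z_c + tau * log p_c as a stand-in.
    """
    p = np.asarray(class_counts, dtype=float)
    p = p / p.sum()                               # empirical class prior
    z = np.asarray(logits, dtype=float) + tau * np.log(p + 1e-12)
    z = z - z.max()                               # numerical stability
    log_softmax = z - np.log(np.exp(z).sum())
    return -log_softmax[label]
```

With a uniform class distribution the adjustment cancels and the loss reduces to plain cross-entropy; for a locally rare label the loss grows, pushing the decision boundary away from the rare class.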

PatchGuard: Provable Defense against Adversarial Patches Using Masks on Small Receptive Fields

A robust masking defense is presented that detects and masks corrupted features to recover the correct prediction, achieving state-of-the-art provable robust accuracy on the ImageNette, ImageNet, and CIFAR-10 datasets.
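A heavily simplified sketch of the robust-masking idea (not the paper's exact certification procedure): given non-negative per-location class evidence from a small-receptive-field network, mask the spatial window contributing the most evidence to the leading class, then re-aggregate. `robust_masking_predict` is a hypothetical helper:

```python
import numpy as np

def robust_masking_predict(local_logits, window=2):
    """Simplified PatchGuard-style robust masking.

    local_logits: (H, W, C) per-location class evidence; negative
    evidence is clipped to zero, so a patch can only add support.
    """
    ev = np.clip(local_logits, 0, None)
    top = int(np.argmax(ev.sum(axis=(0, 1))))     # leading class overall
    H, W, _ = ev.shape
    best, by, bx = -1.0, 0, 0
    for y in range(H - window + 1):               # window with the most
        for x in range(W - window + 1):           # evidence for `top`
            s = ev[y:y + window, x:x + window, top].sum()
            if s > best:
                best, by, bx = s, y, x
    masked = ev.copy()
    masked[by:by + window, bx:bx + window, :] = 0  # suppress suspect window
    return int(np.argmax(masked.sum(axis=(0, 1))))
```

Because a patch with a small receptive field can only corrupt a bounded spatial window, removing the most suspicious window bounds the adversary's influence on the aggregated prediction.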

(De)Randomized Smoothing for Certifiable Defense against Patch Attacks

A certifiable defense against patch attacks that guarantees that, for a given image and patch attack size, no patch adversarial examples exist; it is related to the broad class of randomized smoothing schemes, which provide high-confidence probabilistic robustness certificates.
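The vote-margin certificate behind (de)randomized smoothing can be sketched as follows, assuming column (band) smoothing: a patch of width m can intersect at most m + s − 1 bands of size s, so the prediction is certified when the top class leads the runner-up by more than twice that count. `certify_column_smoothing` is a hypothetical helper, and the sketch ignores the tie-breaking details of the actual certificate:

```python
import numpy as np

def certify_column_smoothing(votes, band_size, patch_width):
    """Certify a prediction under column (band) smoothing.

    votes[c] = number of vertical bands on which the base classifier
    predicted class c.  A patch of width `patch_width` can intersect at
    most `patch_width + band_size - 1` bands, so the prediction is
    certified if the leading margin exceeds twice that number of votes.
    """
    delta = patch_width + band_size - 1           # bands a patch can touch
    order = np.argsort(votes)[::-1]               # classes by vote count
    top, runner_up = order[0], order[1]
    margin = votes[top] - votes[runner_up]
    return int(top), bool(margin > 2 * delta)
```

The intuition: in the worst case the patch flips every band it touches from the top class to the runner-up, moving the margin by at most 2·delta votes.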

Detecting Adversarial Patch Attacks through Global-local Consistency

This paper proposes a simple but very effective approach to detecting adversarial patches based on an interesting observation called global-local consistency, and a Random-Local-Ensemble strategy to further enhance detection.

Local Gradients Smoothing: Defense Against Localized Adversarial Attacks

This work develops an effective method to estimate the noise location in the gradient domain and transform the high-activation regions caused by adversarial noise in the image domain, while having minimal effect on the salient objects important for correct classification.
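The gradient-domain suppression idea can be sketched as follows. This is a hypothetical simplification of Local Gradients Smoothing, not the paper's exact algorithm: it zeroes out fixed windows whose mean normalized gradient magnitude exceeds a threshold, standing in for the paper's smooth scaling of gradient intensity:

```python
import numpy as np

def local_gradient_smoothing(img, window=16, thresh=0.5, smooth=0.0):
    """Suppress image regions with unusually dense first-order gradients.

    Adversarial patches concentrate high-frequency noise, so windows
    with a high mean normalized gradient magnitude are scaled down by
    `smooth` (0.0 removes them entirely) before classification.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    mag = mag / (mag.max() + 1e-12)               # normalize to [0, 1]
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(0, h - window + 1, window):
        for x in range(0, w - window + 1, window):
            if mag[y:y + window, x:x + window].mean() > thresh:
                out[y:y + window, x:x + window] *= smooth
    return out
```

Natural image content rarely fills a whole window with saturated gradients, so benign regions pass through untouched while dense noise blocks are suppressed.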

Universal Physical Camouflage Attacks on Object Detectors

This paper proposes to learn an adversarial pattern to effectively attack all instances belonging to the same object category, referred to as Universal Physical Camouflage Attack (UPC), which crafts camouflage by jointly fooling the region proposal network, as well as misleading the classifier and the regressor to output errors.

Towards Universal Physical Attacks on Single Object Tracking

The maximum textural discrepancy (MTD) is designed, a resolution-invariant and target location-independent feature de-matching loss that distills global textural information of the template and search images at hierarchical feature scales prior to performing feature attacks.

Robust Physical-World Attacks on Deep Learning Visual Classification

This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions, and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including different viewpoints.

Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors

A systematic study of adversarial attacks on state-of-the-art object detection frameworks is presented, along with a detailed study of physical-world attacks using printed posters and wearable clothes, quantifying the performance of such attacks with different metrics.

LaVAN: Localized and Visible Adversarial Noise

It is shown that it is possible to generate localized adversarial noises that cover only 2% of the pixels in the image, none of them over the main object, and that are transferable across images and locations, and successfully fool a state-of-the-art Inception v3 model with very high success rates.
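The core mechanics of localized-noise attacks can be sketched on a toy model. This is a hypothetical illustration, not LaVAN's loss or setup: it runs masked gradient steps on a linear softmax classifier so that only pixels inside the patch mask change; `localized_patch_attack` and all parameters are illustrative:

```python
import numpy as np

def localized_patch_attack(x, W, target, mask, steps=100, lr=0.5):
    """Localized-noise sketch on a toy linear softmax classifier.

    Only pixels where `mask` is 1 may change; the attack descends the
    cross-entropy toward `target` with gradient steps projected onto
    the mask and the valid pixel range [0, 1].
    """
    x_adv = x.copy()
    onehot = np.eye(W.shape[0])[target]
    for _ in range(steps):
        z = W @ x_adv
        z = z - z.max()
        p = np.exp(z) / np.exp(z).sum()
        g = W.T @ (p - onehot)           # grad of -log p_target w.r.t. x
        x_adv = x_adv - lr * g * mask    # step only inside the patch
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

Pixels outside the mask are provably untouched (their gradient is zeroed), mirroring the constraint that localized noise covers only a small image region and leaves the main object intact.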

Towards Deep Learning Models Resistant to Adversarial Attacks

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.