Certified Patch Robustness via Smoothed Vision Transformers

@article{Salman2021CertifiedPR,
  title={Certified Patch Robustness via Smoothed Vision Transformers},
  author={Hadi Salman and Saachi Jain and Eric Wong and Aleksander Madry},
  journal={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022},
  pages={15116-15126}
}
Certified patch defenses can guarantee the robustness of an image classifier to arbitrary changes within a bounded contiguous region. However, this robustness currently comes at the cost of degraded standard accuracies and slower inference times. We demonstrate how using vision transformers enables significantly better certified patch robustness that is also more computationally efficient and does not incur a substantial drop in standard accuracy. These improvements stem from the inherent ability of the…
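
Concretely, the smoothing scheme classifies many column-ablated copies of the image and takes a majority vote; a small patch can only touch a few columns, which is what yields the certificate. Below is a minimal sketch under simplifying assumptions (ablated pixels zeroed out rather than encoded with extra channels as in the paper, a stride-1 sweep, and illustrative names and defaults):

import torch

def column_ablations(x, band=19):
    """Yield copies of image x (C, H, W) in which everything outside a
    vertical band of `band` columns is zeroed out (wrapping around the
    right edge)."""
    _, _, width = x.shape
    for start in range(width):
        masked = torch.zeros_like(x)
        cols = [(start + i) % width for i in range(band)]
        masked[:, :, cols] = x[:, :, cols]
        yield masked

def smoothed_predict(model, x, num_classes, band=19, patch=32):
    """Majority vote over column ablations, with the derandomized
    smoothing certificate: a patch of width `patch` can intersect at
    most delta = patch + band - 1 bands, so the prediction is certified
    when the winning margin exceeds 2 * delta."""
    votes = torch.zeros(num_classes)
    with torch.no_grad():
        for xa in column_ablations(x, band):
            votes[model(xa.unsqueeze(0)).argmax()] += 1
    top2 = votes.topk(2)
    delta = patch + band - 1
    certified = (top2.values[0] - top2.values[1]) > 2 * delta
    return int(top2.indices[0]), bool(certified)

The ViT-specific efficiency gains reported in the paper come from dropping the fully masked tokens before running the transformer, something a conventional CNN cannot do.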

Are Vision Transformers Robust to Patch Perturbations?

It is found that ViTs are more robust to naturally corrupted patches than CNNs, whereas they are more vulnerable to adversarial patches, and the attention mechanism greatly affects the robustness of vision transformers.

ViP: Unified Certified Detection and Recovery for Patch Attack with Vision Transformers

This paper provides the very first study on developing certified detection against the dual patch attack, in which the attacker is allowed to adversarially manipulate pixels in two different regions.

Evaluating Model Robustness to Patch Perturbations

ViTs are more robust to naturally corrupted patches than CNNs, whereas they are more vulnerable to adversarial patches, so robustness to natural patch corruption and adversarial patch attack is added to the robustness benchmark.

On the Adversarial Robustness of Vision Transformers

It is shown that ViTs possess better adversarial robustness than MLP-Mixer and convolutional neural networks (CNNs), including ConvNeXt, and that this observation also holds for certified robustness; adversarial training is applicable to ViTs for training robust models, and sharpness-aware minimization can further improve robustness.

Confidence-aware Training of Smoothed Classifiers for Certified Robustness

A simple training method leveraging the fundamental trade-off between accuracy and (adversarial) robustness to obtain robust smoothed classifiers, in particular through a sample-wise control of robustness over the training samples.

Towards Efficient Adversarial Training on Vision Transformers

This work comprehensively studies fast adversarial training on a variety of vision transformers and proposes an efficient Attention-Guided Adversarial Training mechanism, which matches state-of-the-art results on the challenging ImageNet benchmark.

Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation

Demasked Smoothing is presented, the first approach to certify the robustness of semantic segmentation models against this threat model; it can on average certify 64% of the pixel predictions for a 1% patch in the detection task and 48% against a 0.5% patch for the recovery task on the ADE20K dataset.

Improved techniques for deterministic l2 robustness

This work introduces a procedure to certify the robustness of 1-Lipschitz CNNs by replacing the last linear layer with a 1-hidden-layer MLP, which significantly improves their performance for both standard and provably robust accuracy.

Towards Better Input Masking for Convolutional Neural Networks

The ability to remove features from the input of machine learning models is very important for understanding and interpreting model predictions. However, this is non-trivial for vision models, since masking…

ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking

The key insight in ObjectSeeker is patch-agnostic masking, which aims to mask out the entire adversarial patch without knowing its shape, size, or location; this neutralizes the adversarial effect and allows any vanilla object detector to safely detect objects on the masked images.
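
To make patch-agnostic masking concrete, here is a minimal sketch of the mask-generation step (the function name, the zero-fill, and the line count k are illustrative assumptions, and ObjectSeeker's secure fusing of per-copy detections is omitted):

import torch

def patch_agnostic_masked_copies(x, k=6):
    """Produce masked copies of image x (C, H, W): for k evenly spaced
    horizontal and k vertical lines, mask everything above/below
    (resp. left/right of) each line."""
    _, H, W = x.shape
    copies = []
    for i in range(1, k + 1):
        y, xline = i * H // (k + 1), i * W // (k + 1)
        for region in ((slice(None), slice(0, y), slice(None)),       # above
                       (slice(None), slice(y, H), slice(None)),       # below
                       (slice(None), slice(None), slice(0, xline)),   # left
                       (slice(None), slice(None), slice(xline, W))):  # right
            masked = x.clone()
            masked[region] = 0.0
            copies.append(masked)
    return copies  # run any vanilla detector on each copy and fuse boxes

Because any contiguous patch smaller than the line spacing lies entirely on one side of some split line, at least one copy is guaranteed to be patch-free, which is what makes certification possible.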

References

Showing 1-10 of 62 references

Intriguing Properties of Vision Transformers

The effective features of ViTs are shown to be due to flexible and dynamic receptive fields made possible by self-attention mechanisms, leading to high accuracy across a range of classification datasets in both traditional and few-shot learning paradigms.

Efficient Certified Defenses Against Patch Attacks on Image Classifiers

This work derives a loss that enables end-to-end optimization of certified robustness against patches of different sizes and locations and proposes BAGCERT, a novel combination of model architecture and certification procedure that allows efficient certification.

Denoised Smoothing: A Provable Defense for Pretrained Classifiers

This method allows public vision API providers and users to seamlessly convert pretrained non-robust classification services into provably robust ones by prepending a custom-trained denoiser to any off-the-shelf image classifier and using randomized smoothing.
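
The pipeline is simple enough to sketch: perturb the input with Gaussian noise, denoise, classify with the frozen model, and take a majority vote over draws. This is only the prediction step of randomized smoothing (a certified radius additionally requires the statistical test of Cohen et al.); names and defaults below are illustrative:

import torch

def denoised_smoothing_predict(classifier, denoiser, x, sigma=0.25, n=100):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P[classifier(denoiser(x + eps)) = c], eps ~ N(0, sigma^2 I).
    `classifier` stays frozen and off-the-shelf; only `denoiser` is
    trained, which is the point of the method."""
    votes = {}
    with torch.no_grad():
        for _ in range(n):
            noisy = x + sigma * torch.randn_like(x)
            c = int(classifier(denoiser(noisy)).argmax(dim=-1))
            votes[c] = votes.get(c, 0) + 1
    return max(votes, key=votes.get)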

PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking

This paper proposes a general defense framework called PatchGuard that can achieve high provable robustness against localized adversarial patches while maintaining high clean accuracy, and presents a robust-masking defense that detects and masks corrupted features to recover the correct prediction.

Are Transformers More Robust Than CNNs?

This paper challenges the previous belief that Transformers outshine CNNs when measuring adversarial robustness, and suggests that CNNs can easily be as robust as Transformers at defending against adversarial attacks if they properly adopt Transformers’ training recipes.

Certified Defenses for Adversarial Patches

An extension of certified defense algorithms is presented, and significantly faster variants for robust training against patch attacks are proposed; robustness to such attacks is observed to transfer surprisingly well.

Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation

This paper proposes an efficient and certifiably robust defense against sparse adversarial attacks that randomly ablates input features rather than adding noise, and empirically demonstrates that the resulting classifier is highly robust to modern sparse adversarial attacks on MNIST.

(De)Randomized Smoothing for Certifiable Defense against Patch Attacks

A certifiable defense against patch attacks that guarantees that, for a given image and patch attack size, no patch adversarial examples exist; it is related to the broad class of randomized smoothing robustness schemes, which provide high-confidence probabilistic robustness certificates.
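
The certificate underlying this defense (and the smoothed vision transformer result above) is a simple counting argument; sketching it here under the assumption of stride-1 column ablations of width b and a square patch of side p: the patch can intersect at most Δ = p + b - 1 ablations, so an adversary can remove at most Δ votes from the predicted class c and add at most Δ votes to any other class c'. Writing n_c(x) for the number of ablations classified as c, the prediction on x is therefore certified whenever

n_c(x) > max_{c' ≠ c} n_{c'}(x) + 2Δ,

which is exactly the margin test in the smoothed-prediction sketch near the top of this page.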

Efficient Neural Network Robustness Certification with General Activation Functions

This paper introduces CROWN, a general framework to certify robustness of neural networks with general activation functions for given input data points and facilitates the search for a tighter certified lower bound by adaptively selecting appropriate surrogates for each neuron activation.
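
At CROWN's core is a per-neuron linear relaxation: given pre-activation bounds l <= z <= u, each activation is sandwiched between two linear functions whose coefficients are then propagated backward through the network. A minimal sketch for ReLU, with an assumed simple heuristic in place of CROWN's adaptive choice of lower slope:

import numpy as np

def crown_relu_relaxation(l, u):
    """Linear bounds a_l*z + b_l <= relu(z) <= a_u*z + b_u valid for all
    z in [l, u], element-wise over arrays of pre-activation bounds."""
    denom = np.clip(u - l, 1e-12, None)
    # Upper line: chord from (l, 0) to (u, u) for unstable neurons,
    # identity for always-active (l >= 0), zero for always-dead (u <= 0).
    a_u = np.where(u <= 0, 0.0, np.where(l >= 0, 1.0, u / denom))
    b_u = np.where((l < 0) & (u > 0), -l * u / denom, 0.0)
    # Lower line through the origin: slope 1 when u >= -l, else 0
    # (both are valid lower bounds of relu on [l, u]).
    a_l = np.where(u <= 0, 0.0, np.where(l >= 0, 1.0, np.where(u >= -l, 1.0, 0.0)))
    b_l = np.zeros_like(a_u)
    return a_u, b_u, a_l, b_l

Propagating these element-wise bounds backward through all layers is what yields the tighter certified lower bounds the summary refers to.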

Robust Physical Adversarial Attack on Faster R-CNN Object Detector

This work can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.
...