Corpus ID: 250479318

Adversarially Robust Vision Transformers

@inproceedings{Debenedetti2022AdversariallyRV,
  title={Adversarially Robust Vision Transformers},
  author={Edoardo Debenedetti and Prateek Mittal},
  year={2022}
}

On the interplay of adversarial robustness and architecture components: patches, convolution and attention

This work compares several (non-)robust classifiers with different architectures and studies their properties, including the effect of adversarial training on the interpretability of the learnt features and on robustness to unseen threat models.
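For context, the adversarial training referred to above is usually written as the standard min-max objective below, which minimizes the worst-case loss inside an lp ball of radius epsilon around each input; this is the common formulation, not necessarily this paper's exact setup:

\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\|\delta\|_p \le \epsilon} \mathcal{L}\big(f_\theta(x+\delta),\, y\big) \Big]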

Adversarial Robustness against Multiple and Single lp-Threat Models via Quick Fine-Tuning of Robust Classifiers

This paper proposes Extreme norm Adversarial Training (E-AT) for multiple-norm robustness, which is based on geometric properties of lp-balls, and shows that a single epoch for ImageNet and three epochs for CIFAR-10 suffice to turn any lp-robust model into a multiple-norm robust model.
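A minimal sketch of this kind of extreme-norm fine-tuning, assuming a PyTorch classifier over 4D image batches in [0, 1]: batches alternate between l-inf and l1 PGD adversarial examples. The epsilons, step sizes, dense-gradient l1 step, and per-batch alternation schedule are illustrative assumptions, not the paper's exact recipe.

import torch
import torch.nn.functional as F

def project_l1(delta, eps):
    # Euclidean projection of each example's perturbation onto the
    # l1 ball of radius eps (Duchi et al., 2008), on flattened tensors.
    flat = delta.flatten(1)
    needs_proj = flat.abs().sum(dim=1) > eps
    if needs_proj.any():
        v = flat[needs_proj].abs()
        s, _ = torch.sort(v, dim=1, descending=True)
        csum = s.cumsum(dim=1)
        ks = torch.arange(1, v.size(1) + 1, device=v.device, dtype=v.dtype)
        rho = (s - (csum - eps) / ks > 0).float().cumsum(dim=1).argmax(dim=1)
        theta = (csum.gather(1, rho.unsqueeze(1)).squeeze(1) - eps) / (rho + 1).float()
        flat[needs_proj] = torch.sign(flat[needs_proj]) * torch.clamp(v - theta.unsqueeze(1), min=0)
    return flat.view_as(delta)

def pgd(model, x, y, norm, eps, alpha, steps=10):
    # Untargeted PGD in the given norm, returning adversarial inputs.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            if norm == "linf":
                delta += alpha * grad.sign()
                delta.clamp_(-eps, eps)
            else:  # "l1": dense ascent direction, then project back onto the ball
                g = grad / (grad.flatten(1).norm(p=1, dim=1).view(-1, 1, 1, 1) + 1e-12)
                delta += alpha * g
                delta.copy_(project_l1(delta.detach(), eps))
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def eat_finetune_epoch(model, loader, opt, eps_linf=8/255, eps_l1=12.0):
    # One fine-tuning epoch alternating the two "extreme" norms per batch.
    for i, (x, y) in enumerate(loader):
        norm, eps = ("linf", eps_linf) if i % 2 == 0 else ("l1", eps_l1)
        x_adv = pgd(model, x, y, norm, eps, alpha=eps / 4)
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        opt.step()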

Diffusion Visual Counterfactual Explanations

This paper generates Diffusion Visual Counterfactual Explanations (DVCEs) for arbitrary ImageNet classifiers via a diffusion process, using an adaptive parameterization whose hyperparameters generalize across images and models, together with distance regularization and a late start of the diffusion process.

References

Showing 1-10 of 75 references

On Improving Adversarial Transferability of Vision Transformers

This work observes that adversarial patterns found via conventional adversarial attacks show very low black-box transferability even for large ViT models, and proposes a novel approach involving multiple discriminative pathways and token refinement that achieves performance boosts when applied on top of a range of state-of-the-art attack methods.

Are Vision Transformers Robust to Patch Perturbations?

It is found that ViTs are more robust than CNNs to naturally corrupted patches but more vulnerable to adversarial patches, and that the attention mechanism greatly affects the robustness of vision transformers.

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

This paper standardizes and expands the corruption-robustness topic, showing which classifiers are preferable in safety-critical applications, and proposes the ImageNet-C benchmark for common corruptions together with the ImageNet-P dataset, which enables researchers to benchmark a classifier's robustness to common perturbations.
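As a rough illustration of this style of benchmarking, the sketch below measures accuracy under additive Gaussian noise at several severities; the severity-to-sigma mapping, the single corruption type, and the [0, 1] input range are simplifying assumptions (ImageNet-C spans many corruption types and aggregates errors differently).

import torch

@torch.no_grad()
def corruption_accuracy(model, loader, sigmas=(0.04, 0.08, 0.12, 0.16, 0.2)):
    # Clean-style evaluation loop, repeated once per noise severity.
    model.eval()
    results = {}
    for sigma in sigmas:
        correct = total = 0
        for x, y in loader:
            x_c = (x + sigma * torch.randn_like(x)).clamp(0, 1)
            correct += (model(x_c).argmax(dim=1) == y).sum().item()
            total += y.numel()
        results[sigma] = correct / total
    return results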

Adversarial Patch

A method to create universal, robust, targeted adversarial image patches in the real world: the patches can be printed, added to any scene, photographed, and presented to image classifiers, and even when they are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class.
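A minimal sketch of training such a universal patch, assuming a PyTorch classifier over images in [0, 1]: the square patch, the single random translation, and the hyperparameters are simplifying assumptions (the original method also randomizes scale and rotation in an expectation-over-transformation fashion).

import torch
import torch.nn.functional as F

def train_patch(model, loader, target_class, patch_size=50, img_size=224, lr=0.05, epochs=1):
    # The patch itself is the only trainable parameter.
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            xs = x.clone()
            for i in range(x.size(0)):
                # Paste the patch at a random location in each image.
                top, left = torch.randint(0, img_size - patch_size, (2,)).tolist()
                xs[i, :, top:top + patch_size, left:left + patch_size] = patch
            # Push every patched image towards the chosen target class.
            y_tgt = torch.full((x.size(0),), target_class, dtype=torch.long)
            loss = F.cross_entropy(model(xs), y_tgt)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                patch.clamp_(0, 1)  # keep the patch a printable, valid image
    return patch.detach()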

mixup: Beyond Empirical Risk Minimization

This work proposes mixup, a simple learning principle that trains a neural network on convex combinations of pairs of examples and their labels, which improves the generalization of state-of-the-art neural network architectures.
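Since the principle is simple, a minimal mixup training step is sketched below for a standard PyTorch classifier; alpha = 0.2 is a typical choice rather than a value prescribed by this summary.

import torch
import torch.nn.functional as F

def mixup_step(model, opt, x, y, alpha=0.2):
    # Sample the mixing coefficient and a random pairing of the batch.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[idx]  # convex combination of inputs
    logits = model(x_mix)
    # Equivalent convex combination of the two label losses.
    loss = lam * F.cross_entropy(logits, y) + (1 - lam) * F.cross_entropy(logits, y[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()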

On the Robustness of Vision Transformers to Adversarial Examples

This paper studies the robustness of Vision Transformers to adversarial examples, and shows that an ensemble can achieve unprecedented robustness without sacrificing clean accuracy under a black-box adversary.
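The simplest version of such an ensemble just averages the members' logits; the sketch below assumes pre-trained PyTorch models (e.g., one ViT and one CNN) with matching output classes, and is illustrative rather than the paper's exact construction.

import torch

@torch.no_grad()
def ensemble_predict(models, x):
    # Average the logits of all members and take the argmax.
    logits = torch.stack([m(x) for m in models]).mean(dim=0)
    return logits.argmax(dim=1)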

Reveal of Vision Transformers Robustness against Adversarial Attacks

This work studies the robustness of ViT variants, in comparison with CNNs, against different Lp-based adversarial attacks and against adversarial examples (AEs) after applying preprocessing defense methods, and reveals that vanilla ViTs and hybrid ViTs are more robust than CNNs.

Adversarial Robustness as a Prior for Learned Representations

This work shows that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks, and indicates adversarial robustness as a promising avenue for improving learned representations.

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks

This paper conducts a comprehensive investigation of the impact of network width and depth on the robustness of adversarially trained DNNs and provides a theoretical analysis explaining why such network configurations can help robustness.

Adversarially Robust Generalization Requires More Data

It is shown that already in a simple natural data model, the sample complexity of robust learning can be significantly larger than that of "standard" learning.
...