Generalizability of Adversarial Robustness Under Distribution Shifts

@article{Alhamoud2022GeneralizabilityOA,
  title={Generalizability of Adversarial Robustness Under Distribution Shifts},
  author={Kumail Alhamoud and Hasan Hammoud and Motasem Alfarra and Bernard Ghanem},
  journal={ArXiv},
  year={2022},
  volume={abs/2209.15042}
}
Recent progress in empirical and certified robustness promises to deliver reliable and deployable Deep Neural Networks (DNNs). Despite that success, most existing evaluations of DNN robustness have been done on images sampled from the same distribution on which the model was trained. However, in the real world, DNNs may be deployed in dynamic environments that exhibit significant distribution shifts. In this work, we take a first step towards thoroughly investigating the interplay between… 
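To make the evaluation setting concrete, here is a minimal, hypothetical sketch (in PyTorch; not the paper's actual protocol) of measuring accuracy on adversarial examples crafted from a shifted test distribution. The single-step FGSM attack, the name shifted_loader, and the eps value are illustrative assumptions.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    # Single-step attack: move along the sign of the loss gradient.
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def robust_accuracy(model, shifted_loader, eps=8/255):
    # Accuracy on attacked inputs drawn from a distribution that differs
    # from the training distribution (e.g., corrupted or out-of-domain data).
    model.eval()
    correct = total = 0
    for x, y in shifted_loader:
        x_adv = fgsm(model, x, y, eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total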

References

Showing 1-10 of 69 references

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
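The mechanism behind the title is to split each image into fixed-size patches and linearly embed them into a token sequence for a standard Transformer. A minimal sketch of that patch embedding (dimensions follow the common ViT-Base configuration; the class name is illustrative):

import torch.nn as nn

class PatchEmbed(nn.Module):
    # Split an image into non-overlapping P x P patches and project each one
    # to a D-dimensional token; a strided convolution does both in one step.
    def __init__(self, patch=16, in_ch=3, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                    # x: (B, 3, H, W)
        x = self.proj(x)                     # (B, dim, H/16, W/16)
        return x.flatten(2).transpose(1, 2)  # (B, num_patches, dim)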

Deeper, Broader and Artier Domain Generalization

This paper builds upon the favorable domain shift-robust properties of deep learning methods, and develops a low-rank parameterized CNN model for end-to-end DG learning that outperforms existing DG alternatives.

Towards Deep Learning Models Resistant to Adversarial Attacks

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
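A minimal sketch of the first-order adversary in question, projected gradient descent (PGD) with a random start inside an $\ell_\infty$ ball; the step size and iteration count are illustrative defaults, not the paper's exact settings:

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Iterated gradient ascent on the loss, projected back into the
    # l_inf ball of radius eps around x after every step.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # projection
        x_adv = x_adv.clamp(0, 1)                  # stay in valid pixel range
    return x_adv.detach()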

In Search of Lost Domain Generalization

This paper implements DomainBed, a testbed for domain generalization including seven multi-domain datasets, nine baseline algorithms, and three model selection criteria, and finds that, when carefully implemented, empirical risk minimization shows state-of-the-art performance across all datasets.
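As a hedged sketch of what that ERM baseline amounts to in the domain-generalization setting: pool the source domains, discard the domain labels, and minimize the average loss (dataset handling and hyperparameters below are illustrative):

import torch
import torch.nn.functional as F
from torch.utils.data import ConcatDataset, DataLoader

def train_erm(model, source_domains, epochs=10, lr=1e-3):
    # ERM for domain generalization: treat the union of all source
    # domains as a single i.i.d. training set.
    loader = DataLoader(ConcatDataset(source_domains), batch_size=64, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model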

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius

The MACER algorithm is proposed, which learns robust models without using adversarial training but performs better than all existing provable $\ell_2$-defenses and can be applied to modern deep neural networks on a wide range of datasets.
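For context, the quantity MACER maximizes is the certified $\ell_2$ radius from the randomized-smoothing analysis (see the next reference); for a smoothed classifier with Gaussian noise level $\sigma$, it takes the form

\[
R \;=\; \frac{\sigma}{2}\,\Big(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)\Big),
\]

where $\Phi^{-1}$ is the inverse standard Gaussian CDF and $p_A$, $p_B$ are (bounds on) the probabilities of the top and runner-up classes under noise. MACER trains through a differentiable surrogate of this radius rather than through adversarial examples; this is a sketch of the idea, not the paper's exact objective.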

Certified Adversarial Robustness via Randomized Smoothing

Strong empirical results suggest that randomized smoothing is a promising direction for future research into adversarially robust classification; on smaller-scale datasets where competing approaches to certified $\ell_2$ robustness are viable, smoothing delivers higher certified accuracies.
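A minimal sketch of the smoothed classifier's prediction rule, a Monte Carlo majority vote over Gaussian perturbations; the full certification procedure in this line of work additionally lower-bounds the top-class probability with a binomial confidence interval and abstains when the vote is too close, which this sketch omits:

import torch

@torch.no_grad()
def smoothed_predict(model, x, num_classes, sigma=0.25, n=1000, batch=200):
    # g(x) = argmax_c P(f(x + noise) = c), noise ~ N(0, sigma^2 I),
    # estimated by sampling; x is a single input of shape (C, H, W).
    counts = torch.zeros(num_classes, dtype=torch.long)
    for start in range(0, n, batch):
        m = min(batch, n - start)
        noisy = x.unsqueeze(0) + sigma * torch.randn(m, *x.shape)
        counts += torch.bincount(model(noisy).argmax(1), minlength=num_classes)
    return counts.argmax().item()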

From Detection of Individual Metastases to Classification of Lymph Node Status at the Patient Level: The CAMELYON17 Challenge

It is shown that simple combinations of the top algorithms result in higher kappa metric values than any algorithm individually, with 0.93 for the best combination.
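The patient-level agreement metric here is Cohen's kappa on pN-stages; assuming the quadratic weighting commonly reported for this challenge, a toy computation with scikit-learn (the labels are made up):

from sklearn.metrics import cohen_kappa_score

# Toy example: five pN-stage classes encoded as 0..4; the score is 1.0
# for perfect agreement and 0.0 for chance-level agreement.
y_true = [0, 1, 2, 3, 4, 2, 1]
y_pred = [0, 1, 2, 4, 4, 2, 0]
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))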

Theoretically Principled Trade-off between Robustness and Accuracy

The prediction error for adversarial examples (robust error) is decomposed as the sum of the natural (classification) error and the boundary error, and a differentiable upper bound is provided using the theory of classification-calibrated loss; this bound is shown to be the tightest possible upper bound uniformly over all probability distributions and measurable predictors.
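That decomposition yields the TRADES objective; in its commonly used multiclass instantiation, training minimizes

\[
\mathcal{L}(x, y) \;=\; \mathrm{CE}\big(f(x),\, y\big) \;+\; \beta \max_{\|x' - x\| \le \epsilon} \mathrm{KL}\big(f(x)\,\|\,f(x')\big),
\]

where the first term controls the natural error, the second surrogate term controls the boundary error, and $\beta$ trades them off; the inner maximization is approximated with a PGD-style search. This is the standard practical form, not a verbatim restatement of the paper's bound.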

On the Robustness of Quality Measures for GANs

This work evaluates the robustness of quality measures of generative models such as Inception Score (IS) and Fréchet Inception Distance (FID) and shows that such metrics can also be manipulated by additive pixel perturbations.
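For reference, FID is the Fréchet distance between Gaussians fitted to feature embeddings of real and generated images, which is what makes it attackable through the pixels; a standard computation given pre-extracted features (the Inception feature extraction itself is omitted):

import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    # Frechet distance between N(mu1, S1) and N(mu2, S2) fitted to two
    # feature matrices (rows = samples, columns = feature dimensions).
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2).real  # drop numerical imaginary residue
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2 * covmean))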

DeformRS: Certifying Input Deformations with Randomized Smoothing

This work reformulates certification in the randomized smoothing setting for both general vector-field and parameterized deformations, proposing DeformRS-VF and DeformRS-Par, respectively, which scale to large networks on large input datasets.
...