(Certified!!) Adversarial Robustness for Free!

@article{Carlini2022CertifiedAR,
  title={(Certified!!) Adversarial Robustness for Free!},
  author={Nicholas Carlini and Florian Tram{\`e}r and Krishnamurthy Dvijotham and J. Zico Kolter},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.10550}
}
In this paper we show how to achieve state-of-the-art certified adversarial robustness to ℓ2-norm bounded perturbations by relying exclusively on off-the-shelf pretrained models. To do so, we instantiate the denoised smoothing approach of Salman et al. by combining a pretrained denoising diffusion probabilistic model and a standard high-accuracy classifier. This allows us to certify 71% accuracy on ImageNet under adversarial perturbations constrained to be within an ℓ2 norm of…
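
The abstract's pipeline can be sketched in a few lines: add Gaussian noise to the input, denoise it in one shot with a pretrained diffusion model, label the result with an off-the-shelf classifier, and repeat to obtain a smoothed prediction with a certified ℓ2 radius via the randomized smoothing bound of Cohen et al. (2019). The following is a minimal, hypothetical Python sketch, not the authors' released code; diffusion_denoise and classify are placeholder callables standing in for the two pretrained models.

# Minimal sketch of the denoised smoothing pipeline, assuming two
# user-supplied pretrained models (both names are placeholders):
#   diffusion_denoise(x_noisy, sigma) -> denoised image (one-shot denoising)
#   classify(x) -> integer class label from an off-the-shelf classifier
import numpy as np
from scipy.stats import norm, binomtest

def smoothed_predict_and_certify(x, diffusion_denoise, classify,
                                 sigma=0.5, n_samples=1000, alpha=0.001):
    # Monte Carlo estimate of the smoothed classifier
    # g(x) = argmax_c P[ classify(diffusion_denoise(x + N(0, sigma^2 I))) = c ].
    counts = {}
    for _ in range(n_samples):
        noisy = x + sigma * np.random.randn(*x.shape)  # Gaussian corruption
        label = classify(diffusion_denoise(noisy, sigma))
        counts[label] = counts.get(label, 0) + 1
    top_class, top_count = max(counts.items(), key=lambda kv: kv[1])
    # Clopper-Pearson lower confidence bound on the top-class probability.
    p_lower = binomtest(top_count, n_samples).proportion_ci(
        confidence_level=1 - alpha, method="exact").low
    if p_lower <= 0.5:
        return None, 0.0  # abstain: vote not confident enough to certify
    # Certified l2 radius from randomized smoothing (Cohen et al., 2019).
    return top_class, sigma * norm.ppf(p_lower)

Because both components are used as-is, the only free parameters are the noise level sigma (which trades clean accuracy against certified radius) and the number of Monte Carlo samples.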

SoK: Certified Robustness for Deep Neural Networks

This paper provides a taxonomy of robustness verification and training approaches, along with an open-source unified platform for evaluating 20+ representative verification approaches and their corresponding robust training methods on a wide range of DNNs.

CARE: Certifiably Robust Learning with Reasoning via Variational Inference

This paper proposes CARE, a certifiably robust learning-with-reasoning pipeline consisting of a learning component and a reasoning component, and approximates MLN inference via variational inference based on an efficient expectation-maximization algorithm.

PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D Point Cloud Recognition

This work identifies that adversarial training, the state-of-the-art empirical defense, is difficult to apply to 3D point cloud models due to gradient obfuscation, and proposes PointDP, a purification strategy that leverages diffusion models to defend against 3D adversarial attacks.