# Accelerating Certified Robustness Training via Knowledge Transfer

```bibtex
@article{Vaishnavi2022AcceleratingCR,
  title   = {Accelerating Certified Robustness Training via Knowledge Transfer},
  author  = {Pratik Vaishnavi and Kevin Eykholt and Amir Rahmati},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2210.14283}
}
```
Published 25 October 2022 · Computer Science · arXiv
Training deep neural network classifiers that are certifiably robust against adversarial attacks is critical to ensuring the security and reliability of AI-controlled systems. Although numerous state-of-the-art certified training methods have been developed, they are computationally expensive and scale poorly with respect to both dataset and network complexity. Widespread usage of certified training is further hindered by the fact that periodic retraining is necessary to incorporate new data and…
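To make the cost the abstract refers to concrete, here is a minimal sketch of interval bound propagation (IBP), one common certified-training primitive. This is an illustration only, not the paper's method: IBP pushes an $\ell_\infty$ input interval through each layer, and certified training must compute these bounds at every step, which is where the extra expense comes from. All names here (`ibp_linear`, the toy weights) are hypothetical.

```python
import numpy as np

def ibp_linear(W, b, lo, hi):
    """Propagate an input interval [lo, hi] through y = W x + b.

    IBP splits W into its positive and negative parts so that each
    output bound is computed against the worst-case input corner.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b  # lower bound on every output
    out_hi = W_pos @ hi + W_neg @ lo + b  # upper bound on every output
    return out_lo, out_hi

# Toy example: a single linear layer, input perturbed in an l_inf ball.
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.zeros(2)
x = np.array([1.0, 1.0])
eps = 0.1
lo, hi = ibp_linear(W, b, x - eps, x + eps)
# The clean output W @ x + b is guaranteed to lie inside [lo, hi].
```

Stacking such bound computations over every layer of a deep network, at every training iteration, is what makes certified training far slower than standard training.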
