# Towards Evaluating and Training Verifiably Robust Neural Networks

@article{Lyu2021TowardsEA,
title={Towards Evaluating and Training Verifiably Robust Neural Networks},
author={Zhaoyang Lyu and Minghao Guo and Tong Wu and Guodong Xu and Kehuan Zhang and Dahua Lin},
journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2021},
pages={4306-4315}
}
• Published 1 April 2021
Recent works have shown that interval bound propagation (IBP) can be used to train verifiably robust neural networks. Researchers observe an intriguing phenomenon on these IBP trained networks: CROWN, a bounding method based on tight linear relaxation, often gives very loose bounds on these networks. We also observe that most neurons become dead during the IBP training process, which could hurt the representation capability of the network. In this paper, we study the relationship between IBP and…
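The IBP scheme discussed in the abstract propagates an input interval layer by layer: an affine layer splits its weight matrix into positive and negative parts to get sound elementwise bounds, and monotone activations like ReLU pass the bounds through directly. A minimal sketch of this propagation (the function names and example shapes are illustrative, not from the paper):

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate interval bounds [l, u] through an affine layer W @ x + b.

    Positive weights map the lower bound to the lower bound; negative
    weights swap the roles of l and u.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_l = W_pos @ l + W_neg @ u + b
    new_u = W_pos @ u + W_neg @ l + b
    return new_l, new_u

def ibp_relu(l, u):
    """ReLU is elementwise monotone, so bounds pass through directly."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)
```

Any concrete input inside the original interval is guaranteed to produce activations inside the propagated interval; the looseness of these bounds on standard networks is what motivates IBP-specific training.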

## Figures and Tables from this paper

### IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound

• Computer Science
ArXiv
• 2022
It is shown that IBP-R obtains state-of-the-art verified robustness-accuracy trade-offs for small perturbations on CIFAR-10 while training faster than relevant previous work.

### Fast Certified Robust Training with Short Warmup

• Computer Science
NeurIPS
• 2021
This paper identifies two important issues in existing methods, namely exploded bounds at initialization, and the imbalance in ReLU activation states, and proposes three improvements that will mitigate these issues and conduct certified training with shorter warmup.

### Boosting the Certified Robustness of L-infinity Distance Nets

• Computer Science
ICLR
• 2022
It is shown that using the proposed training strategy, the certified accuracy of ℓ∞-distance nets can be dramatically improved from 33.30% to 40.06% on CIFAR-10, while outperforming other approaches in this area by a large margin.

### CerDEQ: Certifiable Deep Equilibrium Model

• Computer Science
ICML
• 2022
This work aims to tackle the problem of DEQ's certified training, and obtains the certifiable DEQ called CerDEQ, which can achieve state-of-the-art performance compared with models using regular convolution and linear layers on ℓ∞ tasks.

### Interval Bound Propagation-aided Few-shot Learning

• Computer Science
ArXiv
• 2022
This work introduces the notion of interval bounds from the provably robust training literature to few-shot learning, and proposes a novel strategy that artificially forms new training tasks by interpolating between the available tasks and their respective interval bounds, to aid in cases with a scarcity of tasks.

### On the Convergence of Certified Robust Training with Interval Bound Propagation

• Computer Science
ICLR
• 2022
It is shown that when using IBP training to train a randomly initialized two-layer ReLU neural network with logistic loss, gradient descent can linearly converge to zero robust training error with a high probability if the authors have sufficiently small perturbation radius and large network width.

### Robust Natural Language Processing: Recent Advances, Challenges, and Future Directions

• Computer Science
IEEE Access
• 2022
This paper presents a structured overview of NLP robustness research by summarizing the literature in a systematic way, and takes a deep dive into the various dimensions of robustness across techniques, metrics, embeddings, and benchmarks.

### SoK: Certified Robustness for Deep Neural Networks

• Computer Science
ArXiv
• 2020
This paper provides a taxonomy for the robustness verification and training approaches, and provides an open-sourced unified platform to evaluate 20+ representative verification and corresponding robust training approaches on a wide range of DNNs.

## References


### Towards Stable and Efficient Training of Verifiably Robust Neural Networks

• Computer Science
ICLR
• 2020
CROWN-IBP is computationally efficient and consistently outperforms IBP baselines on training verifiably robust neural networks, and outperforms all previous linear relaxation and bound propagation based certified defenses in $\ell_\infty$ robustness.

### Wide Residual Networks

• Computer Science
BMVC
• 2016
This paper conducts a detailed experimental study on the architecture of ResNet blocks and proposes a novel architecture in which the depth of residual networks is decreased and their width increased; the resulting network structures, called wide residual networks (WRNs), are far superior to their commonly used thin and very deep counterparts.

### Efficient Neural Network Robustness Certification with General Activation Functions

• Computer Science
NeurIPS
• 2018
This paper introduces CROWN, a general framework to certify robustness of neural networks with general activation functions for given input data points and facilitates the search for a tighter certified lower bound by adaptively selecting appropriate surrogates for each neuron activation.
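The linear relaxation behind CROWN can be illustrated for ReLU: given pre-activation bounds $[l, u]$, each neuron gets linear lower and upper bounds on $\mathrm{ReLU}(x)$; an unstable neuron ($l < 0 < u$) uses the chord from $(l, 0)$ to $(u, u)$ as the upper bound and an adaptively chosen slope for the lower bound. A small sketch of these per-neuron relaxation coefficients, assuming CROWN's adaptive slope choice as described in the paper (the function name is illustrative):

```python
def crown_relu_relaxation(l, u):
    """Return ((a_l, b_l), (a_u, b_u)) such that, for all x in [l, u],
    a_l * x + b_l <= relu(x) <= a_u * x + b_u.
    """
    if l >= 0.0:   # neuron always active: relu(x) = x exactly
        return (1.0, 0.0), (1.0, 0.0)
    if u <= 0.0:   # neuron always inactive: relu(x) = 0 exactly
        return (0.0, 0.0), (0.0, 0.0)
    # Unstable neuron: upper bound is the chord through (l, 0) and (u, u).
    a_u = u / (u - l)
    b_u = -a_u * l
    # Adaptive lower bound: slope 1 if u >= |l|, else slope 0,
    # minimizing the area of the relaxation.
    a_l = 1.0 if u >= -l else 0.0
    return (a_l, 0.0), (a_u, b_u)
```

Back-substituting such linear bounds through all layers is what yields CROWN's certified bounds; their looseness on IBP-trained networks is the phenomenon this paper investigates.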

### Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

• Computer Science
NeurIPS
• 2020
This work develops an automatic framework to enable perturbation analysis on any neural network structure, by generalizing existing LiRPA algorithms such as CROWN to operate on general computational graphs, and yields an open-source library for the community to apply LiRPA to areas beyond certified defense without much LiRPA expertise.

### Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming

• Computer Science
NeurIPS
• 2020
This work proposes a first-order dual SDP algorithm that requires memory only linear in the total number of network activations and only a fixed number of forward/backward passes through the network per iteration, enabling efficient use of hardware such as GPUs/TPUs.

### Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks

• Computer Science
ICML
• 2020
Two extensions of the PGD-attack overcoming failures due to suboptimal step size and problems of the objective function are proposed and combined with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.

### Improving Adversarial Robustness Requires Revisiting Misclassified Examples

• Computer Science
ICLR
• 2020
This paper proposes a new defense algorithm called MART, which explicitly differentiates the misclassified and correctly classified examples during the training, and shows that MART and its variant could significantly improve the state-of-the-art adversarial robustness.

### Overfitting in adversarially robust deep learning

• Computer Science
ICML
• 2020
It is found that overfitting to the training set does in fact harm robust performance to a very large degree in adversarially robust training across multiple datasets (SVHN, CIFAR-10, CIFAR-100, and ImageNet) and perturbation models.

### Fastened CROWN: Tightened Neural Network Robustness Certificates

• Computer Science
AAAI
• 2020
This work demonstrates the optimality of deterministic CROWN (Zhang et al. 2018) solutions in a given linear programming problem under mild constraints and proposes an optimization-based approach FROWN (Fastened CROWN): a general algorithm to tighten robustness certificates for neural networks.

### Scalable Verified Training for Provably Robust Image Classification

• Computer Science
2019 IEEE/CVF International Conference on Computer Vision (ICCV)
• 2019
This work shows how a simple bounding technique, interval bound propagation (IBP), can be exploited to train large provably robust neural networks that beat the state-of-the-art in verified accuracy, and allows the largest model to be verified beyond vacuous bounds on a downscaled version of ImageNet.