Towards Evaluating and Training Verifiably Robust Neural Networks

@inproceedings{Lyu2021TowardsEA,
  title={Towards Evaluating and Training Verifiably Robust Neural Networks},
  author={Zhaoyang Lyu and Minghao Guo and Tong Wu and Guodong Xu and Kehuan Zhang and Dahua Lin},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={4306--4315}
}
Recent works have shown that interval bound propagation (IBP) can be used to train verifiably robust neural networks. Researchers observe an intriguing phenomenon on these IBP-trained networks: CROWN, a bounding method based on tight linear relaxation, often gives very loose bounds on them. We also observe that most neurons become dead during the IBP training process, which could hurt the representation capability of the network. In this paper, we study the relationship between IBP and…
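
To make the bounding scheme concrete, here is a minimal sketch of interval bound propagation through one affine layer followed by a ReLU. This is an illustrative NumPy toy, not the paper's implementation; the layer sizes and variable names are assumptions.

    import numpy as np

    def ibp_affine(l, u, W, b):
        """Propagate an input box [l, u] through x -> W @ x + b."""
        c, r = (u + l) / 2, (u - l) / 2   # center and radius of the box
        c_out = W @ c + b
        r_out = np.abs(W) @ r             # radius grows with |W|
        return c_out - r_out, c_out + r_out

    def ibp_relu(l, u):
        """Propagate the box through an elementwise ReLU."""
        return np.maximum(l, 0), np.maximum(u, 0)

    # Toy example: one hidden layer on a 3-D input with an l_inf ball of radius eps.
    rng = np.random.default_rng(0)
    x, eps = rng.normal(size=3), 0.1
    W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
    l, u = ibp_affine(x - eps, x + eps, W1, b1)
    # "Dead" neurons in the abstract's sense: pre-activation upper bound <= 0,
    # so the ReLU output is identically zero for every input in the ball.
    print("dead neurons:", (u <= 0).sum())
    l, u = ibp_relu(l, u)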

Citations

IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound

It is shown that IBP-R obtains state-of-the-art verified robustness-accuracy trade-offs for small perturbations on CIFAR-10 while training faster than relevant previous work.

Fast Certified Robust Training with Short Warmup

This paper identifies two important issues in existing methods, namely exploded bounds at initialization and an imbalance in ReLU activation states, and proposes three improvements that mitigate these issues and enable certified training with a shorter warmup.

Boosting the Certified Robustness of L-infinity Distance Nets

It is shown that using the proposed training strategy, the certified accuracy of ℓ∞-distance nets can be dramatically improved from 33.30% to 40.06% on CIFAR-10, outperforming other approaches in this area by a large margin.

CerDEQ: Certifiable Deep Equilibrium Model

This work tackles the problem of certified training for deep equilibrium models (DEQs) and obtains a certifiable DEQ, called CerDEQ, which achieves state-of-the-art performance on ℓ∞ tasks compared with models using regular convolutional and linear layers.

Interval Bound Propagation-aided Few-shot Learning

This work brings the notion of interval bounds from the provably robust training literature to few-shot learning, and introduces a novel strategy that artificially forms new training tasks by interpolating between the available tasks and their respective interval bounds, to aid in cases where tasks are scarce.

On the Convergence of Certified Robust Training with Interval Bound Propagation

It is shown that when using IBP training to train a randomly initialized two-layer ReLU neural network with logistic loss, gradient descent can linearly converge to zero robust training error with high probability, provided the perturbation radius is sufficiently small and the network width is sufficiently large.

Robust Natural Language Processing: Recent Advances, Challenges, and Future Directions

This paper presents a structured overview of NLP robustness research, summarizing the literature in a systematic way and taking a deep dive into the dimensions of robustness across techniques, metrics, embeddings, and benchmarks.

SoK: Certified Robustness for Deep Neural Networks

This paper provides a taxonomy of robustness verification and training approaches, along with an open-sourced unified platform to evaluate 20+ representative verification methods and their corresponding robust training approaches on a wide range of DNNs.

References

Showing 1-10 of 33 references

Towards Stable and Efficient Training of Verifiably Robust Neural Networks

CROWN-IBP is computationally efficient, consistently outperforms IBP baselines on training verifiably robust neural networks, and outperforms all previous linear relaxation and bound propagation based certified defenses in $\ell_\infty$ robustness.
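
For context, the CROWN-IBP training objective mixes natural and robust cross-entropy losses, with the robust margin lower bound taken as a convex combination of the cheap IBP bound and the tighter CROWN-IBP bound. The form below is reconstructed from that paper's description and should be read as a sketch of its scheme, not a quotation:

    \underline{m}(x,\epsilon) = (1-\beta)\,\underline{m}_{\mathrm{IBP}}(x,\epsilon) + \beta\,\underline{m}_{\mathrm{CROWN\text{-}IBP}}(x,\epsilon)
    L = \kappa\,\ell\big(z(x),\,y\big) + (1-\kappa)\,\ell\big(-\underline{m}(x,\epsilon),\,y\big)

Here $\ell$ is cross-entropy, $z(x)$ are the clean logits, and $\beta$ and $\kappa$ are scheduled during the $\epsilon$ ramp-up so that training starts from the tighter bound and ends on pure IBP.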

Wide Residual Networks

This paper conducts a detailed experimental study on the architecture of ResNet blocks and proposes a novel architecture where the depth of residual networks is decreased and their width increased; the resulting network structures, called wide residual networks (WRNs), are far superior to their commonly used thin and very deep counterparts.

Efficient Neural Network Robustness Certification with General Activation Functions

This paper introduces CROWN, a general framework to certify robustness of neural networks with general activation functions for given input data points, and facilitates the search for tighter certified lower bounds by adaptively selecting appropriate linear surrogates for each neuron activation.
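
As a concrete illustration of the linear relaxation CROWN builds on, an unstable ReLU with pre-activation bounds $l < 0 < u$ is sandwiched between two lines (this is the standard form of the relaxation, stated here from general knowledge rather than quoted from the paper):

    \alpha x \;\le\; \mathrm{ReLU}(x) \;\le\; \frac{u}{u-l}\,(x-l), \qquad x \in [l,u],\ \alpha \in [0,1]

The upper line is the chord through $(l,0)$ and $(u,u)$; the adaptive selection mentioned above amounts to choosing the lower-line slope per neuron (e.g., $\alpha=1$ when $u \ge |l|$ and $\alpha=0$ otherwise) so the final certified bound is tighter.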

Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

This work develops an automatic framework to enable perturbation analysis on any neural network structure by generalizing existing LiRPA algorithms such as CROWN to operate on general computational graphs, and yields an open-source library for the community to apply LiRPA to areas beyond certified defense without much LiRPA expertise.
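
The open-source library referenced here is auto_LiRPA; a minimal usage sketch follows. The calls are written from memory of its public documentation, so check the repository for the current interface; the toy network and eps value are assumptions.

    import torch
    from auto_LiRPA import BoundedModule, BoundedTensor
    from auto_LiRPA.perturbations import PerturbationLpNorm

    # Any PyTorch model; a tiny MLP is used here purely for illustration.
    net = torch.nn.Sequential(
        torch.nn.Linear(784, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
    x = torch.randn(1, 784)

    # Wrap the model so bounds can be propagated through its computational graph.
    model = BoundedModule(net, torch.empty_like(x))
    # Define an l_inf perturbation ball of radius eps around the input.
    ptb = PerturbationLpNorm(norm=float("inf"), eps=0.03)
    bounded_x = BoundedTensor(x, ptb)
    # method can be "IBP", "CROWN", or "CROWN-IBP", among others.
    lb, ub = model.compute_bounds(x=(bounded_x,), method="CROWN")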

Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming

A first-order dual SDP algorithm is proposed that requires memory only linear in the total number of network activations and only a fixed number of forward/backward passes through the network per iteration, enabling efficient use of hardware like GPUs/TPUs.

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks

Two extensions of the PGD attack are proposed that overcome failures due to suboptimal step sizes and problems with the objective function; combined with two complementary existing attacks, they form a parameter-free, computationally affordable, and user-independent ensemble of attacks for testing adversarial robustness.

Improving Adversarial Robustness Requires Revisiting Misclassified Examples

This paper proposes a new defense algorithm called MART, which explicitly differentiates between misclassified and correctly classified examples during training, and shows that MART and its variant can significantly improve state-of-the-art adversarial robustness.

Overfitting in adversarially robust deep learning

It is found that overfitting to the training set does in fact harm robust performance to a very large degree in adversarially robust training across multiple datasets (SVHN, CIFAR-10, CIFAR-100, and ImageNet) and perturbation models.

Fastened CROWN: Tightened Neural Network Robustness Certificates

This work demonstrates the optimality of deterministic CROWN (Zhang et al. 2018) solutions in a given linear programming problem under mild constraints and proposes an optimization-based approach FROWN (Fastened CROWN): a general algorithm to tighten robustness certificates for neural networks.

Scalable Verified Training for Provably Robust Image Classification

This work shows how a simple bounding technique, interval bound propagation (IBP), can be exploited to train large provably robust neural networks that beat the state of the art in verified accuracy, and allows the largest model to be verified beyond vacuous bounds on a downscaled version of ImageNet.
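
The training objective in that line of work mixes a natural loss with a robust loss evaluated on the worst-case logits obtained from IBP. The form below is reconstructed from the paper's description (Gowal et al.), so treat it as a sketch:

    L = \kappa\,\ell\big(z(x),\,y\big) + (1-\kappa)\,\ell\big(\hat{z}(x,\epsilon),\,y\big),
    \qquad \hat{z}_k = \begin{cases}\underline{z}_k & k = y\\ \overline{z}_k & k \neq y\end{cases}

Here $\underline{z},\overline{z}$ are the IBP lower/upper bounds on the logits, so $\hat{z}$ pessimistically lowers the true-class logit and raises all others; both $\kappa$ and the perturbation radius $\epsilon$ are scheduled over the course of training.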