• Corpus ID: 53112003

On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models

@article{Gowal2018OnTE,
title={On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models},
author={Sven Gowal and Krishnamurthy Dvijotham and Robert Stanforth and Rudy Bunel and Chongli Qin and Jonathan Uesato and Relja Arandjelovi{\'c} and Timothy A. Mann and Pushmeet Kohli},
journal={ArXiv},
year={2018},
volume={abs/1810.12715}
}
• Published 30 October 2018
• Computer Science
• ArXiv
Recent work has shown that it is possible to train deep neural networks that are provably robust to norm-bounded adversarial perturbations. Most of these methods are based on minimizing an upper bound on the worst-case loss over all possible adversarial perturbations. While these techniques show promise, they often result in difficult optimization procedures that remain hard to scale to larger networks. Through a comprehensive analysis, we show how a simple bounding technique, interval bound propagation (IBP), can be exploited to train large provably robust neural networks that beat the state-of-the-art in verified accuracy.
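The interval arithmetic behind IBP is simple enough to sketch. Below is a minimal NumPy illustration of propagating elementwise lower/upper bounds through an affine layer and a ReLU; the function names and the toy weights are ours, not from the paper.

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate interval bounds through an affine layer y = W x + b."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst case per output coordinate
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lower, upper):
    """ReLU is monotone, so bounds propagate elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Toy example: one hidden layer under an L-infinity perturbation of radius eps.
x = np.array([1.0, -1.0])
eps = 0.1
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.0, 0.5])
l, u = ibp_affine(x - eps, x + eps, W, b)
l, u = ibp_relu(l, u)
```

If the resulting bounds on the logit differences certify the correct class for every point in the interval, the network is verifiably robust at `x`; the paper's contribution is showing that training against this (loose-looking) bound makes it tight in practice.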
342 Citations

Figures and Tables from this paper

Expected Tight Bounds for Robust Training

• Computer Science
• 2019
Expected tight bounds (ETB) are proposed, which are provably tighter than IBP bounds in expectation and can be extended to deeper networks through blockwise propagation; they are shown to achieve orders-of-magnitude tighter bounds than IBP.

Probabilistically True and Tight Bounds for Robust Deep Neural Network Training

• Computer Science
ArXiv
• 2019
With such tight bounds, it is demonstrated that a simple standard training procedure can achieve the best robustness-accuracy trade-off across several architectures on both MNIST and CIFAR10.

Verifiably Robust Neural Networks

• Computer Science
• 2019
This paper proposes a new certified adversarial training method, CROWN-IBP, which combines the fast IBP bounds in a forward bounding pass with a tight linear-relaxation-based bound, CROWN, in a backward bounding pass; it is computationally efficient and consistently outperforms IBP baselines on training verifiably robust neural networks.

Towards Stable and Efficient Training of Verifiably Robust Neural Networks

• Computer Science
ICLR
• 2020
CROWN-IBP is computationally efficient, consistently outperforms IBP baselines on training verifiably robust neural networks, and outperforms all previous linear relaxation and bound propagation based certified defenses in $\ell_\infty$ robustness.

Fast and Stable Interval Bounds Propagation for Training Verifiably Robust Models

• Computer Science
ESANN
• 2020
An efficient technique is presented that allows training classification networks which are verifiably robust against norm-bounded adversarial attacks and is less sensitive to the exact specification of the training process, making it easier for practitioners to use.

Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples

• Computer Science
NeurIPS
• 2021
This work designs a new certifiable training method that achieves decent performance under a wide range of perturbations, whereas methods with only one of the two identified factors perform well only for a specific range of perturbations.

IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound

• Computer Science
ArXiv
• 2022
It is shown that IBP-R obtains state-of-the-art verified robustness-accuracy trade-offs for small perturbations on CIFAR-10 while training faster than relevant previous work.

Robustness Certificates Against Adversarial Examples for ReLU Networks

• Computer Science
ArXiv
• 2019
This paper proposes attack-agnostic robustness certificates for a multi-label classification problem using a deep ReLU network; the certificate has a closed form, is differentiable, and is an order of magnitude faster to compute than existing methods, even for deep networks.

On Pruning Adversarially Robust Neural Networks

• Computer Science
ArXiv
• 2020
It is shown that integrating existing pruning techniques with multiple types of robust training techniques, including verifiably robust training, leads to poor robust accuracy even though such techniques can preserve high regular accuracy.

Adversarial Training and Provable Defenses: Bridging the Gap

• Computer Science
ICLR
• 2020
This work proposes a new method to train neural networks based on a novel combination of adversarial training and provable defenses which produces a model with state-of-the-art accuracy and certified robustness on the challenging CIFAR-10 dataset with a 2/255 L-infinity perturbation.

References

Showing 1–10 of 33 references

Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope

• Computer Science
ICML
• 2018
A method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations, and it is shown that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss.

Scaling Provable Adversarial Defenses

• Computer Science
NeurIPS
• 2018
This paper presents a technique for extending these training procedures to much more general networks, with skip connections and general nonlinearities, and shows how to further improve robust error through cascade models.

Evaluating Robustness of Neural Networks with Mixed Integer Programming

• Computer Science
ICLR
• 2019
Verification of piecewise-linear neural networks is formulated as a mixed integer program that is able to certify more samples than the state of the art and to find more adversarial examples than a strong first-order attack for every network.

Training verified learners with learned verifiers

• Computer Science
ArXiv
• 2018
Experiments show that the predictor-verifier architecture is able to train networks that achieve state-of-the-art verified robustness to adversarial examples with much shorter training times, and that it can be scaled to produce the first known verifiably robust networks for CIFAR-10.

Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability

• Computer Science
ICLR
• 2019
It is demonstrated that improving weight sparsity alone already turns computationally intractable verification problems into tractable ones, and that improving ReLU stability leads to an additional 4–13x speedup in verification times.

A Dual Approach to Scalable Verification of Deep Networks

• Computer Science
UAI
• 2018
This paper addresses the problem of formally verifying desirable properties of neural networks by formulating verification as an optimization problem and solving a Lagrangian relaxation of it to obtain an upper bound on the worst-case violation of the specification being verified.

Towards Fast Computation of Certified Robustness for ReLU Networks

• Computer Science
ICML
• 2018
It is shown that, in fact, there is no polynomial time algorithm that can approximately find the minimum adversarial distortion of a ReLU network with a $0.99\ln n$ approximation ratio unless $\mathsf{NP} = \mathsf{P}$, where $n$ is the number of neurons in the network.

MixTrain: Scalable Training of Formally Robust Neural Networks

• Computer Science
ArXiv
• 2018
Stochastic robust approximation and dynamic mixed training are proposed to drastically improve the efficiency of verifiably robust training without sacrificing verified robustness, and MixTrain can achieve up to 95.2% verified robust accuracy against norm-bounded attackers.

Towards Evaluating the Robustness of Neural Networks

• Computer Science
2017 IEEE Symposium on Security and Privacy (SP)
• 2017
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.

Verifying Neural Networks with Mixed Integer Programming

• Computer Science
ArXiv
• 2017
It is demonstrated that, for networks that are piecewise affine (for example, deep networks with ReLU and maxpool units), proving no adversarial example exists can be naturally formulated as solving a mixed integer program.
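For concreteness, the standard big-$M$ encoding of a ReLU $y = \max(x, 0)$ with known pre-activation bounds $l \le x \le u$ (the widely used formulation in this line of work, not quoted from the paper) introduces one binary variable $a$ per neuron:

$$y \ge 0, \qquad y \ge x, \qquad y \le u\,a, \qquad y \le x - l\,(1 - a), \qquad a \in \{0, 1\}.$$

Setting $a = 0$ forces $y = 0$ (the inactive phase, with $x \le 0$), while $a = 1$ forces $y = x$ (the active phase); tighter bounds $l, u$, such as those from interval bound propagation, directly shrink the feasible region the MIP solver must explore.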