Corpus ID: 238744365

# Boosting the Certified Robustness of L-infinity Distance Nets

@article{Zhang2021BoostingTC,
  title={Boosting the Certified Robustness of L-infinity Distance Nets},
  author={Bohang Zhang and Du Jiang and Di He and Liwei Wang},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.06850}
}
Recently, Zhang et al. (2021) developed a new neural network architecture based on ℓ∞-distance functions, which naturally possesses certified ℓ∞ robustness by its construction. Despite rigorous theoretical guarantees, the model has so far only achieved performance comparable to conventional networks. In this paper, we make the following two contributions: (i) We demonstrate that ℓ∞-distance nets enjoy a fundamental advantage in certified robustness over conventional networks (under typical…
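The certification argument in the abstract can be illustrated with a minimal sketch. An ℓ∞-distance neuron computes y = ‖x − w‖∞ + b, which is 1-Lipschitz with respect to the ℓ∞ norm, so input perturbations cannot move the output by more than their size. This is an illustrative toy, not the paper's actual architecture or training procedure:

```python
import numpy as np

def linf_neuron(x, w, b=0.0):
    # A single l-infinity-distance unit: y = ||x - w||_inf + b.
    # By the reverse triangle inequality, |y(x + d) - y(x)| <= ||d||_inf,
    # i.e. the unit is 1-Lipschitz in the l-infinity norm.
    return np.max(np.abs(x - w)) + b

rng = np.random.default_rng(0)
w = rng.normal(size=8)          # illustrative weights, not from the paper
x = rng.normal(size=8)
eps = 0.05
delta = rng.uniform(-eps, eps, size=8)  # any perturbation with ||delta||_inf <= eps

shift = abs(linf_neuron(x + delta, w) - linf_neuron(x, w))
# shift <= eps always holds; for a network built from such units, a margin
# of m between the top two logits therefore certifies an l-infinity radius
# of m / 2 with no extra verification machinery.
```

The 1-Lipschitz property composes across layers, which is why certification for these nets reduces to checking the prediction margin.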
## 1 Citation

SoK: Certified Robustness for Deep Neural Networks
• Computer Science, Mathematics • ArXiv • 2020
This paper provides a taxonomy for the robustness verification and training approaches, and provides an open-sourced unified platform to evaluate 20+ representative verification and corresponding robust training approaches on a wide range of DNNs.

## References

Showing 1–10 of 67 references
Towards Stable and Efficient Training of Verifiably Robust Neural Networks
• Computer Science, Mathematics • ICLR • 2020
CROWN-IBP is computationally efficient, consistently outperforms IBP baselines on training verifiably robust neural networks, and outperforms all previous linear relaxation and bound propagation based certified defenses in $\ell_\infty$ robustness.
Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability
• Computer Science, Mathematics • ICLR • 2019
It is demonstrated that improving weight sparsity alone already enables us to turn computationally intractable verification problems into tractable ones and improving ReLU stability leads to an additional 4-13x speedup in verification times.
On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models
This work shows how a simple bounding technique, interval bound propagation (IBP), can be exploited to train large provably robust neural networks that beat the state of the art in verified accuracy, and it allows the largest model to be verified beyond vacuous bounds on a downscaled version of ImageNet.
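The "simple bounding technique" can be sketched in a few lines: IBP pushes an axis-aligned box through each layer, using the center/radius form for affine maps and monotonicity for ReLU. The weights and layer sizes below are illustrative, not from the paper:

```python
import numpy as np

def ibp_linear(l, u, W, b):
    # Propagate elementwise bounds [l, u] through x -> W x + b.
    mu = (u + l) / 2.0        # box center
    r = (u - l) / 2.0         # box radius (per coordinate)
    mu_out = W @ mu + b
    r_out = np.abs(W) @ r     # worst-case growth of each output radius
    return mu_out - r_out, mu_out + r_out

def ibp_relu(l, u):
    # ReLU is monotone, so it maps interval endpoints to interval endpoints.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Bound the outputs over an l-infinity ball of radius eps around x.
x = np.array([0.5, -0.2])
eps = 0.1
W1 = np.array([[1.0, -1.0],
               [0.5,  2.0]])
b1 = np.array([0.0, -0.1])

l, u = ibp_linear(x - eps, x + eps, W1, b1)
l, u = ibp_relu(l, u)
# [l, u] now provably contains every reachable activation for that input ball.
```

Repeating the two steps layer by layer yields sound (if loose) output bounds for the whole network, which is what makes IBP cheap enough to use inside the training loop.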
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
• Computer Science, Mathematics • NeurIPS • 2019
This paper unifies all existing LP-relaxed verifiers, to the best of the authors' knowledge, under a general convex relaxation framework that works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of robustness verification.
Skew Orthogonal Convolutions
• Computer Science • ICML • 2021
SOC allows us to train provably Lipschitz, large convolutional neural networks significantly faster than prior work while achieving notable improvements in both standard and certified robust accuracy.
A Dual Approach to Scalable Verification of Deep Networks
• Computer Science, Mathematics • UAI • 2018
This paper addresses the problem of formally verifying desirable properties of neural networks by formulating verification as an optimization problem and solving a Lagrangian relaxation of it to obtain an upper bound on the worst-case violation of the specification being verified.
Provable Robustness of ReLU networks via Maximization of Linear Regions
• Computer Science, Mathematics • AISTATS • 2019
A regularization scheme for ReLU networks is proposed which provably improves the robustness of the classifier by maximizing the linear regions of the classifier as well as the distance to the decision boundary.
L2-Nonexpansive Neural Networks
• Computer Science • ICLR • 2019
Without needing any adversarial training, the proposed classifiers exceed the state of the art in robustness against white-box L2-bounded adversarial attacks and generalize better than ordinary networks from noisy data with partially random labels.
Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks
• Computer Science, Mathematics • NeurIPS • 2018
From the relationship between Lipschitz constants and prediction margins, a computationally efficient technique is presented to lower-bound the size of adversarial perturbations that can deceive networks; it is widely applicable to various complicated networks.
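The margin-based bound behind this line of work can be sketched in its simplest form: if the input-to-logits map is L-Lipschitz, a perturbation of size ε moves each logit by at most Lε, so the prediction cannot flip while ε < margin / (2L). This is the generic version of the idea; the paper derives its own (tighter) constant, which differs from the one below:

```python
import numpy as np

def certified_radius(logits, lip):
    # Generic Lipschitz-margin bound: an L-Lipschitz logit map shifts each
    # logit by at most L * eps under a perturbation of size eps, so the
    # top-1 prediction is provably stable for eps < margin / (2 * L).
    top2 = np.sort(logits)[-2:]
    margin = top2[1] - top2[0]
    return margin / (2.0 * lip)

# Illustrative numbers, not from the paper: margin = 2.0 - 0.5 = 1.5,
# Lipschitz constant L = 3.0, so the certified radius is 1.5 / 6 = 0.25.
logits = np.array([2.0, 0.5, -1.0])
radius = certified_radius(logits, lip=3.0)
```

The appeal of this bound is that it needs only a forward pass and a (pre-computed) Lipschitz constant, so certification costs essentially nothing at inference time.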
Efficient Neural Network Robustness Certification with General Activation Functions
• Computer Science, Mathematics • NeurIPS • 2018
This paper introduces CROWN, a general framework to certify robustness of neural networks with general activation functions for given input data points and facilitates the search for a tighter certified lower bound by adaptively selecting appropriate surrogates for each neuron activation.