Corpus ID: 240354214

Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Robustness Verification

@inproceedings{Wang2021BetaCROWNEB,
  title={Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Robustness Verification},
  author={Shiqi Wang and Huan Zhang and Kaidi Xu and Xue Lin and Suman Sekhar Jana and Cho-Jui Hsieh and J. Zico Kolter},
  year={2021}
}
Bound propagation based incomplete neural network verifiers such as CROWN are very efficient and can significantly accelerate branch-and-bound (BaB) based complete verification of neural networks. However, bound propagation cannot fully handle the neuron split constraints introduced by BaB, which are otherwise handled by expensive linear programming (LP) solvers, leading to loose bounds and hurting verification efficiency. In this work, we develop β-CROWN, a new bound propagation based method that can… 
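The abstract's key idea, augmenting CROWN-style backward bound propagation with Lagrange multipliers (β) for the per-neuron split constraints imposed by BaB, can be illustrated with a minimal sketch. This is not the authors' implementation: the one-hidden-layer setting, the NumPy formulation, and all names (beta_crown_lower_bound, W1, b1, split, beta, the precomputed bounds l and u) are assumptions made for illustration.

    import numpy as np

    def beta_crown_lower_bound(W1, b1, W2, b2, x_l, x_u, l, u, split, beta):
        """Lower-bound the scalar output W2 @ relu(W1 @ x + b1) + b2 over the input
        box x_l <= x <= x_u, intersected with per-neuron split constraints.
        l, u     : precomputed pre-activation bounds, l <= W1 @ x + b1 <= u
        split[j] : +1 (branch z_j >= 0), -1 (branch z_j <= 0), 0 (not split)
        beta[j]  : Lagrange multiplier for neuron j's split constraint, beta[j] >= 0
        """
        c = np.asarray(W2, dtype=float)          # coefficients on the ReLU outputs
        D = np.zeros_like(c)                     # per-neuron relaxation slopes
        bias = np.zeros_like(c)                  # per-neuron relaxation intercepts
        for j in range(len(c)):
            if split[j] == +1 or l[j] >= 0:      # split-active or provably active: relu(z) = z
                D[j] = 1.0
            elif split[j] == -1 or u[j] <= 0:    # split-inactive or provably inactive: relu(z) = 0
                D[j] = 0.0
            elif c[j] >= 0:                      # unstable, lower relaxation: any slope in [0, 1] is sound
                D[j] = 1.0 if u[j] >= -l[j] else 0.0   # CROWN's adaptive slope choice
            else:                                # unstable, upper side of the triangle relaxation
                D[j] = u[j] / (u[j] - l[j])
                bias[j] = -u[j] * l[j] / (u[j] - l[j])

        # Split constraints enter as a Lagrangian term on the pre-activations:
        # -beta_j * z_j for a z_j >= 0 split, +beta_j * z_j for a z_j <= 0 split.
        # Both terms are <= 0 on the corresponding branch, so any beta >= 0 keeps the bound sound.
        a = c * D - np.asarray(beta, dtype=float) * np.asarray(split, dtype=float)
        const = float(b2) + c @ bias

        # Substitute z = W1 @ x + b1 and concretize the linear bound over the input box.
        A = a @ W1
        const += a @ b1
        return const + np.sum(np.where(A >= 0, A * x_l, A * x_u))

With beta = 0 this reduces to a plain CROWN-style bound in which the splits only tighten the per-neuron relaxation; because the bound remains valid for any beta >= 0, the multipliers can be tightened by projected gradient ascent, which is the role β plays inside the BaB loop sketched in the abstract.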

Safety Guarantees for Neural Network Dynamic Systems via Stochastic Barrier Functions
TLDR
A method for providing safety guarantees for NNDMs is introduced based on stochastic barrier functions, whose relation to safety is analogous to that of Lyapunov functions to stability; the convexity of the barrier function is exploited to formulate the optimal control synthesis problem as a linear program.

References

Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition
TLDR
The scalability of Branch and Bound algorithms for formally proving input-output properties of neural networks is improved, and a novel activation-based branching strategy is presented, which greatly reduces the size of the BaB tree compared to previous heuristic methods.
Lagrangian Decomposition for Neural Network Verification
TLDR
A novel approach based on Lagrangian Decomposition is proposed, which admits an efficient supergradient ascent algorithm, as well as an improved proximal algorithm that results in an overall speed-up when employing the bounds for formal verification.
Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers
TLDR
This paper proposes using backward-mode linear relaxation based perturbation analysis (LiRPA) in place of LP during the BaB process; LiRPA can be efficiently implemented on typical machine learning accelerators such as GPUs and TPUs and demonstrates an order-of-magnitude speedup over existing LP-based approaches.
The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification
TLDR
This work improves the effectiveness of propagation- and linear-optimization-based neural network verification algorithms with a new tightened convex relaxation for ReLU neurons that considers the multivariate input space of the affine pre-activation function preceding the ReLU.
Towards Stable and Efficient Training of Verifiably Robust Neural Networks
TLDR
CROWN-IBP is computationally efficient and consistently outperforms IBP baselines on training verifiably robust neural networks, and it outperforms all previous linear relaxation and bound propagation based certified defenses in $\ell_\infty$ robustness.
Branch and Bound for Piecewise Linear Neural Network Verification
TLDR
A family of algorithms based on Branch-and-Bound (BaB) is presented, identifying new methods that combine the strengths of multiple existing approaches, achieving significant performance improvements over the previous state of the art, and introducing an effective branching strategy on ReLU non-linearities.
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
TLDR
This paper unifies all existing LP-relaxed verifiers, to the best of the authors' knowledge, under a general convex relaxation framework, which works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of robustness verification.
Strong convex relaxations and mixed-integer programming formulations for trained neural networks
TLDR
This work provides convex relaxations for networks with many of the most popular nonlinear operations that are strictly stronger than other approaches from the literature, and corroborates this computationally on image classification verification tasks on the MNIST digit data set.
Neural Network Branching for Neural Network Verification
TLDR
This work proposes a novel framework for designing an effective branching strategy for BaB, and learns a graph neural network (GNN) to imitate the strong branching heuristic behaviour.
Precise Multi-Neuron Abstractions for Neural Network Certification
TLDR
The results show that PRIMA is significantly more precise than the state-of-the-art, verifying robustness for up to 16%, 30%, and 34% more images than prior work on ReLU-, Sigmoid-, and Tanh-based networks, respectively.
...