Shared Certificates for Neural Network Verification

@inproceedings{Sprecher2022SharedCF,
  title={Shared Certificates for Neural Network Verification},
  author={Christian Sprecher and Marc Fischer and Dimitar I. Dimitrov and Gagandeep Singh and Martin T. Vechev},
  booktitle={International Conference on Computer Aided Verification},
  year={2022}
}
Existing neural network verifiers compute a proof that each input is handled correctly under a given perturbation by propagating a convex set of reachable values at each layer. This process is repeated independently for each input (e.g., image) and perturbation (e.g., rotation), leading to an expensive overall proof effort when handling an entire dataset. In this work we introduce a new method for reducing this verification cost based on the key insight that convex sets obtained at intermediate… 
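The layer-by-layer propagation of a convex set described in the abstract can be illustrated in its simplest form, interval bound propagation, where the set at each layer is an axis-aligned box. The two-layer network and weights below are invented for illustration and are not from the paper:

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Soundly propagate the box [lo, hi] through x -> W @ x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # each output interval widens by |W| times the input radius
    return new_center - new_radius, new_center + new_radius

def relu_bounds(lo, hi):
    """ReLU is monotone, so applying it to both endpoints stays sound."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy two-layer ReLU network (illustrative weights)
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]]);              b2 = np.zeros(1)

# Perturbation region around one input: x in [0.9, 1.1] x [-0.1, 0.1]
lo, hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])
lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(lo, hi, W2, b2)
print(lo, hi)  # sound lower/upper bounds on the network output over the whole region
```

The paper's observation is that the intermediate boxes computed here are recomputed from scratch for every input and perturbation; when they overlap across proofs, that work can be shared.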

Modular verification of neural networks

  • Computer Science
  • 2022
Although the verification problem for ReLU-NNs is trivially decidable by enumerating all affine regions, it is unfortunately NP-complete [6].

References

Showing 1-10 of 47 references

A Dual Approach to Scalable Verification of Deep Networks

This paper addresses the problem of formally verifying desirable properties of neural networks by formulating verification as an optimization problem and solving a Lagrangian relaxation of it to obtain an upper bound on the worst-case violation of the specification being verified.

An abstract domain for certifying neural networks

This work proposes a new abstract domain that combines floating-point polyhedra with intervals and is equipped with abstract transformers specifically tailored to neural networks, including new transformers for affine transforms, the rectified linear unit, sigmoid, tanh, and maxpool functions.
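The ReLU transformer in such a domain replaces the nonlinearity with linear lower and upper bounds that depend on the neuron's pre-activation interval. A minimal sketch of this relaxation, following the standard triangle/chord construction (the area-based slope choice is an assumption modeled on the DeepPoly-style heuristic, not an exact reproduction of the paper's transformer):

```python
def relu_relaxation(l, u):
    """Linear bounds for y = relu(x) when x is known to lie in [l, u].

    Returns (lower_slope, lower_intercept, upper_slope, upper_intercept)
    such that lower_slope*x + lower_intercept <= relu(x) <= upper_slope*x + upper_intercept
    holds for all x in [l, u].
    """
    if l >= 0:   # neuron always active: relu(x) = x exactly
        return 1.0, 0.0, 1.0, 0.0
    if u <= 0:   # neuron always inactive: relu(x) = 0 exactly
        return 0.0, 0.0, 0.0, 0.0
    # Unstable neuron: upper bound is the chord through (l, 0) and (u, u)
    up_slope = u / (u - l)
    up_icpt = -up_slope * l
    # Lower bound y >= lam*x with lam in {0, 1}, picking the smaller-area option
    low_slope = 1.0 if u > -l else 0.0
    return low_slope, 0.0, up_slope, up_icpt
```

For example, a neuron with pre-activation bounds [-1, 2] gets the upper bound y <= (2/3)x + 2/3, which touches relu exactly at both endpoints.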

Efficient Neural Network Verification with Exactness Characterization

This work develops the first known sufficient conditions under which a polynomial time verification algorithm is guaranteed to perform exact verification of neural networks, and can be implemented using primitives available readily in common deep learning frameworks.

DeepAbstract: Neural Network Abstraction for Accelerating Verification

This work introduces an abstraction framework applicable to fully-connected feed-forward neural networks based on clustering of neurons that behave similarly on some inputs, and shows how the abstraction reduces the size of the network, while preserving its accuracy.

Efficient Neural Network Robustness Certification with General Activation Functions

This paper introduces CROWN, a general framework to certify robustness of neural networks with general activation functions for given input data points and facilitates the search for a tighter certified lower bound by adaptively selecting appropriate surrogates for each neuron activation.

Continuous Safety Verification of Neural Networks

This paper considers how to transfer verification results established for a previously verified DNN to a modified network, and develops several sufficient conditions that require formally analyzing only a small part of the DNN in the new problem.

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation

This work presents AI2, the first sound and scalable analyzer for deep neural networks, and introduces abstract transformers that capture the behavior of fully connected and convolutional neural network layers with rectified linear unit activations (ReLU), as well as max pooling layers.

A Unified View of Piecewise Linear Neural Network Verification

A unified framework that encompasses previous methods is presented, and new methods that combine the strengths of multiple existing approaches are identified, achieving a speedup of two orders of magnitude over the previous state of the art.

ReluDiff: Differential Verification of Deep Neural Networks

This work develops a new method for differential verification of two closely related networks that can achieve orders-of-magnitude speedup and prove many more properties than existing tools.

Certifying Geometric Robustness of Neural Networks

A new method is proposed to compute sound and asymptotically optimal linear relaxations for any composition of transformations; it certifies significantly more complex geometric transformations than existing methods, on both defended and undefended networks, while scaling to large architectures.