Neural Network Verification with Proof Production

@article{Isac2022NeuralNV,
  title={Neural Network Verification with Proof Production},
  author={Omri Isac and Clark Barrett and M. Zhang and Guy Katz},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.00512}
}
Deep neural networks (DNNs) are increasingly being employed in safety-critical systems, and there is an urgent need to guarantee their correctness. Consequently, the verification community has devised multiple techniques and tools for verifying DNNs. When a DNN verifier discovers an input that triggers an error, that input is easy to confirm; but when the verifier reports that no error exists, there is no way to ensure that the verification tool itself is not flawed. As multiple errors have already been observed…
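The asymmetry described above is easy to see concretely: a reported counterexample can be checked by simply running the network. A minimal sketch, using a hypothetical 2-2-1 ReLU network and a made-up safety property (both are illustrative, not from the paper):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, weights, biases):
    """Run a small fully-connected ReLU network on input x."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(W @ x + b)
    # No activation on the output layer.
    return weights[-1] @ x + biases[-1]

# Hypothetical network; property to check: "output <= 1 for all inputs in [0,1]^2".
weights = [np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([[1.0, 1.0]])]
biases = [np.array([0.0, 0.0]), np.array([0.0])]

candidate = np.array([0.0, 1.0])        # input reported by a verifier
out = forward(candidate, weights, biases)
violates = out[0] > 1.0                 # True -> counterexample confirmed
```

Confirming an "unsat" answer, by contrast, requires trusting the verifier's internal reasoning, which is exactly the gap that proof production is meant to close.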

Tighter Abstract Queries in Neural Network Verification

CEGARETTE is presented, a novel verification mechanism where both the system and the property are abstracted and refined simultaneously, allowing for quick verification times while avoiding a large number of refinement steps.

Towards Formal Approximated Minimal Explanations of Neural Networks

The authors consider this work a step toward leveraging verification technology to produce DNNs that are more reliable and comprehensible, and recommend the use of bundles, which allows them to arrive at more succinct and interpretable explanations.

veriFIRE: Verifying an Industrial, Learning-Based Wildfire Detection System

In this short paper, we present our ongoing work on the veriFIRE project, a collaboration between industry and academia, aimed at using verification for increasing the reliability of a real-world…

Verification-Aided Deep Ensemble Selection

This case study harnesses recent advances in DNN verification to devise a methodology for identifying ensemble compositions that are less prone to simultaneous errors, even when the input is adversarially perturbed, resulting in more robustly-accurate ensemble-based classification.

Verifying Learning-Based Robotic Navigation Systems

This work is the first to demonstrate the use of DNN verification backends for recognizing suboptimal DRL policies in real-world robots, and for filtering out unwanted policies.

First Three Years of the International Verification of Neural Networks Competition (VNN-COMP)

The competition's key processes, rules, and results are presented, along with trends observed over the last three years and an outlook on possible future developments.

References

Showing 1-10 of 68 references

Neural Network Verification using Residual Reasoning

This paper presents an enhancement to abstraction-based verification of neural networks, by using residual reasoning: the process of utilizing information acquired when verifying an abstract network, in order to expedite the verification of a refined network.

An Abstraction-Based Framework for Neural Network Verification

A framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network—thus making it more amenable to verification, and can be integrated with many existing verification techniques.

Pruning and Slicing Neural Networks using Formal Verification

  • O. Lahav, Guy Katz
  • Computer Science
    2021 Formal Methods in Computer Aided Design (FMCAD)
  • 2021
This work presents a framework and a methodology for discovering redundancies in DNNs — i.e., for finding neurons that are not needed, and can be removed in order to reduce the size of the DNN.
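One common redundancy of the kind described above is a ReLU neuron whose pre-activation is provably non-positive over the input region: it always outputs zero and can be deleted without changing the network's behavior there. A minimal sketch of that idea (the layer, weights, and bounds are hypothetical, not taken from the paper):

```python
import numpy as np

def prune_inactive(W1, b1, W2, upper):
    """Drop ReLU neurons whose pre-activation upper bound is <= 0.

    Such neurons always output zero on the verified input region, so
    removing their row in (W1, b1) and the matching column in W2
    leaves the network's function unchanged there.
    """
    keep = upper > 0.0                  # neurons that can ever be active
    return W1[keep], b1[keep], W2[:, keep]

# Hypothetical hidden layer with precomputed pre-activation upper bounds.
W1 = np.array([[1.0, 2.0], [-3.0, -1.0], [0.5, 0.5]])
b1 = np.array([0.0, -1.0, 0.0])
W2 = np.array([[1.0, 1.0, 1.0]])
upper = np.array([3.0, -1.0, 1.0])      # neuron 1 is provably inactive

W1p, b1p, W2p = prune_inactive(W1, b1, W2, upper)
# The always-inactive neuron is removed, shrinking the layer from 3 to 2 units.
```

The bounds themselves would come from a verification or bound-propagation pass; here they are simply given.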

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.

Minimal Modifications of Deep Neural Networks using Verification

This work uses recent advances in DNN verification and proposes a technique for modifying a DNN according to certain requirements, in a way that is provably minimal, does not require any retraining, and is thus less likely to affect other aspects of the DNN’s behavior.

An SMT-Based Approach for Verifying Binarized Neural Networks

Various optimizations are proposed, integrated into the authors' SMT procedure as deduction steps, as well as an approach for parallelizing verification queries, for verifying binarized neural networks.

Minimal Multi-Layer Modifications of Deep Neural Networks

The novel repair procedure implemented in 3M-DNN computes a modification to the network’s weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a backend, black-box DNN verification engine.

Formal Security Analysis of Neural Networks using Symbolic Intervals

This paper designs, implements, and evaluates a new direction for formally checking security properties of DNNs without using SMT solvers, and leverages interval arithmetic to compute rigorous bounds on the DNN outputs, which is easily parallelizable.
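The interval-arithmetic idea is simple to sketch: push an input box through each affine layer by splitting weights into their positive and negative parts, then clip at ReLUs. The tiny 2-2-1 network below is a hypothetical illustration, not the paper's benchmark:

```python
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Propagate an input box [lo, hi] through y = W @ x + b.

    Positive weights pick up the matching input bound directly;
    negative weights swap lower and upper (standard interval arithmetic).
    """
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def relu_bounds(lo, hi):
    # ReLU is monotone, so it maps bounds to bounds directly.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical 2-2-1 ReLU network, inputs ranging over [0, 1]^2.
lo, hi = np.zeros(2), np.ones(2)
lo, hi = affine_bounds(np.array([[1.0, -1.0], [1.0, 1.0]]), np.zeros(2), lo, hi)
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(np.array([[1.0, 1.0]]), np.zeros(1), lo, hi)
# If hi[0] stays below a safety threshold, the property holds on the whole box.
```

Each step is a handful of matrix products, which is why this style of analysis parallelizes so easily; the price is that the bounds are over-approximate and may need refinement.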

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation

This work presents AI2, the first sound and scalable analyzer for deep neural networks, and introduces abstract transformers that capture the behavior of fully connected and convolutional neural network layers with rectified linear unit activations (ReLU), as well as max pooling layers.

Verifying Recurrent Neural Networks using Invariant Inference

This work proposes a novel approach for verifying properties of a widespread variant of neural networks, called recurrent Neural networks, based on the inference of invariants, which allows it to reduce the complex problem of verifying recurrent networks into simpler, non-recurrent problems.
...