Reluplex: a calculus for reasoning about deep neural networks

@article{Katz2021ReluplexAC,
  title={Reluplex: a calculus for reasoning about deep neural networks},
  author={Guy Katz and Clark W. Barrett and David L. Dill and Kyle D. Julian and Mykel J. Kochenderfer},
  journal={Formal Methods in System Design},
  year={2021}
}
Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit… 
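The sketch below makes the setting concrete. It shows the naive complete procedure that Reluplex improves on: eagerly enumerating both phases of every ReLU and solving one linear program per phase assignment, whereas Reluplex lets its simplex core violate ReLU constraints temporarily and splits only on demand. The toy network, bounds, and helper names are illustrative, not taken from the paper.

```python
# A minimal sketch (not the Reluplex algorithm itself): verify that
# y = w2 . relu(W1 x + b1) stays <= c on an input box by eager case
# splitting, solving one LP per ReLU phase assignment.
import itertools
import numpy as np
from scipy.optimize import linprog

def violates(W1, b1, w2, lo, hi, c):
    """Search for x in [lo, hi] with w2 . relu(W1 x + b1) > c."""
    m = W1.shape[0]
    for phase in itertools.product([False, True], repeat=m):
        active = np.array(phase)
        # With phases fixed, y is linear: y = (w2_a W1_a) x + w2_a . b1_a.
        obj = -(w2[active] @ W1[active])            # linprog minimizes
        # Phase constraints: active => pre >= 0, inactive => pre <= 0.
        A_ub = np.vstack([-W1[active], W1[~active]])
        b_ub = np.concatenate([b1[active], -b1[~active]])
        res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                      bounds=list(zip(lo, hi)), method="highs")
        if res.success and w2[active] @ b1[active] - res.fun > c:
            return res.x                            # counterexample input
    return None                                     # property holds

W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, 1.0])
print(violates(W1, b1, w2, lo=[-1, -1], hi=[1, 1], c=3.0))  # None: verified
```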

On Optimizing Back-Substitution Methods for Neural Network Verification

An approach for making back-substitution produce tighter bounds; it can be integrated into numerous existing symbolic-bound propagation techniques with only minor modifications.
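The effect is visible even without activation functions. Below is a hedged two-layer illustration (not the paper's optimized method): propagating interval bounds layer by layer forgets that the two hidden neurons are perfectly anti-correlated, while back-substituting the first layer's symbolic expression into the second keeps that information.

```python
# Why back-substitution tightens bounds: a two-layer linear example.
import numpy as np

W1 = np.array([[1.0], [-1.0]])   # layer 1: h = W1 x, so h = (x, -x)
W2 = np.array([[1.0, 1.0]])      # layer 2: y = W2 h = x - x = 0
lo, hi = np.array([-1.0]), np.array([1.0])

def interval_affine(W, lo, hi):
    """Sound bounds on W v given elementwise bounds lo <= v <= hi."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi, Wp @ hi + Wn @ lo

# Layer by layer: box h first, then bound y from the box.
h_lo, h_hi = interval_affine(W1, lo, hi)
print(interval_affine(W2, h_lo, h_hi))   # [-2, 2]: loose

# Back-substitution: express y directly in x, then bound once.
print(interval_affine(W2 @ W1, lo, hi))  # [0, 0]: exact here
```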

Towards Scalable Verification of Deep Reinforcement Learning

This work presents the whiRL 2.0 tool, which implements a new approach for verifying complex properties of interest for DRL systems, and proposes techniques for performing k-induction and semi-automated invariant inference on such systems.
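As a rough, self-contained illustration of the k-induction scheme, the sketch below checks a safety property of a five-state toy transition system by brute force; in whiRL 2.0, a DNN verifier answers the base-case and inductive-step queries instead. The system and property are invented for the example.

```python
def k_induction(init, step, prop, states, k):
    """Try to prove that prop holds on every state reachable from init."""
    # Base case: prop holds along the length-k execution prefix from init.
    frontier = {init}
    for _ in range(k):
        if not all(prop(s) for s in frontier):
            return "base case fails"
        frontier = {step(s) for s in frontier}
    # Inductive step: k consecutive prop-states are followed by a prop-state.
    for s in states:
        run = [s]
        for _ in range(k - 1):
            run.append(step(run[-1]))
        if all(prop(t) for t in run) and not prop(step(run[-1])):
            return "inductive step fails"
    return "proved"

trans = {0: 1, 1: 0, 2: 3, 3: 4, 4: 4}   # states 2-4 unreachable from 0
for k in (1, 2, 3):
    print(k, k_induction(0, lambda s: trans[s], lambda s: s != 4,
                         states=range(5), k=k))
# k = 1 and k = 2 fail the inductive step; k = 3 proves state 4 unreachable.
```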

Towards Scalable Verification of RL-Driven Systems

This work presents the whiRL 2.0 tool, which implements a new approach for verifying complex properties of interest for DRL systems, and proposes techniques for performing k-induction and automated invariant inference on such systems.

Pruning and Slicing Neural Networks using Formal Verification

  • O. Lahav, Guy Katz
  • Computer Science
    2021 Formal Methods in Computer Aided Design (FMCAD)
  • 2021
This work presents a framework and a methodology for discovering redundancies in DNNs — i.e., for finding neurons that are not needed, and can be removed in order to reduce the size of the DNN.
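One concrete redundancy notion in this line of work is a neuron that provably never fires. The sketch below is a hedged simplification: cheap interval bounds stand in for the DNN verification queries the paper issues, and the network shape (one hidden ReLU layer) is chosen for brevity.

```python
# Slice out ReLU neurons whose pre-activation is provably non-positive
# on the whole input box: they always output 0 and can be removed.
import numpy as np

def prune_dead_neurons(W1, b1, W2, lo, hi):
    Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)
    upper = Wp @ hi + Wn @ lo + b1      # max pre-activation per neuron
    keep = upper > 0                    # provably dead if upper <= 0
    return W1[keep], b1[keep], W2[:, keep]

W1 = np.array([[1.0, 1.0], [-1.0, -1.0]])
b1 = np.array([0.0, -3.0])              # neuron 1 would need -x0 - x1 > 3
W2 = np.array([[1.0, 5.0]])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
print(prune_dead_neurons(W1, b1, W2, lo, hi))   # neuron 1 is sliced away
```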

Reachability In Simple Neural Networks

It is shown that NP-hardness already holds for restricted classes of simple specifications and neural networks, allowing for a single hidden layer and an output dimension of one, as well as for neural networks with just one negative, zero, and one positive weight or bias.

Neural Network Verification with Proof Production

This work presents a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors.
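For the purely linear part of a query, such a witness can take the form of a Farkas vector, which is checkable with plain matrix arithmetic and no solver. The sketch below illustrates that idea on an invented constraint system; it is not the paper's full proof format.

```python
# By Farkas' lemma, a vector y >= 0 with yᵀA = 0 and yᵀb < 0 certifies
# that the system {x : A x <= b} has no solution.
import numpy as np

def check_farkas_certificate(A, b, y, tol=1e-9):
    return (np.all(y >= -tol)
            and np.allclose(y @ A, 0.0, atol=tol)
            and y @ b < -tol)

# x <= 1 together with -x <= -2 (i.e. x >= 2) is clearly infeasible.
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -2.0])
y = np.array([1.0, 1.0])               # yᵀA = 0 and yᵀb = -1 < 0
print(check_farkas_certificate(A, b, y))   # True: verified unsatisfiable
```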

Towards Formal Approximated Minimal Explanations of Neural Networks

The authors consider this work a step toward leveraging verification technology in producing DNNs that are more reliable and comprehensible, and recommend the use of bundles, which allow arriving at more succinct and interpretable explanations.
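The greedy skeleton underneath verification-based explanations fits in a few lines. The sketch below stubs the verifier out with a toy oracle, and real bundles would free whole groups of features per query instead of singletons; all names are illustrative.

```python
def minimal_explanation(features, prediction_fixed):
    """Shrink the set of fixed input features while the prediction
    provably cannot change (prediction_fixed plays the verifier)."""
    explanation = set(features)
    for f in features:                  # bundles: free groups, not singletons
        trial = explanation - {f}
        if prediction_fixed(trial):     # verifier query: still invariant?
            explanation = trial
    return explanation

# Toy oracle: the prediction stays fixed as long as feature "a" is fixed.
print(minimal_explanation(["a", "b", "c"], lambda s: "a" in s))  # {'a'}
```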

Minimal Multi-Layer Modifications of Deep Neural Networks

The novel repair procedure implemented in 3M-DNN computes a modification to the network's weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a black-box, backend DNN verification engine.
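One way to realize "minimize via a sequence of verifier calls" is an outer bisection over the allowed modification size. The sketch below uses a toy feasibility oracle in place of 3M-DNN's actual black-box backend; the network and requirement are invented.

```python
def minimal_change(repair_possible, lo=0.0, hi=10.0, eps=1e-6):
    """Smallest delta for which some weight change of size <= delta
    corrects the network, assuming repair_possible is monotone."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if repair_possible(mid):
            hi = mid                    # a fix of size mid exists; go smaller
        else:
            lo = mid
    return hi

# Toy network f(x) = w * x with w = 1; requirement: f(1) >= 2.
# A change of size delta reaches any w' in [1 - delta, 1 + delta].
print(minimal_change(lambda delta: 1.0 + delta >= 2.0))   # ~1.0
```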

Verifying learning-augmented systems

This work presents whiRL, a platform for verifying DRL policies, which combines recent advances in the verification of deep neural networks with scalable model-checking techniques; it can guarantee that natural requirements of recently introduced learning-augmented systems are satisfied, and can expose specific scenarios in which other basic requirements are not.

PdF: Modular verification of neural networks

  • Computer Science
  • 2022
Although the verification problem for ReLU-NNs is trivially decidable by enumerating all affine regions, it is unfortunately NP-complete [6].

References

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.

An abstract domain for certifying neural networks

This work proposes a new abstract domain which combines floating point polyhedra with intervals and is equipped with abstract transformers specifically tailored to the setting of neural networks, and introduces new transformers for affine transforms, the rectified linear unit, sigmoid, tanh, and maxpool functions.
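The ReLU transformer of such a domain is small enough to sketch. The version below follows the standard triangle-style relaxation (a hedged reconstruction, not the paper's implementation): for a neuron with pre-activation bounds l < 0 < u, it returns one linear lower and one linear upper constraint on y = relu(x), each as a (slope, intercept) pair.

```python
def relu_transformer(l, u):
    if u <= 0:                      # provably inactive: y = 0
        return (0.0, 0.0), (0.0, 0.0)
    if l >= 0:                      # provably active: y = x
        return (1.0, 0.0), (1.0, 0.0)
    lam = u / (u - l)               # slope of the tightest upper line
    lower = (1.0, 0.0) if u >= -l else (0.0, 0.0)   # area heuristic
    upper = (lam, -lam * l)         # y <= lam * (x - l)
    return lower, upper

print(relu_transformer(-1.0, 2.0))  # ((1.0, 0.0), (0.667, 0.667)) roughly
```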

Verifying Recurrent Neural Networks using Invariant Inference

This work proposes a novel approach for verifying properties of a widespread variant of neural networks, called recurrent neural networks, based on the inference of invariants, which allows reducing the complex problem of verifying recurrent networks to simpler, non-recurrent problems.

Output Range Analysis for Deep Feedforward Neural Networks

An efficient range estimation algorithm that iterates between an expensive global combinatorial search using mixed-integer linear programming problems, and a relatively inexpensive local optimization that repeatedly seeks a local optimum of the function represented by the NN is presented.
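The inexpensive half of such an iteration can be as simple as coordinate ascent: it climbs to a local maximum of the output, giving a lower bound on the true maximum for the global MILP step to confirm or beat. The sketch below is illustrative, with a made-up two-ReLU network, not the paper's implementation.

```python
import numpy as np

def local_max(f, x, lo, hi, step=0.05, iters=200):
    """Greedy coordinate ascent of f over the box [lo, hi]."""
    for _ in range(iters):
        best = x
        for i in range(len(x)):
            for d in (-step, step):
                cand = np.clip(x + d * np.eye(len(x))[i], lo, hi)
                if f(cand) > f(best):
                    best = cand
        if np.array_equal(best, x):
            break                        # local optimum reached
        x = best
    return x, f(x)

relu = lambda v: np.maximum(v, 0)
f = lambda x: float(relu(x[0] - x[1]) + relu(x[0] + x[1] - 1.0))
print(local_max(f, np.zeros(2), lo=-np.ones(2), hi=np.ones(2)))
# converges to x = (1, -1) with output 2.0, the true maximum here
```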

An Abstraction-Based Framework for Neural Network Verification

A framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network, thus making it more amenable to verification, and that can be integrated with many existing verification techniques.

Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks

An approach for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function; it infers additional node phases for the non-linear nodes in the network from partial node phase assignments, similar to unit propagation in classical SAT solving.

Algorithms for Verifying Deep Neural Networks

This article surveys methods that have emerged recently for soundly verifying whether a particular network satisfies certain input-output properties, provides pedagogical implementations of existing methods, and compares them on a set of benchmark problems.

Art: Abstraction Refinement-Guided Training for Provably Correct Neural Networks

This paper presents a novel learning framework that ensures formal guarantees of general safety properties of artificial neural networks are enforced by construction, and empirically demonstrates that realizing safety does not come at the price of much accuracy.

Output Range Analysis for Deep Neural Networks

This paper presents an efficient range estimation algorithm that uses a combination of local search and linear programming problems to efficiently find the maximum and minimum values taken by the outputs of the NN over the given input set and demonstrates the effectiveness of the proposed approach for verification of NNs used in automated control as well as those used in classification.

Piecewise Linear Neural Network verification: A comparative study

Motivated by the need to accelerate progress in this very important area, a number of different approaches are investigated, based on Mixed Integer Programming and Satisfiability Modulo Theories, as well as a novel method based on the Branch-and-Bound framework.
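A minimal hedged branch-and-bound sketch in that spirit: bound the output with cheap interval arithmetic, and whenever the bound cannot decide the property on a box, split the widest input dimension and recurse. The network and property below are invented for the example.

```python
import numpy as np

def output_upper_bound(W, b, lo, hi):
    """Interval upper bound on sum(relu(W x + b)) over the box."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return float(np.sum(np.maximum(Wp @ hi + Wn @ lo + b, 0)))

def bab_verify(W, b, c, lo, hi, depth=0):
    """Prove sum(relu(W x + b)) <= c on [lo, hi], splitting if needed."""
    if output_upper_bound(W, b, lo, hi) <= c:
        return True                     # the bound decides this box
    if depth >= 20:
        return False                    # give up: possibly violated
    i = int(np.argmax(hi - lo))         # branch on the widest input
    mid = (lo[i] + hi[i]) / 2
    left_hi, right_lo = hi.copy(), lo.copy()
    left_hi[i], right_lo[i] = mid, mid
    return (bab_verify(W, b, c, lo, left_hi, depth + 1)
            and bab_verify(W, b, c, right_lo, hi, depth + 1))

W = np.array([[1.0, -1.0], [-1.0, 1.0]])   # output = |x0 - x1|
b = np.zeros(2)
print(bab_verify(W, b, c=2.0, lo=np.array([-1.0, -1.0]),
                 hi=np.array([1.0, 1.0])))  # True, after two split levels
```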
...