On Optimizing Back-Substitution Methods for Neural Network Verification

@article{Zelazny2022OnOB,
  title={On Optimizing Back-Substitution Methods for Neural Network Verification},
  author={Tom Zelazny and Haoze Wu and Clark Barrett and Guy Katz},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.07669}
}
With the increasing application of deep learning in mission-critical systems, there is a growing need to obtain formal guarantees about the behaviors of neural networks. Indeed, many approaches for verifying neural networks have recently been proposed, but these generally struggle with limited scalability or insufficient accuracy. A key component in many state-of-the-art verification schemes is computing lower and upper bounds on the values that neurons in the network can obtain for a specific…
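To make the bound-computation step concrete, here is a minimal sketch of one back-substitution pass, not the paper's optimized method: it assumes a two-layer ReLU network with box input bounds, expresses the output as a linear function of the hidden pre-activations via per-neuron ReLU relaxations, substitutes back to the input layer, and concretizes over the input box. Function names such as backsub_upper are illustrative.

```python
import numpy as np

def box_affine(W, b, lo, hi):
    # Propagate a box [lo, hi] through z = W x + b using interval arithmetic.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def relu_relaxation(l, u):
    # Per-neuron linear bounds: al*z + cl <= ReLU(z) <= au*z + cu for z in [l, u].
    au = np.where(u <= 0, 0.0, np.where(l >= 0, 1.0, u / np.maximum(u - l, 1e-12)))
    cu = np.where((l < 0) & (u > 0), -au * l, 0.0)
    # A common sound choice for the lower slope: 1 when u >= |l|, else 0.
    al = np.where(u <= 0, 0.0, np.where(l >= 0, 1.0, (u >= -l).astype(float)))
    return al, np.zeros_like(l), au, cu

def backsub_upper(W1, b1, W2, b2, lo, hi):
    # Upper-bound y = W2 ReLU(W1 x + b1) + b2 over the input box [lo, hi]
    # by back-substituting through the ReLU relaxation down to the input layer.
    l1, u1 = box_affine(W1, b1, lo, hi)       # hidden pre-activation bounds
    al, cl, au, cu = relu_relaxation(l1, u1)
    Wp, Wn = np.maximum(W2, 0.0), np.minimum(W2, 0.0)
    C = Wp * au + Wn * al                     # positive coeffs take the upper bound,
    d = Wp @ cu + Wn @ cl + b2                # negative coeffs take the lower bound
    A = C @ W1                                # substitute z1 = W1 x + b1
    const = C @ b1 + d
    Ap, An = np.maximum(A, 0.0), np.minimum(A, 0.0)
    return Ap @ hi + An @ lo + const          # concretize over the input box

# Example: a 2-2-1 network over the box [-1, 1]^2.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, -3.0]]), np.zeros(1)
print(backsub_upper(W1, b1, W2, b2, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```

The returned value is a sound but possibly loose upper bound; tightening such bounds is precisely what the paper's optimized back-substitution targets.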

Citations

Scalable verification of GNN-based job schedulers

This work develops vegas, the first general framework for verifying both single-step and multi-step properties of GNN-based job schedulers based on carefully designed algorithms that combine abstractions, refinements, solvers, and proof transfer.

VeriX: Towards Verified Explainability of Deep Neural Networks

VeriX, a system for producing optimal robust explanations and generating counterfactuals along decision boundaries of machine learning models, is presented; it works iteratively, using constraint-solving techniques and a heuristic based on feature-level sensitivity ranking.
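As a rough illustration of the sensitivity-ranking heuristic (my sketch, not VeriX's actual implementation; the model interface and eps are assumptions), one can rank features by how strongly a small perturbation to each one shifts the model's output:

```python
import numpy as np

def sensitivity_ranking(model, x, eps=1e-2):
    # Score each feature by the output change a small perturbation causes,
    # then return feature indices from most to least sensitive.
    base = model(x)
    scores = np.empty(x.shape[0])
    for i in range(x.shape[0]):
        xp = x.copy()
        xp[i] += eps
        scores[i] = np.abs(model(xp) - base).sum()
    return np.argsort(-scores)
```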

References

SHOWING 1-10 OF 65 REFERENCES

An Abstraction-Based Framework for Neural Network Verification

A framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network, thus making it more amenable to verification, and that can be integrated with many existing verification techniques.
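One simplified instance of such over-approximation, sketched below under restrictive assumptions (it is not the framework's full algorithm), merges two hidden ReLU neurons whose outgoing weights are all non-negative: taking the element-wise max of their incoming weights and the sum of their outgoing weights can only increase the network's outputs, which is sound when proving upper bounds.

```python
import numpy as np

def merge_pos_inc(W_in, b, W_out, j, k):
    # Merge hidden ReLU neurons j and k of one layer, assuming (1) the layer's
    # inputs are non-negative (post-ReLU) and (2) all outgoing weights of j and
    # k are non-negative. Max incoming weights plus summed outgoing weights can
    # only increase the outputs: a sound over-approximation for upper bounds.
    # W_in: (m, n), one row per neuron; W_out: (p, m), one column per neuron.
    keep = [i for i in range(W_in.shape[0]) if i != k]
    W_in2, b2, W_out2 = W_in[keep].copy(), b[keep].copy(), W_out[:, keep].copy()
    jj = keep.index(j)
    W_in2[jj] = np.maximum(W_in[j], W_in[k])
    b2[jj] = max(b[j], b[k])
    W_out2[:, jj] = W_out[:, j] + W_out[:, k]
    return W_in2, b2, W_out2
```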

Pruning and Slicing Neural Networks using Formal Verification

  • O. Lahav, Guy Katz
  • Computer Science
    2021 Formal Methods in Computer Aided Design (FMCAD)
  • 2021
This work presents a framework and a methodology for discovering redundancies in DNNs, i.e., for finding neurons that are not needed and can be removed in order to reduce the size of the DNN.
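A cheap stand-in for the redundancy check (the paper relies on full verification queries; this sketch uses only interval arithmetic) flags ReLU neurons whose pre-activation is provably non-positive over the input box, since such neurons always output zero and can be pruned:

```python
import numpy as np

def dead_relu_neurons(W, b, lo, hi):
    # Upper-bound each pre-activation z = W x + b over the box [lo, hi];
    # a ReLU neuron whose upper bound is <= 0 always outputs zero and can
    # be removed without changing the network on that input domain.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    upper = Wp @ hi + Wn @ lo + b
    return np.flatnonzero(upper <= 0)
```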

A Unified View of Piecewise Linear Neural Network Verification

A unified framework that encompasses previous methods is presented, and new methods that combine the strengths of multiple existing approaches are identified, achieving a speedup of two orders of magnitude compared to the previous state of the art.

Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks

An approach is presented for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function; it infers additional node phases for the non-linear nodes in the network from partial node phase assignments, similar to unit propagation in classical SAT solving.
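In spirit, the propagation works like the following two-layer sketch (a toy interval version with an assumed interface, not the paper's algorithm): a partially assigned phase tightens that ReLU's output range, and pushing the tightened ranges through the next layer can fix phases there.

```python
import numpy as np

def box_affine(W, b, lo, hi):
    # Interval propagation through z = W a + b.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def propagate_phases(l1, u1, assigned, W2, b2):
    # l1, u1: bounds on first-layer ReLU pre-activations.
    # assigned: {index: "active"/"inactive"}, e.g. from a case split.
    a_lo, a_hi = np.maximum(l1, 0.0), np.maximum(u1, 0.0)  # ReLU output bounds
    for i, ph in assigned.items():
        if ph == "inactive":               # output pinned to zero
            a_lo[i], a_hi[i] = 0.0, 0.0
        # "active" leaves these interval bounds unchanged; in a symbolic
        # setting it would replace the ReLU relaxation with an exact equality.
    l2, u2 = box_affine(W2, b2, a_lo, a_hi)
    inferred = {i: "inactive" for i in np.flatnonzero(u2 <= 0)}
    inferred.update({i: "active" for i in np.flatnonzero(l2 >= 0)})
    return inferred                        # phases fixed for second-layer ReLUs
```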

Neural Network Verification using Residual Reasoning

This paper presents an enhancement to abstraction-based verification of neural networks, by using residual reasoning: the process of utilizing information acquired when verifying an abstract network in order to expedite the verification of a refined network.

DeepAbstract: Neural Network Abstraction for Accelerating Verification

This work introduces an abstraction framework, applicable to fully-connected feed-forward neural networks, based on clustering neurons that behave similarly on some inputs, and shows how the abstraction reduces the size of the network while preserving its accuracy.
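The clustering step can be pictured with a plain k-means sketch over each neuron's activation vector on a set of sample inputs (an illustration, not DeepAbstract's implementation):

```python
import numpy as np

def cluster_neurons(acts, k, iters=50, seed=0):
    # acts: (n_samples, n_neurons) activations; cluster neurons whose activation
    # vectors are similar -- neurons sharing a label are candidates for merging.
    X = acts.T.astype(float)                  # one row per neuron
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(0)
    return labels
```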

Neural Networks, Secure by Construction - An Exploration of Refinement Types

StarChild and Lazuli, two libraries that leverage refinement types to verify neural networks, are presented; implemented in F∗ and Liquid Haskell, they show that SMT solvers do not scale to the sizes required for neural network verification.

Minimal Modifications of Deep Neural Networks using Verification

This work uses recent advances in DNN verification and proposes a technique for modifying a DNN according to certain requirements, in a way that is provably minimal, does not require any retraining, and is thus less likely to affect other aspects of the DNN’s behavior.

Towards Scalable Verification of Deep Reinforcement Learning

This work presents the whiRL 2.0 tool, which implements a new approach for verifying complex properties of interest for DRL systems, and proposes techniques for performing k-induction and semi-automated invariant inference on such systems.
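For readers unfamiliar with k-induction, here is a tiny self-contained example using Z3 on a toy counter system (an illustrative encoding of the proof rule, not whiRL's): the base case searches for a violation within k steps, and the step case asks whether k consecutive good states can be followed by a bad one.

```python
from z3 import Int, Solver, And, Or, sat

def k_induction(k=1):
    # Toy transition system: state x, init x == 0, step x' == x + 1.
    # Property to prove invariant: x >= 0.
    xs = [Int(f"x_{i}") for i in range(k + 1)]
    steps = And([xs[i + 1] == xs[i] + 1 for i in range(k)])
    # Base case: is the property violated within the first k steps?
    s = Solver()
    s.add(xs[0] == 0, steps, Or([xs[i] < 0 for i in range(k + 1)]))
    if s.check() == sat:
        return "violated within k steps"
    # Inductive step: can k consecutive good states lead to a bad one?
    s = Solver()
    s.add(And([xs[i] >= 0 for i in range(k)]), steps, xs[k] < 0)
    if s.check() == sat:
        return "inconclusive: increase k or supply an invariant"
    return "property proved by k-induction"

print(k_induction(k=1))  # -> property proved by k-induction
```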

Reluplex: a calculus for reasoning about deep neural networks

A novel, scalable, and efficient technique is presented, based on the simplex method and extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, a crucial ingredient in many modern neural networks.
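The non-convexity that Reluplex handles lazily inside simplex shows up clearly in a naive eager encoding, where each y = ReLU(x) becomes a disjunction of two linear cases. A toy Z3 query (my encoding for illustration; Reluplex avoids exactly this up-front case splitting):

```python
from z3 import Real, Solver, Or, And

# y = ReLU(x) as an explicit case split; Reluplex instead fixes ReLU phases
# lazily during a simplex-style search to avoid eager disjunctions.
x, y = Real("x"), Real("y")
relu = Or(And(x <= 0, y == 0), And(x >= 0, y == x))
s = Solver()
s.add(relu, x >= -1, x <= 1, y > 1)  # can any input in [-1, 1] yield y > 1?
print(s.check())                     # unsat: the bound y <= 1 is verified
```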
...