Pruning and Slicing Neural Networks using Formal Verification

  • Ori Lahav, Guy Katz
  • Published 28 May 2021
  • Computer Science
  • 2021 Formal Methods in Computer Aided Design (FMCAD)
Deep neural networks (DNNs) play an increasingly important role in various computer systems. In order to create these networks, engineers typically specify a desired topology, and then use an automated training algorithm to select the network’s weights. While training algorithms have been studied extensively and are well understood, the selection of topology remains a form of art, and can often result in networks that are unnecessarily large — and consequently are incompatible with end devices… 

On Optimizing Back-Substitution Methods for Neural Network Verification

An approach for making back-substitution produce tighter bounds; it can be integrated into numerous existing symbolic-bound propagation techniques with only minor modifications.
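As a rough illustration of the kind of bound propagation that back-substitution refines, the toy sketch below performs naive interval propagation through a two-layer ReLU network. The weights and input intervals are invented for the example and are not taken from the paper; back-substitution would yield tighter bounds than this naive pass.

```python
# Naive interval bound propagation through a small ReLU network.
# All weights and bounds here are illustrative.

def interval_affine(lo, hi, weights, bias):
    """Propagate input intervals [lo_i, hi_i] through y = Wx + b."""
    out_lo, out_hi = [], []
    for row, b in zip(weights, bias):
        l = b + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(row))
        h = b + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Two inputs in [-1, 1], one hidden ReLU layer, one output neuron.
lo, hi = [-1.0, -1.0], [1.0, 1.0]
lo, hi = interval_affine(lo, hi, [[1.0, 1.0], [1.0, -1.0]], [0.0, 0.0])
lo, hi = interval_relu(lo, hi)
lo, hi = interval_affine(lo, hi, [[1.0, -1.0]], [0.0])
print(lo, hi)  # sound (if loose) bounds on the network's output
```

The bounds are sound but loose because each layer is handled in isolation; symbolic techniques keep linear expressions over the inputs and substitute them back through earlier layers to tighten exactly this kind of result.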

Neural Network Verification with Proof Production

This work presents a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors.

Tighter Abstract Queries in Neural Network Verification

CEGARETTE is presented, a novel verification mechanism where both the system and the property are abstracted and refined simultaneously, allowing for quick verification times while avoiding a large number of refinement steps.

Minimal Multi-Layer Modifications of Deep Neural Networks

The novel repair procedure implemented in 3M-DNN computes a modification to the network’s weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a backend, black-box DNN verification engine.

Towards Formal Approximated Minimal Explanations of Neural Networks

This work is a step toward leveraging verification technology to produce DNNs that are more reliable and comprehensible; it recommends the use of bundles, which allow for more succinct and interpretable explanations.

Towards Scalable Verification of Deep Reinforcement Learning

This work presents the whiRL 2.0 tool, which implements a new approach for verifying complex properties of interest for DRL systems, and proposes techniques for performing k-induction and semi-automated invariant inference on such systems.

CheckINN: Wide Range Neural Network Verification in Imandra

Imandra, a functional programming language and theorem prover originally designed for the verification, validation, and simulation of financial infrastructure, can offer a holistic infrastructure for neural network verification.

An Abstraction-Refinement Approach to Verifying Convolutional Neural Networks

The core of Cnn-Abs is an abstraction-refinement technique, which simplifies the verification problem through the removal of convolutional connections in a way that soundly creates an over-approximation of the original problem; and which restores these connections if the resulting problem becomes too abstract.

Verification-Aided Deep Ensemble Selection

This case study harnesses recent advances in DNN verification to devise a methodology for identifying ensemble compositions that are less prone to simultaneous errors, even when the input is adversarially perturbed, resulting in more robustly accurate ensemble-based classification.

Verifying learning-augmented systems

WhiRL is presented, a platform for verifying DRL policies for systems, which combines recent advances in the verification of deep neural networks with scalable model checking techniques, and is capable of guaranteeing that natural requirements from recently introduced learning-augmented systems are satisfied, and of exposing specific scenarios in which other basic requirements are not.

An Abstraction-Based Framework for Neural Network Verification

A framework that enhances neural network verification techniques by using over-approximation to reduce the size of the network, thus making it more amenable to verification; it can be integrated with many existing verification techniques.

Minimal Modifications of Deep Neural Networks using Verification

This work uses recent advances in DNN verification and proposes a technique for modifying a DNN according to certain requirements, in a way that is provably minimal, does not require any retraining, and is thus less likely to affect other aspects of the DNN’s behavior.

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.

DeepAbstract: Neural Network Abstraction for Accelerating Verification

This work introduces an abstraction framework, applicable to fully-connected feed-forward neural networks, based on clustering neurons that behave similarly on some inputs, and shows how the abstraction reduces the size of the network while preserving its accuracy.
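As an illustrative sketch of the clustering idea (not DeepAbstract's actual algorithm), the snippet below merges hidden neurons whose activations nearly coincide on a set of sample inputs, redirecting their outgoing weights into a single representative. All activation values, weights, and the tolerance are made up for the example.

```python
# Hypothetical sketch: abstract a hidden layer by merging neurons
# that behave near-identically on sample inputs.

def merge_similar(acts, out_weights, tol=0.1):
    """acts[j] = activation vector of hidden neuron j on sample inputs;
    out_weights[j] = its outgoing weight. Neurons within `tol` of an
    already-kept neuron are merged into it by summing outgoing weights."""
    merged_acts, merged_w = [], []
    for a, w in zip(acts, out_weights):
        for i, m in enumerate(merged_acts):
            if max(abs(x - y) for x, y in zip(a, m)) <= tol:
                merged_w[i] += w  # redirect this neuron's contribution
                break
        else:
            merged_acts.append(list(a))
            merged_w.append(w)
    return merged_acts, merged_w

# Three hidden neurons; the first two behave almost identically.
acts = [[0.0, 1.0, 2.0], [0.05, 1.0, 1.95], [3.0, 0.0, 1.0]]
w = [0.5, 0.25, -1.0]
a2, w2 = merge_similar(acts, w)
print(len(a2), w2)  # two neurons remain, with weights 0.75 and -1.0
```

The real framework also has to account for the approximation error this merging introduces so that the smaller network remains usable for verification.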

Simplifying Neural Networks Using Formal Verification

The workflow of the simplification process is reported, and its potential significance and applicability are demonstrated on a family of real-world DNNs for aircraft collision avoidance, whose sizes the authors were able to reduce by as much as 10%.

A Unified View of Piecewise Linear Neural Network Verification

A unified framework that encompasses previous methods is presented, and new methods that combine the strengths of multiple existing approaches are identified, achieving a speedup of two orders of magnitude over the previous state of the art.

Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks

An approach for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function; it infers additional node phases for the non-linear nodes in the network from partial node phase assignments, similar to unit propagation in classical SAT solving.

Verifying Recurrent Neural Networks using Invariant Inference

This work proposes a novel approach for verifying properties of a widespread variant of neural networks, called recurrent neural networks, based on the inference of invariants, which allows it to reduce the complex problem of verifying recurrent networks to simpler, non-recurrent problems.

Reluplex: a calculus for reasoning about deep neural networks

A novel, scalable, and efficient technique based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks.
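The core difficulty (ReLU is non-convex, so it cannot be handled by plain simplex) can be illustrated with a deliberately tiny one-neuron case split; this is a hypothetical sketch of phase splitting, not the Reluplex calculus or its simplex machinery.

```python
# Toy case split on a single ReLU. The network is
# out = v * relu(w*x + b) with v > 0, for inputs x in [x_lo, x_hi].
# Each phase of the ReLU is a linear region that can be checked directly.

def can_exceed(x_lo, x_hi, w, b, v, threshold):
    """Can the output exceed `threshold`? Split on the ReLU's phase."""
    pre = [w * x_lo + b, w * x_hi + b]
    pre_lo, pre_hi = min(pre), max(pre)
    # Inactive phase (pre-activation <= 0 is reachable): output is 0.
    if pre_lo <= 0 and 0 > threshold:
        return True
    # Active phase (pre-activation > 0 is reachable): output is linear,
    # and with v > 0 its maximum sits at the largest pre-activation.
    if pre_hi > 0 and v * pre_hi > threshold:
        return True
    return False

# out = 3 * relu(2x - 1), x in [0, 1]: the true maximum output is 3.
print(can_exceed(0.0, 1.0, 2.0, -1.0, 3.0, 2.5))  # True  (3 > 2.5)
print(can_exceed(0.0, 1.0, 2.0, -1.0, 3.0, 3.5))  # False
```

Reluplex avoids eagerly enumerating the exponentially many phase combinations of a full network by lazily splitting only when the simplex core cannot make progress, which is what makes the approach scale.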

Towards Scalable Verification of RL-Driven Systems

This work presents the whiRL 2.0 tool, which implements a new approach for verifying complex properties of interest for DRL systems, and proposes techniques for performing k-induction and automated invariant inference on such systems.