Minimal Multi-Layer Modifications of Deep Neural Networks

@inproceedings{Refaeli2021MinimalMM,
  title={Minimal Multi-Layer Modifications of Deep Neural Networks},
  author={Idan Refaeli and Guy Katz},
  booktitle={NSV/FoMLAS@CAV},
  year={2021}
}
Deep neural networks (DNNs) have become increasingly popular in recent years. However, despite their many successes, DNNs may also err and produce incorrect and potentially fatal outputs in safety-critical settings, such as autonomous driving, medical diagnosis, and airborne collision avoidance systems. Much work has been put into detecting such erroneous behavior in DNNs, e.g., via testing or verification, but removing these errors after their detection has received less attention. We present…

Verification-Aided Deep Ensemble Selection

This case study harnesses recent advances in DNN verification to devise a methodology for identifying ensemble compositions that are less prone to simultaneous errors, even when the input is adversarially perturbed — resulting in more robustly-accurate ensemble-based classification.

Neural Network Verification with Proof Production

This work presents a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors.

An Abstraction-Refinement Approach to Verifying Convolutional Neural Networks

The core of Cnn-Abs is an abstraction-refinement technique, which simplifies the verification problem by removing convolutional connections in a way that soundly creates an over-approximation of the original problem, and restores these connections if the resulting problem becomes too abstract.

Towards Formal Approximated Minimal Explanations of Neural Networks

This work is presented as a step toward leveraging verification technology to produce DNNs that are more reliable and comprehensible, and recommends the use of bundles, which allow for more succinct and interpretable explanations.

Efficient Adversarial Input Generation via Neural Net Patching

This work presents a novel technique for patching neural networks, and an innovative approach that uses it to produce input perturbations that are adversarial for the original net; the approach is more effective than prior state-of-the-art techniques.

Verifying Learning-Based Robotic Navigation Systems

This work is the first to demonstrate the use of DNN verification backends for recognizing suboptimal DRL policies in real-world robots, and for filtering out unwanted policies.

Tighter Abstract Queries in Neural Network Verification

CEGARETTE is presented, a novel verification mechanism where both the system and the property are abstracted and refined simultaneously, allowing for quick verification times while avoiding a large number of refinement steps.

veriFIRE: Verifying an Industrial, Learning-Based Wildfire Detection System

In this short paper, we present our ongoing work on the veriFIRE project — a collaboration between industry and academia, aimed at using verification to increase the reliability of a real-world…

References


Minimal Modifications of Deep Neural Networks using Verification

This work uses recent advances in DNN verification and proposes a technique for modifying a DNN according to certain requirements, in a way that is provably minimal, does not require any retraining, and is thus less likely to affect other aspects of the DNN’s behavior.

Pruning and Slicing Neural Networks using Formal Verification

  • O. Lahav, Guy Katz
  • Computer Science
  • 2021 Formal Methods in Computer Aided Design (FMCAD)
  • 2021
This work presents a framework and a methodology for discovering redundancies in DNNs — i.e., for finding neurons that are not needed, and can be removed in order to reduce the size of the DNN.

Provable repair of deep neural networks

The Provable Repair problem is introduced: given a network N, construct a new network N′ that satisfies a given specification. A Decoupled DNN architecture is also introduced, which reduces provable repair to a linear programming problem.
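To illustrate the flavor of such an LP reduction (this is a hand-rolled toy sketch, not the paper's actual tool or architecture): for a single affine layer, a repair that forces one input to a desired class can be phrased as minimizing the L1 norm of the weight change subject to a linear margin constraint, and handed to an off-the-shelf LP solver. The network, input, and margin below are all hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical single-layer classifier y = W @ x + b; x0 is currently
# assigned class 0, and the specification demands class 1 with a score margin.
W = np.array([[1.0, -1.0],
              [0.5,  0.5]])
b = np.array([0.2, -0.3])
x0 = np.array([2.0, 0.0])
margin = 0.1

y = W @ x0 + b
gap = y[1] - y[0]                        # negative: class 0 currently wins

# Variables z = [dW00, dW01, dW10, dW11, db0, db1, t0..t5]; minimizing
# sum(t) with |d_i| <= t_i minimizes the L1 norm of the modification.
n = 6
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub, b_ub = [], []
for i in range(n):                       # |d_i| <= t_i as two inequalities
    for sgn in (1.0, -1.0):
        row = np.zeros(2 * n)
        row[i], row[n + i] = sgn, -1.0
        A_ub.append(row); b_ub.append(0.0)

# Specification: new gap >= margin, i.e. -(dGap) <= gap - margin, where
# dGap = (dW[1] - dW[0]) @ x0 + db1 - db0.
row = np.zeros(2 * n)
row[0:2] = x0; row[2:4] = -x0
row[4], row[5] = 1.0, -1.0
A_ub.append(row); b_ub.append(gap - margin)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n + [(0, None)] * n)
d = res.x[:n]
W_new, b_new = W + d[:4].reshape(2, 2), b + d[4:]
```

Because every constraint is linear in the deltas, the solver returns a provably minimal (in L1) modification; no retraining is involved, which is the appeal of this family of approaches.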

Towards Scalable Verification of Deep Reinforcement Learning

This work presents the whiRL 2.0 tool, which implements a new approach for verifying complex properties of interest for DRL systems, and proposes techniques for performing k-induction and semi-automated invariant inference on such systems.

Testing Deep Neural Networks

This paper proposes a family of four novel test criteria tailored to structural features of DNNs and their semantics, validated by demonstrating that test inputs generated under the proposed coverage criteria are able to capture undesired behaviours in a DNN.

Safety Verification of Deep Neural Networks

A novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT) is developed, which defines safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image.
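A sound but incomplete way to check this kind of local-robustness property — cheaper than a full SMT encoding, and offered here only as an illustrative sketch with a made-up toy network — is interval bound propagation: push the perturbation box through the layers and require the predicted class's lower bound to beat every other class's upper bound.

```python
import numpy as np

def certify_box(layers, x0, eps):
    """Sound (but incomplete) local-robustness check via interval bound
    propagation: propagate the box [x0-eps, x0+eps] through each affine
    layer (ReLU between layers), then require the predicted class's lower
    bound to exceed every other class's upper bound."""
    lo, hi = x0 - eps, x0 + eps
    act = x0.copy()                       # exact forward pass, for the label
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        act = W @ act + b
        if i < len(layers) - 1:           # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
            act = np.maximum(act, 0.0)
    c = int(np.argmax(act))
    return all(lo[c] > hi[j] for j in range(len(act)) if j != c)

# Toy 2-2-2 ReLU network (hypothetical weights, for illustration only).
layers = [(np.eye(2), np.zeros(2)),
          (np.array([[1.0, 0.5], [0.2, 1.0]]), np.zeros(2))]
x0 = np.array([1.0, 0.0])
small = certify_box(layers, x0, 0.2)      # certified robust
large = certify_box(layers, x0, 0.5)      # not certified (bounds too loose)
```

A failure to certify does not imply a real counterexample exists — the intervals over-approximate — which is precisely why complete SMT-based methods like the one above remain necessary.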

An Abstraction-Based Framework for Neural Network Verification

A framework is presented that enhances neural network verification techniques by using over-approximation to reduce the size of the network, thus making it more amenable to verification; it can be integrated with many existing verification techniques.

Guarded Deep Learning using Scenario-based Modeling

This work proposes to bring together DNNs and the well-studied scenario-based modeling paradigm by expressing override rules as simple and intuitive scenarios; this yields rules that are comprehensible to humans, yet sufficiently expressive and powerful to increase the overall safety of the model.

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
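The key idea behind Simplex-based verifiers of this kind is case splitting on ReLU phases: each hidden neuron is either active or inactive, and under a fixed activation pattern the network is affine, so each case reduces to a linear feasibility problem. The brute-force sketch below (a hypothetical toy, not Reluplex itself, which splits lazily rather than enumerating) checks that output 0 dominates output 1 over an input box.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def verify_gap(W1, b1, W2, b2, lo, hi, tol=1e-6):
    """Check that y0 >= y1 for every input in the box [lo, hi] of a
    one-hidden-layer ReLU net, by enumerating ReLU activation patterns
    and solving one LP feasibility problem per pattern.
    Returns True iff no pattern admits a counterexample."""
    n_hidden = W1.shape[0]
    bounds = list(zip(lo, hi))
    for pattern in itertools.product([0, 1], repeat=n_hidden):
        A_ub, b_ub = [], []
        # Phase constraints: active neuron i needs W1[i]@x + b1[i] >= 0,
        # inactive needs W1[i]@x + b1[i] <= 0.
        for i, active in enumerate(pattern):
            if active:
                A_ub.append(-W1[i]); b_ub.append(b1[i])
            else:
                A_ub.append(W1[i]); b_ub.append(-b1[i])
        # Under this pattern the net is affine:
        # y = W2 @ diag(pattern) @ (W1 @ x + b1) + b2.
        M = W2 @ (np.array(pattern)[:, None] * W1)
        c0 = W2 @ (np.array(pattern) * b1) + b2
        # Counterexample constraint: y1 - y0 >= tol.
        a, const = M[1] - M[0], c0[1] - c0[0]
        A_ub.append(-a); b_ub.append(const - tol)
        res = linprog(np.zeros(len(lo)), A_ub=np.array(A_ub),
                      b_ub=np.array(b_ub), bounds=bounds)
        if res.status == 0:          # feasible => counterexample exists
            return False
    return True
```

Enumerating all 2^n patterns is exponential in the number of neurons; Reluplex's contribution is deferring these splits so that, in practice, only a small fraction of cases is ever explored.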

A Unified View of Piecewise Linear Neural Network Verification

A unified framework that encompasses previous methods is presented, and new methods that combine the strengths of multiple existing approaches are identified, achieving a speedup of two orders of magnitude over the previous state of the art.
...