veriFIRE: Verifying an Industrial, Learning-Based Wildfire Detection System

@inproceedings{Amir2022veriFIREVA,
  title={veriFIRE: Verifying an Industrial, Learning-Based Wildfire Detection System},
  author={Guy Amir and Ziv Freund and Guy Katz and Elad Mandelbaum and Idan Refaeli},
  booktitle={World Congress on Formal Methods},
  year={2022}
}
In this short paper, we present our ongoing work on the veriFIRE project -- a collaboration between industry and academia, aimed at using verification to increase the reliability of a real-world, safety-critical system. The system we target is an airborne platform for wildfire detection, which incorporates two deep neural networks. We describe the system and its properties of interest, and discuss our attempts to verify the system's consistency, i.e., its ability to continue and correctly…
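The paper itself does not include code, but a consistency query of this kind is typically posed to an off-the-shelf DNN verifier as a reachability question over a bounded input region. The sketch below is a minimal illustration using the Marabou Python bindings (maraboupy); the model path, nominal input, output indices, and perturbation bound are all hypothetical, and the exact API shape may differ across Marabou versions.

```python
import numpy as np
from maraboupy import Marabou

# Hypothetical artifacts: an exported detector network and a nominal input
# for which the system currently reports "fire".
NETWORK_PATH = "wildfire_detector.onnx"   # hypothetical model file
x_nominal = np.load("nominal_input.npy")  # hypothetical flattened input
EPSILON = 0.01                            # assumed perturbation bound
FIRE, NO_FIRE = 0, 1                      # assumed output indices

network = Marabou.read_onnx(NETWORK_PATH)
input_vars = network.inputVars[0].flatten()
output_vars = network.outputVars[0].flatten()

# Constrain the input to a small box around the nominal (fire-present) input.
for var, val in zip(input_vars, x_nominal):
    network.setLowerBound(var, float(val) - EPSILON)
    network.setUpperBound(var, float(val) + EPSILON)

# Negated consistency property: does some input in the box make the
# "no fire" score reach or exceed the "fire" score?  Encoded as
# 1 * fire_score + (-1) * no_fire_score <= 0.
network.addInequality([output_vars[FIRE], output_vars[NO_FIRE]], [1.0, -1.0], 0.0)

# UNSAT means no such input exists, so detection is consistent on the box;
# SAT yields a concrete counterexample.  (The return value's exact shape
# varies across maraboupy versions.)
result = network.solve()
print(result)
```

A SAT result would come with a concrete perturbed input on which the detector flips its decision, which is precisely the kind of inconsistency the project aims to rule out.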
3 Citations


Verifying Generalization in Deep Learning

This work puts forth a novel objective for formal verification, with the potential to mitigate the risks associated with deploying DNN-based systems in the wild, and establishes the usefulness of the approach, in particular its superiority over gradient-based methods.

Towards Formal XAI: Formally Approximate Minimal Explanations of Neural Networks

This work suggests an efficient, verification-based method for finding minimal explanations, which constitute a provable approximation of the global, minimum explanation, and proposes heuristics that significantly improve the scalability of the verification process.
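As a rough sketch of how such verification-based explanation procedures typically work (this is the standard deletion-based scheme, not code taken from the paper), one starts with every input feature fixed to its observed value and greedily tries to release features, keeping a feature fixed only if the verifier shows that releasing it would allow the prediction to change. The verifier call below is a hypothetical placeholder.

```python
def minimal_explanation(features, prediction_can_change):
    """Greedy, deletion-based computation of a subset-minimal explanation.

    `features` is the set of input indices; `prediction_can_change(fixed)`
    is a hypothetical verifier call that returns True iff, with only the
    features in `fixed` pinned to their observed values (all others free),
    the network can output a different class.
    """
    fixed = set(features)
    for f in sorted(features):
        candidate = fixed - {f}
        if prediction_can_change(candidate):
            continue          # f is necessary: keep it in the explanation
        fixed = candidate     # f is redundant: release it permanently
    return fixed              # subset-minimal explanation (for this order)
```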

Verifying Learning-Based Robotic Navigation Systems

This work is the first to demonstrate the use of DNN verification backends for recognizing suboptimal DRL policies in real-world robots, and for filtering out unwanted policies.

References


Toward Scalable Verification for Safety-Critical Deep Networks

The increasing use of deep neural networks in safety-critical applications, such as autonomous driving and flight control, raises concerns about their safety and reliability; this work addresses the difficulty by developing scalable verification techniques and identifying design choices that make deep learning systems more amenable to verification.

Neural Network Verification with Proof Production

This work presents a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors.
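For the linear-arithmetic portion of such queries, an easy-to-check witness of unsatisfiability can take the form of a Farkas vector: a non-negative combination of the constraints that yields an obvious contradiction. The checker below is a generic sketch of that idea; the notation and helper name are mine, not the paper's.

```python
import numpy as np

def check_farkas_certificate(A, b, y, tol=1e-9):
    """Check that y certifies infeasibility of {x : A @ x <= b}.

    By Farkas' lemma, if y >= 0, y @ A == 0 and y @ b < 0, then no x can
    satisfy A @ x <= b (otherwise 0 = (y @ A) @ x <= y @ b < 0).
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    y = np.asarray(y, dtype=float)
    return (
        bool(np.all(y >= -tol))
        and np.allclose(y @ A, 0.0, atol=tol)
        and float(y @ b) < -tol
    )

# Tiny example: x <= 1 and -x <= -2 (i.e. x >= 2) are jointly infeasible.
print(check_farkas_certificate(A=[[1.0], [-1.0]], b=[1.0, -2.0], y=[1.0, 1.0]))  # True
```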

Neural Network Verification using Residual Reasoning

This paper presents an enhancement to abstraction-based verification of neural networks, by using residual reasoning: the process of utilizing information acquired when verifying an abstract network in order to expedite the verification of a refined network.

Minimal Multi-Layer Modifications of Deep Neural Networks

The novel repair procedure implemented in 3M-DNN computes a modification to the network’s weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a backend, black-box DNN verification engine.
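A much-simplified way to picture "minimizing the change via calls to a black-box verifier" is a search over the allowed magnitude of the weight modification. The sketch below does a binary search over that budget, with a hypothetical oracle standing in for the backend verification engine; it illustrates the general idea rather than the 3M-DNN algorithm itself.

```python
def smallest_repair_budget(repair_exists_within, hi=1.0, tol=1e-3):
    """Binary-search the smallest weight-change budget that admits a repair.

    `repair_exists_within(delta)` is a hypothetical oracle, answered by a
    backend DNN verifier, returning True iff some modification of the
    selected weights with magnitude at most `delta` makes the network
    satisfy the desired property.
    """
    lo = 0.0
    assert repair_exists_within(hi), "no repair within the initial budget"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if repair_exists_within(mid):
            hi = mid          # a repair fits in a smaller budget
        else:
            lo = mid          # need a larger budget
    return hi
```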

An Abstraction-Based Framework for Neural Network Verification

A framework that enhances neural network verification techniques by using over-approximation to reduce the size of the network, thus making it more amenable to verification; it can be integrated with many existing verification techniques.

Neural Network Robustness as a Verification Property: A Principled Case Study

This paper sets up general principles for the empirical analysis and evaluation of a network’s robustness as a mathematical property — during the network's training phase, its verification, and after its deployment.
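For reference, the local robustness property usually studied in this line of work can be stated as follows (standard formulation, with notation of my choosing): a network N is epsilon-robust at a point x if

```latex
\forall x'.\; \lVert x' - x \rVert_\infty \le \epsilon
  \;\Rightarrow\;
  \arg\max_i N(x')_i = \arg\max_i N(x)_i
```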

An SMT-Based Approach for Verifying Binarized Neural Networks

For verifying binarized neural networks, various optimizations are proposed and integrated into the authors' SMT procedure as deduction steps, along with an approach for parallelizing verification queries.
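As a toy illustration of why binarized networks lend themselves to SMT/SAT-style reasoning (not the paper's actual encoding), a single binarized neuron with plus/minus-one weights and a sign activation can be encoded over Boolean variables, for example with Z3; the weights below are hypothetical values chosen just for the example.

```python
from z3 import Bools, If, Sum, Solver, sat

# Toy binarized neuron: inputs and weights in {-1, +1}, sign activation.
# Booleans encode the inputs (True -> +1, False -> -1).
weights = [1, -1, 1]
x0, x1, x2 = Bools("x0 x1 x2")
signed = [If(x, 1, -1) for x in (x0, x1, x2)]
weighted_sum = Sum([w * s for w, s in zip(weights, signed)])

solver = Solver()
# Query: with the first input forced high, can the neuron still output -1
# (i.e., can the weighted sum become negative)?
solver.add(signed[0] == 1, weighted_sum < 0)
res = solver.check()
print(res)                    # sat -> such an input assignment exists
if res == sat:
    print(solver.model())     # concrete witness (values of x1, x2)
```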

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.

DeepSafe: A Data-Driven Approach for Assessing Robustness of Neural Networks

This work proposes DeepSafe, a novel approach for automatically assessing the overall robustness of a neural network, which applies clustering over known labeled data and leverages off-the-shelf constraint solvers to automatically identify and check safe regions in which the network is robust.
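A rough sketch of the data-driven part of such an approach (file names, cluster count, and the verifier helper are all hypothetical): cluster same-label inputs, take each cluster's bounding box as a candidate safe region, and hand each region to a solver to check that the label is constant inside it.

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.load("inputs.npy")    # hypothetical (N, d) labeled inputs
y = np.load("labels.npy")    # hypothetical (N,) ground-truth labels

candidate_regions = []
for label in np.unique(y):
    pts = X[y == label]                     # assumes enough points per class
    km = KMeans(n_clusters=5).fit(pts)
    for c in range(km.n_clusters):
        members = pts[km.labels_ == c]
        # Candidate safe region: the axis-aligned bounding box of a cluster
        # of same-label points.
        candidate_regions.append((label, members.min(axis=0), members.max(axis=0)))

# Each box is then passed to a constraint solver / DNN verifier with the query
# "is there an input in [lo, hi] that the network labels differently?".
# `region_is_safe` is a hypothetical helper wrapping such a verifier call:
# safe_regions = [r for r in candidate_regions if region_is_safe(network, *r)]
```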

Towards Scalable Verification of Deep Reinforcement Learning

This work presents the whiRL 2.0 tool, which implements a new approach for verifying complex properties of interest for DRL systems, and proposes techniques for performing k-induction and semi-automated invariant inference on such systems.
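For context, k-induction proves that a property P holds along all executions of a transition system (here, the closed-loop DRL system) by discharging two queries; the standard schema, in notation of my choosing rather than the paper's, is:

```latex
% I: initial states, T: transition relation, P: property to verify
\text{Base: } I(s_0) \land \bigwedge_{i=0}^{k-2} T(s_i, s_{i+1})
  \;\Rightarrow\; \bigwedge_{i=0}^{k-1} P(s_i)
\qquad
\text{Step: } \bigwedge_{i=0}^{k-1} \bigl( P(s_i) \land T(s_i, s_{i+1}) \bigr)
  \;\Rightarrow\; P(s_k)
```

If both queries hold for some k, then P holds in every reachable state of the system.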