veriFIRE: Verifying an Industrial, Learning-Based Wildfire Detection System
@inproceedings{Amir2022veriFIREVA,
  title={veriFIRE: Verifying an Industrial, Learning-Based Wildfire Detection System},
  author={Guy Amir and Ziv Freund and Guy Katz and Elad Mandelbaum and Idan Refaeli},
  booktitle={World Congress on Formal Methods},
  year={2022}
}
In this short paper, we present our ongoing work on the veriFIRE project -- a collaboration between industry and academia, aimed at using verification for increasing the reliability of a real-world, safety-critical system. The system we target is an airborne platform for wildfire detection, which incorporates two deep neural networks. We describe the system and its properties of interest, and discuss our attempts to verify the system's consistency, i.e., its ability to continue and correctly…
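As a rough illustration only (our own sketch, not the formalization used in the veriFIRE paper), consistency properties of this kind are typically posed to a DNN verifier as queries of the following form, where N denotes the detection network, x an input on which a fire is correctly detected, and \Delta a bounded set of perturbations modeling, e.g., changes in the platform's altitude or viewing conditions (all symbols here are our own notation, introduced for illustration):

  \forall \delta \in \Delta : \quad \arg\max_i N_i(x + \delta) \;=\; \arg\max_i N_i(x)

A verifier then searches for a counterexample \delta \in \Delta that violates this condition; if none exists, the network is provably consistent over that input region.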
3 Citations
Verifying Generalization in Deep Learning
- Computer Science, ArXiv
- 2023
This work puts forth a novel objective for formal verification, with the potential for mitigating the risks associated with deploying DNN-based systems in the wild, and establishes the usefulness of the approach, and, in particular, its superiority over gradient-based methods.
Towards Formal XAI: Formally Approximate Minimal Explanations of Neural Networks
- Computer Science
- 2022
This work suggests an efficient, verification-based method for finding minimal explanations, which constitute a provable approximation of the global, minimum explanation, and proposes heuristics that significantly improve the scalability of the verification process.
Verifying Learning-Based Robotic Navigation Systems
- Computer Science, ArXiv
- 2022
This work is the first to demonstrate the use of DNN verification backends for recognizing suboptimal DRL policies in real-world robots, and for filtering out unwanted policies.
References
Showing 1-10 of 50 references
Toward Scalable Verification for Safety-Critical Deep Networks
- Computer Science, ArXiv
- 2018
The increasing use of deep neural networks in safety-critical applications, such as autonomous driving and flight control, raises concerns about their safety and reliability; this work addresses that difficulty by developing scalable verification techniques and by identifying design choices that result in deep learning systems that are more amenable to verification.
Neural Network Verification with Proof Production
- Computer Science, 2022 Formal Methods in Computer-Aided Design (FMCAD)
- 2022
This work presents a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors.
Neural Network Verification using Residual Reasoning
- Computer Science, SEFM
- 2022
This paper presents an enhancement to abstraction-based verification of neural networks by using residual reasoning: the process of utilizing information acquired when verifying an abstract network in order to expedite the verification of a refined network.
Minimal Multi-Layer Modifications of Deep Neural Networks
- Computer Science, NSV/FoMLAS@CAV
- 2022
The novel repair procedure implemented in 3M-DNN computes a modification to the network’s weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a backend, black-box DNN verification engine.
An Abstraction-Based Framework for Neural Network Verification
- Computer Science, CAV
- 2020
A framework that enhances neural network verification techniques by using over-approximation to reduce the size of the network, thus making it more amenable to verification; the framework can be integrated with many existing verification techniques.
Neural Network Robustness as a Verification Property: A Principled Case Study
- Computer Science, CAV
- 2022
This paper sets up general principles for the empirical analysis and evaluation of a network’s robustness as a mathematical property — during the network's training phase, its verification, and after its deployment.
An SMT-Based Approach for Verifying Binarized Neural Networks
- Computer Science, TACAS
- 2021
Various optimizations, integrated into the authors' SMT procedure as deduction steps, are proposed for verifying binarized neural networks, along with an approach for parallelizing verification queries.
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
- Computer Science, CAV
- 2017
Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
DeepSafe: A Data-Driven Approach for Assessing Robustness of Neural Networks
- Computer Science, ATVA
- 2018
This work proposes DeepSafe, a novel approach for automatically assessing the overall robustness of a neural network, which applies clustering over known labeled data and leverages off-the-shelf constraint solvers to automatically identify and check safe regions in which the network is robust.
Towards Scalable Verification of Deep Reinforcement Learning
- Computer Science, 2021 Formal Methods in Computer Aided Design (FMCAD)
- 2021
This work presents the whiRL 2.0 tool, which implements a new approach for verifying complex properties of interest for DRL systems, and proposes techniques for performing k-induction and semi-automated invariant inference on such systems.