Verification-Aided Deep Ensemble Selection
@article{Amir2022VerificationAidedDE,
  title   = {Verification-Aided Deep Ensemble Selection},
  author  = {Guy Amir and Guy Katz and Michael Schapira},
  journal = {2022 Formal Methods in Computer-Aided Design (FMCAD)},
  year    = {2022},
  pages   = {27-37}
}
Deep neural networks (DNNs) have become the technology of choice for realizing a variety of complex tasks. However, as highlighted by many recent studies, even an imperceptible perturbation to a correctly classified input can lead to misclassification by a DNN. This renders DNNs vulnerable to strategic input manipulations by attackers, and also over-sensitive to environmental noise. To mitigate this phenomenon, practitioners apply joint classification by an ensemble of DNNs. By aggregating the…
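The aggregation the truncated abstract refers to is typically a voting or averaging rule over the members' outputs. As a minimal sketch of joint classification, assuming simple majority voting (the names and the voting rule here are illustrative; the paper is about selecting the ensemble, not about a particular aggregator):

```python
import numpy as np

def ensemble_predict(members, x):
    """Joint classification of input x by an ensemble of DNNs.

    Each member maps an input to a vector of class scores; the
    ensemble returns the majority-vote label, with ties broken
    toward the lowest class index.
    """
    votes = np.array([int(np.argmax(m(x))) for m in members])
    return int(np.argmax(np.bincount(votes)))
```

The intuition is that an attacker must now fool a majority of the members simultaneously, and an imperceptible perturbation that transfers across all of them is harder to find.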
8 Citations
Verifying Generalization in Deep Learning
- Computer Science · ArXiv, 2023
This work puts forth a novel objective for formal verification, with the potential to mitigate the risks of deploying DNN-based systems in the wild, and establishes the usefulness of the approach and, in particular, its superiority over gradient-based methods.
veriFIRE: Verifying an Industrial, Learning-Based Wildfire Detection System
- Computer Science, Environmental Science · FM, 2023
This paper presents ongoing work on the veriFIRE project, a collaboration between industry and academia aimed at using verification to increase the reliability of a real-world, safety-critical system.
Towards Formal Approximated Minimal Explanations of Neural Networks
- Computer Science · ArXiv, 2022
This work is a step toward leveraging verification technology to produce DNNs that are more reliable and comprehensible, and recommends the use of bundles, which allow for more succinct and interpretable explanations.
Towards Formal XAI: Formally Approximate Minimal Explanations of Neural Networks
- Computer Science, 2022
This work suggests an efficient, verification-based method for finding minimal explanations, which constitute a provable approximation of the global minimum explanation, and proposes heuristics that significantly improve the scalability of the verification process.
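Both explanation papers above revolve around the same loop: treat a DNN verifier as an oracle and greedily shrink the set of fixed input features until no further feature can be freed without permitting a misclassification. A minimal sketch, assuming a hypothetical is_robust(fixed) callable that wraps the verifier:

```python
def minimal_explanation(features, is_robust):
    """Deletion-based search for a minimal explanation.

    features: indices of the input features, all initially fixed.
    is_robust(fixed): hypothetical verifier oracle; True iff the
    network's classification cannot change while the features in
    `fixed` keep their original values and all others range freely.
    The result is subset-minimal: freeing any remaining feature
    would allow a misclassification.
    """
    fixed = set(features)
    for f in list(features):
        if is_robust(fixed - {f}):  # f is not needed for the guarantee
            fixed.remove(f)
    return fixed
```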
Tighter Abstract Queries in Neural Network Verification
- Computer Science · ArXiv, 2022
This work presents CEGARETTE, a novel verification mechanism in which both the system and the property are abstracted and refined simultaneously, allowing for quick verification times while avoiding a large number of refinement steps.
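For background, the classic counterexample-guided abstraction-refinement (CEGAR) loop that CEGARETTE builds on looks roughly as follows; per the summary above, its novelty is abstracting and refining the property alongside the network. All function names here are illustrative:

```python
def cegar(network, prop, abstract, refine, verify, is_spurious):
    """Generic CEGAR loop (a background sketch, not CEGARETTE itself)."""
    abs_net = abstract(network)  # over-approximates the network
    while True:
        safe, cex = verify(abs_net, prop)
        if safe:
            return "SAFE"  # sound for the original network too
        if not is_spurious(network, prop, cex):
            return "UNSAFE", cex  # genuine counterexample
        abs_net = refine(abs_net, cex)  # rule out the spurious one
```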
On Optimizing Back-Substitution Methods for Neural Network Verification
- Computer Science · 2022 Formal Methods in Computer-Aided Design (FMCAD)
This paper presents an approach for making back-substitution produce tighter bounds, by formulating and then minimizing the imprecision errors incurred during back-substitution.
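To see why back-substitution produces tighter bounds than plain interval propagation, consider a toy example of ours (not from the paper): an input t in [0, 1] feeding two neurons x_1 = t and x_2 = t, followed by y = x_1 - x_2:

```latex
% Interval propagation treats x_1 and x_2 as independent:
y = x_1 - x_2 \in [0,1] - [0,1] = [-1,\,1].
% Back-substituting the symbolic definitions down to the input layer:
y = x_1 - x_2 = t - t = 0 \quad\Rightarrow\quad y \in [0,\,0].
```

The paper's contribution concerns the relaxation errors that accumulate when such substitutions must approximate non-linear activations.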
Neural Network Verification with Proof Production
- Computer Science · 2022 Formal Methods in Computer-Aided Design (FMCAD)
This work presents a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors.
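For linear constraints, a standard shape for such an easy-to-check witness is given by Farkas' lemma (stated here as general background; the paper's mechanism must additionally handle the piecewise-linear ReLU constraints):

```latex
% If A x \le b is infeasible, there exists a certificate vector y with
y \ge 0, \qquad y^{\top} A = 0^{\top}, \qquad y^{\top} b < 0,
% which an independent checker validates with one matrix-vector
% product and a sign test -- far simpler than re-running the verifier.
```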
Verifying Learning-Based Robotic Navigation Systems
- Computer Science · ArXiv, 2022
This work is the first to demonstrate the use of DNN verification backends for recognizing suboptimal DRL policies in real-world robots, and for filtering out unwanted policies.
References
Showing 1-10 of 103 references
An abstract domain for certifying neural networks
- Computer Science · Proc. ACM Program. Lang., 2019
This work proposes a new abstract domain that combines floating-point polyhedra with intervals and is equipped with abstract transformers specifically tailored to the setting of neural networks, including new transformers for affine transforms, the rectified linear unit, sigmoid, tanh, and maxpool functions.
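As an illustration of such a tailored transformer, the standard triangle relaxation of an unstable ReLU is shown below (common background; the paper's DeepPoly domain uses a close variant that keeps a single parametric lower bound):

```latex
% For y = ReLU(x) with pre-activation bounds l < 0 < u:
y \ge 0, \qquad y \ge x, \qquad y \le \frac{u\,(x - l)}{u - l}.
% This is the tightest convex region containing the ReLU graph on [l, u].
```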
Adversarial Attacks on Neural Network Policies
- Computer Science · ICLR, 2017
This work shows that existing adversarial-example-crafting techniques can be used to significantly degrade the test-time performance of trained policies, even with small adversarial perturbations that do not interfere with human perception.
Adversarial examples in the physical world
- Computer Science · ICLR, 2017
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through a camera, showing that machine learning systems are vulnerable to adversarial examples even in physical-world scenarios.
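The best-known crafting technique, and one of those evaluated in this physical-world study, is the fast gradient sign method (FGSM). A minimal PyTorch sketch (variable names are ours; assumes inputs scaled to [0, 1]):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step fast gradient sign attack (Goodfellow et al., 2015).

    Perturbs each input coordinate by +/- eps in the direction that
    increases the classification loss; small eps keeps the change
    imperceptible to humans.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```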
Constrained Reinforcement Learning for Robotics via Scenario-Based Programming
- Computer Science · ArXiv, 2022
This paper presents a novel technique for incorporating domain-expert knowledge into a constrained DRL training loop that exploits the scenario-based programming paradigm, which is designed to allow specifying such knowledge in a simple and intuitive way.
Neural Network Verification with Proof Production
- Computer Science · 2022 Formal Methods in Computer-Aided Design (FMCAD)
This work presents a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors.
Verifying Learning-Based Robotic Navigation Systems
- Computer Science · ArXiv, 2022
This work is the first to demonstrate the use of DNN verification backends for recognizing suboptimal DRL policies in real-world robots, and for filtering out unwanted policies.
Efficient Neural Network Analysis with Sum-of-Infeasibilities
- Computer Science · TACAS, 2022
It is demonstrated that SoI significantly improves the performance of an existing complete search procedure and can improve upon the perturbation bound derived by a recent adversarial attack algorithm.
An Abstraction-Refinement Approach to Verifying Convolutional Neural Networks
- Computer Science · ATVA, 2022
The core of Cnn-Abs is an abstraction-refinement technique that simplifies the verification problem by removing convolutional connections in a way that soundly over-approximates the original problem, and restores these connections if the resulting problem becomes too abstract.
Diversity Matters When Learning From Ensembles
- Computer Science, Environmental Science · NeurIPS, 2021
A perturbation strategy for distillation is proposed that reveals diversity by seeking inputs for which ensemble member outputs disagree, leading to improved performance of a model distilled with such perturbed samples.
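The disagreement search summarized above can be pictured as gradient ascent on a divergence measure over the members' outputs. A hedged sketch of the idea (our illustration, using logit variance as the objective; the paper's exact strategy may differ):

```python
import torch

def disagreement_inputs(members, x, steps=10, lr=0.01):
    """Perturb x to amplify disagreement among ensemble members."""
    x = x.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = torch.stack([m(x) for m in members])  # (k, batch, classes)
        (-logits.var(dim=0).mean()).backward()  # ascend on variance
        opt.step()
    return x.detach()
```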
Minimal Multi-Layer Modifications of Deep Neural Networks
- Computer Science · NSV/FoMLAS@CAV, 2022
The novel repair procedure implemented in 3M-DNN computes a modification to the network’s weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a backend, black-box DNN verification engine.