• Corpus ID: 229340613

# Incremental Verification of Fixed-Point Implementations of Neural Networks

@article{Sena2020IncrementalVO,
  title={Incremental Verification of Fixed-Point Implementations of Neural Networks},
  author={Luiz Sena and Erickson H. da S. Alves and Iury V. Bessa and Eddie Batista de Lima Filho and Lucas C. Cordeiro},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.11220}
}
• Published 21 December 2020
• Computer Science
• ArXiv
Implementations of artificial neural networks (ANNs) might lead to failures that are hard to predict in the design phase, since ANNs are highly parallel and their parameters are barely interpretable. Here, we develop and evaluate a novel symbolic verification framework using incremental bounded model checking (BMC), satisfiability modulo theories (SMT), and invariant inference to obtain adversarial cases and validate coverage methods in a multi-layer perceptron (MLP). We exploit incremental…
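The gap between a real-valued ANN and its fixed-point implementation can be illustrated with a minimal sketch (all function names and parameters below are hypothetical, not the authors' code): quantizing weights, inputs, and intermediate sums to a fixed-point format introduces rounding error that a verifier must account for.

```python
# Minimal sketch of fixed-point quantization in one MLP neuron
# (hypothetical illustration; not the framework described in the paper).

def to_fixed(x, frac_bits=8):
    """Quantize a real value to fixed point with `frac_bits`
    fractional bits, rounding to the nearest representable value."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def relu(x):
    return max(0.0, x)

def neuron(weights, bias, inputs, frac_bits=None):
    """One ReLU neuron; if frac_bits is set, emulate fixed-point arithmetic
    by quantizing every operand and every intermediate accumulation."""
    q = (lambda v: to_fixed(v, frac_bits)) if frac_bits else (lambda v: v)
    acc = q(bias)
    for w, x in zip(weights, inputs):
        acc = q(acc + q(w) * q(x))
    return relu(acc)

w, b, x = [0.33, -0.71], 0.05, [0.9, 0.4]
exact = neuron(w, b, x)
fixed = neuron(w, b, x, frac_bits=4)
print(abs(exact - fixed))  # rounding error a verifier must bound
```

With only 4 fractional bits the two outputs already diverge; a symbolic verifier reasons over the quantized semantics directly rather than over the idealized real-valued model.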

## References

Showing 1-10 of 50 references.

• 2019 IX Brazilian Symposium on Computing Systems Engineering (SBESC), 2019. This paper presents the first symbolic verification framework to reason over ANNs implemented in CUDA; experimental results show that the approach implemented in ESBMC-GPU can successfully verify safety properties and coverage methods in ANNs and correctly generate 28 adversarial cases in MLPs.
• CAV, 2017. Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
• ArXiv, 2018. Introduces DeepCheck, a new approach for validating DNNs based on core ideas from program analysis, specifically symbolic execution: a DNN is translated into an imperative program, thereby enabling program analysis to assist with DNN validation.
• NeurIPS, 2018. This paper presents a new, efficient approach for rigorously checking different safety properties of neural networks that outperforms existing approaches by multiple orders of magnitude. The authors believe that this approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training of more robust ones.
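The output-bound estimation this entry describes can be sketched with naive interval arithmetic over a single ReLU layer (a simplified illustration with made-up weights; real tools compute far tighter bounds):

```python
# Naive interval bound propagation through one ReLU layer
# (illustrative sketch only; not the cited paper's algorithm).

def layer_bounds(weights, bias, lo, hi):
    """Propagate per-input intervals [lo, hi] through y = ReLU(W x + b).
    weights: list of rows of W; returns (out_lo, out_hi) per neuron."""
    out_lo, out_hi = [], []
    for row, b in zip(weights, bias):
        lb, ub = b, b
        for w, l, h in zip(row, lo, hi):
            # a positive weight attains its min at l and max at h;
            # a negative weight flips the two endpoints
            lb += w * l if w >= 0 else w * h
            ub += w * h if w >= 0 else w * l
        out_lo.append(max(0.0, lb))  # ReLU clips negative lower bounds
        out_hi.append(max(0.0, ub))
    return out_lo, out_hi

W = [[1.0, -2.0], [0.5, 0.5]]
b = [0.0, -1.0]
lo, hi = layer_bounds(W, b, lo=[0.0, 0.0], hi=[1.0, 1.0])
print(lo, hi)
```

Chaining this over layers yields sound but increasingly loose output bounds, which is exactly the looseness tighter methods like the one summarized above aim to reduce.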
• From Reactive Systems to Cyber-Physical Systems, 2019. This work presents a framework based on Satisfiability Modulo Theory (SMT) solvers to quantify the robustness of neural networks to parameter perturbation, and shows that Rectified Linear Unit (ReLU) activation results in higher robustness than linear activations for the authors' MLPs.
• ICML, 2019. This work develops coverage-guided fuzzing methods for neural networks that are well suited to discovering errors which occur only for rare inputs, and describes how fast approximate nearest neighbor algorithms can provide the required coverage metric.
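The coverage metric mentioned here can be sketched with exact nearest-neighbor distances over activation vectors (a hypothetical simplification: the cited work uses fast approximate nearest neighbors, and the radius and class names below are invented for illustration):

```python
# Sketch of nearest-neighbor-based coverage for fuzzing neural networks
# (hypothetical simplification of the approximate-NN metric described above).
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class Coverage:
    """An input adds coverage when its activation vector lies farther
    than `radius` from every activation vector seen so far."""
    def __init__(self, radius):
        self.radius = radius
        self.corpus = []

    def is_novel(self, activations):
        return all(euclidean(activations, c) > self.radius
                   for c in self.corpus)

    def add(self, activations):
        """Keep the input only if it exercised new network behavior."""
        if self.is_novel(activations):
            self.corpus.append(activations)
            return True
        return False

cov = Coverage(radius=0.5)
print(cov.add([0.0, 0.0]))  # first point: always novel
print(cov.add([0.1, 0.1]))  # within radius of an existing point
print(cov.add([1.0, 1.0]))  # far from everything seen so far
```

The fuzzer mutates inputs and retains those for which `add` returns `True`, steering generation toward rare activation patterns.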
• ArXiv, 2017. A novel approach for automatically identifying safe regions of the input space, within which the network is robust against adversarial perturbations. It relies on clustering to identify well-defined geometric regions as candidate safe regions, and introduces the notion of targeted robustness, which ensures that a network does not map any input in the region to the target label.
• ArXiv, 2019. This work presents techniques for automatically inferring invariant properties of feed-forward neural networks: input invariants, expressed as convex predicates on the input space, and layer invariants, which represent features captured in the hidden layers.
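The input invariants described here can be sketched as simple box (interval) predicates inferred from samples sharing a label (a toy illustration; the cited work infers richer convex predicates, and the sample data below is invented):

```python
# Toy inference of a box-shaped input invariant from labeled samples
# (illustrative only; real invariant inference produces convex predicates).

def infer_box(samples):
    """Smallest axis-aligned box containing all samples."""
    dims = range(len(samples[0]))
    lo = [min(s[i] for s in samples) for i in dims]
    hi = [max(s[i] for s in samples) for i in dims]
    return lo, hi

def satisfies(box, x):
    """Check the invariant predicate lo <= x <= hi componentwise."""
    lo, hi = box
    return all(l <= v <= h for l, v, h in zip(lo, x, hi))

# inputs that a (hypothetical) network maps to the same class
same_class = [[0.1, 0.8], [0.3, 0.9], [0.2, 0.7]]
box = infer_box(same_class)
print(box)                          # ([0.1, 0.7], [0.3, 0.9])
print(satisfies(box, [0.2, 0.8]))   # True: inside the box
```

A candidate invariant like this would then be checked (e.g. by a solver) to confirm that every input in the box really receives the same classification.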
• CAV, 2017. A novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT), which defines safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image.
• NIPS, 2017. A convergence analysis of SGD on a rich subset of two-layer feed-forward networks with ReLU activations, characterized by a special structure called "identity mapping". It proves that, if the input follows a Gaussian distribution, then with standard $O(1/\sqrt{d})$ initialization of the weights, SGD converges to the global minimum in a polynomial number of steps.