Corpus ID: 229340613

Incremental Verification of Fixed-Point Implementations of Neural Networks

@article{Sena2020IncrementalVO,
  title={Incremental Verification of Fixed-Point Implementations of Neural Networks},
  author={Luiz Sena and Erickson H. da S. Alves and Iury V. Bessa and Eddie Batista de Lima Filho and Lucas C. Cordeiro},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.11220}
}
Implementations of artificial neural networks (ANNs) might lead to failures that are hard to predict in the design phase, since ANNs are highly parallel and their parameters are barely interpretable. Here, we develop and evaluate a novel symbolic verification framework using incremental bounded model checking (BMC), satisfiability modulo theories (SMT), and invariant inference to obtain adversarial cases and validate coverage methods in a multi-layer perceptron (MLP). We exploit incremental… 
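To give a flavour of the kind of query such a framework discharges, the sketch below encodes a toy one-neuron MLP and an adversarial-case search with the z3 SMT solver's Python API. This is a minimal illustration, not the authors' tool: the weights, property, and epsilon are invented, and real arithmetic stands in for the bit-precise fixed-point encoding the paper targets.

```python
# Minimal sketch, assuming the z3 Python API (pip install z3-solver).
# Toy 1-neuron MLP with invented weights; real arithmetic stands in for
# the paper's bit-precise fixed-point semantics.
from z3 import Real, Solver, If, sat

x, h, y = Real("x"), Real("h"), Real("y")

s = Solver()
s.add(h == If(2 * x - 1 > 0, 2 * x - 1, 0))  # hidden neuron: ReLU(2x - 1)
s.add(y == 1 - h)                            # linear output layer

x0, eps = 1.0, 0.25                          # reference input and perturbation bound
s.add(x >= x0 - eps, x <= x0 + eps)
s.add(y < 0)                                 # negation of the safety property y >= 0

if s.check() == sat:
    print("adversarial case:", s.model())    # concrete counterexample
else:
    print("property holds on this input region")
```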

References


Incremental Bounded Model Checking of Artificial Neural Networks in CUDA

This paper presents the first symbolic verification framework to reason over ANNs implemented in CUDA; experimental results show that the approach, implemented in ESBMC-GPU, can successfully verify safety properties and covering methods in ANNs and correctly generate 28 adversarial cases in MLPs.

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
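Reluplex handles ReLU constraints lazily inside a modified simplex procedure; the sketch below shows only the underlying case-split idea, eagerly enumerating activation phases of a toy network with z3. The weights, bounds, and property are all assumptions for illustration.

```python
# Sketch of the case-split idea only: fix each ReLU to its active/inactive
# phase (making the network linear) and check every phase combination with z3.
# Reluplex itself splits lazily; eager enumeration like this is exponential.
from itertools import product
from z3 import Real, Solver, sat

x, h1, h2, y = Real("x"), Real("h1"), Real("h2"), Real("y")
pre1, pre2 = x + 1, 2 * x - 1                 # toy pre-activations

def phase(pre, post, active):
    # active: post == pre with pre >= 0; inactive: post == 0 with pre <= 0
    return [post == pre, pre >= 0] if active else [post == 0, pre <= 0]

for a1, a2 in product([True, False], repeat=2):
    s = Solver()
    s.add(*phase(pre1, h1, a1), *phase(pre2, h2, a2))
    s.add(y == h1 - h2)
    s.add(x >= -1, x <= 1, y < 0)             # look for a property violation
    if s.check() == sat:
        print("violation in phase", (a1, a2), ":", s.model())
        break
else:
    print("no phase admits a violation: property holds on [-1, 1]")
```

Eager enumeration grows exponentially in the number of ReLUs, which is exactly the blow-up Reluplex's lazy splitting is designed to avoid.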

Symbolic Execution for Deep Neural Networks

This work introduces DeepCheck, a new approach for validating DNNs that draws on core ideas from program analysis: it uses symbolic execution to translate a DNN into an imperative program, thereby enabling established analysis techniques to assist with DNN validation.
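The following is a hedged sketch of what such a translation might produce for a hand-picked 2-2-1 MLP with invented weights: each ReLU becomes an explicit branch, so a symbolic-execution engine can enumerate activation patterns as program paths.

```python
# Hypothetical output of a DNN-to-program translation for a toy 2-2-1 MLP:
# every ReLU becomes an if-branch, so each feasible path corresponds to one
# activation pattern and its path constraint characterizes the inputs taking it.
def mlp_as_program(x1: float, x2: float) -> float:
    n1 = 0.5 * x1 - 1.0 * x2 + 0.1       # hidden neuron 1 pre-activation
    if n1 < 0.0:                         # ReLU branch (inactive phase)
        n1 = 0.0
    n2 = -0.3 * x1 + 0.8 * x2            # hidden neuron 2 pre-activation
    if n2 < 0.0:
        n2 = 0.0
    return 1.2 * n1 - 0.7 * n2 + 0.05    # linear output layer

print(mlp_as_program(1.0, 0.0))
```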

Efficient Formal Safety Analysis of Neural Networks

This paper presents a new, efficient approach for rigorously checking different safety properties of neural networks that outperforms existing approaches by multiple orders of magnitude. The authors believe that their approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training of more robust ones.
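As a rough illustration of output-bound estimation, the sketch below implements naive interval bound propagation with numpy, a much looser relative of the symbolic interval analysis this line of work refines; the weights and input region are placeholders.

```python
# Naive interval bound propagation with numpy; weights are random placeholders.
import numpy as np

def affine_bounds(lo, hi, W, b):
    # split W by sign so each output bound pairs with the right input endpoint
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])   # input region
lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)       # ReLU is monotone
lo, hi = affine_bounds(lo, hi, W2, b2)
print("sound output bounds:", lo, hi)   # e.g. check that lo > 0 for safety
```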

Robustness of Neural Networks to Parameter Quantization

This work presents a framework based on Satisfiability Modulo Theories (SMT) solvers to quantify the robustness of neural networks to parameter perturbation, and shows that the Rectified Linear Unit (ReLU) activation results in higher robustness than linear activations for the MLPs studied.
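The paper's formulation is SMT-based; the sketch below only illustrates the underlying question empirically, quantizing placeholder weights to fixed point and measuring the output deviation on a single input.

```python
# Empirical proxy only (the paper encodes this as an SMT query): quantize
# placeholder weights to fixed point and measure the output deviation.
import numpy as np

def to_fixed(w, frac_bits=8):
    scale = 2.0 ** frac_bits
    return np.round(w * scale) / scale   # round-to-nearest fixed point

rng = np.random.default_rng(1)
W, b, x = rng.normal(size=(3, 4)), rng.normal(size=3), rng.normal(size=4)

relu = lambda z: np.maximum(z, 0.0)
y_float = relu(W @ x + b)
y_fixed = relu(to_fixed(W) @ to_fixed(x) + to_fixed(b))
print("max output deviation:", np.abs(y_float - y_fixed).max())
```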

TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing

This work develops coverage-guided fuzzing methods for neural networks that are well-suited to discovering errors which occur only for rare inputs, and describes how fast approximate nearest-neighbor algorithms can provide the underlying coverage metric.
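A simplified sketch of the coverage loop follows, with exact nearest-neighbor distances standing in for the fast approximate lookup the paper describes; the toy network, mutation scheme, and threshold are all assumptions.

```python
# Simplified coverage-guided fuzzing loop; exact distances stand in for the
# approximate nearest-neighbor index, and the network/threshold are invented.
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(8, 4))                     # toy one-layer "network"
activations = lambda x: np.maximum(W @ x, 0.0)

corpus = [rng.normal(size=4)]
seen = [activations(corpus[0])]
THRESHOLD = 2.0                                  # coverage radius (assumed)

for _ in range(200):
    parent = corpus[rng.integers(len(corpus))]   # pick a seed
    child = parent + 0.1 * rng.normal(size=4)    # random mutation
    act = activations(child)
    if min(np.linalg.norm(act - a) for a in seen) > THRESHOLD:
        corpus.append(child)                     # new coverage: keep the input
        seen.append(act)

print("corpus size after fuzzing:", len(corpus))
```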

DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks

This work proposes a novel approach for automatically identifying safe regions of the input space, within which the network is robust against adversarial perturbations. It relies on clustering to identify well-defined geometric regions as candidate safe regions and introduces the notion of targeted robustness, which ensures that the network does not map any input in a region to a given target label.
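A rough sketch of the region-finding step on synthetic data: per-class centroid balls shrunk to exclude other-label points serve as candidate safe regions, which a verifier would then check for targeted robustness. Everything below (the data, the geometry, the halving heuristic) is illustrative rather than the paper's clustering procedure.

```python
# Synthetic illustration of candidate safe regions: one centroid ball per
# class, shrunk to exclude other-label points (the halving is a heuristic).
import numpy as np

rng = np.random.default_rng(4)
X0 = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))   # class 0 samples
X1 = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2))   # class 1 samples

for label, (own, other) in enumerate([(X0, X1), (X1, X0)]):
    centroid = own.mean(axis=0)
    radius = np.linalg.norm(other - centroid, axis=1).min() / 2
    print(f"class {label}: candidate safe ball at {centroid}, radius {radius:.2f}")
```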

Finding Invariants in Deep Neural Networks

This work presents techniques for automatically inferring invariant properties of feed-forward neural networks: input invariants are extracted as convex predicates on the input space, and layer invariants represent features captured in the hidden layers.
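One ingredient of such inference can be sketched cheaply: sample a region, record each hidden neuron's on/off phase, and keep the neurons whose phase never changes as candidate layer invariants; a sound tool would then verify the candidates, e.g. with an SMT query. The network, region, and sample count below are made up.

```python
# Candidate layer invariants from sampled activation patterns; a real tool
# would verify these candidates before trusting them.
import numpy as np

rng = np.random.default_rng(3)
W, b = rng.normal(size=(6, 2)), rng.normal(size=6)

# sample a small box around the point (1, -1)
samples = np.array([1.0, -1.0]) + rng.uniform(-0.05, 0.05, size=(500, 2))
patterns = (samples @ W.T + b) > 0            # boolean on/off phase per neuron

for i in range(W.shape[0]):
    if patterns[:, i].all():
        print(f"candidate invariant: neuron {i} always active on the region")
    elif (~patterns[:, i]).all():
        print(f"candidate invariant: neuron {i} always inactive on the region")
```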

Safety Verification of Deep Neural Networks

This work develops a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theories (SMT); it defines safety for an individual decision as invariance of the classification within a small neighbourhood of the original image.
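The core neighbourhood-invariance query can be phrased directly in SMT, as in the sketch below with z3 and a toy two-output linear classifier; the paper's framework adds layer-by-layer search and refinement on top of such queries. The weights, reference point, and epsilon are illustrative.

```python
# Neighbourhood-invariance query in z3 for a toy two-output linear classifier;
# weights, reference point, and epsilon are illustrative.
from z3 import Real, Solver, sat

x1, x2 = Real("x1"), Real("x2")
y0 = 1.0 * x1 + 0.2 * x2          # score for class 0
y1 = 0.4 * x1 + 0.9 * x2          # score for class 1

s = Solver()
eps = 0.1
for v, c in [(x1, 1.0), (x2, 0.0)]:           # eps-ball around x0 = (1, 0)
    s.add(v >= c - eps, v <= c + eps)
s.add(y1 > y0)                                 # does the classification flip?

if s.check() == sat:
    print("classification flips:", s.model())
else:
    print("decision invariant on the neighbourhood: safe")
```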

Convergence Analysis of Two-layer Neural Networks with ReLU Activation

This work provides a convergence analysis of SGD on a rich subset of two-layer feedforward networks with ReLU activations, characterized by a special structure called "identity mapping". It proves that, if the input follows a Gaussian distribution and the weights use the standard $O(1/\sqrt{d})$ initialization, SGD converges to the global minimum in a polynomial number of steps.