NEUROSPF: A Tool for the Symbolic Analysis of Neural Networks

@article{Usman2021NEUROSPFAT,
  title={NEUROSPF: A Tool for the Symbolic Analysis of Neural Networks},
  author={Muhammad Usman and Yannic Noller and Corina S. Pasareanu and Youcheng Sun and Divya Gopinath},
  journal={2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)},
  year={2021},
  pages={25-28}
}
  • Muhammad Usman, Yannic Noller, Corina S. Pasareanu, Youcheng Sun, Divya Gopinath
  • Published 22 January 2021
  • Computer Science
  • 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)
This paper presents NEUROSPF, a tool for the symbolic analysis of neural networks. Given a trained neural network model, the tool extracts the architecture and model parameters and translates them into a Java representation that is amenable to analysis with the Symbolic PathFinder symbolic execution tool. Notably, NEUROSPF encodes specialized peer classes for parsing the model's parameters, thereby enabling efficient analysis. With NEUROSPF the user has the flexibility to specify either the…
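
The generated Java is not shown on this page; purely as an illustration, a translated model could take roughly the following shape, with layer parameters held in plain arrays (in the tool these are parsed by peer classes) and the ReLU branches made explicit so that Symbolic PathFinder can enumerate activation patterns as path conditions. All names and weights below are hypothetical, not NEUROSPF's actual output.

```java
// Hypothetical sketch of a translated model: NOT NEUROSPF's actual
// generated code. A tiny fully-connected layer with explicit ReLU
// branching, so a symbolic executor such as Symbolic PathFinder can
// explore activation patterns as path conditions.
public class TranslatedModel {
    // Parameters would normally be parsed from the stored model by a
    // peer class; hard-coded here to keep the sketch self-contained.
    static final double[][] W1 = {{0.5, -1.2}, {0.8, 0.3}};
    static final double[]   B1 = {0.1, -0.4};
    static final double[]   W2 = {1.0, -0.7};
    static final double     B2 = 0.2;

    public static double forward(double x0, double x1) {
        double[] in = {x0, x1};
        double[] hidden = new double[W1.length];
        for (int j = 0; j < W1.length; j++) {
            double sum = B1[j];
            for (int i = 0; i < in.length; i++) {
                sum += W1[j][i] * in[i];
            }
            // Explicit branch: under symbolic execution each side adds
            // a constraint (sum > 0 or sum <= 0) to the path condition.
            if (sum > 0) {
                hidden[j] = sum;
            } else {
                hidden[j] = 0;
            }
        }
        double out = B2;
        for (int j = 0; j < hidden.length; j++) {
            out += W2[j] * hidden[j];
        }
        return out;
    }

    public static void main(String[] args) {
        // Concrete run; under SPF, x0 and x1 would be marked symbolic.
        System.out.println(forward(0.3, -0.5));
    }
}
```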

Citations

Coverage-Guided Testing for Recurrent Neural Networks
TLDR
A coverage-guided testing approach for a major class of RNNs, long short-term memory networks (LSTMs); this is the first time structural coverage metrics have been used to test LSTMs, an important step towards interpretable neural network testing.
NNrepair: Constraint-based Repair of Neural Network Classifiers
TLDR
NNrepair, a constraint-based technique for repairing neural network classifiers, first uses fault localization to find potentially faulty network parameters and then performs repair using constraint solving to apply small modifications to those parameters to remedy the defects.
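
Purely for intuition, the core step of constraint-based repair can be sketched with the Z3 Java bindings (com.microsoft.z3): give a suspect parameter a symbolic correction and ask the solver for a small value that fixes a failing test. The toy network, input, and bound below are invented and far simpler than NNrepair's actual pipeline.

```java
// Illustrative constraint-based repair in the spirit of NNrepair:
// NOT the tool's implementation. Uses the Z3 Java bindings.
import com.microsoft.z3.*;

public class RepairSketch {
    public static void main(String[] args) {
        Context ctx = new Context();
        // Toy one-weight "network": score(x) = w * x + b. With
        // w = -1.0, b = 0.1, the input x = 1.0 is scored negative
        // although its expected class is positive.
        ArithExpr x = ctx.mkReal(1, 1);     // failing input
        ArithExpr b = ctx.mkReal(1, 10);    // bias 0.1
        ArithExpr w = ctx.mkReal(-1, 1);    // suspect weight

        // Symbolic correction applied to the suspect weight.
        RealExpr delta = ctx.mkRealConst("delta");
        ArithExpr repaired = ctx.mkAdd(w, delta);
        ArithExpr score = ctx.mkAdd(ctx.mkMul(repaired, x), b);

        Solver s = ctx.mkSolver();
        // Repair goal: the failing input becomes positive.
        s.add(ctx.mkGt(score, ctx.mkReal(0)));
        // Keep the modification small (|delta| <= 2).
        s.add(ctx.mkLe(delta, ctx.mkReal(2)));
        s.add(ctx.mkGe(delta, ctx.mkReal(-2)));

        if (s.check() == Status.SATISFIABLE) {
            System.out.println("delta = " + s.getModel().eval(delta, false));
        } else {
            System.out.println("no small repair exists");
        }
    }
}
```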

References

SHOWING 1-10 OF 23 REFERENCES
Symbolic Execution for Importance Analysis and Adversarial Generation in Neural Networks
TLDR
DeepCheck implements novel techniques for lightweight symbolic analysis of DNNs and applies them to address two challenging problems in DNN analysis: identifying important input features and leveraging those features to create adversarial inputs.
Symbolic Execution for Attribution and Attack Synthesis in Neural Networks
TLDR
DeepCheck implements techniques for lightweight symbolic analysis of DNNs and applies them in the context of image classification to address two challenging problems: identification of important pixels for attribution and adversarial generation.
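
For intuition about the lightweight analysis both DeepCheck entries refer to: once the ReLU activation pattern of a concrete input is fixed, the network is an affine function of its input, so the folded linear coefficients give per-feature importance scores. The weights and the particular scoring choice below are illustrative, not taken from the paper.

```java
// Illustration of the observation DeepCheck builds on: with the ReLU
// activation pattern of a concrete input fixed, the output is affine
// in the input, so folded coefficients serve as importance scores.
// Weights below are made up.
public class AttributionSketch {
    static final double[][] W1 = {{0.9, -0.4, 0.2}, {-0.3, 0.7, 0.5}};
    static final double[]   B1 = {0.05, -0.1};
    static final double[]   W2 = {1.1, -0.8};

    public static void main(String[] args) {
        double[] x = {0.6, 0.2, 0.9};

        // 1. Record the activation pattern on the concrete input.
        boolean[] active = new boolean[W1.length];
        for (int j = 0; j < W1.length; j++) {
            double pre = B1[j];
            for (int i = 0; i < x.length; i++) pre += W1[j][i] * x[i];
            active[j] = pre > 0;
        }

        // 2. With the pattern fixed, fold the two layers into one
        //    linear map: coef[i] = sum_j W2[j] * [j active] * W1[j][i].
        double[] coef = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            for (int j = 0; j < W1.length; j++) {
                if (active[j]) coef[i] += W2[j] * W1[j][i];
            }
        }

        // 3. Rank features by |coef[i] * x[i]| (one plausible score).
        for (int i = 0; i < x.length; i++) {
            System.out.printf("feature %d importance %.4f%n",
                    i, Math.abs(coef[i] * x[i]));
        }
    }
}
```
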
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
TLDR
Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
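
Reluplex's contribution is a specialized simplex-based decision procedure with lazy ReLU case splitting; that machinery is not reproduced here. The sketch below only shows the style of query such verifiers answer, encoded naively with the Z3 Java bindings: a property holds exactly when its negation is unsatisfiable. The network and bounds are invented.

```java
// The kind of query a ReLU verifier like Reluplex answers, encoded
// naively with the Z3 Java bindings (Reluplex's contribution is a far
// more scalable decision procedure for such constraints).
import com.microsoft.z3.*;

public class ReluQuerySketch {
    public static void main(String[] args) {
        Context ctx = new Context();
        RealExpr x = ctx.mkRealConst("x");
        ArithExpr zero = ctx.mkReal(0);

        // One neuron: y = ReLU(1.5 * x + 0.3).
        ArithExpr pre = ctx.mkAdd(ctx.mkMul(ctx.mkReal(3, 2), x),
                                  ctx.mkReal(3, 10));
        ArithExpr y = (ArithExpr) ctx.mkITE(ctx.mkGe(pre, zero), pre, zero);

        // Property: for all x in [0, 1], y <= 2.
        // Check the negation: is there an x in [0, 1] with y > 2?
        Solver s = ctx.mkSolver();
        s.add(ctx.mkGe(x, zero));
        s.add(ctx.mkLe(x, ctx.mkReal(1)));
        s.add(ctx.mkGt(y, ctx.mkReal(2)));

        // UNSAT means no counterexample exists: the property is proved.
        if (s.check() == Status.UNSATISFIABLE) {
            System.out.println("property holds for all x in [0, 1]");
        } else {
            System.out.println("counterexample: " + s.getModel());
        }
    }
}
```
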
DeepFault: Fault Localization for Deep Neural Networks
TLDR
The DeepFault whitebox DNN testing approach employs suspiciousness measures inspired by fault localization to establish the hit spectrum of neurons and to identify suspicious neurons whose weights have not been calibrated correctly and are thus considered responsible for inadequate DNN performance.
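
As background for the hit-spectrum idea, a standard spectrum-based measure such as Ochiai can be lifted from program statements to neurons; the sketch below does this on invented activation data and is a simplification of DeepFault's setup.

```java
// Spectrum-based suspiciousness for neurons, in the spirit of
// DeepFault. The Ochiai measure is standard in fault localization:
//   susp(n) = ef / sqrt((ef + nf) * (ef + ep))
// where ef/ep count failing/passing tests in which neuron n was
// active and nf counts failing tests in which it was not.
// The activation spectrum and verdicts below are toy data.
public class SuspiciousnessSketch {
    public static void main(String[] args) {
        // active[t][n]: was neuron n active on test t?
        boolean[][] active = {
            {true,  false, true },
            {true,  true,  false},
            {false, true,  true },
            {true,  false, false},
        };
        boolean[] failed = {true, false, true, false};

        int neurons = active[0].length;
        for (int n = 0; n < neurons; n++) {
            int ef = 0, nf = 0, ep = 0;
            for (int t = 0; t < active.length; t++) {
                if (active[t][n]) {
                    if (failed[t]) ef++; else ep++;
                } else if (failed[t]) {
                    nf++;
                }
            }
            double denom = Math.sqrt((double) (ef + nf) * (ef + ep));
            double susp = denom == 0 ? 0 : ef / denom;
            System.out.printf("neuron %d: Ochiai = %.3f%n", n, susp);
        }
    }
}
```
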
Structural Test Coverage Criteria for Deep Neural Networks
TLDR
This paper proposes four novel test criteria that are tailored to structural features of DNNs and their semantics, and validates the criteria by demonstrating that the generated test inputs, guided by the coverage criteria, are able to capture undesirable behaviours in DNNs.
An Abstraction-Refinement Approach to Verification of Artificial Neural Networks
TLDR
A solution for verifying the safety of artificial neural networks using abstractions to Boolean combinations of linear arithmetic constraints; it is shown that whenever the abstract MLP is declared to be safe, the same holds for the concrete one.
MODE: automated neural network model debugging via state differential analysis and input selection
TLDR
This work proposes a novel model debugging technique that first conducts model state differential analysis to identify the internal features of the model that are responsible for model bugs, and then performs training input selection similar to program input selection in regression testing.
Guiding Deep Learning System Testing Using Surprise Adequacy
  • Jinhan Kim, R. Feldt, S. Yoo
  • Computer Science
    2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)
  • 2019
TLDR
A novel test adequacy criterion is proposed, called Surprise Adequacy for Deep Learning Systems (SADL), which is based on the behaviour of DL systems with respect to their training data, and it is shown that systematic sampling of inputs based on their surprise can improve the classification accuracy of DL systems against adversarial examples by up to 77.5% via retraining.
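
For intuition, one of SADL's two instantiations, Distance-based Surprise Adequacy (DSA), relates a new input's activation trace to the training traces: the distance to the nearest same-class trace, normalized by that neighbour's distance to the nearest trace of another class. The sketch below uses Euclidean distance and toy data.

```java
// Sketch of Distance-based Surprise Adequacy (DSA) from SADL:
//   DSA = dist(a, nearestSameClass)
//       / dist(nearestSameClass, nearestOtherClass)
// computed over activation traces. Traces and labels are toy data;
// the real metric uses a chosen layer's activations.
import java.util.Arrays;

public class SurpriseSketch {
    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i]-b[i]) * (a[i]-b[i]);
        return Math.sqrt(s);
    }

    static double[] nearest(double[][] traces, int[] labels,
                            double[] a, int cls, boolean sameClass) {
        double best = Double.MAX_VALUE;
        double[] arg = null;
        for (int i = 0; i < traces.length; i++) {
            if ((labels[i] == cls) != sameClass) continue;
            double d = dist(traces[i], a);
            if (d < best) { best = d; arg = traces[i]; }
        }
        return arg;
    }

    public static void main(String[] args) {
        double[][] train = {{0.1, 0.9}, {0.2, 0.8}, {0.9, 0.1}, {0.8, 0.3}};
        int[] labels = {0, 0, 1, 1};

        double[] a = {0.4, 0.6};   // activation trace of the new input
        int predicted = 0;         // class the model assigns to it

        double[] xa = nearest(train, labels, a, predicted, true);
        double[] xb = nearest(train, labels, xa, predicted, false);
        double dsa = dist(a, xa) / dist(xa, xb);
        System.out.printf("DSA(%s) = %.3f%n", Arrays.toString(a), dsa);
    }
}
```
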
Safety Verification of Deep Neural Networks
TLDR
A novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT) is developed, which defines safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image.
DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems
  • L. Ma, Felix Juefei-Xu, +9 authors Yadong Wang
  • Computer Science, Mathematics
    2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE)
  • 2018
TLDR
DeepGauge is proposed, a set of multi-granularity testing criteria for DL systems, which aims at rendering a multi-faceted portrayal of the testbed and sheds light on the construction of more generic and robust DL systems.
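
One of DeepGauge's criteria, k-multisection neuron coverage (KMNC), splits each neuron's training-time activation range into k equal sections and reports the fraction of sections that the test inputs hit. A self-contained toy sketch:

```java
// Sketch of k-multisection neuron coverage (KMNC), one of DeepGauge's
// criteria: split each neuron's training-time activation range
// [low, high] into k equal sections and count the sections hit by the
// test inputs. Ranges and test activations are invented toy data.
public class KmncSketch {
    public static void main(String[] args) {
        int k = 5;
        double[] low  = {-1.0, 0.0};     // per-neuron training minima
        double[] high = { 1.0, 2.0};     // per-neuron training maxima
        double[][] testActs = {          // testActs[t][n]
            {-0.9, 0.3}, {0.1, 1.7}, {0.5, 0.4},
        };

        int neurons = low.length;
        boolean[][] hit = new boolean[neurons][k];
        for (double[] act : testActs) {
            for (int n = 0; n < neurons; n++) {
                double v = act[n];
                // Out-of-range values feed DeepGauge's boundary
                // criteria instead; skipped here.
                if (v < low[n] || v > high[n]) continue;
                int sec = Math.min(
                        (int) ((v - low[n]) / (high[n] - low[n]) * k),
                        k - 1);
                hit[n][sec] = true;
            }
        }

        int covered = 0;
        for (boolean[] row : hit) for (boolean h : row) if (h) covered++;
        System.out.printf("KMNC = %d / %d = %.2f%n",
                covered, neurons * k, covered / (double) (neurons * k));
    }
}
```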