Corpus ID: 51863618

Symbolic Execution for Deep Neural Networks

@article{Gopinath2018SymbolicEF,
  title={Symbolic Execution for Deep Neural Networks},
  author={D. Gopinath and Kaiyuan Wang and Mengshi Zhang and C. Pasareanu and S. Khurshid},
  journal={ArXiv},
  year={2018},
  volume={abs/1807.10439}
}
Deep Neural Networks (DNN) are increasingly used in a variety of applications, many of them with substantial safety and security concerns. This paper introduces DeepCheck, a new approach for validating DNNs based on core ideas from program analysis, specifically from symbolic execution. The idea is to translate a DNN into an imperative program, thereby enabling program analysis to assist with DNN validation. A basic translation, however, creates programs that are very complex to analyze…
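To make the translation idea concrete, here is a minimal sketch (not DeepCheck's actual translation; the network, weights, and function name are invented for illustration) of a tiny fully-connected ReLU network rewritten as an imperative program, so that each ReLU activation becomes an explicit branch a symbolic executor can reason about:

```python
# Illustrative sketch only: a two-input, two-hidden-neuron ReLU network
# written as straight-line imperative code. Each ReLU becomes an `if`,
# so a symbolic executor would collect one path condition per activation
# pattern (e.g. h0 > 0 and h1 <= 0). Weights/biases are hypothetical.

def tiny_dnn(x0: float, x1: float) -> float:
    # Hidden layer: pre-activation values.
    h0 = 0.5 * x0 - 1.0 * x1 + 0.1
    h1 = -0.3 * x0 + 0.8 * x1 - 0.2

    # ReLU as explicit branches: these `if`s are the branch points
    # that symbolic execution explores.
    if h0 > 0:
        a0 = h0
    else:
        a0 = 0.0
    if h1 > 0:
        a1 = h1
    else:
        a1 = 0.0

    # Linear output neuron.
    return 1.2 * a0 - 0.7 * a1 + 0.05
```

With n ReLUs, the translated program has up to 2^n feasible paths, which hints at why the abstract notes that a basic translation yields programs that are "very complex to analyze."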
Citations of this paper include:
  • Symbolic Execution for Attribution and Attack Synthesis in Neural Networks
  • Testing Deep Neural Networks
  • DeepFault: Fault Localization for Deep Neural Networks
  • Dynamic Slicing for Deep Neural Networks
  • Incremental Bounded Model Checking of Artificial Neural Networks in CUDA
  • Importance-Driven Deep Learning System Testing
  • A System-Level Perspective to Understand the Vulnerability of Deep Learning Systems
  • Bringing Engineering Rigor to Deep Learning
  • DeepSmartFuzzer: Reward Guided Test Generation for Deep Learning
  • DeepSearch: Simple and Effective Blackbox Fuzzing of Deep Neural Networks
