Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification

@inproceedings{Yang2019AnalyzingDN,
  title={Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification},
  author={Pengfei Yang and Jiangchao Liu and Jianlin Li and Liqian Chen and Xiaowei Huang},
  booktitle={SAS},
  year={2019}
}
Deep neural networks (DNNs) have been shown to lack robustness: their classifications are vulnerable to small perturbations of the inputs. This has led to safety concerns about applying DNNs in safety-critical domains. Several verification approaches have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from either the scalability problem, i.e., only small DNNs can be handled, or the precision problem, i.e., the obtained bounds…
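To illustrate the precision gap the abstract alludes to, here is a minimal, hypothetical sketch (not the paper's implementation) contrasting plain interval propagation, which treats repeated occurrences of an input as independent, with symbolic propagation, which keeps a linear expression over the inputs and so cancels shared dependencies:

```python
# Minimal sketch: bounding y = x1 - x1 for x1 in [0, 1].
# Plain intervals forget that both occurrences are the same variable;
# a symbolic (linear-expression) representation cancels them exactly.

def interval_sub(a, b):
    """Subtract intervals [a0,a1] - [b0,b1] without tracking dependencies."""
    return (a[0] - b[1], a[1] - b[0])

def symbolic_bounds(coeffs, const, input_box):
    """Bound a linear expression sum(c_i * x_i) + const over an input box."""
    lo = hi = const
    for c, (xl, xu) in zip(coeffs, input_box):
        lo += c * (xl if c >= 0 else xu)
        hi += c * (xu if c >= 0 else xl)
    return (lo, hi)

x = (0.0, 1.0)                       # input interval for x1
naive = interval_sub(x, x)           # treats the two occurrences as independent
symbolic = symbolic_bounds([1.0 - 1.0], 0.0, [x])  # keeps 1*x1 - 1*x1 = 0*x1

print(naive)     # (-1.0, 1.0): spuriously wide
print(symbolic)  # (0.0, 0.0): exact
```

The same dependency-tracking idea, applied layer by layer through a DNN, is what tightens the bounds that a pure interval analysis over-approximates.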

Enhancing Robustness Verification for Deep Neural Networks via Symbolic Propagation

TLDR
This work focuses on a variety of local robustness properties and a global robustness property of DNNs, and investigates novel strategies that combine constraint-solving and abstraction-based approaches to handle these properties.

Improving Neural Network Verification through Spurious Region Guided Refinement

TLDR
Experimental results show that a large number of regions can be identified as spurious; as a result, the precision of DeepPoly is significantly improved, and it can be applied to verify quantitative robustness properties.

PRODeep: a platform for robustness verification of deep neural networks

TLDR
PRODeep is presented, a platform for robustness verification of DNNs that incorporates constraint-based, abstraction-based, and optimisation-based robustness checking algorithms and has a modular architecture, enabling easy comparison of different algorithms.

Precise Quantitative Analysis of Binarized Neural Networks: A BDD-based Approach

TLDR
A novel algorithmic approach is proposed for encoding BNNs as Binary Decision Diagrams (BDDs), a widely studied model in formal verification and knowledge representation; it translates the input-output relation of blocks in BNNs to cardinality constraints, which are then encoded as BDDs.

Detecting numerical bugs in neural network architectures

TLDR
This paper makes the first attempt to conduct static analysis for detecting numerical bugs at the architecture level with DEBAR, and evaluates it on two datasets: neural architectures with known bugs (collected from existing studies) and real-world neural architectures.

Coverage-Guided Testing for Recurrent Neural Networks

TLDR
Experiments confirm that testRNN has advantages over the state-of-the-art tool DeepStellar and attack-based defect detection methods, owing to its working with finer temporal semantics and the consideration of the naturalness of input perturbation.

Customizable Reference Runtime Monitoring of Neural Networks using Resolution Boxes

TLDR
This work presents an approach for the runtime verification of classification systems via data abstraction, and shows how to automatically construct monitors that make use of both the correct and incorrect behaviors of a classification system.

Deep Statistical Model Checking

TLDR
A family of formal models is presented that contains basic features of automated decision-making contexts and can be extended with further orthogonal features, ultimately encompassing the scope of autonomous driving.

Tutorials on Testing Neural Networks

TLDR
This tutorial goes through the major functionalities of the tools with a few running examples, exhibiting how the developed techniques work, what the results are, and how to interpret them.

BDD4BNN: A BDD-based Quantitative Analysis Framework for Binarized Neural Networks

TLDR
A quantitative analysis framework for Binarized Neural Networks (BNNs), the 1-bit quantization of general real-numbered neural networks, is developed, in which precise and comprehensive analysis of BNNs can be performed; this is demonstrated by providing quantitative robustness analysis and interpretability.

References

Showing 1-10 of 35 references

Formal Security Analysis of Neural Networks using Symbolic Intervals

TLDR
This paper designs, implements, and evaluates a new direction for formally checking security properties of DNNs without using SMT solvers; it leverages interval arithmetic to compute rigorous bounds on the DNN outputs and is easily parallelizable.

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation

TLDR
This work presents AI2, the first sound and scalable analyzer for deep neural networks, and introduces abstract transformers that capture the behavior of fully connected and convolutional neural network layers with rectified linear unit activations (ReLU), as well as max pooling layers.
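As a rough illustration of the kind of abstract transformers such analyzers build on, the sketch below (my own minimal example over the simple box/interval domain, not AI2's zonotope transformers) propagates interval bounds through an affine layer followed by a ReLU:

```python
def affine_box(W, b, lb, ub):
    """Box (interval) transformer for an affine layer y = W x + b:
    pick the interval endpoint that minimizes/maximizes each term."""
    out_lb, out_ub = [], []
    for row, bias in zip(W, b):
        lo = hi = bias
        for w, l, u in zip(row, lb, ub):
            lo += w * (l if w >= 0 else u)
            hi += w * (u if w >= 0 else l)
        out_lb.append(lo)
        out_ub.append(hi)
    return out_lb, out_ub

def relu_box(lb, ub):
    """Box transformer for ReLU: clamp both bounds at zero."""
    return [max(0.0, l) for l in lb], [max(0.0, u) for u in ub]

# Hypothetical 2-neuron hidden layer, input box [0,1] x [0,1].
W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, -1.0]
lb, ub = affine_box(W, b, [0.0, 0.0], [1.0, 1.0])
lb, ub = relu_box(lb, ub)
print(lb, ub)  # [0.0, 0.0] [1.0, 0.0]
```

Richer domains (zonotopes, polyhedra) follow the same transformer-per-layer pattern while tracking correlations the box domain discards.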

A Dual Approach to Scalable Verification of Deep Networks

TLDR
This paper addresses the problem of formally verifying desirable properties of neural networks by formulating verification as an optimization problem and solving a Lagrangian relaxation of it to obtain an upper bound on the worst-case violation of the specification being verified.

Verifying Properties of Binarized Deep Neural Networks

TLDR
This paper proposes a rigorous way of verifying properties of a popular class of neural networks, Binarized Neural Networks (BNNs), using the well-developed means of Boolean satisfiability, via a construction that represents a binarized neural network as a Boolean formula.
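A toy illustration of the underlying idea (my own sketch, not the paper's encoding): a binarized neuron with weights and inputs in {-1, +1} fires exactly when the number of positions where the input agrees with the weight vector meets a threshold, which is a cardinality constraint over Boolean variables that a SAT solver can handle:

```python
from itertools import product

def bnn_neuron(weights, x):
    """Binarized neuron: sign of the dot product, weights/inputs in {-1,+1}."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= 0 else -1

def as_cardinality(weights, x):
    """Boolean view: the dot product equals 2*agree - n, so the neuron fires
    iff the input agrees with the weights in at least ceil(n/2) positions."""
    n = len(weights)
    agree = sum(1 for w, xi in zip(weights, x) if w == xi)
    return 1 if 2 * agree >= n else -1

# The arithmetic and cardinality views coincide on every input, so a SAT
# solver can reason about the Boolean encoding instead of real arithmetic.
w = (1, -1, 1)
assert all(bnn_neuron(w, x) == as_cardinality(w, x)
           for x in product((-1, 1), repeat=3))
```

Composing such per-neuron constraints layer by layer yields a Boolean formula whose models are exactly the network's input-output behaviors.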

Reachability Analysis of Deep Neural Networks with Provable Guarantees

TLDR
A novel algorithm based on adaptive nested optimisation to solve the reachability problem for feed-forward DNNs is presented, demonstrating its efficiency, scalability and ability to handle a broader class of networks than state-of-the-art verification approaches.

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

TLDR
Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.

Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks

TLDR
An approach is presented for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function; it infers additional node phases for the non-linear nodes in the network from partial node phase assignments, similar to unit propagation in classical SAT solving.

An Abstraction-Refinement Approach to Verification of Artificial Neural Networks

TLDR
A solution is presented to verify the safety of neural networks using abstractions to Boolean combinations of linear arithmetic constraints, and it is shown that whenever the abstract MLP is declared to be safe, the same holds for the concrete one.

Fast and Effective Robustness Certification

TLDR
A new method and system, called DeepZ, for certifying neural network robustness based on abstract interpretation that handles ReLU, Tanh and Sigmoid activation functions, is significantly more scalable and precise, and is sound with respect to floating point arithmetic.

An abstract domain for certifying neural networks

TLDR
This work proposes a new abstract domain which combines floating point polyhedra with intervals and is equipped with abstract transformers specifically tailored to the setting of neural networks, and introduces new transformers for affine transforms, the rectified linear unit, sigmoid, tanh, and maxpool functions.