ReluDiff: Differential Verification of Deep Neural Networks

@inproceedings{Paulsen2020ReluDiffDV,
  title={ReluDiff: Differential Verification of Deep Neural Networks},
  author={Brandon Paulsen and Jingbo Wang and Chao Wang},
  booktitle={2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE)},
  year={2020},
  pages={714-726}
}
As deep neural networks are increasingly deployed in practice, their efficiency has become an important issue. While compression techniques can reduce a network's size, energy consumption, and computational requirements, they only demonstrate empirically that there is no loss of accuracy; they provide no formal guarantees about the compressed network, e.g., in the presence of adversarial examples. Existing verification techniques such as Reluplex, ReluVal, and DeepPoly provide formal…
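Differential verification, as the abstract frames it, asks whether |f(x) − f′(x)| stays below some ε for every x in an input region, where f′ is (say) a compressed copy of f. The minimal numpy sketch below (illustrative random weights, not from the paper) shows only the naive baseline: bound each network independently with interval arithmetic and subtract, which ignores that f and f′ share almost all of their structure. ReluDiff's contribution is precisely to avoid this looseness by propagating the difference between the two networks layer by layer.

```python
# Naive baseline sketch (illustrative, not ReluDiff's algorithm):
# bound each network separately by interval propagation, then subtract.
import numpy as np

def interval_forward(weights, biases, lo, hi):
    """Interval bound propagation through a ReLU network."""
    last = len(weights) - 1
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < last:                      # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)
lo0, hi0 = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
# f2 plays the role of a "compressed" f: weights truncated to 2 decimals.
l1, u1 = interval_forward([W1, W2], [b1, b2], lo0, hi0)
l2, u2 = interval_forward([np.round(W1, 2), np.round(W2, 2)],
                          [np.round(b1, 2), np.round(b2, 2)], lo0, hi0)
# The naive bound on f - f2 drops all correlation and is very loose:
print("naive bound on |f - f2|:", max((u1 - l2).max(), (u2 - l1).max()))
```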
DiffRNN: Differential Verification of Recurrent Neural Networks
TLDR: DIFFRNN, the first differential verification method for RNNs, is proposed to certify the equivalence of two structurally similar recurrent networks; experiments on a variety of benchmarks demonstrate the practical efficacy of the technique and show that it outperforms state-of-the-art RNN verification tools such as POPQORN.
Proof transfer for fast certification of multiple approximate neural networks
TLDR: FANC, the first general technique for transferring proofs between a given network and its multiple approximate versions without compromising verifier precision, is presented; results indicate that FANC can significantly speed up verification with the state-of-the-art verifier DeepZ, by up to 4.1x.
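The proof-transfer idea can be pictured with a toy containment check (hypothetical names, not FANC's actual proof templates): if the approximate network's reachable region at some layer fits inside a region already certified for the original network, the remainder of the original proof carries over.

```python
# Hypothetical sketch of proof transfer by containment (not FANC's API):
# a certificate proved for the box [cert_lo, cert_hi] at some layer of
# the original network can be reused for an approximate network whose
# reachable box [approx_lo, approx_hi] at that layer is contained in it.
def can_reuse_certificate(cert_lo, cert_hi, approx_lo, approx_hi):
    return all(cl <= al and ah <= ch
               for cl, al, ah, ch in zip(cert_lo, approx_lo, approx_hi, cert_hi))

# e.g. can_reuse_certificate([-1, -1], [1, 1], [-0.9, -0.5], [0.8, 1.0]) -> True
```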
NEURODIFF: Scalable Differential Verification of Neural Networks using Fine-Grained Approximation
TLDR: NEURODIFF is a symbolic and fine-grained approximation technique that drastically increases the accuracy of differential verification of feed-forward ReLU networks while achieving many orders-of-magnitude speedup, through judicious use of symbolic variables to represent neurons whose difference bounds have accumulated significant error.
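That "judicious use of symbolic variables" can be sketched as follows (an illustrative fragment with hypothetical names, not NEURODIFF's implementation): when a neuron's accumulated error interval grows too wide, the analysis replaces it with a fresh symbolic variable, so later layers can still cancel correlated error terms.

```python
# Illustrative fragment (hypothetical names, not NEURODIFF's code):
# a neuron's value is a linear form over symbolic variables, stored as
# {var_id: coeff}, plus an accumulated error interval (err_lo, err_hi).
def maybe_introduce_fresh_var(expr, err_lo, err_hi, var_ranges, threshold=0.5):
    if err_hi - err_lo > threshold:       # error too wide to keep as a blob
        v = len(var_ranges)               # id of a fresh symbolic variable
        var_ranges.append((err_lo, err_hi))
        expr = dict(expr)
        expr[v] = expr.get(v, 0.0) + 1.0  # neuron = old form + 1.0 * x_v
        return expr, (0.0, 0.0)           # error now carried symbolically
    return expr, (err_lo, err_hi)
```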
SoK: Certified Robustness for Deep Neural Networks
TLDR: This paper provides a taxonomy of robustness verification and training approaches, along with an open-sourced unified platform for evaluating 20+ representative verification and corresponding robust training approaches on a wide range of DNNs.
On Neural Network Equivalence Checking using SMT Solvers
TLDR: This work presents a first SMT-based encoding of the equivalence checking problem, explores its utility and limitations, and proposes avenues for future research toward more scalable and practically applicable solutions.
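To make the flavor of such an encoding concrete, here is a minimal z3 sketch for two single-neuron ReLU "networks" with made-up weights (not an example from the paper): the solver searches for an input where the outputs differ by more than ε, so unsat means the two are ε-equivalent on the box.

```python
# Minimal z3 sketch of SMT-based equivalence checking (illustrative
# weights, not from the paper): unsat => eps-equivalent on [-1, 1].
from z3 import Real, If, Or, Solver, unsat

x = Real("x")
relu = lambda t: If(t > 0, t, 0)
f  = relu(2.0 * x + 1.0)                  # original network
f2 = relu(1.99 * x + 1.01)                # "compressed" variant
eps = 0.1

s = Solver()
s.add(x >= -1, x <= 1)                    # input region
s.add(Or(f - f2 > eps, f2 - f > eps))     # ask for a counterexample
if s.check() == unsat:
    print("eps-equivalent on [-1, 1]")
else:
    print("counterexample:", s.model())
```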
ZoPE: A Fast Optimizer for ReLU Networks with Low-Dimensional Inputs
TLDR: An algorithm called ZoPE is presented that solves optimization problems over the outputs of feedforward ReLU networks with low-dimensional inputs; its versatility as a network-analysis tool is demonstrated by projecting onto the range of a generative adversarial network and by visualizing the differences between a compressed and an uncompressed network.
Geometric Path Enumeration for Equivalence Verification of Neural Networks
TLDR: This work focuses on the formal verification problem of NN equivalence, which aims to prove that two NNs exhibit equivalent behavior, and extends the geometric path enumeration algorithm of Tran et al. to the setting of multiple networks.
Shared Certificates for Neural Network Verification
TLDR: A new method for reducing verification cost is introduced, based on the key insight that the convex sets obtained at intermediate layers can overlap across different inputs and perturbations.
Towards Practical Robustness Analysis for DNNs based on PAC-Model Learning
TLDR: The innovation of this work is the integration of model learning into PAC robustness analysis: it constructs a PAC guarantee at the model level rather than over the sample distribution, which yields a more faithful and accurate robustness evaluation.
LinSyn: Synthesizing Tight Linear Bounds for Arbitrary Neural Network Activation Functions
TLDR: This work proposes the first fully automated method that achieves tight linear bounds while leveraging only the mathematical definition of the activation function itself; it uses an efficient heuristic to synthesize candidate bounds that are tight and usually sound.
...

References

Showing 1-10 of 56 references
A Dual Approach to Scalable Verification of Deep Networks
TLDR: This paper addresses the problem of formally verifying desirable properties of neural networks by formulating verification as an optimization problem and solving a Lagrangian relaxation of it to obtain an upper bound on the worst-case violation of the specification being verified.
Towards Compact and Robust Deep Neural Networks
TLDR: This work proposes a new pruning method that can create compact networks while preserving both the benign accuracy and the robustness of a network, by ensuring that the training objectives of the pre-training and fine-tuning steps match the training objective of the desired robust model.
Safety Verification of Deep Neural Networks
TLDR: A novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theories (SMT) is developed; it defines safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image.
Towards Evaluating the Robustness of Neural Networks
TLDR: It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
TLDR: Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
Towards Fast Computation of Certified Robustness for ReLU Networks
TLDR: It is shown that there is in fact no polynomial-time algorithm that can approximately find the minimum adversarial distortion of a ReLU network within a $0.99\ln n$ approximation ratio unless $\mathsf{NP} = \mathsf{P}$, where $n$ is the number of neurons in the network.
Feature-Guided Black-Box Safety Testing of Deep Neural Networks
TLDR: A feature-guided black-box approach to testing the safety of deep neural networks is presented; it requires no knowledge of the network at hand and can be used to evaluate the robustness of neural networks in safety-critical applications such as traffic sign recognition in self-driving cars.
Formal Security Analysis of Neural Networks using Symbolic Intervals
TLDR: This paper designs, implements, and evaluates a new direction for formally checking security properties of DNNs without using SMT solvers; it leverages interval arithmetic to compute rigorous bounds on the DNN outputs and is easily parallelizable.
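The interval direction can be sketched in a few lines of numpy (a simplified caricature in the spirit of ReluVal's symbolic interval analysis, not the tool itself): each neuron carries symbolic linear lower and upper bounds over the inputs, which stay exact through affine layers and are concretized only at unstable ReLUs.

```python
# Simplified symbolic interval analysis (illustrative, not ReluVal).
import numpy as np

def lb(A, c, lo, hi):   # concrete lower bound of the linear forms A x + c
    return np.maximum(A, 0) @ lo + np.minimum(A, 0) @ hi + c

def ub(A, c, lo, hi):   # concrete upper bound
    return np.maximum(A, 0) @ hi + np.minimum(A, 0) @ lo + c

def symbolic_forward(weights, biases, lo, hi):
    n = len(lo)
    Al, cl = np.eye(n), np.zeros(n)       # symbolic lower bound (init: x)
    Au, cu = np.eye(n), np.zeros(n)       # symbolic upper bound
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        Al, cl, Au, cu = (Wp @ Al + Wn @ Au, Wp @ cl + Wn @ cu + b,
                          Wp @ Au + Wn @ Al, Wp @ cu + Wn @ cl + b)
        if i < len(weights) - 1:          # hidden-layer ReLU
            l, u = lb(Al, cl, lo, hi), ub(Au, cu, lo, hi)
            for j in range(len(cl)):
                if u[j] <= 0:             # provably inactive neuron
                    Al[j], cl[j], Au[j], cu[j] = 0, 0, 0, 0
                elif l[j] < 0:            # unstable: concretize to [0, u_j]
                    Al[j], cl[j] = 0, 0
                    Au[j], cu[j] = 0, u[j]
    return lb(Al, cl, lo, hi), ub(Au, cu, lo, hi)
```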
An abstract domain for certifying neural networks
TLDR: This work proposes a new abstract domain that combines floating-point polyhedra with intervals and is equipped with abstract transformers specifically tailored to the setting of neural networks, including new transformers for affine transforms, the rectified linear unit (ReLU), sigmoid, tanh, and maxpool functions.
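For the ReLU case, the transformer's relaxation is easy to state: given pre-activation bounds [l, u] with l < 0 < u, the output is over-approximated from above by the chord from (l, 0) to (u, u) and from below by a line through the origin. A small sketch of this standard DeepPoly-style rule (variable names are ours):

```python
# DeepPoly-style ReLU relaxation sketch: returns (a_l, b_l, a_u, b_u)
# such that a_l*x + b_l <= relu(x) <= a_u*x + b_u for all x in [l, u].
def relu_relaxation(l, u):
    if u <= 0:                       # provably inactive: relu(x) = 0
        return 0.0, 0.0, 0.0, 0.0
    if l >= 0:                       # provably active: relu(x) = x
        return 1.0, 0.0, 1.0, 0.0
    lam = u / (u - l)                # slope of the chord from (l,0) to (u,u)
    a_low = 1.0 if u >= -l else 0.0  # lower line: pick the smaller-area choice
    return a_low, 0.0, lam, -lam * l
```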
Provable defenses against adversarial examples via the convex outer adversarial polytope
TLDR: A method is presented for learning deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations; the dual of the associated linear program can itself be represented as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss.
...