Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

@article{Katz2017ReluplexAE,
  title={Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks},
  author={Guy Katz and Clark W. Barrett and David L. Dill and Kyle D. Julian and Mykel J. Kochenderfer},
  journal={ArXiv},
  year={2017},
  volume={abs/1702.01135}
}
Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. […] The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the…
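To make the key method concrete, here is a minimal, hypothetical sketch (in Python) of the case split a Reluplex-style solver performs when a ReLU constraint y = max(0, x) is violated; the lp_solve oracle and the tuple-based constraint representation are assumptions for illustration, and the actual algorithm is lazier, first attempting to repair violations by simplex pivoting before branching:

    # Hypothetical sketch of Reluplex-style ReLU handling: solve the
    # linear relaxation; if some y = max(0, x) is violated, branch on
    # the ReLU's two linear phases and recurse.
    # lp_solve is an assumed oracle: given a list of linear constraints
    # it returns a satisfying assignment (dict) or None if infeasible.

    def solve_with_relus(constraints, relu_pairs, lp_solve, eps=1e-9):
        assignment = lp_solve(constraints)
        if assignment is None:
            return None                    # linear part infeasible: UNSAT
        for x, y in relu_pairs:
            if abs(assignment[y] - max(0.0, assignment[x])) > eps:
                rest = [p for p in relu_pairs if p != (x, y)]
                # Active phase: x >= 0 and y = x.
                sat = solve_with_relus(
                    constraints + [(x, ">=", 0.0), (y, "==", x)],
                    rest, lp_solve, eps)
                if sat is not None:
                    return sat
                # Inactive phase: x <= 0 and y = 0.
                return solve_with_relus(
                    constraints + [(x, "<=", 0.0), (y, "==", 0.0)],
                    rest, lp_solve, eps)
        return assignment                  # all ReLUs satisfied: SAT witness

In the worst case this explores both phases of every ReLU; the paper's contribution is an algorithm that avoids most of these splits in practice.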
Reluplex: a calculus for reasoning about deep neural networks
TLDR
A novel, scalable, and efficient technique based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks.
Dynamic and Scalable Deep Neural Network Verification Algorithm
TLDR
The proposed technique is based on the linearization of the non-convex Rectified Linear Unit (ReLU) activation function using the Big-M optimization approach, and contributes an iterative process to find the largest input range verifying (and then defining) the properties of the neural network.
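For context, the Big-M approach mentioned here is a standard MILP trick: y = max(0, x) is replaced by linear constraints plus one binary phase variable. Below is a small illustrative sketch using the PuLP library; the bound M = 100 and all variable names are assumptions for the example, not taken from the paper:

    # Illustrative big-M encoding of y = max(0, x), assuming |x| <= M.
    import pulp

    M = 100.0                                  # assumed a-priori bound on |x|
    prob = pulp.LpProblem("relu_big_m", pulp.LpMinimize)
    x = pulp.LpVariable("x", lowBound=-M, upBound=M)
    y = pulp.LpVariable("y", lowBound=0, upBound=M)
    a = pulp.LpVariable("a", cat="Binary")     # 1 = active, 0 = inactive phase
    prob += y                                  # dummy objective: feasibility only
    prob += y >= x                             # y >= x always (y >= 0 via bound)
    prob += y <= x + M * (1 - a)               # a = 1 forces y <= x, hence y = x
    prob += y <= M * a                         # a = 0 forces y <= 0, hence y = 0
    prob += x == 3.5                           # sanity check with a fixed input
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print(pulp.value(y))                       # prints 3.5

The tightness of M matters: the looser the bound, the weaker the LP relaxation available to the MILP solver, which is why verifiers invest in computing per-neuron bounds.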
Verification of Neural Nets : Towards Deterministic Guarantees
TLDR
The authors present an extension of the Simplex method to formally verify or falsify properties of Rectified Linear Unit (ReLU) based networks and used it to verify some interesting properties of a prototype deep neural network implementation of the next-generation Airborne Collision Avoidance System for unmanned aircraft.
Improved Geometric Path Enumeration for Verifying ReLU Neural Networks
TLDR
This paper works to address the runtime problem by improving upon a recently-proposed geometric path enumeration method, and demonstrates significant speed improvement of exact analysis on the well-studied ACAS Xu benchmarks, sometimes hundreds of times faster than the original implementation.
An abstract domain for certifying neural networks
TLDR
This work proposes a new abstract domain which combines floating point polyhedra with intervals and is equipped with abstract transformers specifically tailored to the setting of neural networks, and introduces new transformers for affine transforms, the rectified linear unit, sigmoid, tanh, and maxpool functions.
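As a much coarser relative of such abstract domains, plain interval arithmetic already illustrates what abstract transformers for an affine layer and ReLU look like. The toy Python sketch below (all numbers illustrative, and far less precise than the polyhedra-with-intervals domain described above) propagates per-neuron lower and upper bounds through one layer:

    # Toy interval abstract transformers for y = W @ x + b and ReLU.
    import numpy as np

    def affine_interval(lo, hi, W, b):
        # Positive weights carry the same bound, negative weights the opposite.
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def relu_interval(lo, hi):
        # ReLU is monotone, so clamp both bounds at zero.
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

    # Example: certify that the single output stays below 1.0 for all
    # inputs in the box [0, 0.2] x [0, 0.2].
    W, b = np.array([[1.0, -2.0]]), np.array([0.5])
    lo, hi = affine_interval(np.zeros(2), np.full(2, 0.2), W, b)
    lo, hi = relu_interval(lo, hi)
    print(hi[0] < 1.0)                         # True: property certified

Because every transformer over-approximates, a True answer is a sound certificate, while a False answer is inconclusive rather than a counterexample.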
Neuro-Symbolic Verification of Deep Neural Networks
TLDR
This work introduces a novel framework for verifying neural networks, named neuro-symbolic verification, which uses neural networks as part of the otherwise logical specification, enabling the verification of a wide variety of complex, real-world properties, including the one above.
A Mixed Integer Programming Approach for Verifying Properties of Binarized Neural Networks
TLDR
A simple mixed integer programming formulation for BNN verification that leverages network structure is proposed and the results suggest that the difficulty of training BNNs might be worth the reduction in runtime achieved by the verification algorithm.
Precise Multi-Neuron Abstractions for Neural Network Certification
TLDR
The results show that PRIMA is significantly more precise than the state-of-the-art, verifying robustness for up to 16%, 30%, and 34% more images than prior work on ReLU-, Sigmoid-, and Tanh-based networks, respectively.
Advances in verification of ReLU neural networks
TLDR
A solver for verification of ReLU neural networks which combines mixed integer programming with specialized branching and approximation techniques is implemented and able to solve the verification problem for instances which do not have independent bounds for each input neuron.
Techniques for Verifying Robustness of Neural Networks
TLDR
A survey of three recently proposed techniques for certifying robustness of neural networks that use the Rectified Linear Unit (ReLU) activation function: an SMT based approach, an optimization based approach that uses Semi-Definite Programming (SDP) relaxation and an approach that applies abstract interpretation to neural networks.

References

Showing 1-10 of 42 references
Challenging SMT solvers to verify neural networks
TLDR
An extensive evaluation of the current state-of-the-art SMT solvers is presented and their potential in the promising domain of MLP verification is assessed.
An Abstraction-Refinement Approach to Verification of Artificial Neural Networks
TLDR
A solution to verify the safety of neural networks using abstractions to Boolean combinations of linear arithmetic constraints, showing that whenever the abstract MLP is declared to be safe, the same holds for the concrete one.
Safety Verification of Deep Neural Networks
TLDR
A novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT) is developed, which defines safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image.
Intriguing properties of neural networks
TLDR
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Leveraging linear and mixed integer programming for SMT
TLDR
This paper describes a technique for integrating LP solvers that improves the performance of SMT solvers without compromising correctness, and demonstrates that this implementation outperforms other state-of-the-art SMT solvers on the QF_LRA SMT-LIB benchmarks and is competitive on the QF_LIA benchmarks.
Policy compression for aircraft collision avoidance systems
TLDR
A deep neural network is used to learn a complex non-linear function approximation of the lookup table, which reduces the required storage space by a factor of 1000 and surpasses the original table on the performance metrics and encounter sets evaluated here.
ImageNet classification with deep convolutional neural networks
TLDR
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Measuring Neural Net Robustness with Constraints
TLDR
This work proposes metrics for measuring the robustness of a neural net and devises a novel algorithm for approximating them based on an encoding of robustness as a linear program, generating more informative estimates of robustness metrics than existing algorithms.
Mastering the game of Go with deep neural networks and tree search
TLDR
Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs and defeated the human European Go champion by 5 games to 0, the first time that a computer program has defeated a human professional player in the full-sized game of Go.
Explaining and Harnessing Adversarial Examples
TLDR
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.