# Exploiting Verified Neural Networks via Floating Point Numerical Error

```bibtex
@article{Jia2021ExploitingVN,
  title   = {Exploiting Verified Neural Networks via Floating Point Numerical Error},
  author  = {Kai Jia and Martin C. Rinard},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2003.03021}
}
```

We show how to construct adversarial examples for neural networks with exactly verified robustness against $\ell_{\infty}$-bounded input perturbations by exploiting floating point error. We argue that any exact verification of real-valued neural networks must accurately model the implementation details of any floating point arithmetic used during inference or verification.
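The attack surface is easy to demonstrate: floating point addition is not associative, so the same dot product evaluated in two orders (say, a sequential CPU loop versus a reordered parallel reduction) can yield different results, and near a decision boundary those last bits decide the predicted class. A minimal sketch with deliberately extreme, hypothetical values:

```python
# Floating point addition is not associative: summing the same products in
# two different orders yields two different values. A verifier that models
# the network over the reals ignores exactly this gap.
w = [0.1, 0.2, -0.3, 1e16, -1e16]
x = [1.0] * 5

forward = sum(wi * xi for wi, xi in zip(w, x))
backward = sum(wi * xi for wi, xi in zip(reversed(w), reversed(x)))

assert forward != backward  # same real-valued sum, different floats
```

Real exploits are subtler (the paper perturbs inputs so that tiny rounding discrepancies flip a verified-robust prediction), but the underlying mechanism is this one.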

## 21 Citations

### Neural Network Robustness Verification on GPUs

- Computer Science, ArXiv
- 2020

GPUPoly is a scalable verifier that can prove the robustness of significantly larger deep neural networks than possible with prior work, and is believed to be a promising step towards the practical verification of large real-world networks.

### Scaling Polyhedral Neural Network Verification on GPUs

- Computer Science
- 2020

GPUPoly is a scalable verifier that can prove the robustness of significantly larger deep neural networks than previously possible, and is believed to be a promising step towards practical verification of real-world neural networks.

### Efficient Exact Verification of Binarized Neural Networks

- Computer Science
- 2020

Compared to exact verification of real-valued networks of the same architectures on the same tasks, EEV verifies BNNs hundreds to thousands of times faster, while delivering comparable verifiable accuracy in most cases.

### Scalable Verification of Quantized Neural Networks (Technical Report)

- Computer Science, ArXiv
- 2020

This paper shows that verifying the bit-exact implementation of quantized neural networks with bit-vector specifications is PSPACE-hard, even though verifying idealized real-valued networks and satisfiability of bit-vector specifications alone are each in NP, and explores several practical heuristics toward closing the complexity gap between idealized and bit-exact verification.

### Efficient Exact Verification of Binarized Neural Networks

- Computer Science, NeurIPS
- 2020

The effectiveness of EEV is demonstrated by presenting the first exact verification results for $\ell_{\infty}$-bounded adversarial robustness of nontrivial convolutional BNNs on the MNIST and CIFAR10 datasets.

### An Interval Compiler for Sound Floating-Point Computations

- Computer Science, 2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)
- 2021

IGen, a source-to-source compiler that translates a given C function using floating point into an equivalent sound C function using interval arithmetic, is presented; the generated code delivers sound double-precision results at high performance.
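The core mechanism such a compiler relies on can be sketched in a few lines (a toy model of the idea in Python, not IGen's generated C): each value becomes an interval, and every operation rounds its result outward by one ulp with `math.nextafter`, so the exact real result always lies inside.

```python
import math

def iv_add(a, b):
    # Sound interval addition: round the lower endpoint down, the upper up.
    return (math.nextafter(a[0] + b[0], -math.inf),
            math.nextafter(a[1] + b[1], math.inf))

def iv_mul(a, b):
    # Sound interval multiplication: take the extreme endpoint products,
    # then widen outward by one ulp on each side.
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (math.nextafter(min(p), -math.inf),
            math.nextafter(max(p), math.inf))

# Enclose the real number 1/10, which is not exactly representable.
x = (math.nextafter(0.1, 0.0), 0.1)
y = iv_add(iv_mul(x, x), x)   # encloses the real value 0.1**2 + 0.1 = 0.11
assert y[0] <= 0.11 <= y[1]
```

`math.nextafter` requires Python 3.9+; a production implementation would instead switch the hardware rounding mode, which is what makes compiled interval code fast.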

### Characterizing and Taming Model Instability Across Edge Devices

- Computer Science, MLSys
- 2021

This paper presents the first methodical characterization of the variations in model prediction across real-world mobile devices, and introduces a new metric, instability, which captures this variation.

### Truth-Table Net: A New Convolutional Architecture Encodable By Design Into SAT Formulas

- Computer Science
- 2022

This work introduces Truth Table Deep Convolutional Neural Networks (TTnets), a new family of SAT-encodable models that, for the first time, feature real-valued weights; the authors postulate that TTnets can apply to various CNN-based architectures and be extended to other properties such as fairness, fault attacks, and exact rule extraction.

### CheckINN: Wide Range Neural Network Verification in Imandra

- Computer Science, PPDP
- 2022

Imandra, a functional programming language and theorem prover originally designed for verification, validation, and simulation of financial infrastructure, can offer a holistic infrastructure for neural network verification.

### Sound Randomized Smoothing in Floating-Point Arithmetics

- Computer Science, Mathematics, ArXiv
- 2022

This work proposes a sound approach to randomized smoothing when using floating-point precision, with essentially equal speed, matching the certificates of the standard, unsound practice for the standard classifiers tested so far.

## References

Showing 1-10 of 56 references

### Evaluating Robustness of Neural Networks with Mixed Integer Programming

- Computer Science, ICLR
- 2019

This work verifies piecewise-linear neural networks as a mixed integer program, certifying more samples than the state of the art and finding more adversarial examples than a strong first-order attack for every network.
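The encoding at the heart of such MIP formulations can be sketched for a single ReLU (this is the standard big-M construction from the literature, not necessarily the paper's exact formulation). Given pre-activation bounds $l \le x \le u$ with $l < 0 < u$, the output $y = \max(x, 0)$ is captured exactly with one binary indicator $a$:

```latex
y \ge 0, \qquad
y \ge x, \qquad
y \le u\,a, \qquad
y \le x - l\,(1 - a), \qquad
a \in \{0, 1\}
```

Setting $a = 0$ forces $y = 0$ and $x \le 0$; setting $a = 1$ forces $y = x$ and $x \ge 0$, so the binary variable selects which linear piece of the ReLU is active.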

### Differentiable Abstract Interpretation for Provably Robust Neural Networks

- Computer Science, ICML
- 2018

Several abstract transformers that balance efficiency with precision are presented, and it is shown that these can be used to train large neural networks that are certifiably robust to adversarial perturbations.
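The simplest such transformer is the interval domain, which propagates elementwise bounds through each layer. A minimal sketch (hypothetical helper names; richer domains like zonotopes are tighter):

```python
def affine_interval(lo, hi, w, b):
    # Interval transformer for a single output y = w . x + b: each positive
    # weight takes the matching endpoint, each negative weight swaps them.
    new_lo = b + sum(wi * (l if wi >= 0 else h) for wi, l, h in zip(w, lo, hi))
    new_hi = b + sum(wi * (h if wi >= 0 else l) for wi, l, h in zip(w, lo, hi))
    return new_lo, new_hi

def relu_interval(lo, hi):
    # ReLU is monotone, so it maps interval endpoints directly.
    return max(lo, 0.0), max(hi, 0.0)

# Bound one neuron over the input box [0,1] x [0,1].
lo, hi = affine_interval([0.0, 0.0], [1.0, 1.0], [1.0, -1.0], 0.5)
lo, hi = relu_interval(lo, hi)
assert (lo, hi) == (0.0, 1.5)
```

Because both transformers are built from differentiable pieces, the output bounds can be used directly inside a training loss, which is the certifiable-training idea the paper develops with more precise domains.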

### Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability

- Computer Science, ICLR
- 2019

It is demonstrated that improving weight sparsity alone already turns computationally intractable verification problems into tractable ones, and that improving ReLU stability leads to an additional 4-13x speedup in verification times.

### Verifying Properties of Binarized Deep Neural Networks

- Computer Science, AAAI
- 2018

This paper proposes a rigorous way of verifying properties of a popular class of neural networks, Binarized Neural Networks, using the well-developed means of Boolean satisfiability, and constructs a representation of a binarized neural network as a Boolean formula.
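The reduction to Boolean satisfiability rests on the fact that a ±1-binarized neuron is a threshold function of how many inputs agree with their weights, i.e., a cardinality constraint, for which standard CNF encodings exist. A minimal sketch of that arithmetic-to-counting step (hypothetical helper names, ±1 convention):

```python
from itertools import product

def bnn_neuron(w, x, b):
    # Binarized neuron: thresholded dot product, w and x in {-1, +1}.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= b else -1

def as_cardinality(w, x, b):
    # Since wi*xi = +1 iff wi == xi, the dot product equals
    # (#agreements) - (#disagreements) = 2*agree - n, so the neuron fires
    # iff enough literals agree -- a pure cardinality constraint, which
    # standard CNF encodings handle.
    n = len(w)
    agree = sum(wi == xi for wi, xi in zip(w, x))
    return 1 if 2 * agree - n >= b else -1

# The two formulations agree on every input and threshold.
w = (1, -1, 1)
for x in product((-1, 1), repeat=3):
    for b in (-3, -1, 0, 2):
        assert bnn_neuron(w, x, b) == as_cardinality(w, x, b)
```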

### Training verified learners with learned verifiers

- Computer Science, ArXiv
- 2018

Experiments show that the predictor-verifier architecture is able to train networks to achieve state-of-the-art verified robustness to adversarial examples with much shorter training times, and can be scaled to produce the first known verifiably robust networks for CIFAR-10.

### Safety Verification of Deep Neural Networks

- Computer Science, CAV
- 2017

A novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT) is developed, which defines safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image.

### Provable defenses against adversarial examples via the convex outer adversarial polytope

- Computer Science, ICML
- 2018

A method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations is presented, and it is shown that the dual problem to this linear program can itself be represented as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss.

### Semidefinite relaxations for certifying robustness to adversarial examples

- Computer Science, NeurIPS
- 2018

A new semidefinite relaxation for certifying robustness that applies to arbitrary ReLU networks is proposed and it is shown that this proposed relaxation is tighter than previous relaxations and produces meaningful robustness guarantees on three different foreign networks whose training objectives are agnostic to the proposed relaxation.

### Efficient Neural Network Robustness Certification with General Activation Functions

- Computer Science, NeurIPS
- 2018

This paper introduces CROWN, a general framework to certify robustness of neural networks with general activation functions for given input data points; CROWN facilitates the search for a tighter certified lower bound by adaptively selecting appropriate surrogates for each neuron activation.
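The linear-surrogate idea can be made concrete for ReLU (a sketch from the general linear-relaxation literature; CROWN itself handles arbitrary activations). For an unstable neuron with pre-activation bounds $l < 0 < u$, the output $y = \max(x, 0)$ is sandwiched between two linear functions:

```latex
\alpha\, x \;\le\; \max(x, 0) \;\le\; \frac{u}{u - l}\,(x - l),
\qquad \alpha \in [0, 1]
```

The upper line is fixed by the bounds, while the slope $\alpha$ of the lower line is free; adaptively choosing $\alpha$ per neuron (e.g., $\alpha = 1$ when $u \ge |l|$ and $\alpha = 0$ otherwise) is what tightens the certified lower bound.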

### Towards Evaluating the Robustness of Neural Networks

- Computer Science, 2017 IEEE Symposium on Security and Privacy (SP)
- 2017

It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that are successful on both distilled and undistilled neural networks with 100% probability.