Exploiting Verified Neural Networks via Floating Point Numerical Error

@article{Jia2021ExploitingVN,
  title={Exploiting Verified Neural Networks via Floating Point Numerical Error},
  author={Kai Jia and Martin C. Rinard},
  journal={ArXiv},
  year={2021},
  volume={abs/2003.03021}
}
We show how to construct adversarial examples for neural networks with exactly verified robustness against $\ell_{\infty}$-bounded input perturbations by exploiting floating point error. We argue that any exact verification of real-valued neural networks must accurately model the implementation details of any floating point arithmetic used during inference or verification. 
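As a concrete illustration of the underlying phenomenon (a minimal sketch, not the paper's attack construction; all names below are ours): floating-point addition is not associative, so an inference implementation that accumulates a dot product in a different order than the verifier modeled can produce a slightly different logit, and any certified margin smaller than that discrepancy is not trustworthy.

```python
# Minimal sketch: the same dot product, accumulated in two orders,
# differs in float64 even though the two sums are equal over the reals.
import random

random.seed(0)
n = 10_000
w = [random.uniform(-1.0, 1.0) for _ in range(n)]
x = [random.uniform(-1.0, 1.0) for _ in range(n)]

def dot_forward(w, x):
    # accumulate left to right, as a verifier might model the computation
    s = 0.0
    for wi, xi in zip(w, x):
        s += wi * xi
    return s

def dot_reversed(w, x):
    # same mathematical sum, accumulated right to left, as a different
    # kernel (e.g., a blocked or parallel GPU reduction) might compute it
    s = 0.0
    for wi, xi in zip(reversed(w), reversed(x)):
        s += wi * xi
    return s

a, b = dot_forward(w, x), dot_reversed(w, x)
print(a, b, abs(a - b))  # identical over the reals, different in float64
# A robustness certificate whose real-valued margin is smaller than this
# discrepancy says nothing about the deployed implementation's prediction.
```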
Neural Network Robustness Verification on GPUs
TLDR
GPUPoly is a scalable verifier that can prove the robustness of significantly larger deep neural networks than possible with prior work, and is believed to be a promising step towards the practical verification of large real-world networks.
Scaling Polyhedral Neural Network Verification on GPUs
TLDR
GPUPoly is a scalable verifier that can prove the robustness of significantly larger deep neural networks than previously possible, and is believed to be a promising step towards practical verification of real-world neural networks.
Efficient Exact Verification of Binarized Neural Networks
TLDR
Compared to exact verification of real-valued networks of the same architectures on the same tasks, EEV verifies BNNs hundreds to thousands of times faster while delivering comparable verifiable accuracy in most cases; its effectiveness is demonstrated by the first exact verification results for $\ell_{\infty}$-bounded adversarial robustness of nontrivial convolutional BNNs on the MNIST and CIFAR10 datasets.
Scalable Verification of Quantized Neural Networks (Technical Report)
TLDR
This paper shows that verifying the bit-exact implementation of quantized neural networks with bit-vector specifications is PSPACE-hard, even though verifying idealized real-valued networks and satisfiability of bit-vector specifications alone are each in NP, and explores several practical heuristics toward closing the complexity gap between idealized and bit-exact verification.
An Interval Compiler for Sound Floating-Point Computations
TLDR
This paper presents IGen, a source-to-source compiler that translates a given C function using floating point into an equivalent sound C function that uses interval arithmetic, and shows that the generated code delivers sound double-precision results at high performance.
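To make the idea concrete, here is a minimal Python sketch of sound interval arithmetic (IGen itself emits C and uses hardware directed rounding; this sketch only emulates outward rounding with `math.nextafter`, available in Python 3.9+, which over-widens each result by up to one ulp per side):

```python
# Sound interval arithmetic sketch: every operation rounds its lower
# bound downward and its upper bound upward, so the true real-valued
# result is always contained in the returned interval.
import math

def _down(v: float) -> float:
    return math.nextafter(v, -math.inf)

def _up(v: float) -> float:
    return math.nextafter(v, math.inf)

class Interval:
    def __init__(self, lo: float, hi: float):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(_down(self.lo + other.lo), _up(self.hi + other.hi))

    def __mul__(self, other: "Interval") -> "Interval":
        # the exact product's range is spanned by the four endpoint products
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(_down(min(prods)), _up(max(prods)))

    def __repr__(self):
        return f"[{self.lo!r}, {self.hi!r}]"

# 0.1 is not exactly representable in binary, so a sound enclosure of it
# must already be a nondegenerate interval.
x = Interval(_down(0.1), _up(0.1))
print(x + x, x * x)
```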
Characterizing and Taming Model Instability Across Edge Devices
TLDR
This paper presents the first methodical characterization of the variations in model prediction across real-world mobile devices, and introduces a new metric, instability, which captures this variation.
CheckINN: Wide Range Neural Network Verification in Imandra
TLDR
Imandra, a functional programming language and theorem prover originally designed for verification, validation, and simulation of financial infrastructure, can offer a holistic infrastructure for neural network verification.
Sound Randomized Smoothing in Floating-Point Arithmetics
TLDR
This work proposes a sound approach to randomized smoothing under floating-point precision that runs at essentially the same speed as, and matches the certificates of, the standard unsound practice for the standard classifiers tested so far.
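For context, the real-arithmetic certificate of randomized smoothing that a sound floating-point implementation must preserve is, in the standard formulation of Cohen et al.:
\[
g(x) = \arg\max_{c} \; \mathbb{P}_{\varepsilon \sim \mathcal{N}(0, \sigma^2 I)}\!\left[ f(x+\varepsilon) = c \right],
\qquad
R = \frac{\sigma}{2}\left( \Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B}) \right),
\]
where $\underline{p_A}$ is a lower bound on the top-class probability, $\overline{p_B}$ an upper bound on the runner-up probability, and $\Phi^{-1}$ the Gaussian quantile function; the paper's point is that estimating these quantities in floating point can silently break the guarantee.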
Neural Network Verification with Proof Production
TLDR
This work presents a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors.
...
...

References

Showing 1-10 of 60 references
Evaluating Robustness of Neural Networks with Mixed Integer Programming
TLDR
Verification of piecewise-linear neural networks is cast as a mixed integer program that is able to certify more samples than the state of the art and to find more adversarial examples than a strong first-order attack for every network.
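The core of such encodings is the standard mixed-integer formulation of a ReLU $y = \max(0, x)$ with known pre-activation bounds $l \le x \le u$ (for an unstable neuron, $l < 0 < u$), using a binary indicator $z \in \{0,1\}$:
\[
y \ge x, \qquad y \ge 0, \qquad y \le x - l\,(1-z), \qquad y \le u\,z,
\]
which is exact: $z=1$ selects the active branch ($y=x$) and $z=0$ the inactive one ($y=0$), and the tightness of the bounds $l, u$ largely determines solver performance.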
Differentiable Abstract Interpretation for Provably Robust Neural Networks
TLDR
Several abstract transformers which balance efficiency with precision are presented and it is shown these can be used to train large neural networks that are certifiably robust to adversarial perturbations.
Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability
TLDR
It is demonstrated that improving weight sparsity alone already turns computationally intractable verification problems into tractable ones, and that improving ReLU stability leads to an additional 4-13x speedup in verification times.
Verifying Properties of Binarized Deep Neural Networks
TLDR
This paper proposes a rigorous way of verifying properties of a popular class of neural networks, Binarized Neural Networks, using the well-developed means of Boolean satisfiability, and presents a construction that represents a binarized neural network as a Boolean formula.
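The key step in such encodings (notation ours) is that a binarized neuron reduces to a cardinality constraint: with weights $w_i \in \{-1,+1\}$, inputs $x_i \in \{-1,+1\}$, and bias $b$,
\[
\sum_{i=1}^{n} w_i x_i = 2\,\bigl|\{\, i : x_i = w_i \,\}\bigr| - n,
\]
so the neuron outputs $+1$ exactly when $|\{\, i : x_i = w_i \,\}| \ge \lceil (n-b)/2 \rceil$, a threshold condition with standard CNF encodings (e.g., sequential counters).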
Training verified learners with learned verifiers
TLDR
Experiments show that the predictor-verifier architecture is able to train networks to state-of-the-art verified robustness to adversarial examples with much shorter training times, and can be scaled to produce the first known verifiably robust networks for CIFAR-10.
Safety Verification of Deep Neural Networks
TLDR
A novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT) is developed, which defines safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image.
Provable defenses against adversarial examples via the convex outer adversarial polytope
TLDR
A method is presented to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations, and it is shown that the dual of the resulting linear program can itself be represented as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss.
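The per-neuron relaxation behind this convex outer polytope (for an unstable ReLU with pre-activation bounds $l < 0 < u$; notation ours) replaces $y = \max(0, x)$ with its convex hull over $[l, u]$:
\[
y \ge 0, \qquad y \ge x, \qquad y \le \frac{u\,(x - l)}{u - l},
\]
a triangle whose purely linear constraints make the resulting robustness check a linear program.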
Semidefinite relaxations for certifying robustness to adversarial examples
TLDR
A new semidefinite relaxation for certifying robustness that applies to arbitrary ReLU networks is proposed and it is shown that this proposed relaxation is tighter than previous relaxations and produces meaningful robustness guarantees on three different foreign networks whose training objectives are agnostic to the proposed relaxation.
Efficient Neural Network Robustness Certification with General Activation Functions
TLDR
This paper introduces CROWN, a general framework to certify robustness of neural networks with general activation functions for given input data points, and facilitates the search for a tighter certified lower bound by adaptively selecting appropriate surrogates for each neuron activation.
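The central idea (sketched here in our notation) is to sandwich each activation $\sigma$ between linear functions over its pre-activation range $[l, u]$:
\[
\alpha_L\, x + \beta_L \;\le\; \sigma(x) \;\le\; \alpha_U\, x + \beta_U \qquad \text{for all } x \in [l, u],
\]
and to propagate these bounds backward through the layers, yielding closed-form linear lower and upper bounds on each output logit as a function of the input perturbation.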
Towards Evaluating the Robustness of Neural Networks
TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that are successful on both distilled and undistilled neural networks with 100% probability.
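For reference, the canonical form of these attacks (the $\ell_2$ variant, in the paper's notation) solves
\[
\min_{\delta} \; \|\delta\|_2^2 + c \cdot f(x+\delta),
\qquad
f(x') = \max\!\Bigl( \max_{i \ne t} Z(x')_i - Z(x')_t, \; -\kappa \Bigr),
\]
where $Z(\cdot)$ are the logits, $t$ the target class, $\kappa$ a confidence margin, and the trade-off constant $c$ is found by binary search.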
...
...