Introduction to Neural Network Verification

@article{Albarghouthi2021IntroductionTN,
  title={Introduction to Neural Network Verification},
  author={Aws Albarghouthi},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.10317}
}
Deep learning has transformed the way we think of software and what it can do. But deep neural networks are fragile and their behaviors are often surprising. In many settings, we need to provide formal guarantees on the safety, security, correctness, or robustness of neural networks. This book covers foundational ideas from formal verification and their adaptation to reasoning about neural networks and deep learning. 
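To make the kind of guarantee studied here concrete, the following is a minimal sketch (not taken from the book) of a sound-but-incomplete robustness check: an L-infinity ball around an input is pushed through a small ReLU network with interval arithmetic, and robustness is certified only if the target class provably dominates every other class. The network weights, layer shapes, and helper names are illustrative assumptions.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W @ x + b with interval arithmetic."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps the box endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def certify_robust(x, eps, layers, target):
    """Return True if every input within L-infinity distance eps of x provably
    yields `target` as the top class (sound but incomplete)."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:          # ReLU on hidden layers only
            lo, hi = interval_relu(lo, hi)
    # Robust if the target's worst-case score beats every other class's best case.
    return all(lo[target] > hi[j] for j in range(len(lo)) if j != target)

# Tiny hypothetical 2-layer network, for illustration only.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)), (rng.normal(size=(2, 4)), np.zeros(2))]
print(certify_robust(np.array([0.5, -0.2, 0.1]), 0.01, layers, target=0))
```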
Scalable and Modular Robustness Analysis of Deep Neural Networks
TLDR
Bounded-Block Poly (BBPoly) can analyze very large neural networks such as SkipNet or ResNet, with up to one million neurons, in under roughly one hour per input image, whereas DeepPoly can require up to 40 hours to analyze a single image.
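DeepPoly-style analyzers keep per-neuron linear bounds rather than plain intervals. Below is a minimal sketch of the standard triangle relaxation of ReLU, assuming concrete pre-activation bounds l <= u are already available; it only illustrates the kind of per-neuron abstraction such tools build on, not either tool's implementation.

```python
def relu_relaxation(l, u):
    """Return (lower, upper) linear bounds as (slope, intercept) pairs for
    y = ReLU(x) on the interval [l, u], in the style of the triangle relaxation."""
    if l >= 0.0:                      # neuron always active: y = x exactly
        return (1.0, 0.0), (1.0, 0.0)
    if u <= 0.0:                      # neuron always inactive: y = 0 exactly
        return (0.0, 0.0), (0.0, 0.0)
    # Unstable neuron: the tightest linear upper bound is the chord through (l, 0) and (u, u).
    slope_up = u / (u - l)
    upper = (slope_up, -slope_up * l)
    # For the lower bound, pick y >= x or y >= 0, whichever loses less area.
    lower = (1.0, 0.0) if u > -l else (0.0, 0.0)
    return lower, upper

# Example: an unstable neuron with pre-activation bounds [-1, 3].
print(relu_relaxation(-1.0, 3.0))    # lower: y >= x, upper: y <= 0.75*x + 0.75
```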
Interval Universal Approximation for Neural Networks
TLDR
The 1-dimensional indicator approximation t̂ constructed earlier can be used to tell, for each dimension j, whether x_j lies within the bounds of the neighborhood of G, and from it an indicator function approximation N_G for an m-dimensional box can be constructed.
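A schematic sketch of that composition (not the paper's exact construction): per-dimension indicators are combined with a minimum, so the box indicator fires only when every coordinate lies in its interval. The ramp-style t_hat below is a hand-written stand-in for the 1-dimensional approximation t̂.

```python
import numpy as np

def t_hat(x_j, a_j, b_j, delta=0.01):
    """Idealized 1-d indicator: ~1 when x_j is inside [a_j, b_j], ~0 outside,
    with a linear ramp of width delta standing in for the paper's approximation."""
    inside = min(x_j - a_j, b_j - x_j)            # signed distance to the nearer endpoint
    return float(np.clip(inside / delta, 0.0, 1.0))

def box_indicator(x, lows, highs):
    """N_G(x): combine per-dimension indicators with a minimum, so the output
    is ~1 only when every coordinate lies inside its interval."""
    return min(t_hat(x_j, a, b) for x_j, a, b in zip(x, lows, highs))

# A 2-dimensional box G = [0, 1] x [0, 2].
print(box_indicator([0.5, 1.0], [0.0, 0.0], [1.0, 2.0]))   # ~1: inside the box
print(box_indicator([1.5, 1.0], [0.0, 0.0], [1.0, 2.0]))   # 0: outside in dimension 0
```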
Certifying Robustness to Programmable Data Bias in Decision Trees
TLDR
The goal is to certify that models produced by a learning algorithm are pointwise-robust to potential dataset biases; a novel symbolic technique evaluates a decision-tree learner on a large, or even infinite, number of datasets, ensuring that they all produce the same prediction.
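As a rough illustration of what such a certificate means, the sketch below checks pointwise-robustness of a toy stump learner by brute force, enumerating every dataset reachable by flipping up to k labels; the paper replaces this enumeration with a symbolic abstraction. The learner, the bias model, and the data are all assumptions made up for the example.

```python
from itertools import combinations

def learn_stump(data):
    """Toy learner: split feature 0 at 0.5 and predict the majority label on each
    side (a stand-in for a real decision-tree learner)."""
    def majority(rows):
        return 1 if sum(y for _, y in rows) * 2 >= len(rows) else 0
    left = [r for r in data if r[0][0] <= 0.5]
    right = [r for r in data if r[0][0] > 0.5]
    def predict(x):
        side = left if x[0] <= 0.5 else right
        return majority(side) if side else majority(data)
    return predict

def certified_robust_to_label_bias(data, x, k):
    """Brute-force check: does the learned model predict the same class for x
    under every way of flipping up to k training labels?"""
    baseline = learn_stump(data)(x)
    for flips in range(1, k + 1):
        for subset in combinations(range(len(data)), flips):
            biased = [((xi, 1 - yi) if i in subset else (xi, yi))
                      for i, (xi, yi) in enumerate(data)]
            if learn_stump(biased)(x) != baseline:
                return False
    return True

# Hypothetical training set: (features, label) pairs.
data = [((0.1,), 0), ((0.2,), 0), ((0.3,), 0), ((0.7,), 1), ((0.8,), 1), ((0.9,), 1)]
print(certified_robust_to_label_bias(data, (0.15,), k=1))   # True: one label flip cannot change it
```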