Introduction to Neural Network Verification

@article{Albarghouthi2021IntroductionTN,
  title={Introduction to Neural Network Verification},
  author={Aws Albarghouthi},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.10317}
}
Deep learning has transformed the way we think of software and what it can do. But deep neural networks are fragile and their behaviors are often surprising. In many settings, we need to provide formal guarantees on the safety, security, correctness, or robustness of neural networks. This book covers foundational ideas from formal verification and their adaptation to reasoning about neural networks and deep learning. 
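To make the kind of formal guarantee studied in the book concrete, here is a minimal sketch (not taken from the book) that uses interval bound propagation, one of the simplest verification techniques, to certify an output bound for a toy ReLU network over an entire box of perturbed inputs. The weights and the checked property are made up for illustration.

import numpy as np

def affine_bounds(lo, hi, W, b):
    # Propagate the box [lo, hi] through x -> W @ x + b (sound interval arithmetic).
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def relu_bounds(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical 2-2-1 ReLU network.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.5])

x, eps = np.array([1.0, 0.0]), 0.1      # box: all inputs within L-inf eps of x
lo, hi = relu_bounds(*affine_bounds(x - eps, x + eps, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)  # sound output bounds, here [1.3, 1.7]
if hi[0] <= 3.0:
    print("certified: output <= 3.0 for every input in the box")

Because the bounds over-approximate the reachable outputs, a successful check is a proof over infinitely many inputs, not a test on finitely many samples.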

Scalable and Modular Robustness Analysis of Deep Neural Networks

Bounded-Block Poly can analyze very large neural networks such as SkipNet or ResNet, containing up to one million neurons, in under about one hour per input image, whereas DeepPoly can take up to 40 hours to analyze a single image.
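For intuition, DeepPoly-style analyses approximate each ReLU with linear lower and upper bounds computed from concrete pre-activation bounds. The sketch below shows that well-known relaxation; Bounded-Block Poly's block-wise, modular decomposition itself is not reproduced here.

def relu_relaxation(l, u):
    # Given concrete pre-activation bounds l <= x <= u, return coefficients
    # (a, b) of a linear lower bound a*x + b and a linear upper bound for ReLU(x).
    if l >= 0:                      # ReLU is the identity on [l, u]
        return (1.0, 0.0), (1.0, 0.0)
    if u <= 0:                      # ReLU is constantly zero on [l, u]
        return (0.0, 0.0), (0.0, 0.0)
    slope = u / (u - l)             # chord from (l, 0) to (u, u)
    upper = (slope, -slope * l)     # ReLU(x) <= slope * (x - l)
    lower = (1.0, 0.0) if u >= -l else (0.0, 0.0)  # area-minimizing choice
    return lower, upper

print(relu_relaxation(-1.0, 3.0))   # only the mixed-sign case loses precision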

Neuro-Symbolic Verification of Deep Neural Networks

This paper proposes a novel framework for verifying neural networks, named neuro-symbolic verification, which uses neural networks as part of the otherwise logical specification, enabling the verification of a wide variety of complex, real-world properties.
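The core idea can be illustrated with a toy predicate in which the specification itself invokes a network. All names below are hypothetical stand-ins, and a real neuro-symbolic verifier would encode the implication symbolically rather than test it pointwise.

import numpy as np

STOP_SIGN, BRAKE = 0, 0  # hypothetical class / action indices

def property_holds(x, perception_net, controller_net):
    # Specification: "if the perception network sees a stop sign,
    # the controller must brake" -- note the spec itself calls a network.
    is_stop_sign = int(np.argmax(perception_net(x))) == STOP_SIGN
    brakes = controller_net(x)[BRAKE] > 0.5
    return (not is_stop_sign) or brakes

# Stand-in networks so the sketch runs; real ones would be learned models.
perception_net = lambda x: np.array([x.sum(), 1.0])
controller_net = lambda x: np.array([0.9])
print(property_holds(np.ones(4), perception_net, controller_net))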

Interval universal approximation for neural networks

This paper introduces the interval universal approximation (IUA) theorem and shows that the range approximation (RA) problem is Δ2-intermediate, i.e., strictly harder than NP-complete problems, assuming coNP ⊄ NP.
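Roughly, and glossing over the paper's precise formulation, the IUA theorem says that interval analysis can be made arbitrarily precise by choosing the right network:

% Informal paraphrase; see the paper for the exact statement. For every
% continuous f on the unit box and every eps > 0 there is a ReLU network N
% whose interval abstraction N# over-approximates the range of f on every
% input box B by at most eps:
\forall \epsilon > 0.\ \exists N.\ \forall B \subseteq [0,1]^d.\quad
  f(B) \subseteq N^{\#}(B) \subseteq f(B) + [-\epsilon, \epsilon]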

Generative Adversarial Network and Its Application in Energy Internet

Zeqing Xiao · Mathematical Problems in Engineering · 2022
This paper introduces the framework, advantages, disadvantages, and improvements of the classic GAN, and surveys prospective applications of GANs in the Energy Internet (EI).
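For reference, the classic GAN framework discussed here is the minimax game between a generator G and a discriminator D introduced by Goodfellow et al.:

\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] +
  \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]

D is trained to distinguish real data from generated samples G(z), while G is trained to fool D.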

Synergistic Redundancy: Towards Verifiable Safety for Autonomous Vehicles

Synergistic Redundancy provides a safe architecture for deploying high-performance, though inherently unverifiable, machine learning software within its mission layer, achieving predictable safety limits and deterministic safe behavior when operating within those limits.
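The general pattern, sketched below with hypothetical stand-ins rather than the paper's actual architecture, pairs an unverified high-performance planner with a verified monitor and a provably safe fallback:

def act(state, ml_planner, safety_monitor, safe_fallback):
    proposal = ml_planner(state)            # high-performance, unverified
    if safety_monitor(state, proposal):     # verified, deterministic check
        return proposal
    return safe_fallback(state)             # provably safe default action

# Toy stand-ins so the sketch runs.
print(act(5.0,
          ml_planner=lambda s: s * 2.0,
          safety_monitor=lambda s, a: abs(a) <= 8.0,
          safe_fallback=lambda s: 0.0))     # 10.0 violates the envelope -> 0.0

Safety then rests only on the monitor and fallback, which are simple enough to verify, not on the ML planner.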

Verifiable Obstacle Detection

This work establishes strict bounds on the capabilities of an existing LiDAR-based classical obstacle detection algorithm and provides a rigorous analysis of the obstacle detection system, with empirical results based on real-world sensor data.

CEG4N: Counter-Example Guided Neural Network Quantization Refinement

This work proposes Counter-Example Guided Neural Network Quantization Refinement (CEG4N), a technique that combines search-based quantization and equivalence verification: the former minimizes the computational requirements, while the latter guarantees that the network's output does not change after quantization.
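A counter-example guided loop of this shape might look as follows; quantize and verify_equivalent are hypothetical interfaces, and the bit-width refinement policy is a guess rather than CEG4N's actual strategy.

def ceg_quantize(net, quantize, verify_equivalent, bits=2, max_bits=16):
    while bits <= max_bits:
        qnet = quantize(net, bits)
        ok, cex = verify_equivalent(net, qnet)  # cex: input where outputs differ
        if ok:
            return qnet, bits       # equivalence certified at this bit-width
        bits += 1                   # refine: spend more bits and retry;
                                    # cex could also guide where to add precision
    raise RuntimeError("no equivalent quantization found up to max_bits")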

Double Sampling Randomized Smoothing

Theoretically, under mild assumptions, it is proved that DSRS can certify a Θ(√d) robust radius under the ℓ2 norm, where d is the input dimension, implying that DSRS may be able to break the curse of dimensionality of randomized smoothing.
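For context, DSRS extends standard single-distribution randomized smoothing, in which a base classifier f is smoothed with Gaussian noise and certified via the bound of Cohen et al. (DSRS's two-distribution certificate is more involved):

g(x) = \arg\max_{c} \; \Pr_{\delta \sim \mathcal{N}(0,\sigma^2 I)}\!\left[f(x+\delta)=c\right],
\qquad
R = \frac{\sigma}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right)

Here p_A and p_B are bounds on the top-two class probabilities of the smoothed classifier and Φ⁻¹ is the standard Gaussian quantile function.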

On Neural Network Equivalence Checking using SMT Solvers

This work presents a first SMT-based encoding of the equivalence checking problem, explores its utility and limitations, and proposes avenues for future research and improvements towards more scalable and practically applicable solutions.
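To give a flavor of such an encoding, here is a minimal sketch with the Z3 Python API that asks whether two tiny one-neuron ReLU networks can disagree by more than eps on an input box; the networks, the box, and eps are all made up, and the paper's actual encoding differs.

from z3 import Real, If, Solver, Or, sat

x = Real("x")
relu = lambda t: If(t > 0, t, 0)

net_a = relu(2 * x - 1)            # original network
net_b = relu(2 * x - 1) + 0.05     # hypothetical "quantized" variant

eps = 0.1
s = Solver()
s.add(x >= 0, x <= 1)                                 # input box
s.add(Or(net_a - net_b > eps, net_b - net_a > eps))   # disagreement query
if s.check() == sat:
    print("not equivalent; counterexample:", s.model())
else:
    print(f"equivalent within {eps} on [0, 1]")

The query is phrased as satisfiability of a disagreement: an unsat answer is a proof of (approximate) equivalence over the whole box.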

Safe Neurosymbolic Learning with Differentiable Symbolic Execution

Differentiable Symbolic Execution (DSE) learns programs by sampling code paths using symbolic execution, constructing gradients of a worst-case "safety loss" along these paths, and then backpropagating these gradients through program operations using a generalization of the REINFORCE estimator.
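The score-function (REINFORCE) estimator that DSE generalizes can be sketched on a toy one-branch program; everything below is a stand-in for DSE's path sampling and worst-case safety loss, not the paper's algorithm.

import numpy as np

rng = np.random.default_rng(0)
theta = 0.0                                    # parameter of a Bernoulli branch choice

def path_logprob_grad(branch, theta):
    p = 1 / (1 + np.exp(-theta))               # probability of taking branch 1
    return branch - p                          # d/dtheta of log Bernoulli(branch; p)

def safety_loss(branch):
    return 1.0 if branch == 1 else 0.0         # branch 1 is the "unsafe" path

for step in range(200):                        # minimize E[safety_loss] by SGD
    branch = int(rng.random() < 1 / (1 + np.exp(-theta)))
    grad = safety_loss(branch) * path_logprob_grad(branch, theta)  # REINFORCE
    theta -= 0.5 * grad                        # pushes mass toward the safe branch
print("P(unsafe branch) after training:", 1 / (1 + np.exp(-theta)))

The estimator lets gradients flow through the discrete choice of path, which is why it is a natural fit for loss terms defined over sampled execution paths.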