Provably-Robust Runtime Monitoring of Neuron Activation Patterns

@article{Cheng2021ProvablyRobustRM,
  title={Provably-Robust Runtime Monitoring of Neuron Activation Patterns},
  author={Chih-Hong Cheng},
  journal={2021 Design, Automation \& Test in Europe Conference \& Exhibition (DATE)},
  year={2021},
  pages={1310-1313}
}
For deep neural networks (DNNs) to be used in safety-critical autonomous driving tasks, it is desirable to monitor during operation whether the input to the DNN is similar to the data used in DNN training. While recent results in monitoring DNN activation patterns provide a sound guarantee by building an abstraction out of the training data set, reducing false positives caused by slight input perturbations has been an obstacle to successfully adopting these techniques. We address this challenge by…
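
To make the abstract's idea concrete, the following minimal sketch illustrates activation-pattern monitoring in general terms; it is not the paper's implementation, and the layer choice, the binarization of ReLU activations, and the Hamming-distance tolerance are all illustrative assumptions. An abstraction of the training data is built by recording the on/off pattern of one layer's neurons for every training input; at run time, an input whose pattern is far from every recorded pattern raises a warning.

```python
import numpy as np

# Illustrative sketch of activation-pattern monitoring (not the paper's code).
# `feature_fn` is assumed to map an input to the activation vector of one
# close-to-output layer of the trained network.

def build_monitor(feature_fn, train_inputs):
    """Record the binarized (on/off) activation pattern of every training input."""
    patterns = set()
    for x in train_inputs:
        pattern = tuple((feature_fn(x) > 0).astype(np.int8))  # ReLU on/off signature
        patterns.add(pattern)
    return patterns

def monitor_warns(patterns, feature_fn, x, hamming_tolerance=0):
    """Warn if the run-time pattern differs from every recorded pattern by more
    than `hamming_tolerance` bits."""
    pattern = (feature_fn(x) > 0).astype(np.int8)
    for seen in patterns:
        if int(np.sum(pattern != np.array(seen))) <= hamming_tolerance:
            return False  # close to a known pattern: input resembles training data
    return True  # no recorded pattern is close enough: raise a warning
```

With a zero tolerance, a slight input perturbation that flips a single neuron already triggers a warning, which is exactly the false-positive problem the abstract mentions; one way to make the monitor provably robust is to enlarge the stored abstraction using sound bounds on how much a bounded perturbation can change the activations.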

Citations

Abstraction and Symbolic Execution of Deep Neural Networks with Bayesian Approximation of Hidden Features
This work presents an approach to abstraction and symbolic execution of deep neural networks based on a Bayesian approximation of hidden features.
Customizable Reference Runtime Monitoring of Neural Networks using Resolution Boxes
This work presents an approach for the runtime verification of classification systems via data abstraction, and shows how to automatically construct monitors that make use of both the correct and incorrect behaviors of a classification system.

References

Showing 1-10 of 17 references
Runtime Monitoring Neuron Activation Patterns
For using neural networks in safety-critical domains, it is important to know whether a decision made by a neural network is supported by prior similarities in training. We propose runtime neuron…
AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation
This work presents AI2, the first sound and scalable analyzer for deep neural networks, and introduces abstract transformers that capture the behavior of fully connected and convolutional neural network layers with rectified linear unit (ReLU) activations, as well as max-pooling layers.
A Safety Framework for Critical Systems Utilising Deep Neural Networks
A principled, novel safety argument framework for critical systems that utilise deep neural networks, allowing various forms of prediction, e.g., future reliability of passing some demands, or confidence in a required reliability level.
Star-Based Reachability Analysis of Deep Neural Networks
This paper proposes novel reachability algorithms for both exact (sound and complete) and over-approximate (sound) analysis of deep neural networks (DNNs), using star sets as a symbolic representation of sets of states to provide an effective representation of high-dimensional polytopes.
Towards Safety Verification of Direct Perception Neural Networks
This work approaches the specification problem by learning an input property characterizer which carefully extends a direct perception neural network at close-to-output layers, and addresses the scalability problem with a novel assume-guarantee-based verification approach.
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
This work proposes an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high-quality predictive uncertainty estimates.
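
For comparison with the monitoring approach above, the ensemble recipe is easy to sketch: train several copies of the same network from different random initializations, average their predictive distributions, and use the disagreement between members as an uncertainty signal. This is a generic illustration; `make_model`, `fit`, and `predict_proba` are assumed helpers, not an API from the paper.

```python
import numpy as np

def train_ensemble(make_model, fit, data, n_members=5):
    """Train n independently initialized copies of the same architecture."""
    return [fit(make_model(seed=i), data) for i in range(n_members)]

def ensemble_predict(members, x):
    """Average the members' class probabilities; report disagreement as spread."""
    probs = np.stack([m.predict_proba(x) for m in members])  # (n_members, n_classes)
    return probs.mean(axis=0), probs.std(axis=0)  # mean prediction, per-class spread
```
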
On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models
This work shows how a simple bounding technique, interval bound propagation (IBP), can be exploited to train large provably robust neural networks that beat the state of the art in verified accuracy, and allows the largest model to be verified beyond vacuous bounds on a downscaled version of ImageNet.
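
Interval bound propagation itself is compact enough to sketch: push an elementwise box through each layer, using |W| on the box radius for affine layers and monotonicity for ReLU. The code below is a generic illustration of the bounding step, not the paper's training procedure; the example numbers are made up.

```python
import numpy as np

def affine_bounds(lower, upper, W, b):
    """Propagate elementwise bounds [lower, upper] through y = W @ x + b."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # the worst case uses |W| on the radius
    return new_center - new_radius, new_center + new_radius

def relu_bounds(lower, upper):
    """ReLU is monotone, so the bounds pass through directly."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Bounds on one hidden layer's activations under an L-infinity input
# perturbation of radius eps (all values are illustrative).
x = np.array([0.2, -0.1]); eps = 0.05
W = np.array([[1.0, -2.0], [0.5, 0.3]]); b = np.array([0.1, -0.2])
l, u = relu_bounds(*affine_bounds(x - eps, x + eps, W, b))
```

Bounds of this kind on neuron values are also a natural ingredient for making an activation-pattern monitor robust to bounded input perturbations.
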
Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks
This work presents a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP), which works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients.
Rule-Based Safety Evidence for Neural Networks
This position paper proposes the use of rules extracted from neural networks as artefacts for safety evidence, discusses the rationale behind this use, and illustrates it using the MNIST dataset.
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
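
The dropout-as-approximate-inference recipe is likewise short: keep dropout active at test time, run several stochastic forward passes, and read the predictive mean and variance off the samples. The sketch below assumes a PyTorch-style classifier containing dropout layers; it is an illustration of the general idea, not code from the paper.

```python
import torch

def mc_dropout_predict(model, x, n_samples=20):
    """Monte Carlo dropout: average several stochastic forward passes."""
    model.train()  # keeps dropout active; in practice, freeze batch-norm separately
    with torch.no_grad():
        samples = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    model.eval()
    return samples.mean(dim=0), samples.var(dim=0)  # predictive mean and variance
```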