Outside the Box: Abstraction-Based Monitoring of Neural Networks

@article{Henzinger2020OutsideTB,
  title={Outside the Box: Abstraction-Based Monitoring of Neural Networks},
  author={Thomas A. Henzinger and Anna Lukina and Christian Schilling},
  journal={ArXiv},
  year={2020},
  volume={abs/1911.09032}
}
Neural networks have demonstrated unmatched performance in a range of classification tasks. Despite numerous efforts of the research community, novelty detection remains one of the significant limitations of neural networks. The ability to identify previously unseen inputs as novel is crucial for our understanding of the decisions made by neural networks. At runtime, inputs not falling into any of the categories learned during training cannot be classified correctly by the neural network… 
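
To make the approach concrete, here is a minimal sketch of the box-abstraction idea (a simplification: the paper builds finer abstractions than one box per class, e.g. by clustering activations first; all names below are illustrative, not taken from the paper's artifact). Hidden-layer activation vectors of training inputs are summarized per class by interval boxes, and a runtime input whose activation vector falls outside the box of its predicted class is reported as potentially novel.

import numpy as np

class BoxMonitor:
    """Per-class interval (box) abstraction over hidden-layer activations.

    Illustrative sketch: one box per class, built from the activation
    vectors of training samples of that class.
    """

    def __init__(self, num_classes, margin=0.0):
        self.num_classes = num_classes
        self.margin = margin          # optional enlargement of the boxes
        self.lo = {}                  # class -> per-dimension lower bounds
        self.hi = {}                  # class -> per-dimension upper bounds

    def fit(self, activations, labels):
        """activations: (n, d) hidden-layer vectors; labels: (n,) classes."""
        for c in range(self.num_classes):
            acts_c = activations[labels == c]
            if len(acts_c) == 0:
                continue
            self.lo[c] = acts_c.min(axis=0) - self.margin
            self.hi[c] = acts_c.max(axis=0) + self.margin

    def accepts(self, activation, predicted_class):
        """True if the activation lies inside the box of the predicted class."""
        if predicted_class not in self.lo:
            return False
        inside = ((activation >= self.lo[predicted_class]) &
                  (activation <= self.hi[predicted_class]))
        return bool(inside.all())

# Usage sketch: flag runtime inputs whose activations fall outside the box.
rng = np.random.default_rng(0)
train_acts = rng.normal(size=(1000, 16))
train_labels = rng.integers(0, 3, size=1000)
monitor = BoxMonitor(num_classes=3, margin=0.05)
monitor.fit(train_acts, train_labels)
novel = not monitor.accepts(rng.normal(loc=5.0, size=16), predicted_class=1)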

Citations

Customizable Reference Runtime Monitoring of Neural Networks using Resolution Boxes
TLDR
This work presents an approach for the runtime verification of classification systems via data abstraction, and shows how to automatically construct monitors that make use of both the correct and incorrect behaviors of a classification system.
Into the unknown: Active monitoring of neural networks
TLDR
This work proposes an algorithmic framework for active monitoring of neural-network classifiers that allows for their deployment in dynamic environments where unknown input classes appear frequently, and adjusts the framework to novel inputs incrementally, thereby improving short-term reliability of the classification.
Provably-Robust Runtime Monitoring of Neuron Activation Patterns
  • Chih-Hong Cheng
  • 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE)
TLDR
This work addresses the challenge of reducing false positives due to slight input perturbation in monitoring DNN activation patterns by integrating formal symbolic reasoning inside the monitor construction process.
Hack The Box: Fooling Deep Learning Abstraction-Based Monitors
TLDR
It is demonstrated that novelty detection itself ends up as an attack surface when crafting adversarial samples that fool the deep learning classifier and bypass the novelty detection monitoring at the same time.
Active Monitoring of Neural Networks (Delft University of Technology)
TLDR
An algorithmic framework for monitoring reliability of a neural network and a monitor wrapped in this framework operates in parallel with the classifier, communicates interpretable labeling queries to the human user, and incrementally adapts to their feedback.
Gaussian-Based Runtime Detection of Out-of-distribution Inputs for Neural Networks
TLDR
A simple approach for runtime monitoring of deep neural networks is introduced, based on inferring Gaussian models of some of the neurons and layers, and it is shown how to use it for out-of-distribution detection.
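
A rough illustration of the Gaussian-based idea, assuming one multivariate Gaussian per class is fitted to the activations of a monitored layer (the paper's exact model selection and threshold calibration are not reproduced; names are illustrative):

import numpy as np
from scipy.stats import multivariate_normal

def fit_gaussians(activations, labels, num_classes, reg=1e-3):
    """Fit one multivariate Gaussian per class to layer activations."""
    models = {}
    d = activations.shape[1]
    for c in range(num_classes):
        acts_c = activations[labels == c]
        mean = acts_c.mean(axis=0)
        cov = np.cov(acts_c, rowvar=False) + reg * np.eye(d)  # regularized
        models[c] = multivariate_normal(mean=mean, cov=cov)
    return models

def is_out_of_distribution(models, activation, threshold):
    """Flag the input if its best per-class log-density is below threshold."""
    best = max(m.logpdf(activation) for m in models.values())
    return best < threshold
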
Continuous Safety Verification of Neural Networks
TLDR
This paper considers approaches to transfer results established in the previous DNN safety verification problem to the modified problem setting and develops several sufficient conditions that only require formally analyzing a small part of the DNN in the new problem.
Model Assertions for Monitoring and Improving ML Models
TLDR
This work proposes a new abstraction, model assertions, that adapts the classical use of program assertions as a way to monitor and improve ML models and proposes an API for generating "consistency assertions" and weak labels for inputs where the consistency assertions fail.
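
A toy example of what such a consistency assertion could look like for video object detection (the specific check, the names, and the threshold are hypothetical, chosen only to show the flavor of the idea): flag frames where the detection count changes implausibly, and collect them for relabeling or weak supervision.

def count_consistency_assertion(frame_detections, max_jump=2):
    """Hypothetical consistency assertion over per-frame detection counts.

    frame_detections: list of per-frame detection lists.
    Returns indices of frames where the detection count jumps by more than
    max_jump relative to the previous frame, i.e. where the assertion fails
    and a corrective action or human label could be requested.
    """
    violations = []
    counts = [len(dets) for dets in frame_detections]
    for i in range(1, len(counts)):
        if abs(counts[i] - counts[i - 1]) > max_jump:
            violations.append(i)
    return violations

# Example: an object "flickering" in and out between frames trips the assertion.
frames = [[1, 2, 3], [1, 2, 3], [], [1, 2, 3]]
assert count_consistency_assertion(frames) == [2, 3]
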
Run-Time Monitoring of Machine Learning for Robotic Perception: A Survey of Emerging Trends
TLDR
This paper identifies trends emerging in the literature on run-time monitoring of the performance and reliability of perception systems and summarizes the various approaches to the topic.
Monitoring Object Detection Abnormalities via Data-Label and Post-Algorithm Abstractions
TLDR
This paper develops abstraction-based monitoring as a logical framework for filtering potentially erroneous detection results and considers two types of abstraction, namely data-label abstraction and post-algorithm abstraction.
…

References

SHOWING 1-10 OF 71 REFERENCES
Runtime Monitoring Neuron Activation Patterns
For using neural networks in safety-critical domains, it is important to know if a decision made by a neural network is supported by prior similarities in training. We propose runtime neuron…
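
The snippet above is cut off; as a rough sketch of activation-pattern monitoring in this spirit (assuming binarized on/off patterns of a hidden layer collected from training data, with a plain Python set standing in for the compact symbolic data structure used in the paper):

import numpy as np

def record_patterns(activations, threshold=0.0):
    """Binarize hidden-layer activations of training inputs into on/off
    patterns and store them in a set."""
    return {tuple((a > threshold).astype(int)) for a in activations}

def pattern_supported(patterns, activation, threshold=0.0, max_hamming=0):
    """Check whether the runtime activation pattern equals a stored pattern,
    optionally tolerating up to max_hamming differing neurons."""
    query = (activation > threshold).astype(int)
    return any(int(np.abs(query - np.array(p)).sum()) <= max_hamming
               for p in patterns)
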
Selective Classification for Deep Neural Networks
TLDR
A method to construct a selective classifier from a trained neural network that allows a user to set a desired risk level; the classifier then rejects instances as needed to grant the desired risk (with high probability).
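
A simplified sketch of the reject-option mechanism, assuming the maximum softmax value as the confidence score and plain empirical risk on a validation set (the paper's high-probability guarantee and alternative confidence scores are omitted):

import numpy as np

def calibrate_rejection_threshold(confidences, correct, target_risk):
    """Pick a confidence threshold whose empirical risk on a validation set
    is at most target_risk. confidences: (n,) scores; correct: (n,) booleans."""
    order = np.argsort(-confidences)                 # most confident first
    conf_sorted = confidences[order]
    correct_sorted = correct[order].astype(bool)
    errors = np.cumsum(~correct_sorted)
    risks = errors / np.arange(1, len(correct_sorted) + 1)
    ok = np.where(risks <= target_risk)[0]
    if len(ok) == 0:
        return np.inf                                # reject everything
    return conf_sorted[ok[-1]]                       # largest accepted prefix

def selective_predict(confidence, prediction, threshold):
    """Return the prediction, or None to abstain when confidence is too low."""
    return prediction if confidence >= threshold else None
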
KS(conf): A Light-Weight Test if a ConvNet Operates Outside of Its Specifications
TLDR
KS(conf) is described, a procedure for detecting out-of-specs situations that is easy to implement, adds almost no overhead to the system, works with all networks, including pretrained ones, and requires no a priori knowledge about how the data distribution could change.
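
A sketch of such a check, assuming the statistic is a two-sample Kolmogorov-Smirnov test comparing top-softmax confidence values from a validation window against those observed at deployment time (the paper's exact windowing and threshold calibration are not reproduced):

import numpy as np
from scipy.stats import ks_2samp

def out_of_specs(val_confidences, deployed_confidences, alpha=0.01):
    """Flag a distribution shift in top-softmax confidences via a KS test."""
    statistic, p_value = ks_2samp(val_confidences, deployed_confidences)
    return p_value < alpha

# Usage sketch: a small p-value signals out-of-specs operation.
rng = np.random.default_rng(0)
in_spec = rng.beta(8, 2, size=2000)       # mostly confident predictions
shifted = rng.beta(2, 2, size=500)        # flatter confidence profile
print(out_of_specs(in_spec, shifted))     # expected: True
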
Predicting Failures of Vision Systems
TLDR
This work shows that a surprisingly straightforward and general approach, called ALERT, can predict the likely accuracy (or failure) of a variety of computer vision systems - semantic segmentation, vanishing point and camera parameter estimation, and image memorability prediction - on individual input images.
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
TLDR
This work proposes an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates.
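
A minimal, framework-agnostic sketch of the ensemble recipe: average the softmax outputs of independently trained members and use, for example, the predictive entropy of the average as the uncertainty score (the adversarial-training variant and proper scoring rules discussed in the paper are omitted):

import numpy as np

def ensemble_predict(member_probs):
    """member_probs: (M, n, k) softmax outputs of M ensemble members.

    Returns the averaged class probabilities (n, k) and the predictive
    entropy (n,) of the ensemble, used here as an uncertainty score.
    """
    mean_probs = member_probs.mean(axis=0)
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)
    return mean_probs, entropy
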
Model Assertions for Debugging Machine Learning
TLDR
This work proposes several ways to use model assertions in ML debugging, including use in runtime monitoring, in performing corrective actions, and in collecting “hard examples” to further train models with human labeling or weak supervision.
On Calibration of Modern Neural Networks
TLDR
It is discovered that modern neural networks, unlike those from a decade ago, are poorly calibrated, and on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
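
Temperature scaling itself is a one-parameter post-processing step; a sketch with a grid search standing in for the usual optimizer-based fit on validation logits:

import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels):
    """Negative log-likelihood of integer labels under the softmax."""
    probs = softmax(logits)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(val_logits, val_labels):
    """Pick the temperature minimizing validation NLL (grid-search sketch)."""
    grid = np.linspace(0.5, 5.0, 91)
    return min(grid, key=lambda t: nll(val_logits / t, val_labels))

def calibrated_probs(test_logits, temperature):
    """Apply the fitted temperature before the softmax."""
    return softmax(test_logits / temperature)
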
Adversarially Learned One-Class Classifier for Novelty Detection
TLDR
The results on MNIST and Caltech-256 image datasets, along with the challenging UCSD Ped2 dataset for video anomaly detection, illustrate that the proposed method learns the target class effectively and is superior to the baseline and state-of-the-art methods.
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
TLDR
A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
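
A sketch of Monte Carlo dropout at prediction time, assuming a PyTorch model with dropout layers: keep dropout active, average several stochastic forward passes, and read off the variance as a simple uncertainty estimate.

import torch

@torch.no_grad()
def mc_dropout_predict(model, x, num_samples=30):
    """Monte Carlo dropout: keep dropout active at test time and average
    the softmax outputs of several stochastic forward passes.

    Returns the mean class probabilities and their per-class variance,
    the latter serving as a simple uncertainty estimate.
    """
    model.eval()
    # Re-enable dropout layers only, so other layers stay in eval mode.
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()
    probs = torch.stack([
        torch.softmax(model(x), dim=-1) for _ in range(num_samples)
    ])
    return probs.mean(dim=0), probs.var(dim=0)

# Usage sketch with a toy classifier containing dropout.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(),
    torch.nn.Dropout(p=0.5), torch.nn.Linear(64, 3),
)
mean_probs, variance = mc_dropout_predict(model, torch.randn(8, 16))
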
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
TLDR
A simple baseline that utilizes probabilities from softmax distributions is presented, showing its effectiveness across computer vision, natural language processing, and automatic speech recognition tasks; it is also shown that the baseline can sometimes be surpassed.
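
The baseline reduces to scoring each input by its maximum softmax probability; a sketch with an illustrative percentile-based threshold (not the paper's evaluation protocol):

import numpy as np

def max_softmax_scores(logits):
    """Maximum softmax probability per input, the baseline detection score."""
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def flag_out_of_distribution(logits, threshold):
    """An input is flagged when its maximum softmax probability is low."""
    return max_softmax_scores(logits) < threshold

# One possible way to set the threshold: a low percentile of the scores on
# held-out in-distribution data (an illustrative choice, not the paper's).
def choose_threshold(in_distribution_logits, percentile=5):
    return np.percentile(max_softmax_scores(in_distribution_logits), percentile)
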
…