Into the unknown: Active monitoring of neural networks

@inproceedings{Lukina2021IntoTU,
  title={Into the unknown: Active monitoring of neural networks},
  author={Anna Lukina and Christian Schilling and Thomas A. Henzinger},
  booktitle={RV},
  year={2021}
}
Machine-learning techniques achieve excellent performance in modern applications. In particular, neural networks enable training classifiers, often used in safety-critical applications, to complete a variety of tasks without human supervision. Neural-network models have neither the means to identify what they do not know nor to interact with the human user before making a decision. When deployed in the real world, such models work reliably in scenarios they have seen during training. In… 
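
A minimal Python sketch of the monitored-classification loop that active monitoring alludes to, assuming a monitor that can flag unfamiliar inputs and be updated; the names classifier, monitor.is_known, monitor.update, and ask_authority are illustrative placeholders, not the paper's API:

def monitored_predict(x, classifier, monitor, ask_authority):
    # Classify x, but defer to a human authority when the monitor
    # flags the input as unfamiliar, then adapt the monitor.
    label = classifier(x)                 # the network's proposed decision
    if monitor.is_known(x, label):        # input resembles what was seen in training
        return label
    true_label = ask_authority(x)         # query the human user for ground truth
    monitor.update(x, true_label)         # incorporate the novel example
    return true_label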

Customizable Reference Runtime Monitoring of Neural Networks using Resolution Boxes

This work presents an approach for the runtime verification of classification systems via data abstraction, and shows how to automatically construct monitors that make use of both the correct and incorrect behaviors of a classification system.
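
As a rough Python illustration of monitoring by data abstraction, the sketch below wraps the hidden-layer feature vectors of correctly classified training samples in one axis-aligned box per class and accepts a runtime prediction only if its features fall inside the (optionally enlarged) box. The paper's monitors are richer, using sets of boxes at a chosen resolution and also abstracting incorrect behaviors; the class and method names here are made up for illustration.

import numpy as np

class BoxMonitor:
    # One axis-aligned bounding box per class over hidden-layer features.
    def __init__(self, enlargement=0.0):
        self.enlargement = enlargement
        self.boxes = {}  # label -> (lower bounds, upper bounds)

    def fit(self, features, labels):
        # features: (n_samples, n_dims) activations of correctly classified samples
        for label in np.unique(labels):
            pts = features[labels == label]
            self.boxes[label] = (pts.min(axis=0), pts.max(axis=0))

    def accepts(self, feature, label):
        if label not in self.boxes:
            return False
        lo, hi = self.boxes[label]
        margin = self.enlargement * (hi - lo)
        return bool(np.all(feature >= lo - margin) and np.all(feature <= hi + margin))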

Unifying Evaluation of Machine Learning Safety Monitors

Three safety-oriented metrics are introduced, capturing the safety benefits of the monitor, the remaining safety gaps after using it, and its negative impact on the system's performance (availability cost).

Correct-by-Construction Runtime Enforcement in AI - A Survey

The purpose of this paper is to foster a better understanding of advantages and limitations of enforcement techniques, focusing on the specific challenges that arise due to their application in AI.

Verification-Aided Deep Ensemble Selection

This case study harnesses recent advances in DNN verification to devise a methodology for identifying ensemble compositions that are less prone to simultaneous errors, even when the input is adversarially perturbed, resulting in more robustly accurate ensemble-based classification.

An Abstraction-Refinement Approach to Verifying Convolutional Neural Networks

The core of Cnn-Abs is an abstraction-refinement technique, which simplifies the verification problem through the removal of convolutional connections in a way that soundly creates an over-approximation of the original problem; and which restores these connections if the resulting problem becomes too abstract.

SpecRepair: Counter-Example Guided Safety Repair of Deep Neural Networks

SpecRepair is a tool that efficiently eliminates counter-examples from a DNN and produces a provably safe DNN without harming its classification accuracy; it is more successful in producing safe DNNs than comparable methods and has a shorter runtime.
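
A generic counter-example guided repair loop, sketched in Python under the assumption of a verifier that either proves the property or returns a violating input, and a fine-tuning step that trains the violation away; verify and finetune are placeholders rather than SpecRepair's actual interface:

def repair(network, property_spec, verify, finetune, max_rounds=10):
    # Alternate verification and repair until the property holds
    # or the round budget is exhausted.
    for _ in range(max_rounds):
        counterexample = verify(network, property_spec)   # None if the property holds
        if counterexample is None:
            return network, True                          # provably safe network
        network = finetune(network, counterexample)       # eliminate this violation
    return network, False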

Abstraction and Symbolic Execution of Deep Neural Networks with Bayesian Approximation of Hidden Features

This work abstracts deep neural networks through Bayesian approximation of their hidden features, enabling symbolic execution of the resulting abstraction.

Provably-Robust Runtime Monitoring of Neuron Activation Patterns

  • Chih-Hong Cheng
  • Computer Science
    2021 Design, Automation & Test in Europe Conference & Exhibition (DATE)
  • 2021
This work addresses the challenge of reducing false positives due to slight input perturbation in monitoring DNN activation patterns by integrating formal symbolic reasoning inside the monitor construction process.
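
One way to read "formal symbolic reasoning inside the monitor construction" is sketched below in Python: instead of recording a neuron's on/off bit from a single input, interval bounds on the pre-activations under the allowed perturbation decide each bit, and bits whose sign is not stable become don't-cares. This is an illustrative simplification, not the paper's exact (BDD-based) construction.

def robust_pattern(pre_act_lower, pre_act_upper):
    # Ternary activation pattern from interval bounds on pre-activations:
    # 1 = provably active, 0 = provably inactive, None = undecided
    # under the considered input perturbation (a don't-care bit).
    pattern = []
    for lb, ub in zip(pre_act_lower, pre_act_upper):
        if lb > 0:
            pattern.append(1)
        elif ub <= 0:
            pattern.append(0)
        else:
            pattern.append(None)
    return tuple(pattern)

def matches(stored, query):
    # A query matches a stored pattern if they agree on all decided bits.
    return all(s is None or q is None or s == q for s, q in zip(stored, query))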

References

Showing 1-10 of 61 references

Runtime Monitoring Neuron Activation Patterns

For using neural networks in safety-critical domains, it is important to know whether a decision made by a neural network is supported by prior similarities in training. We propose runtime monitoring of neuron activation patterns.
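
A compact Python sketch of the idea: binarize the activations of a near-output layer for each correctly classified training sample, remember the patterns per class, and at runtime treat an unseen pattern as a warning that the decision lacks support from training. The original construction stores patterns symbolically (e.g., in BDDs) and can tolerate small Hamming distances; this sketch uses a plain set.

def activation_pattern(activations):
    # Binarize a layer's activations into an on/off pattern.
    return tuple(int(a > 0) for a in activations)

class PatternMonitor:
    def __init__(self):
        self.seen = {}  # label -> set of activation patterns observed in training

    def fit(self, layer_activations, labels):
        for acts, label in zip(layer_activations, labels):
            self.seen.setdefault(label, set()).add(activation_pattern(acts))

    def supported(self, activations, predicted_label):
        # True if this exact pattern was observed for the predicted class.
        return activation_pattern(activations) in self.seen.get(predicted_label, set())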

iCaRL: Incremental Classifier and Representation Learning

iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail, which distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures.

A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks

  • ICLR
  • 2017
A simple baseline that utilizes probabilities from softmax distributions is presented; its effectiveness is demonstrated across computer vision, natural language processing, and automatic speech recognition tasks, and it is shown that the baseline can sometimes be surpassed.
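
The baseline itself fits in a few lines of Python: score an input by its maximum softmax probability and flag it when the score is low. The threshold below is an arbitrary illustration; the paper evaluates the score with threshold-free detection metrics such as AUROC and AUPR.

import numpy as np

def max_softmax_score(logits):
    # Numerically stable softmax; the confidence score is its maximum.
    z = logits - np.max(logits)
    probs = np.exp(z) / np.sum(np.exp(z))
    return float(np.max(probs))

def flag_example(logits, threshold=0.9):
    # Flag a possible misclassification or out-of-distribution input
    # when the top softmax probability falls below the threshold.
    return max_softmax_score(logits) < threshold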

Classifier adaptation at prediction time

  • CVPR (IEEE Computer Society)
  • 2015
This work describes a probabilistic method for adapting classifiers at prediction time without having to retrain them, and introduces a framework for creating realistically distributed image sequences, which offers a way to benchmark classifier adaptation methods, such as the one proposed.
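
A hedged Python sketch of the core idea, adapting a classifier's posterior to a shifted label distribution at prediction time via Bayes' rule and estimating that distribution online from the stream itself; the paper's method maintains a proper probabilistic model over label counts, so this is a simplification.

import numpy as np

def adapt_posterior(p_model, train_prior, test_prior):
    # p_new(y|x) is proportional to p_model(y|x) * p_test(y) / p_train(y).
    w = p_model * (test_prior / train_prior)
    return w / w.sum()

def predict_sequence(posteriors, train_prior, smoothing=1.0):
    # Process a stream of softmax outputs, updating a running estimate
    # of the test-time class prior from the adapted decisions.
    counts = np.full(len(train_prior), smoothing)
    decisions = []
    for p in posteriors:
        test_prior = counts / counts.sum()
        y = int(np.argmax(adapt_posterior(p, train_prior, test_prior)))
        counts[y] += 1
        decisions.append(y)
    return decisions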

Principal Component Analysis

  • H. Shen
  • Environmental Science
    Encyclopedia of Database Systems
  • 2009
The Karhunen-Loève basis functions, more frequently referred to as principal components or empirical orthogonal functions (EOFs), of the noise response of the climate system are an important tool.
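
For reference, principal components can be computed directly as eigenvectors of the sample covariance matrix; a short self-contained Python/NumPy version:

import numpy as np

def pca(X, n_components):
    # Principal components via eigendecomposition of the sample covariance.
    Xc = X - X.mean(axis=0)                  # center the data
    cov = np.cov(Xc, rowvar=False)           # (d, d) covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # descending by explained variance
    components = eigvecs[:, order[:n_components]]
    return Xc @ components, components, eigvals[order]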

Kernel Principal Component Analysis

A new method for performing a nonlinear form of Principal Component Analysis by the use of integral operator kernel functions is proposed and experimental results on polynomial feature extraction for pattern recognition are presented.
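
A compact NumPy illustration of the kernel trick here: eigendecompose a centered RBF kernel matrix instead of a covariance matrix; the projection of each training point onto a component is the corresponding eigenvector entry scaled by the square root of its eigenvalue (the gamma value is arbitrary).

import numpy as np

def rbf_kernel_pca(X, n_components, gamma=0.1):
    # Pairwise squared distances and the RBF kernel matrix.
    sq = np.sum(X**2, axis=1)
    dists = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    K = np.exp(-gamma * dists)
    # Center the kernel matrix in feature space.
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Top eigenpairs give the nonlinear principal components.
    eigvals, eigvecs = np.linalg.eigh(Kc)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))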

Principal Component Analysis. Springer Series in Statistics

  • 1986
