Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering

@article{Attaoui2022BlackboxSA,
  title={Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering},
  author={Mohammed Oualid Attaoui and Hazem Fahmy and Fabrizio Pastore and Lionel Claude Briand},
  journal={ACM Transactions on Software Engineering and Methodology},
  year={2022}
}
Deep neural networks (DNNs) have demonstrated superior performance over classical machine learning in supporting many features of safety-critical systems. Although DNNs are now widely used in such systems (e.g., self-driving cars), there is limited progress regarding automated support for functional safety analysis in DNN-based systems. For example, the identification of root causes of errors, to enable both risk analysis and DNN retraining, remains an open problem. In this paper, we propose SAFE… 

DNN Explanation for Safety Analysis: an Empirical Evaluation of Clustering-based Approaches

An empirical evaluation of 99 different pipelines for root cause analysis of DNN failures shows that the best pipeline combines transfer learning, DBSCAN, and UMAP; it generates distinct clusters for each root cause of failure, thus enabling engineers to detect all the unsafe scenarios.
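
As an illustration of the kind of pipeline described above, the sketch below combines a pretrained backbone used as a fixed feature extractor (transfer learning), UMAP for dimensionality reduction, and DBSCAN for clustering failure-inducing inputs. The backbone choice, the feature layer, and the UMAP/DBSCAN parameters are illustrative assumptions, not the configuration evaluated in the paper.

```python
import torch
from torchvision import models, transforms
from umap import UMAP              # provided by the umap-learn package
from sklearn.cluster import DBSCAN

# Pretrained backbone used as a fixed feature extractor (transfer learning).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classification head
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(images):
    """images: list of PIL images of failure-inducing inputs."""
    batch = torch.stack([preprocess(img) for img in images])
    return backbone(batch).numpy()

def cluster_failures(images):
    """Return one cluster label per failing input; -1 marks noise."""
    features = extract_features(images)
    embedded = UMAP(n_components=2, random_state=0).fit_transform(features)
    return DBSCAN(eps=0.5, min_samples=5).fit_predict(embedded)
```

Each resulting cluster groups failures with similar image features, which an engineer can then inspect to identify a plausible root cause.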

Black-Box Testing of Deep Neural Networks through Test Case Diversity

Black-box input diversity metrics are investigated as an alternative to white-box coverage criteria for DNN testing; the results show that the diversity of image features embedded in test input sets is a more reliable indicator than coverage criteria for effectively guiding the testing of DNNs.
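
One way to make such a feature-based diversity measure concrete is a geometric score over embedded test inputs: embed each input with a pretrained model, then measure the volume spanned by the normalized feature vectors, so larger values mean a more diverse test set. The cosine-similarity kernel and log-determinant formulation below are assumptions for illustration, not necessarily the exact metric evaluated in the paper.

```python
import numpy as np

def geometric_diversity(features: np.ndarray) -> float:
    """features: (n_inputs, d) feature vectors of the selected test inputs,
    e.g., embeddings produced by a pretrained CNN."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    similarity = normed @ normed.T  # cosine-similarity kernel
    # Log-determinant of the kernel: larger values indicate a test set whose
    # feature vectors span a larger volume, i.e., a more diverse test set.
    sign, logdet = np.linalg.slogdet(similarity)
    return logdet if sign > 0 else float("-inf")
```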

QuoTe: Quality-oriented Testing for Deep Learning Systems

This work proposes QuoTe (Quality-oriented Testing), a generic quality-oriented testing framework that uses the proposed metric to automatically select or generate valuable test cases for improving model quality.

When and Why Test Generators for Deep Learning Produce Invalid Inputs: an Empirical Study

This paper investigates to what extent test input generators (TIGs) can generate valid inputs, according to both automated and human validators, and shows that 84% of artificially generated inputs are valid, but their expected label is not always preserved.

DeepGD: A Multi-Objective Black-Box Test Selection Approach for Deep Neural Networks

DeepGD is proposed, a black-box multi-objective test selection approach for DNN models that reduces the cost of labeling by prioritizing the selection of test inputs with high fault-revealing power from large unlabeled datasets.

References

Showing 1-10 of 85 references

Supporting Deep Neural Network Safety Analysis and Retraining Through Heatmap-Based Unsupervised Learning

Heatmap-based unsupervised debugging of DNNs (HUDD) is proposed, an approach that automatically supports the identification of root causes for DNN errors and is shown to be more effective at improving DNN accuracy than existing approaches.

DeepGini: prioritizing massive tests to enhance the robustness of deep neural networks

DeepGini, a test prioritization technique designed from a statistical perspective on DNNs, is proposed; it outperforms existing coverage-based techniques in prioritizing tests with respect to both effectiveness and efficiency, and the tests DeepGini ranks at the front are more effective at improving DNN quality than those prioritized by coverage-based techniques.
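
The statistical intuition behind DeepGini can be sketched in a few lines: a test whose softmax output is close to uniform has high Gini impurity and is assumed more likely to be misclassified, so such tests are ranked first. The array shapes below are assumptions for illustration.

```python
import numpy as np

def deepgini_scores(softmax_outputs: np.ndarray) -> np.ndarray:
    """softmax_outputs: (n_tests, n_classes) class probabilities per test."""
    # Gini impurity: 1 - sum_i p_i^2, maximal for a uniform output.
    return 1.0 - np.sum(softmax_outputs ** 2, axis=1)

def prioritize(softmax_outputs: np.ndarray) -> np.ndarray:
    """Return test indices ordered from most to least impure."""
    return np.argsort(-deepgini_scores(softmax_outputs))

# Example: the second test, whose output is closest to uniform, comes first.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.70, 0.20, 0.10]])
print(prioritize(probs))  # -> [1 2 0]
```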

Automatic test suite generation for key-points detection DNNs using many-objective search (experience paper)

This paper presents an approach to automatically generate test data for key-points detection DNNs (KP-DNNs) using many-objective search, and investigates and demonstrates how to learn specific conditions, based on image characteristics, that lead to severe mispredictions.

DeepTest: Automated Testing of Deep-Neural-Network-Driven Autonomous Cars

DeepTest is a systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles that can potentially lead to fatal crashes; it systematically explores different parts of the DNN logic by generating test inputs that maximize the number of activated neurons.
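
The coverage notion DeepTest builds on can be sketched as follows: a neuron counts as activated for an input if its scaled output exceeds a threshold, and coverage is the fraction of monitored neurons activated by at least one test input. The threshold value and the activation-extraction step are illustrative assumptions.

```python
import numpy as np

def neuron_coverage(activations_per_input, threshold=0.25):
    """activations_per_input: list of 1-D arrays, one per test input,
    each holding the min-max scaled outputs of all monitored neurons."""
    covered = None
    for activations in activations_per_input:
        fired = activations > threshold
        covered = fired if covered is None else (covered | fired)
    # Fraction of neurons activated by at least one input.
    return float(covered.mean())
```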

RISE: Randomized Input Sampling for Explanation of Black-box Models

The problem of explainable AI for deep neural networks that take images as input and output a class probability is addressed, and an approach called RISE is proposed that generates an importance map indicating how salient each pixel is for the model's prediction.
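
A minimal sketch of the RISE idea, treating the model strictly as a black box: the input is occluded with many random binary masks, the model's score for the target class under each mask weights that mask's contribution, and the weighted sum yields a pixel-level importance map. The mask resolution, mask count, keep-probability, and the nearest-neighbour upsampling below are simplifying assumptions (the original work uses smoothly upsampled, randomly shifted masks).

```python
import numpy as np

def rise_saliency(model_fn, image, target_class,
                  n_masks=1000, grid=8, p_keep=0.5, seed=0):
    """model_fn: callable mapping an HxWxC image to a class-probability vector.
    image: HxWxC numpy array in the model's expected input range.
    Returns an HxW importance map for target_class."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    saliency = np.zeros((h, w))
    cell = (h // grid + 1, w // grid + 1)
    for _ in range(n_masks):
        # Low-resolution binary mask, upsampled to image size and cropped.
        coarse = (rng.random((grid, grid)) < p_keep).astype(float)
        mask = np.kron(coarse, np.ones(cell))[:h, :w]
        # Query the black-box model on the masked image.
        score = model_fn(image * mask[..., None])[target_class]
        saliency += score * mask
    # Normalize by the expected mask coverage.
    return saliency / (n_masks * p_keep)
```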

AUTOTRAINER: An Automatic DNN Training Problem Detection and Repair System

AUTOTRAINER is proposed, a DNN training monitoring and automatic repair tool that supports detecting and automatically repairing five commonly seen training problems; it detects all potential problems with a 100% detection rate and no false positives.

Testing DNN-based Autonomous Driving Systems under Critical Environmental Conditions

This paper presents a novel approach named TACTIC that employs search-based methods to identify critical environmental conditions generated by an image-to-image translation model; TACTIC can effectively identify critical environmental conditions, produce realistic testing images, and reveal more erroneous behaviours than existing approaches.

AI-Lancet: Locating Error-inducing Neurons to Optimize Neural Networks

A novel and systematic approach to trace and fix errors in deep learning models by locating the error-inducing neurons that play a leading role in the erroneous output, together with the proposed neuron-flip and neuron-fine-tuning methods.
...