Corpus ID: 202540506

Hardening of Artificial Neural Networks for Use in Safety-Critical Applications - A Mapping Study

Authors: Rasmus Adler, Mohammed Naveed Akram, Pascal Bauer, Patrik Feth, Pascal Gerber, Andreas Jedlitschka, Lisa Jöckel, Michael Kläs, Daniel Schneider
Context: Across different domains, Artificial Neural Networks (ANNs) are increasingly used in safety-critical applications in which erroneous outputs of such ANNs can have catastrophic consequences. However, the development of such neural networks is still immature, and good engineering practices are missing. In this respect, ANNs are in the same position as software was several decades ago. Today, standards for functional safety, such as ISO 26262 in the automotive domain, require the application of…
3 Citations


Sources of Risk of AI Systems

This work analyses the differences between AI systems, especially those based on modern machine learning methods, and classical software, evaluates the current research fields of trustworthy AI, and creates a taxonomy that provides an overview of various AI-specific sources of risk.

Using Complementary Risk Acceptance Criteria to Structure Assurance Cases for Safety-Critical AI Components

This paper proposes considering two complementary types of risk acceptance criteria as assurance objectives and provides, for each objective, a structure for the supporting argument.

Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks

This work dives into the safety concerns of deep learning methods, presents a concise enumeration of them at a deeply technical level, and gives an outlook on which mitigation methods are still missing in order to facilitate an argument for the safety of a deep learning method.

Application of Neural Networks in High Assurance Systems: A Survey

This work surveys the application of neural networks in high-assurance systems across various fields, including flight control, chemical engineering, power plants, automotive control, medical systems, and other systems that require autonomy.

Establishing Safety Criteria for Artificial Neural Networks

This paper defines safety criteria which, if enforced, would contribute to justifying the safety of neural networks, and highlights the challenge of maintaining performance in terms of adaptability and generalisation whilst providing acceptable safety arguments.

Verification and validation of neural networks: a sampling of research in progress

This paper describes several of these current trends and assesses their compatibility with traditional V&V techniques, including adaptive neural network systems that modify themselves, or "learn," during operation.

Exploiting Safety Constraints in Fuzzy Self-organising Maps for Safety Critical Applications

This work presents a constrained Artificial Neural Network, based upon the Fuzzy Self-Organising Map, that can be employed for highly dependable roles in safety-critical applications while preserving valuable performance characteristics for non-linear function approximation problems.
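The core idea of constraining a network for safety-critical use can be illustrated with a minimal sketch: regardless of what the underlying approximator produces, its outputs are forced into a pre-verified safe envelope before actuation. This is a generic illustration of output constraint enforcement, not the paper's Fuzzy Self-Organising Map construction; the function name and bounds are assumptions for the example.

```python
import numpy as np

def safe_clamp(raw_output, lower, upper):
    """Clamp a network's raw outputs to a pre-verified safe envelope.

    Illustrative output-constraint enforcement: whatever the underlying
    approximator emits, the value passed onward never leaves [lower, upper].
    """
    return np.clip(raw_output, lower, upper)

# Hypothetical raw ANN outputs, some outside the safe range [-1.0, 1.0]
raw = np.array([-2.5, 0.3, 0.9, 1.7])
safe = safe_clamp(raw, -1.0, 1.0)
print(safe)  # [-1.   0.3  0.9  1. ]
```

A real constrained network would additionally shape its internal structure so that the constraint is provable, rather than merely clipping at the output.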

Aircraft fault diagnosis and decision system based on improved artificial neural networks

The goal of this work is to build an aircraft fault diagnosis and decision system, which uses data-driven methods to automatically detect and isolate faults in the aircraft, while keeping its…

Challenges in Certification of Autonomous Driving Systems

  • F. Falcini, G. Lami
  • Computer Science
    2017 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)
  • 2017
In this paper the open issues in certification of AI technologies in automotive are addressed by providing an overview of the existing standards and the related applicability issues.

Neural networks for safety-critical applications — Challenges, experiments and perspectives

We propose a methodology for designing dependable Artificial Neural Networks (ANNs) by extending the concepts of understandability, correctness, and validity that are crucial ingredients in existing…

NeVer: a tool for artificial neural networks verification

The main verification algorithm and the structure of NeVer, a tool for checking the safety of ANNs, are described, and empirical results confirming the effectiveness of NeVer on realistic case studies are presented.

Point-Wise Confidence Interval Estimation by Neural Networks: A Comparative Study based on Automotive Engine Calibration

Three distinct methods of producing point-wise confidence intervals using neural networks are explored, comparing and contrasting Bayesian, Gaussian Process, and predictive error bars evaluated on real data.
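One common way to obtain point-wise confidence intervals for a regressor is a bootstrap ensemble: refit the model on resampled data and read the interval from the spread of the ensemble's predictions at each input point. This is a hedged sketch of that generic technique, not of the three specific methods the paper compares; the polynomial fit stands in for a neural network regressor, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy samples of sin(x)
x = np.linspace(0.0, 2.0 * np.pi, 50)
y = np.sin(x) + rng.normal(scale=0.1, size=x.shape)

def fit_model(xs, ys, degree=5):
    """Fit a small polynomial as a stand-in for a neural network regressor."""
    return np.polynomial.Polynomial.fit(xs, ys, degree)

# Bootstrap ensemble: refit on resampled data, collect predictions on x
preds = []
for _ in range(200):
    idx = rng.integers(0, len(x), size=len(x))
    preds.append(fit_model(x[idx], y[idx])(x))
preds = np.stack(preds)          # shape (200, 50)

# Point-wise 95% interval from the ensemble spread at each x
lower = np.percentile(preds, 2.5, axis=0)
upper = np.percentile(preds, 97.5, axis=0)
mean = preds.mean(axis=0)
```

The interval width varies with x, which is the point of point-wise (rather than global) estimates: the model is less certain where the data are noisier or sparser.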

An Approach to V&V of Embedded Adaptive Systems

This paper proposes a non-conventional V&V approach suitable for online adaptive systems and applies it to an intelligent flight control system that employs a particular type of Neural Network (NN) as the adaptive learning paradigm.