Ergo, SMIRK is Safe: A Safety Case for a Machine Learning Component in a Pedestrian Automatic Emergency Brake System

@article{Borg2022ErgoSI,
  title={Ergo, SMIRK is Safe: A Safety Case for a Machine Learning Component in a Pedestrian Automatic Emergency Brake System},
  author={Markus Borg and Jens Henriksson and Kasper Socha and Olof Lennartsson and Elias Sonnsj{\"o} L{\"o}negren and Thanh Binh Bui and Piotr Tomaszewski and Sankar Raman Sathyamoorthy and Sebastian Brink and Mahshid Helali Moghadam},
  journal={ArXiv},
  year={2022},
  volume={abs/2204.07874}
}
Integration of Machine Learning (ML) components in critical applications introduces novel challenges for software certification and verification. New safety standards and technical guidelines are under development to support the safety of ML-based systems, e.g., ISO 21448 SOTIF for the automotive domain and the Assurance of Machine Learning for use in Autonomous Systems (AMLAS) framework. SOTIF and AMLAS provide high-level guidance but the details must be chiseled out for each specific case. We… 

References

Showing 1-10 of 86 references

SMIRK: A machine learning-based pedestrian automatic emergency braking system with a complete safety case

Guidance on the Assurance of Machine Learning in Autonomous Systems (AMLAS)

TLDR
This document introduces a methodology for the Assurance of Machine Learning for use in Autonomous Systems (AMLAS), a process for systematically integrating safety assurance into the development of ML components and for generating the evidence base for explicitly justifying the acceptable safety of these components when integrated into autonomous system applications.

A Survey on Methods for the Safety Assurance of Machine Learning Based Systems

TLDR
This work provides a structured, certification-oriented overview of available methods supporting the safety argumentation of an ML-based system, sorted by life-cycle phase; the maturity of each approach and its applicability to different ML types are also collected.

Assuring the Safety of Machine Learning for Pedestrian Detection at Crossings

TLDR
This paper focuses on the elicitation and analysis of ML safety requirements and how such requirements should drive the assurance activities within the data management and model learning phases and explains the benefits of the approach and identifies outstanding challenges in the context of self-driving cars.

Understanding and Validity in Qualitative Research

Qualitative researchers rely — implicitly or explicitly — on a variety of understandings and corresponding types of validity in the process of describing, interpreting, and explaining phenomena of ...

Safety tactics for software architecture design

  • Weihang Wu, T. Kelly
  • Computer Science
    Proceedings of the 28th Annual International Computer Software and Applications Conference, 2004. COMPSAC 2004.
  • 2004
TLDR
This work presents a method for software architecture design in the context of safety, centred on extending the existing notion of architectural tactics to include safety as a consideration, and demonstrates the value of deploying specific protection mechanisms.

Exploring the Assessment List for Trustworthy AI in the Context of Advanced Driver-Assistance Systems

TLDR
The experience shows that ALTAI is largely applicable to ADAS development, but specific parts related to human agency and transparency can be disregarded and bigger questions related to societal and environmental impact cannot be tackled by an ADAS supplier in isolation.

Digital Twins Are Not Monozygotic – Cross-Replicating ADAS Testing in Two Industry-Grade Automotive Simulators

TLDR
A replication study of applying a Search-Based Software Testing (SBST) solution to a real-world ADAS (PeVi) using two different commercial simulators, namely, TASS/Siemens PreScan and ESI Pro-SiVIC.

Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges

TLDR
This paper provides a comprehensive survey of the state-of-the-art in the assurance of ML, i.e. in the generation of evidence that ML is sufficiently safe for its intended use, at different stages of the machine learning lifecycle.

Testing Autonomous Cars for Feature Interaction Failures using Many-Objective Search

TLDR
A technique to detect feature interaction failures by casting the problem as a search-based test generation problem, and a new search-based test generation algorithm, called FITEST, that is guided by hybrid test objectives.
...