Exploring the Assessment List for Trustworthy AI in the Context of Advanced Driver-Assistance Systems

Markus Borg, Joshua Bronson, Linus Christensson, Fredrik Olsson, Olof Lennartsson, Elias Sonnsjö, Hamid Ebadi, and Martin Karsberg. In *2021 IEEE/ACM 2nd International Workshop on Ethics in Software Engineering Research and Practice (SEthics)*.
Artificial Intelligence (AI) is increasingly used in critical applications, so the need for dependable AI systems is growing rapidly. In 2018, the European Commission appointed experts to a High-Level Expert Group on AI (AI-HLEG). AI-HLEG defined Trustworthy AI as 1) lawful, 2) ethical, and 3) robust, and specified seven corresponding key requirements. To help development organizations, AI-HLEG recently published the Assessment List for Trustworthy AI (ALTAI). We present an illustrative case…


Ergo, SMIRK is Safe: A Safety Case for a Machine Learning Component in a Pedestrian Automatic Emergency Brake System
An industry-academia collaboration on safety assurance of SMIRK, an ML-based pedestrian automatic emergency braking demonstrator running in an industry-grade simulator, and the outcome of applying AMLAS on SMIRK for a minimalistic operational design domain are presented.
Mining the ambient commons: building interdisciplinary connections between environmental knowledge, AI and creative practice research
According to Brooks [2017, "The Big Problem with Self-driving Cars Is People", IEEE Spectrum: Technology, Engineering, and Science News], artificial intelligence has had a variable…
Exploring ML testing in practice – Lessons learned from an interactive rapid review with Axis Communications
A taxonomy for the communication around ML testing challenges and results was developed and a list of 12 review questions relevant for Axis Communications was identified and extracted relevant approaches from the five studies on a conceptual level to support later context-specific improvements.
Machine Learning Testing in an ADAS Case Study Using Simulation-Integrated Bio-Inspired Search-Based Testing
Evaluation shows the newly proposed test generators in Deeper not only represent a considerable improvement on the previous version but also prove to be effective and efficient in provoking a considerable number of diverse failure-revealing test scenarios for testing an ML-driven lane-keeping system.


The EU Approach to Ethics Guidelines for Trustworthy Artificial Intelligence
As part of its European strategy for Artificial Intelligence (AI), and as a response to the increasing ethical questions raised by this technology, the European Commission established an independent…
Safely Entering the Deep: A Review of Verification and Validation for Machine Learning and a Challenge Elicitation in the Automotive Industry
The state-of-the-art in verification and validation of safety-critical systems that rely on machine learning is reviewed, confirming that ISO 26262 largely contravenes the nature of DNNs.
Requirements Engineering for Machine Learning: Perspectives from Data Scientists
It is concluded that developing ML systems requires requirements engineers to understand ML performance measures in order to state good functional requirements, to be aware of new quality requirements such as explainability, freedom from discrimination, or specific legal requirements, and to integrate ML specifics into the RE process.
Testing Vision-Based Control Systems Using Learnable Evolutionary Algorithms
This work proposes an automated testing algorithm that builds on learnable evolutionary algorithms; it outperforms a baseline evolutionary search algorithm and generates 78% more distinct, critical test scenarios than the baseline.
Artificial Intelligence and the GDPR: Inevitable Nemeses?
The GDPR suffers in terms of efficacy in the context of artificial-intelligence-based technologies, and full compliance by data controllers and processors employing such technologies is unlikely to be achieved; legislative amendments are proposed as an effective method of mitigating these drawbacks.
Systematic Pattern Approach for Safety and Security Co-engineering in the Automotive Domain
This work proposes a systematic pattern-based approach that interlinks safety and security patterns and provides guidance with respect to selection and combination of both types of patterns in context of system engineering.
Tool support for assurance case development
This paper describes how AdvoCATE is being engineered atop formal foundations for assurance case argument structures, to provide unique capabilities for automated creation and assembly of assurance arguments and integration of formal methods into wider assurance arguments.
AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias
A new open source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license to help facilitate the transition of fairness research algorithms to use in an industrial setting and to provide a common framework for fairness researchers to share and evaluate algorithms.
Federated Learning for Vehicular Internet of Things: Recent Advances and Open Issues
The significance and technical challenges of applying FL in vehicular IoT, and future research directions are discussed, and a brief survey of existing studies on FL and its use in wireless IoT is conducted.