• Corpus ID: 237490289

An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability

Francesco Sovrano, Fabio Vitali
Explainable AI was born as a pathway to allow humans to explore and understand the inner workings of complex systems. But establishing what an explanation is, and objectively evaluating explainability, are not trivial tasks. With this paper, we present a new model-agnostic metric to measure the Degree of Explainability of information in an objective way, exploiting a specific theoretical model from Ordinary Language Philosophy called Achinstein’s Theory of Explanations, implemented with an…

The Philosophy of Rudolf Carnap

Part of a series of studies of contemporary philosophers, this volume focuses on Rudolf Carnap.

Questioning the AI: Informing Design Practices for Explainable AI User Experiences

An algorithm-informed XAI question bank is developed in which user needs for explainability are represented as prototypical questions users might ask about the AI; the question bank is then used as a study probe to identify gaps between current XAI algorithmic work and the practices needed to create explainable AI products.

Multilingual Universal Sentence Encoder for Semantic Retrieval

On transfer learning tasks, the multilingual embeddings approach, and in some cases exceed, the performance of English-only sentence embeddings.

Metrics for Explainable AI: Challenges and Prospects

This paper discusses specific methods for evaluating the goodness of explanations, whether users are satisfied with explanations, how well users understand the AI systems, and how the human-XAI work system performs.

Induction: Processes of Inference, Learning, and Discovery

Induction is the first major effort to bring the ideas of several disciplines to bear on a subject that has been a topic of investigation since the time of Socrates and is included in the Computational Models of Cognition and Perception Series.

Dense passage retrieval for open-domain question answering

  • Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), 2020

From Philosophy to Interfaces: an Explanatory Method and a Tool Inspired by Achinstein’s Theory of Explanation

A new approach is shown for the generation of interactive explanations, based on a sophisticated pipeline of AI algorithms for structuring natural language documents into knowledge graphs and for answering questions effectively and satisfactorily.

Legal Knowledge Extraction for Knowledge Graph Based Question-Answering

Open Knowledge Extraction tools are presented, combined with natural language analysis of sentences, in order to enrich the semantics of the legal knowledge extracted from legal texts.

Legal requirements on explainability in machine learning

The increasing number of legal requirements on machine learning model interpretability and explainability in the context of private and public decision making is presented, and it is explained how those legal requirements can be implemented in machine learning models.