A Survey of Explainable AI Terminology

@article{Clinciu2019ASO,
  title={A Survey of Explainable AI Terminology},
  author={Miruna Clinciu and H. Hastie},
  journal={Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019)},
  year={2019}
}
The field of Explainable Artificial Intelligence attempts to solve the problem of algorithmic opacity. Many terms and notions have been introduced recently to define Explainable AI; however, these terms seem to be used interchangeably, which leads to confusion in this rapidly expanding field. To overcome this problem, we present an analysis of the existing research literature and examine how key terms, such as transparency, intelligibility, interpretability, and… 

Citations

What Do We Want From Explainable Artificial Intelligence (XAI)?
TLDR
A model is provided that explicitly spells out the main concepts and relations to consider when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders’ desiderata, and that can serve as common ground for researchers from the variety of disciplines involved in XAI.
A Study of Automatic Metrics for the Evaluation of Natural Language Explanations
TLDR
The ExBAN corpus, a crowd-sourced corpus of NL explanations for Bayesian Networks, is presented, and embedding-based automatic NLG evaluation methods are found to correlate more strongly with human ratings than word-overlap metrics such as BLEU and ROUGE.
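
The comparison this entry describes (word-overlap versus embedding-based metrics, judged by their correlation with human ratings) can be sketched in a few lines of Python. Everything below is an illustrative stand-in: the sentences, the ratings, and the choice of the all-MiniLM-L6-v2 sentence encoder are placeholders, not material from the ExBAN corpus or the paper's pipeline.

# Hedged sketch: correlate a word-overlap metric (BLEU) and an
# embedding-based metric with human ratings. All data are made up.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from sentence_transformers import SentenceTransformer, util
from scipy.stats import spearmanr

references = [
    "The alarm fired because the sensor detected smoke.",
    "Rain is likely given the dark clouds observed.",
    "The loan was refused due to a low credit score.",
    "The patient is at risk because of high blood pressure.",
]
candidates = [
    "The alarm probably went off because smoke was detected.",
    "It may rain since the sky is overcast.",
    "Credit history caused the rejection.",
    "Blood pressure readings explain the elevated risk.",
]
human_ratings = [5, 4, 2, 3]  # hypothetical adequacy judgments (1-5)

smooth = SmoothingFunction().method1
bleu_scores = [
    sentence_bleu([r.split()], c.split(), smoothing_function=smooth)
    for r, c in zip(references, candidates)
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
ref_emb = encoder.encode(references, convert_to_tensor=True)
cand_emb = encoder.encode(candidates, convert_to_tensor=True)
embed_scores = [float(util.cos_sim(r, c)) for r, c in zip(ref_emb, cand_emb)]

# The paper's finding would appear here as a higher coefficient for
# the embedding-based metric than for BLEU.
print("BLEU vs. human:     ", spearmanr(bleu_scores, human_ratings))
print("Embedding vs. human:", spearmanr(embed_scores, human_ratings))

On real corpora, the reported result would show up as a higher Spearman coefficient for the embedding metric than for the word-overlap one.
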
Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing
TLDR
This review identifies 61 datasets with three predominant classes of textual explanations (highlights, free-text, and structured), organizes the literature on annotating each type, identifies strengths and shortcomings of existing collection methodologies, and gives recommendations for collecting EXNLP datasets in the future.
A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems
TLDR
A framework with step-by-step design guidelines paired with evaluation methods is developed to close the iterative design and evaluation cycles in multidisciplinary XAI teams, and ready-to-use tables of evaluation methods and recommendations for different goals in XAI research are provided.
Teach Me to Explain: A Review of Datasets for Explainable NLP
TLDR
This review identifies three predominant classes of explanations (highlights, free-text, and structured), organizes the literature on annotating each type, points to what has been learned to date, and gives recommendations for collecting EXNLP datasets in the future.
Qualitative Investigation in Explainable Artificial Intelligence: A Bit More Insight from Social Science
TLDR
The analysis draws on social science corpora to suggest ways of improving the rigor of studies in which XAI researchers use observations, interviews, focus groups, and/or questionnaires to capture qualitative data.
On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness
TLDR
It is argued that even though trustworthiness does not automatically lead to trust, there are several reasons to engineer primarily for trustworthiness, and that a system’s explainability can crucially contribute to its trustworthiness.
Explanation Generation in a Kabuki Dance Stage Performing Structure Simulation System
TLDR
This paper introduces the generation of explanations into the system and prototypes a mechanism wherein the system automatically determines the content and method of an explanation based on arbitrary parameters.
Toward Explanation-Centered Story Generation
TLDR
This study argues for the need for explanation-centered story generation and proposes a mechanism for the story generation system to generate multiple forms of stories in this manner.
...

References

SHOWING 1-10 OF 45 REFERENCES
Explaining Explanations: An Overview of Interpretability of Machine Learning
There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide…
Can we do better explanations? A proposal of user-centered explainable AI
TLDR
A new explainability pipeline is suggested, in which users are classified into three main groups (developers or AI researchers, domain experts, and lay users), inspired by the cooperative principles of conversation, to overcome some of the difficulties of creating good explanations and evaluating them.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
TLDR
This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI, reviews the existing approaches to the topic, discusses surrounding trends, and presents major research trajectories.
Explanation in Expert Systems: A Survey
TLDR
This survey reviews early approaches to explanation in expert systems, discusses their limitations, and argues that further improvements in explanation require better generation techniques.
Interpretable machine learning: definitions, methods, and applications
TLDR
This paper first defines interpretability in the context of machine learning, places it within a generic data science life cycle, and introduces the Predictive, Descriptive, Relevant (PDR) framework, consisting of three desiderata for evaluating and constructing interpretations.
Intelligible Artificial Intelligence
Since Artificial Intelligence (AI) software uses techniques like deep lookahead search and stochastic optimization of huge neural networks to fit mammoth datasets, it often results in complex…
Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
TLDR
A model is described that identifies the different roles agents can fulfill in relation to a machine learning system, showing how an agent’s role influences its goals and the implications for defining interpretability.
Model-Agnostic Interpretability of Machine Learning
TLDR
This paper argues for explaining machine learning predictions using model-agnostic approaches, treating the machine learning models as black-box functions, which provide crucial flexibility in the choice of models, explanations, and representations, improving debugging, comparison, and interfaces for a variety of users and models.
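
As a generic illustration of the black-box view this entry advocates (a minimal sketch on assumed toy data, not the paper's own method), permutation importance needs nothing from the model except its predict function: shuffle one feature at a time and measure how much held-out accuracy drops.

# Minimal sketch of one model-agnostic technique: permutation importance.
# The explanation treats the model purely as a black-box predict function,
# so any classifier could be swapped in; the data here is a toy stand-in.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
predict = model.predict  # the only access the explanation requires

baseline = np.mean(predict(X_te) == y_te)
rng = np.random.default_rng(0)
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    rng.shuffle(X_perm[:, j])  # break this feature's tie to the label
    drop = baseline - np.mean(predict(X_perm) == y_te)
    print(f"feature {j}: accuracy drop when shuffled = {drop:.3f}")

Because only predict is touched, the same loop works for a neural network, a gradient-boosted ensemble, or a rule list, which is exactly the flexibility the entry highlights.
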
Manipulating and Measuring Model Interpretability
TLDR
A sequence of pre-registered experiments showed participants functionally identical models that varied only in two factors commonly thought to make machine learning models more or less interpretable: the number of features and the transparency of the model (i.e., whether the model internals are clear or black box).
...