A Survey of Explainable AI Terminology
@article{Clinciu2019ASO,
  title   = {A Survey of Explainable AI Terminology},
  author  = {Miruna Clinciu and H. Hastie},
  journal = {Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019)},
  year    = {2019}
}
The field of Explainable Artificial Intelligence attempts to solve the problem of algorithmic opacity. Many terms and notions have been introduced recently to define Explainable AI; however, these terms seem to be used interchangeably, leading to confusion in this rapidly expanding field. To overcome this problem, we present an analysis of the existing research literature and examine how key terms, such as transparency, intelligibility, interpretability, and…
12 Citations
What Do We Want From Explainable Artificial Intelligence (XAI)?
- Computer Science
- 2021
A model is provided that explicitly spells out the main concepts and relations to consider and investigate when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders’ desiderata, and that can serve as a common ground for researchers from the variety of disciplines involved in XAI.
A Study of Automatic Metrics for the Evaluation of Natural Language Explanations
- Computer Science, EACL
- 2021
The ExBAN corpus is presented: a crowd-sourced corpus of NL explanations for Bayesian Networks and it is found that embedding-based automatic NLG evaluation methods have a higher correlation with human ratings, compared to word-overlap metrics, such as BLEU and ROUGE.
Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing
- Computer Science, NeurIPS Datasets and Benchmarks
- 2021
This review identifies 61 datasets with three predominant classes of textual explanations (highlights, free-text, and structured), organizes the literature on annotating each type, identifies strengths and shortcomings of existing collection methodologies, and gives recommendations for collecting EXNLP datasets in the future.
What Do We Want From Explainable Artificial Intelligence (XAI)? - A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
- Computer Science, Artif. Intell.
- 2021
A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems
- Computer Science, ACM Transactions on Interactive Intelligent Systems
- 2021
A framework with step-by-step design guidelines, paired with evaluation methods to close the iterative design and evaluation cycles in multidisciplinary XAI teams, is developed, and ready-to-use tables of evaluation methods and recommendations for different goals in XAI research are provided.
Teach Me to Explain: A Review of Datasets for Explainable NLP
- Computer Science, ArXiv
- 2021
This review identifies three predominant classes of explanations (highlights, free-text, and structured), organizes the literature on annotating each type, points to what has been learned to date, and gives recommendations for collecting EXNLP datasets in the future.
Qualitative Investigation in Explainable Artificial Intelligence: A Bit More Insight from Social Science
- Computer Science, ArXiv
- 2020
The analysis draws on social science corpora to suggest ways for improving the rigor of studies where XAI researchers use observations, interviews, focus groups, and/or questionnaires to capture qualitative data.
On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness
- Business, Computer Science, 2021 IEEE 29th International Requirements Engineering Conference Workshops (REW)
- 2021
It is argued that even though trustworthiness does not automatically lead to trust, there are several reasons to engineer primarily for trustworthiness – and that a system’s explainability can crucially contribute to its trustworthiness.
Explanation Generation in a Kabuki Dance Stage Performing Structure Simulation System
- Computer Science, 2020 International Conference on Computational Science and Computational Intelligence (CSCI)
- 2020
This paper introduces the generation of explanations into the system, and prototyped a mechanism wherein the system automatically determines the content and method of an explanation based on arbitrary parameters.
Toward Explanation-Centered Story Generation
- Business, 2020 IEEE Third International Conference on Artificial Intelligence and Knowledge Engineering (AIKE)
- 2020
This study argues for the need for explanation-centered story generation and proposes a mechanism for a story generation system to generate multiple forms of stories in this manner.
References
Showing 1-10 of 45 references
Explaining Explanations: An Overview of Interpretability of Machine Learning
- Computer Science, 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA)
- 2018
There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide…
Can we do better explanations? A proposal of user-centered explainable AI
- Computer Science, IUI Workshops
- 2019
A new explainability pipeline is suggested, in which users are classified into three main groups (developers or AI researchers, domain experts, and lay users), inspired by the cooperative principles of conversation, to overcome some of the difficulties of creating good explanations and evaluating them.
Explanation in Artificial Intelligence: Insights from the Social Sciences
- Psychology, Artif. Intell.
- 2019
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
- Computer Science, IEEE Access
- 2018
This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI, reviews the existing approaches to the topic, discusses trends surrounding its sphere, and presents major research trajectories.
Explanation in Expert Systems: A Survey
- Computer Science
- 1988
This survey reviews early approaches to explanation in expert systems, discusses their limitations, and argues that further improvements in explanation require better generation techniques.
Interpretable machine learning: definitions, methods, and applications
- Computer Science, ArXiv
- 2019
This paper first defines interpretability in the context of machine learning and places it within a generic data science life cycle, then introduces the Predictive, Descriptive, Relevant (PDR) framework, consisting of three desiderata for evaluating and constructing interpretations.
Intelligible Artificial Intelligence
- Computer Science, ArXiv
- 2018
Since Artificial Intelligence (AI) software uses techniques like deep lookahead search and stochastic optimization of huge neural networks to fit mammoth datasets, it often results in complex…
Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
- Computer Science, ArXiv
- 2018
A model is described that identifies the different roles agents can fulfill in relation to a machine learning system, how an agent’s role influences its goals, and the implications for defining interpretability.
Model-Agnostic Interpretability of Machine Learning
- Computer Science, ArXiv
- 2016
This paper argues for explaining machine learning predictions using model-agnostic approaches, treating the machine learning models as black-box functions, which provide crucial flexibility in the choice of models, explanations, and representations, improving debugging, comparison, and interfaces for a variety of users and models.
Manipulating and Measuring Model Interpretability
- Computer Science, Psychology, CHI
- 2021
In a sequence of pre-registered experiments, participants were shown functionally identical models that varied only in two factors commonly thought to make machine learning models more or less interpretable: the number of features and the transparency of the model (i.e., whether the model internals are clear or a black box).