Corpus ID: 239050385

Human-Centered Explainable AI (XAI): From Algorithms to User Experiences

Qingzi Vera Liao and Kush R. Varshney
In recent years, the field of explainable AI (XAI) has produced a vast collection of algorithms, providing a useful toolbox for researchers and practitioners to build XAI applications. With the rich application opportunities, explainability is believed to have moved beyond a demand by data scientists or researchers to comprehend the models they develop, to an essential requirement for people to trust and adopt AI deployed in numerous domains. However, explainability is an inherently human… 


Towards Human-centered Explainable AI: User Studies for Model Explanations

This survey shows that XAI is spreading more rapidly in certain application domains, such as recommender systems, than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from the cognitive or social sciences.

Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI

A perspective of contextualized XAI evaluation is introduced by considering the relative importance of XAI evaluation criteria for prototypical usage contexts of XAI, and a nuanced understanding of user requirements for XAI in different usage contexts is provided.

XAI Systems Evaluation: A Review of Human and Computer-Centred Methods

A new taxonomy to categorize XAI evaluation methods more clearly and intuitively is proposed, which gathers knowledge from different disciplines and organizes the evaluation methods according to a set of categories that represent key properties of XAI systems.

Transcending XAI Algorithm Boundaries through End-User-Inspired Design

This work shows that grounding the technical problem in end users’ use of XAI can inspire new research questions, which have the potential to promote social good by democratizing AI and ensuring the responsible use of AI in critical domains.

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

A study of a real-world AI application via interviews with 20 end-users of Merlin, a bird-identification app, finds that people express a need for practically useful information that can improve their collaboration with the AI system, and intend to use XAI explanations for calibrating trust, improving their task skills, changing their behavior to supply better inputs to the AI system, and giving constructive feedback to developers.

Making AI Explainable in the Global South: A Systematic Review

This paper contributes the first systematic review of XAI research in the Global South, identifying 16 papers from 15 different venues that targeted a wide range of application domains, and highlighting the need for human-centered approaches to XAI in the Global South.

Investigating Explainability of Generative AI for Code through Scenario-based Design

This work explores explainability needs for GenAI for code and demonstrates how human-centered approaches can drive the technical development of XAI in novel domains.

Leveraging Explanations in Interactive Machine Learning: An Overview

This overview draws a conceptual map of research in which explanations are combined with interactive capabilities as a means to learn new models from scratch and to edit and debug existing ones, highlighting similarities and differences between the approaches.

Explainable Artificial Intelligence: Precepts, Methods, and Opportunities for Research in Construction

A narrative review of XAI is provided to raise awareness about its potential in construction and develops a taxonomy of the XAI literature, comprising its precepts and approaches, including transparent and opaque models and post-hoc explainability.

On the Influence of Cognitive Styles on Users' Understanding of Explanations

This study draws on the psychological construct of cognitive styles that describe humans’ characteristic modes of processing information to investigate how users’ rational and intuitive cognitive styles affect their objective and subjective understanding of different types of explanations provided by an AI.



Questioning the AI: Informing Design Practices for Explainable AI User Experiences

An algorithm-informed XAI question bank is developed in which user needs for explainability are represented as prototypical questions users might ask about the AI, and used as a study probe to identify gaps between current XAI algorithmic work and practices to create explainable AI products.

Question-Driven Design Process for Explainable AI User Experiences

This work proposes a Question-Driven Design Process and provides a mapping guide between prototypical user questions and exemplars of XAI techniques, which serve as boundary objects to support collaboration between designers and AI engineers.

Operationalizing Human-Centered Perspectives in Explainable AI

This work examines how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels to produce actionable frameworks, transferable evaluation methods, and concrete design guidelines, and to articulate a coordinated research agenda for XAI.

Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers

An empirical study compares model learning outcomes, feedback content, and user experience with XAL against traditional active learning and coactive learning (providing the model's prediction without explanation), and identifies potential drawbacks: an anchoring effect with the model's judgment and additional cognitive workload.

Human-AI Collaboration for UX Evaluation: Effects of Explanation and Synchronization

Quantitative and qualitative results show that AI with explanations, regardless of being presented synchronously or asynchronously, provided better support for UX evaluators' analysis and was perceived more positively; without explanations, synchronous AI better improved UX evaluators' performance and engagement compared to asynchronous AI.

Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI; it reviews existing approaches to the topic, discusses trends in the field, and presents major research trajectories.

Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience

This study shows benefits of AI explanations as interfaces for machine teaching, supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks: an anchoring effect with the model's judgment and added cognitive workload.

Automated rationale generation: a technique for explainable AI and its effects on human perceptions

The study finds alignment between the intended differences in features of the generated rationales and the differences perceived by users; context permitting, participants preferred detailed rationales for forming a stable mental model of the agent's behavior.

AI Explainability 360: An Extensible Toolkit for Understanding Data and Machine Learning Models

This work introduces AI Explainability 360, an open-source Python toolkit featuring ten diverse, state-of-the-art explainability methods and two evaluation metrics, and provides a taxonomy to help entities requiring explanations navigate the space of interpretation and explanation methods.