Corpus ID: 239050385

Human-Centered Explainable AI (XAI): From Algorithms to User Experiences

@article{Liao2021HumanCenteredEA,
  title={Human-Centered Explainable AI (XAI): From Algorithms to User Experiences},
  author={Qingzi Vera Liao and Kush R. Varshney},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.10790}
}
(Book Chapter Draft 10/2021) As a technical sub-field of artificial intelligence (AI), explainable AI (XAI) has produced a vast collection of algorithms, providing a toolbox for researchers and practitioners to build XAI applications. With the rich application opportunities, explainability has moved beyond a demand by data scientists or researchers to comprehend the models they develop, to become an essential requirement for people to trust and adopt AI deployed in numerous domains. However… 


References

Showing 1-10 of 106 references
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
TLDR: An algorithm-informed XAI question bank is developed in which user needs for explainability are represented as prototypical questions users might ask about the AI, and used as a study probe to identify gaps between current XAI algorithmic work and practices to create explainable AI products.
Operationalizing Human-Centered Perspectives in Explainable AI
TLDR: Examines how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels to produce actionable frameworks, transferable evaluation methods, and concrete design guidelines, and to articulate a coordinated research agenda for XAI.
Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers
The wide adoption of Machine Learning (ML) technologies has created a growing demand for people who can train ML models. Some advocated the term “machine teacher” to refer to the role of people who…
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
TLDR: This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI; it reviews existing approaches to the topic, discusses surrounding trends, and presents major research trajectories.
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
TLDR: Previous efforts to define explainability in Machine Learning are summarized, a novel definition is established that covers prior conceptual propositions with a major focus on the audience for which explainability is sought, and a taxonomy of recent contributions related to the explainability of different Machine Learning models is proposed.
Automated rationale generation: a technique for explainable AI and its effects on human perceptions
TLDR: Alignment is found between the intended differences in features of the generated rationales and the differences perceived by users; context permitting, participants preferred detailed rationales for forming a stable mental model of the agent's behavior.
AI Explainability 360: An Extensible Toolkit for Understanding Data and Machine Learning Models
TLDR: This work introduces AI Explainability 360, an open-source Python toolkit featuring ten diverse and state-of-the-art explainability methods and two evaluation metrics, and provides a taxonomy to help entities requiring explanations navigate the space of interpretation and explanation methods.
Designing Theory-Driven User-Centric Explainable AI
TLDR: This paper proposes a conceptual framework for building human-centered, decision-theory-driven XAI based on an extensive review across philosophy and psychology, identifying pathways along which human cognitive patterns drive needs for XAI and ways in which XAI can mitigate common cognitive biases.
Stakeholders in Explainable AI
TLDR: The software engineering distinction between validation and verification, and the epistemological distinction between knowns and unknowns, are used to tease apart the concerns of the stakeholder communities and highlight the areas where their foci overlap or diverge.
Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach
TLDR: This paper introduces Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design and develops a holistic understanding of "who" the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems.