Corpus ID: 236469173

The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations

Upol Ehsan, Samir Passi, Qingzi Vera Liao, Larry Chan, I-Hsiang Lee, Michael J. Muller, Mark O. Riedl
Explainability of AI systems is critical for users to take informed actions and hold systems accountable. While “opening the opaque box” is important, understanding who opens the box can govern whether the human-AI interaction is effective. In this paper, we conduct a mixed-methods study of how two different groups of whos—people with and without a background in AI—perceive different types of AI explanations. These groups were chosen to look at how disparities in AI backgrounds can exacerbate the…
Explainability Pitfalls: Beyond Dark Patterns in Explainable AI
This paper introduces explainability pitfalls (EPs): unanticipated negative downstream effects of AI explanations that manifest even when there is no intention to manipulate users. It demarcates EPs from dark patterns and highlights the challenges arising from uncertainties around pitfalls.
Human-Centered Explainable AI (XAI): From Algorithms to User Experiences
This chapter begins with a high-level overview of the technical landscape of XAI algorithms, then selectively surveys recent HCI work that takes human-centered approaches to design and evaluate XAI and to provide conceptual and methodological tools for it, and highlights three roles that such approaches should play in shaping XAI technologies.


I Think I Get Your Point, AI! The Illusion of Explanatory Depth in Explainable AI
This work examines whether non-technical users of XAI fall for an illusion of explanatory depth when interpreting additive local explanations, and finds that their perceived understanding decreases when it is examined.
Expanding Explainability: Towards Social Transparency in AI systems
This work suggests constitutive design elements of social transparency (ST) and develops a conceptual framework to unpack ST’s effects and implications at the technical, decision-making, and organizational levels, showing how ST can potentially calibrate trust in AI, improve decision-making, facilitate organizational collective action, and cultivate holistic explainability.
Anchoring Bias Affects Mental Model Formation and User Reliance in Explainable AI Systems
Explainable Artificial Intelligence approaches are used to bring transparency to machine learning and artificial intelligence models and thereby improve the decision-making process for their end-users. This study presents strong findings that aim to make intelligent-system designers aware of anchoring biases when designing such tools.
No Explainability without Accountability: An Empirical Study of Explanations and Feedback in Interactive ML
This work investigates how explanations shape users' perceptions of ML models with or without the ability to provide feedback to them: does revealing model flaws increase users' desire to "fix" them, and does providing explanations cause users to believe, wrongly, that models are introspective and will thus improve over time?
Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach
This paper introduces Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design and develops a holistic understanding of "who" the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems.
Who needs to know what, when?: Broadening the Explainable AI (XAI) Design Space by Looking at Explanations Across the AI Lifecycle
This work investigates applied AI projects and reports on a qualitative interview study of individuals working on AI projects at a large technology and consulting company, noting the importance of adopting a sociotechnical lens in designing AI systems and how the “AI lifecycle” can serve as a design metaphor to advance the XAI design field.
Too much, too little, or just right? Ways explanations impact end users' mental models
The results suggest that completeness is more important than soundness: increasing completeness via certain information types helped participants' mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations.
Explanation in Artificial Intelligence: Insights from the Social Sciences
This paper argues that the field of explainable artificial intelligence should build on existing research; it reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study these topics, and draws out some important findings.
Explaining models: an empirical study of how explanations impact fairness judgment
An empirical study with four types of programmatically generated explanations examines how they impact people's fairness judgments of ML systems, showing that certain explanations are considered inherently less fair, while others can enhance people's confidence in the fairness of the algorithm.
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
An algorithm-informed XAI question bank is developed in which user needs for explainability are represented as prototypical questions users might ask about the AI; it is used as a study probe to identify gaps between current XAI algorithmic work and the practices needed to create explainable AI products.