Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda
TLDR: This work investigates how HCI researchers can help develop accountable systems by performing a literature analysis of 289 core papers on explanations and explainable systems, as well as 12,412 citing papers.
Designing Theory-Driven User-Centric Explainable AI
TLDR: This paper proposes a conceptual framework for building human-centered, decision-theory-driven XAI based on an extensive review across philosophy and psychology, and identifies pathways along which human cognitive patterns drive needs for building XAI and how XAI can mitigate common cognitive biases.
Why these Explanations? Selecting Intelligibility Types for Explanation Goals
TLDR: A recently developed conceptual framework for user-centric reasoned XAI that draws from foundational concepts in philosophy, cognitive psychology, and AI is leveraged to identify pathways for how user reasoning drives XAI needs.
Interpreting Intelligibility under Uncertain Data Imputation
TLDR: This work investigates the impact of missing data and imputation on how users understand and use explanation features, and proposes two approaches to provide explanation interfaces for explaining feature attribution with uncertainty due to missing data imputation.
The Curious Case of Providing Intelligibility for Smart Speakers
AI techniques are increasingly incorporated into everyday devices and appliances. Explainable AI (XAI) is an approach to improve algorithmic transparency, in which systems explain how they arrive at their decisions.
Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations
TLDR: Two approaches to help users manage their perception of uncertainty in a model explanation are proposed and studied: (1) transparently show uncertainty in feature attributions to allow users to reflect on it, and (2) suppress attribution to features with uncertain measurements and shift attribution to other features by regularizing with an uncertainty penalty.