Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability

JESSE JOSUA BENJAMIN, Department of Philosophy, University of Twente, Netherlands and Human-Centered Computing, Freie Universität Berlin, Germany
CHRISTOPH KINKELDEY, Human-Centered Computing, Freie Universität Berlin, Germany
CLAUDIA MÜLLER-BIRN, Human-Centered Computing, Freie Universität Berlin, Germany
TIM KORJAKOW, Human-Centered Computing, Freie Universität Berlin, Germany
EVA-MARIA HERBST, Human-Centered Computing, Freie Universität Berlin, Germany



Materializing Interpretability: Exploring Meaning in Algorithmic Systems
This provocation proposes three levels at which interpretability may be analyzed: formality, achievability, and linearity. It suggests that design practice may be needed to move beyond analytic deconstruction, and showcases two design projects that exemplify possible strategies.
Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach
This paper introduces Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design and develops a holistic understanding of "who" the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems.
Conceptualizing Care in the Everyday Work Practices of Machine Learning Developers
This provocation investigates machine learning (ML) developers' accounts, which highlight situated, ongoing, improvisational work practices that strive to appropriately match datasets, algorithms and modelling techniques, and domain questions. It provides a case for examining more closely the relationships that emerge between local innovations and global regimes of scientific formalization and standardization within sociotechnical systems.
Expanding Explainability: Towards Social Transparency in AI systems
This work suggests constitutive design elements of social transparency (ST) and develops a conceptual framework to unpack ST's effects and implications at the technical, decision-making, and organizational levels, showing how ST can potentially calibrate trust in AI, improve decision-making, facilitate organizational collective action, and cultivate holistic explainability.
An Annotated Portfolio on Doing Postphenomenology Through Research Products
This paper argues for framing the crafting and studying of research products as doing philosophy through things, and creates an annotated portfolio of Research through Design (RtD) artifact inquiries as postphenomenological inquiries by tracing commitments across six such inquiries.
Making epistemological trouble: Third-paradigm HCI as successor science
It is argued that a successor science for standpoint epistemology has already come into being within the field of HCI, though it is perhaps not recognized as such by its practitioners.
Designing Theory-Driven User-Centric Explainable AI
This paper proposes a conceptual framework for building human-centered, decision-theory-driven XAI based on an extensive review across philosophy and psychology, and identifies pathways along which human cognitive patterns drive needs for building XAI and along which XAI can mitigate common cognitive biases.
Who is the "Human" in Human-Centered Machine Learning
This paper studies how scientific papers represent human research subjects in HCML, and shows how five discourses create paradoxical subject and object representations of the human, which may inadvertently risk dehumanization.
Monsters, Metaphors, and Machine Learning
It is shown how the technology-as-monster metaphor can generatively probe and (re)frame the questions ML poses, illustrated through a detailed discussion of an early-stage generative design workshop inquiring into ML approaches to supporting student mental health and well-being.
Human-Centered Artificial Intelligence: Three Fresh Ideas
Human-Centered AI (HCAI) is a promising direction for designing AI systems that support human self-efficacy, promote creativity, clarify responsibility, and facilitate social participation.