Corpus ID: 28681432

Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences

@article{Miller2017ExplainableAB,
  title={Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences},
  author={Tim Miller and Piers D. L. Howe and Liz Sonenberg},
  journal={ArXiv},
  year={2017},
  volume={abs/1712.00547}
}
In his seminal book 'The Inmates are Running the Asylum', Alan Cooper argues that a major reason software is often poorly designed from the user's perspective is that programmers, rather than interaction designers, are in charge of design decisions. This paper argues that explainable AI risks a similar fate: researchers are building explanatory agents for themselves rather than for the intended users. From a light scan of literature, we demonstrate that there is considerable scope to infuse more results from the social and behavioural sciences into explainable AI, and present some key results from these fields that are relevant to explainable AI.

Citations

Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory

This work applies sensemaking in organizations as a template for discussing design guidelines for "sensible AI": AI that factors in the nuances of human cognition when trying to explain itself.

Transparency as design publicity: explaining and justifying inscrutable algorithms

It is argued that transparency of machine learning algorithms, like explanation, can be defined at different levels of abstraction. The authors propose a new form of algorithmic transparency that consists in explaining an algorithm as an intentional product serving one or more goals, together with a measure of the extent to which each goal is achieved and evidence about how that measure was reached.

Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence

The philosophical and social foundations of human explainability are reviewed, the human-centred explanatory process needed to achieve the desired level of algorithmic transparency and understanding in explainees is examined, and the much-disputed trade-off between transparency and predictive power is revisited.

XAI Handbook: Towards a Unified Framework for Explainable AI

A theoretical framework is proposed that not only provides concrete definitions for these terms but also outlines all steps necessary to produce explanations and interpretations, and that allows existing contributions to be re-contextualized so that their scope can be measured, making them comparable to other methods.

Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant

This work presents Glass-Box, a prototype voice-enabled device that users can question to understand automated decisions and to identify the underlying model's biases and errors; it explains algorithmic predictions with class-contrastive counterfactual statements.
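
To make the flavour of such class-contrastive counterfactual statements concrete, here is a minimal sketch in Python. The toy loan-approval model, feature names, and brute-force search over candidate values are illustrative assumptions, not the Glass-Box implementation:

    # Sketch: find the smallest single-feature change that flips a classifier's
    # decision, and phrase it as a class-contrastive counterfactual statement.
    def counterfactual_statement(model, instance, feature, candidate_values):
        original = model(instance)
        # Try candidate values nearest to the current value first.
        for value in sorted(candidate_values, key=lambda v: abs(v - instance[feature])):
            altered = {**instance, feature: value}
            if model(altered) != original:
                return (f"The decision was '{original}'. Had {feature} been {value} "
                        f"instead of {instance[feature]}, it would have been "
                        f"'{model(altered)}'.")
        return f"No tested value of {feature} changes the decision '{original}'."

    # Hypothetical loan-approval model and applicant.
    model = lambda x: "approved" if x["income"] >= 40000 else "denied"
    applicant = {"income": 35000, "age": 30}
    print(counterfactual_statement(model, applicant, "income", range(20000, 60001, 5000)))
    # -> The decision was 'denied'. Had income been 40000 instead of 35000, ...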

Explaining Explanations in AI

This work contrasts the different schools of thought on what makes an explanation in philosophy and sociology, and suggests that machine learning might benefit from viewing the problem more broadly.

Plan Explanations as Model Reconciliation - An Empirical Study

Explanation generation algorithms are evaluated in a series of studies set in a mock search-and-rescue scenario with an internal semi-autonomous robot and an external human commander. The studies demonstrate the extent to which the properties of these algorithms hold up when judged by humans, and how trust between the human and the robot evolves over the course of these interactions.

Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals

The EUCA user study findings, the explanation forms and goals identified for technical specification, and the EUCA study dataset together support the design and evaluation of end-user-centred XAI techniques for accessible, safe, and accountable AI.

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

It is found that participants desire practically useful information that can improve their collaboration with the AI, more so than technical system details; among existing XAI approaches, participants preferred part-based explanations that resemble human reasoning and explanation.

"If it didn't happen, why would I change my decision?": How Judges Respond to Counterfactual Explanations for the Public Safety Assessment

Many researchers and policymakers have expressed excitement about algorithmic explanations enabling more fair and responsible decision-making. However, recent experimental studies have found that
...

References

Showing 1-10 of 45 references

Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy

It is shown how explanation can be seen as a "model reconciliation problem" (MRP), where the AI system in effect suggests changes to the human's model, so as to make its plan be optimal with respect to that changed human model.
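
A toy sketch in Python may help convey the reconciliation idea: the explanation is the smallest set of differences between the agent's model and the human's that, once communicated, makes the agent's plan look at least as good as the plan the human expected. The cost dictionaries and brute-force search below are illustrative stand-ins for the paper's full planning-model formulation:

    from itertools import combinations

    def plan_cost(model, plan):
        # Cost of a plan under a model given as {action: cost}.
        return sum(model.get(step, float("inf")) for step in plan)

    def reconcile(agent_model, human_model, agent_plan, human_plan):
        # Find the fewest cost corrections that justify the agent's plan.
        diffs = [a for a in agent_model if human_model.get(a) != agent_model[a]]
        for k in range(len(diffs) + 1):
            for subset in combinations(diffs, k):
                updated = {**human_model, **{a: agent_model[a] for a in subset}}
                if plan_cost(updated, agent_plan) <= plan_cost(updated, human_plan):
                    return list(subset)  # these differences form the explanation
        return diffs

    # Hypothetical search-and-rescue costs: the human underestimates 'lift_debris'.
    agent_model = {"go_left": 3, "go_right": 1, "lift_debris": 10}
    human_model = {"go_left": 3, "go_right": 1, "lift_debris": 1}
    print(reconcile(agent_model, human_model,
                    agent_plan=["go_left"],
                    human_plan=["go_right", "lift_debris"]))
    # -> ['lift_debris']: correcting this one cost justifies the agent's plan.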

The course of events: counterfactuals, causal sequences, and explanation

[Extract] Causal explanations can help us understand why events change course, and why the world turned out differently to what we might have expected. Even in understanding simple narratives of

Explanations in knowledge systems: design for explainable expert systems

The explainable expert systems framework (EES), in which the focus is on capturing those design aspects that are important for producing good explanations, including justifications of the system's

Logic and Conversation

As Grice’s enthusiasm for ordinary language philosophy became increasingly qualified during the 1950s, his interest was growing in the rather different styles of philosophy of language then current

Attribution in conversational context: Effect of mutual knowledge on explanation‐giving

Attribution theorists typically have conceived the attribution process in terms of universal laws of cognitive functioning, independent of social interaction. In this paper we argue for the notion,

Explanation in second generation expert systems

Two major developments that have differentiated explanation in second generation systems from explanation in first generation systems are described: new architectures have been developed that capture more of the knowledge that is needed for explanation, and more powerful explanation generators have been developed in which explanation generation is viewed as a problem-solving activity in its own right.

The simulation heuristic

Our original treatment of the availability heuristic (Tversky & Kahneman, 1973, 11) discussed two classes of mental operations that “bring things to mind”: the retrieval of instances and the

Explainable Agency for Intelligent Autonomous Systems

Before they will be trusted by humans, autonomous agents must be able to explain their decisions and the reasoning that produced their choices, which is referred to as explainable agency.