A Means-End Account of Explainable Artificial Intelligence

@article{Buchholz2022AMA,
  title={A Means-End Account of Explainable Artificial Intelligence},
  author={Oliver Buchholz},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.04638}
}
Explainable artificial intelligence (XAI) seeks to produce explanations for those machine learning methods which are deemed opaque. However, there is considerable disagreement about what this means and how to achieve it. Authors disagree on what should be explained (topic), to whom something should be explained (stakeholder), how something should be explained (instrument), and why something should be explained (goal). In this paper, I employ insights from means-end epistemology to structure the… 

Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence

The philosophical and social foundations of human explainability are reviewed, and the human-centred explanatory process needed to achieve the desired level of algorithmic transparency and understanding in explainees is revisited, including the much-disputed trade-off between transparency and predictive power.

References

Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts

It is shown that post-hoc explanation algorithms are unsuitable for achieving the transparency objectives inherent in legal norms, and that there is a need to discuss more explicitly the objectives underlying “explainability” obligations, as these can often be better achieved through other mechanisms.

Belief and Counterfactuals

This book is the first of two volumes on belief and counterfactuals. It consists of six of a total of eleven chapters. The first volume is concerned primarily with questions in epistemology.

On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness

It is argued that even though trustworthiness does not automatically lead to trust, there are several reasons to engineer primarily for trustworthiness, and that a system’s explainability can crucially contribute to it.

Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning

The purpose of this paper is to challenge the widespread agreement about the existence and importance of a black box problem, and to argue that there are ways of using such algorithms responsibly that do not require interpretability.

Belief and Counterfactuals. A Study in Means-End Philosophy

Textual Explanations for Self-Driving Vehicles

A new approach to introspective explanations is proposed which uses a visual (spatial) attention model to train a convolutional network end-to-end from images to vehicle control commands, and two approaches to attention alignment, strong and weak alignment, are explored.
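
To make the described architecture concrete, here is a minimal PyTorch sketch of a spatial attention map over CNN feature maps feeding a control-command regressor; the layer sizes, the single attention map, and the two-command output head are illustrative assumptions for this example, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionController(nn.Module):
    """Sketch only: a CNN encoder, a spatial attention map over its
    feature grid, and a regressor from the attended features to
    vehicle control commands (e.g. steering angle, speed)."""
    def __init__(self, n_commands=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = nn.Conv2d(64, 1, 1)       # one attention logit per grid cell
        self.head = nn.Linear(64, n_commands)

    def forward(self, images):
        feats = self.encoder(images)           # (B, 64, H, W)
        b, c, h, w = feats.shape
        logits = self.attn(feats).view(b, -1)  # (B, H*W)
        alpha = F.softmax(logits, dim=1)       # attention weights sum to 1
        # Attention-weighted sum over spatial locations.
        context = (feats.view(b, c, -1) * alpha.unsqueeze(1)).sum(dim=2)
        return self.head(context), alpha.view(b, h, w)

model = AttentionController()
commands, attention = model(torch.randn(4, 3, 64, 64))
print(commands.shape, attention.shape)  # torch.Size([4, 2]) torch.Size([4, 16, 16])

The returned attention map is what makes the controller introspective in the paper's sense: it can be overlaid on the input image to show which regions drove the command.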

Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems

A model is described that identifies the different roles agents can fulfill in relation to a machine learning system, shows how an agent’s role influences its goals, and draws out the implications for defining interpretability.

Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR

It is suggested that data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support the three aims the paper identifies. Such explanations describe the smallest change to the world that can be made to obtain a desirable outcome, i.e., the closest possible world in which it holds, without needing to explain the internal logic of the system.
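
As a rough illustration of the idea rather than the paper's own method, the following sketch finds a counterfactual by gradient descent on the input of a logistic-regression model, trading off the prediction loss against the L2 distance from the original point; the model, loss, and hyperparameters are all assumptions made for this example.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, w, b, target=1.0, lam=0.1, lr=0.05, steps=2000):
    """Search for a small change to input x that flips a logistic
    model's prediction toward `target`, penalizing the squared
    distance from the original point."""
    x_cf = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        # Gradient of (p - target)^2 + lam * ||x_cf - x||^2 w.r.t. x_cf.
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

# Toy usage: a 2-feature logistic model that initially rejects x.
w = np.array([1.5, -2.0])
b = -0.25
x = np.array([-1.0, 0.5])          # original input, predicted class 0
x_cf = counterfactual(x, w, b)     # nearby point predicted as class 1
print("original prob:", sigmoid(w @ x + b))
print("counterfactual prob:", sigmoid(w @ x_cf + b))
print("change needed:", x_cf - x)

The vector x_cf - x is the explanation: it states what would have to change for the decision to come out differently, without exposing the model's internals.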

Generating Visual Explanations

A new model is proposed that focuses on the discriminating properties of the visible object, jointly predicts a class label and explains why that label is appropriate for the image, and generates sentences that realize a global sentence property, such as class specificity.
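
A minimal sketch of such a joint architecture, assuming a shared image encoder, a classification head, and a GRU sentence decoder conditioned on the predicted class; the component sizes and decoder design are illustrative choices for this example, not the paper's model.

import torch
import torch.nn as nn

class ClassifyAndExplain(nn.Module):
    """Sketch only: a shared encoder feeding (a) a class head and
    (b) a GRU decoder that generates an explanation sentence
    conditioned on the image features and the predicted class."""
    def __init__(self, n_classes=200, vocab=1000, dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
        self.cls_head = nn.Linear(dim, n_classes)
        self.embed = nn.Embedding(vocab, dim)
        self.cls_embed = nn.Embedding(n_classes, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.word_head = nn.Linear(dim, vocab)

    def forward(self, images, tokens):
        feats = self.encoder(images)                  # (B, dim)
        class_logits = self.cls_head(feats)           # (B, n_classes)
        pred = class_logits.argmax(dim=1)
        # Condition the decoder on image features plus predicted class,
        # so the sentence can be specific to that class.
        h0 = (feats + self.cls_embed(pred)).unsqueeze(0)
        out, _ = self.gru(self.embed(tokens), h0)     # teacher forcing
        return class_logits, self.word_head(out)      # (B, T, vocab)

model = ClassifyAndExplain()
imgs = torch.randn(2, 3, 64, 64)
toks = torch.randint(0, 1000, (2, 7))                 # target sentence tokens
class_logits, word_logits = model(imgs, toks)
print(class_logits.shape, word_logits.shape)          # (2, 200) and (2, 7, 1000)

Conditioning the decoder's initial state on the predicted class is one simple way to push generated sentences toward class-specific, discriminative content rather than generic image descriptions.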