"Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI

@article{Gilpin2022ExplanationIN,
  title={"Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI},
  author={Leilani H. Gilpin and Andrew R. Paley and Mohammed A. Alam and Sarah Spurlock and Kristian J. Hammond},
  journal={ArXiv},
  year={2022},
  volume={abs/2207.00007}
}
There is broad agreement that Artificial Intelligence (AI) systems, particularly those using Machine Learning (ML), should be able to “explain” their behavior. Unfortunately, there is little agreement as to what constitutes an “explanation.” This has caused a disconnect between the explanations that systems produce in service of explainable Artificial Intelligence (XAI) and those explanations that users and other audiences actually need, which should be defined by the full spectrum of functional…

Citations

Fiduciary Responsibility: Facilitating Public Trust in Automated Decision Making

Automated decision-making systems are being increasingly deployed and affect the public in a multitude of positive and negative ways. Governmental and private institutions use these systems to…

References

(Showing 10 of 24 references)

Explaining Explanations: An Overview of Interpretability of Machine Learning

There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide…

“Why Should I Trust You?”: Explaining the Predictions of Any Classifier

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
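
As a rough illustration of the local-surrogate idea summarized above, the sketch below is my own illustrative assumption (using the open-source lime Python package with a scikit-learn classifier, not code from any of the papers listed here). It fits a weighted linear model around a single prediction and prints the top feature contributions.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer  # assumes the `lime` package is installed

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Build an explainer over the training distribution; LIME perturbs an instance,
# queries the black-box model, and fits a sparse linear surrogate that is
# faithful only in the neighborhood of that instance.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Explain one prediction and list (feature, weight) pairs from the local surrogate.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4, labels=(0,)
)
print(explanation.as_list(label=0))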

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective

This work introduces and studies the disagreement problem in explainable machine learning, formalizes the notion of disagreement between explanations, analyzes how often such disagreements occur in practice, and examines how practitioners resolve them.

Foundations of Explainable Knowledge-Enabled Systems

This work presents a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains.

Explanation in Artificial Intelligence: Insights from the Social Sciences

Machine Learning Explainability for External Stakeholders

A closed-door, day-long workshop was conducted between academics, industry experts, legal scholars, and policymakers to develop a shared language around explainability and to understand the current shortcomings of, and potential solutions for, deploying explainable machine learning in service of transparency goals.

The false hope of current approaches to explainable artificial intelligence in health care.

The Mindlessness of Ostensibly Thoughtful Action: The Role of "Placebic" Information in Interpersonal Interaction

Three field experiments were conducted to test the hypothesis that complex social behavior that appears to be enacted mindfully instead may be performed without conscious attention to relevant…

A Unified Approach to Interpreting Model Predictions

A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
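
For contrast with the perturbation-based surrogate sketch above, the following minimal sketch of additive feature attribution uses the open-source shap Python package with a tree ensemble; it is again my own illustrative assumption, not code from the cited paper.

import shap  # assumes the `shap` package is installed
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles; each
# value is one feature's additive contribution to a single prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# The attributions for an instance, added to the expected value, reconstruct
# the model's raw (margin) output: SHAP's "local accuracy" property.
print(shap_values.shape)          # (5 instances, 30 features)
print(explainer.expected_value)   # baseline prediction over the background data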

Different "Intelligibility" for Different Folks

This paper provides a typology of 'intelligibility' that distinguishes various notions, and draws methodological conclusions about how autonomous technologies should be designed and deployed in different ways, depending on whose intelligibility is required.