The false hope of current approaches to explainable artificial intelligence in health care.

@article{ghassemi2021false,
  title={The false hope of current approaches to explainable artificial intelligence in health care},
  author={Marzyeh Ghassemi and Luke Oakden-Rayner and Andrew Beam},
  journal={The Lancet Digital Health},
  volume={3},
  number={11},
  year={2021}
}


Explainability and artificial intelligence in medicine.

  • Sandeep Reddy
  • Computer Science, Medicine
    The Lancet. Digital health
  • 2022

Putting explainable AI in context: institutional explanations for medical AI

It is argued that these systems do require an explanation, and that an institutional explanation is necessary for either post hoc explanations or accuracy scores to be epistemically meaningful to medical professionals, making it possible for them to rely on these systems as effective and useful tools in their practice.

Expectations for Artificial Intelligence (AI) in Psychiatry

The complex reasons for the low technology maturity of AI in clinical medicine are described, with the aim of setting realistic expectations for its safe, routine use to augment medical decision making in psychiatry.

The Role of Explainability in Assuring Safety of Machine Learning in Healthcare

It is concluded that XAI methods have a valuable role in safety assurance of ML-based systems in healthcare but that they are not sufficient in themselves to assure safety.

eXplainable Artificial Intelligence (XAI) and Associated Potential Limitations in the Healthcare Sector

The key idea is that current XAI libraries are not suitable to fully explain and justify a medical diagnosis in the individual case, demonstrated via the example of pneumonia detection with a CNN trained on x-ray images.

Assessing the communication gap between AI models and healthcare professionals: explainability, utility and trust in AI-driven clinical decision-making

This paper contributes a pragmatic evaluation framework for explainable machine learning (ML) models for clinical decision support. The study revealed a more nuanced role for ML explanation.

Demystifying the Black Box: The Importance of Interpretability of Predictive Models in Neurocritical Care

This article examines existing models used in neurocritical care from the perspective of interpretability, explores the use of interpretable machine learning, and considers the potential benefits and drawbacks of these techniques when applied to neurocritical care data.

Clinician's guide to trustworthy and responsible artificial intelligence in cardiovascular imaging

The main risks of AI applications in cardiovascular imaging are described, along with potential mitigation techniques for the wider adoption of these promising methods.

How to Explain and Justify Almost Any Decision: Potential Pitfalls for Accountability in AI Decision-Making

This paper demonstrates how a black-box explanation system, developed to be used with a black-box decision system, could aim to manipulate decision recipients or auditors into failing to recognize an intentionally discriminatory decision model.



Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

  • C. Rudin
  • Computer Science
    Nat. Mach. Intell.
  • 2019
This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.

Deep Learning and Explainable AI in Healthcare Using EHR

This chapter contains the design and implementation of an Explainable Deep Learning System for Healthcare using EHR, using an attention mechanism and Recurrent Neural Network on EHR data, for predicting heart failure of patients and providing insight into the key diagnoses that have led to the prediction.

What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use

This work surveys clinicians from two distinct acute care specialties to characterize when explainability helps to improve clinicians' trust in ML models and identifies the classes of explanations that clinicians identified as most relevant and crucial for effective translation to clinical practice.

On the Interpretability of Artificial Intelligence in Radiology: Challenges and Opportunities.

Insight is provided into the current state of the art of interpretability methods for radiology AI and radiologists' opinions on the topic, and trends and challenges are identified that need to be addressed to effectively streamline interpretability methods in clinical practice.

Explaining Explanations: An Overview of Interpretability of Machine Learning

There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes.

Translating Artificial Intelligence Into Clinical Care.

Findings from a study evaluating the use of deep learning for detection of diabetic retinopathy and macular edema are presented, giving the authors confidence that this algorithm could be of clinical utility.

Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods

It is demonstrated how extremely biased (racist) classifiers crafted by the proposed framework can easily fool popular explanation techniques such as LIME and SHAP into generating innocuous explanations which do not reflect the underlying biases.

Artificial intelligence in healthcare

Recent breakthroughs in AI technologies and their biomedical applications are outlined, the challenges for further progress in medical AI systems are identified, and the economic, legal and social implications of AI in healthcare are summarized.

High-performance medicine: the convergence of human and artificial intelligence

  • E. Topol
  • Medicine, Computer Science
    Nature Medicine
  • 2019
Over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but whether that will be used to improve the patient–doctor relationship or facilitate its erosion remains to be seen.

Manipulating and Measuring Model Interpretability

A sequence of pre-registered experiments showed participants functionally identical models that varied only in two factors commonly thought to make machine learning models more or less interpretable: the number of features and the transparency of the model (i.e., whether the model internals are clear or black box).