The false hope of current approaches to explainable artificial intelligence in health care.

@article{Ghassemi2021TheFH,
  title={The false hope of current approaches to explainable artificial intelligence in health care.},
  author={Marzyeh Ghassemi and Luke Oakden-Rayner and Andrew Beam},
  journal={The Lancet. Digital health},
  year={2021},
  volume={3},
  number={11},
  pages={e745-e750}
}

Why we do need Explainable AI for Healthcare

Against its detractors and despite valid concerns, the authors argue that the Explainable AI research program remains central to human-machine interaction and is ultimately the main tool against a loss of control, a danger that rigorous clinical validation alone cannot prevent.

Putting explainable AI in context: institutional explanations for medical AI

It is argued that these systems do require an explanation, but that an institutional explanation is necessary for either post hoc explanations or accuracy scores to be epistemically meaningful to medical professionals, making it possible for them to rely on these systems as effective and useful tools in their practice.

The Role of Explainability in Assuring Safety of Machine Learning in Healthcare

It is concluded that XAI methods have a valuable role in safety assurance of ML-based systems in healthcare but that they are not sufficient in themselves to assure safety.

eXplainable Artificial Intelligence (XAI) and Associated Potential Limitations in the Healthcare Sector

The key idea is that current XAI libraries are not suitable to fully explain and justify a medical diagnosis in the individual case, demonstrated via the example of pneumonia detection with a CNN trained on x-ray images.

Assessing the communication gap between AI models and healthcare professionals: explainability, utility and trust in AI-driven clinical decision-making

This paper contributes a pragmatic evaluation framework for explainable machine learning (ML) models for clinical decision support. The study revealed a more nuanced role for ML explanations in AI-driven clinical decision-making.

Demystifying the Black Box: The Importance of Interpretability of Predictive Models in Neurocritical Care

This article examines existing models used in neurocritical care from the perspective of interpretability and explores the use of interpretable machine learning, in particular the potential benefits and drawbacks these techniques may have when applied to neurocritical care data.

Reporting guideline for the early stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI

Through consultation and consensus with a range of stakeholders, a guideline is developed comprising key items that should be reported in early-stage clinical studies of AI-based decision support systems in healthcare, facilitating the appraisal of these studies and the replicability of their findings.

Algorithm Fairness in AI for Medicine and Healthcare

The intersection of fairness in machine learning and current issues in healthcare is summarized, and how algorithmic biases arise in current clinical workflows and result in healthcare disparities is outlined.

Transparency of AI in Healthcare as a Multilayered System of Accountabilities: Between Legal Requirements and Technical Limitations

This paper suggests an interdisciplinary vision of how to tackle the issue of AI transparency in healthcare and proposes a single point of reference on transparency and related concepts for both legal scholars and data scientists.
...

References


Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

  • C. Rudin
  • Computer Science
    Nat. Mach. Intell.
  • 2019
This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision.

Deep Learning and Explainable AI in Healthcare Using EHR

This chapter presents the design and implementation of an explainable deep learning system for healthcare that applies an attention mechanism and a recurrent neural network to EHR data to predict heart failure and to provide insight into the key diagnoses that led to the prediction.
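The summary above describes an attention-over-visits recurrent model; what follows is a minimal sketch of that general style of architecture, assuming PyTorch and multi-hot diagnosis-code vectors per visit. The class name, dimensions, and encoding are illustrative assumptions, not the chapter's implementation.

```python
import torch
import torch.nn as nn

class AttentionEHRClassifier(nn.Module):
    """Illustrative sketch: a GRU over a patient's visit sequence with a
    visit-level attention layer whose weights indicate which visits most
    influenced the heart-failure prediction (not the chapter's exact model)."""

    def __init__(self, n_codes, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Linear(n_codes, emb_dim)   # multi-hot diagnosis codes -> embedding
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)           # one attention score per visit
        self.out = nn.Linear(hidden, 1)

    def forward(self, visits):                     # visits: (batch, n_visits, n_codes)
        h, _ = self.rnn(self.embed(visits))        # (batch, n_visits, hidden)
        alpha = torch.softmax(self.attn(h).squeeze(-1), dim=1)   # visit attention weights
        context = (alpha.unsqueeze(-1) * h).sum(dim=1)
        risk = torch.sigmoid(self.out(context)).squeeze(-1)      # P(heart failure)
        return risk, alpha                         # alpha serves as the per-visit attribution
```

The attention weights `alpha` are what such systems surface to clinicians as an indication of which visits drove the prediction, precisely the kind of post hoc signal the main paper argues should be treated with caution.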

What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use

This work surveys clinicians from two distinct acute care specialties to characterize when explainability helps to improve clinicians' trust in ML models, and identifies the classes of explanations that clinicians regard as most relevant and crucial for effective translation to clinical practice.

On the Interpretability of Artificial Intelligence in Radiology: Challenges and Opportunities.

Insight is provided into the current state of the art of interpretability methods for radiology AI and radiologists' opinions on the topic, along with trends and challenges that need to be addressed to effectively streamline interpretability methods in clinical practice.

Explaining Explanations: An Overview of Interpretability of Machine Learning

There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insight into their behaviour and the reasoning behind their decisions.

Translating Artificial Intelligence Into Clinical Care.

Findings from a study evaluating the use of deep learning for detection of diabetic retinopathy and macular edema are presented, giving the authors confidence that this algorithm could be of clinical utility.

Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods

It is demonstrated how extremely biased (racist) classifiers crafted by the proposed framework can easily fool popular explanation techniques such as LIME and SHAP into generating innocuous explanations which do not reflect the underlying biases.
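The attack exploits the fact that LIME and SHAP probe a model on perturbed samples that often lie off the data manifold. A rough, simplified sketch of such a "scaffolded" classifier follows; the class name and the `ood_detector` interface are hypothetical assumptions, not the paper's released code.

```python
import numpy as np

class ScaffoldedClassifier:
    """Illustrative sketch: behave in a biased way on realistic inputs, but
    route perturbation-like (off-manifold) inputs, i.e. the ones LIME/SHAP
    actually query, to an innocuous model so the explanation looks clean."""

    def __init__(self, biased_model, innocuous_model, ood_detector):
        self.biased = biased_model        # e.g. predicts from a protected attribute
        self.innocuous = innocuous_model  # ignores the protected attribute
        self.ood = ood_detector           # predicts 1 = looks like real data, 0 = perturbation

    def predict_proba(self, X):
        X = np.asarray(X)
        real = self.ood.predict(X) == 1
        proba = np.empty((len(X), 2))
        if real.any():
            proba[real] = self.biased.predict_proba(X[real])
        if (~real).any():
            proba[~real] = self.innocuous.predict_proba(X[~real])
        return proba

    def predict(self, X):
        return self.predict_proba(X).argmax(axis=1)
```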

Artificial intelligence in healthcare

Recent breakthroughs in AI technologies and their biomedical applications are outlined, the challenges for further progress in medical AI systems are identified, and the economic, legal and social implications of AI in healthcare are summarized.

High-performance medicine: the convergence of human and artificial intelligence

  • E. Topol
  • Medicine, Computer Science
    Nature Medicine
  • 2019
Over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but whether that will be used to improve the patient–doctor relationship or facilitate its erosion remains to be seen.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
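The summary captures LIME's mechanism: perturb inputs near the instance, weight the samples by proximity, and fit an interpretable linear surrogate whose coefficients act as the local explanation. A minimal sketch of that idea for tabular data follows; the function name, Gaussian sampling, and kernel width are illustrative assumptions rather than the LIME library's implementation, which additionally handles feature selection, text, and images.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate_explanation(black_box_predict, x, n_samples=5000, scale=0.5):
    """Illustrative LIME-style sketch: fit a weighted linear surrogate to the
    black box's behaviour in a neighbourhood of a single instance x."""
    rng = np.random.default_rng(0)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))  # local perturbations
    y = black_box_predict(Z)                                      # black-box outputs
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / (2 * scale ** 2))             # proximity kernel
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    return surrogate.coef_                                        # per-feature local attributions
```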