Believing in Black Boxes: Machine Learning for Healthcare Does Not Need Explainability to be Evidence-Based.

@article{McCoy2021BelievingIB,
  title={Believing in Black Boxes: Machine Learning for Healthcare Does Not Need Explainability to be Evidence-Based.},
  author={Liam G. McCoy and Connor T.A. Brenna and Stacy Chen and Karina Vold and Sunit Das},
  journal={Journal of clinical epidemiology},
  year={2021}
}

Why we do need Explainable AI for Healthcare

TLDR
Against its detractors, and despite valid concerns, the authors argue that the Explainable AI research program remains central to human-machine interaction and is ultimately the main tool against loss of control, a danger that rigorous clinical validation alone cannot prevent.

INTRPRT: A Systematic Review of and Guidelines for Designing and Validating Transparent AI in Medical Image Analysis

TLDR
The INTRPRT guideline is introduced: a systematic design directive for transparent ML systems in medical image analysis that bridges the disconnect between ML system designers and end users, increasing the likelihood that the algorithms afford transparency and enable stakeholders to capitalize on the benefits of transparent ML.

Clinical deployment environments: Five pillars of translational machine learning for health

TLDR
This paper proposes a design pattern called a Clinical Deployment Environment (CDE) intended to answer the same requirements that biomedicine articulated in establishing the translational medicine domain, and envisions a transition from “real-world” data to “real-world” development.

The need for a more human-centered approach to designing and validating transparent AI in medical image analysis -- Guidelines and Evidence from a Systematic Review

TLDR
The INTRPRT guideline is introduced, a systematic design directive for transparent ML systems in medical image analysis that suggests human-centered design principles and strongly recommends formative user research as the first step of transparent model design, to understand user needs and domain requirements.

State-of-the-Art Explainability Methods with Focus on Visual Analytics Showcased by Glioma Classification

TLDR
A comparison of 11 identified Python libraries that complement the better-known SHAP and LIME libraries for visualizing the explainability and interpretability of AI model outputs is presented.
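
For orientation, here is a minimal sketch of how the two baseline libraries named above are typically invoked, using an off-the-shelf scikit-learn classifier on a bundled dataset; the model and data are illustrative stand-ins, not the glioma classifier from the paper:

```python
# Illustrative only: SHAP and LIME applied to a toy scikit-learn model.
# The dataset and classifier are stand-ins, not the paper's glioma data.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# SHAP: Shapley-value feature attributions, summarized over the test set.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=data.feature_names)

# LIME: a local surrogate explanation for a single prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=data.feature_names,
    class_names=list(data.target_names), mode="classification")
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top features driving this one prediction
```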

Medicine 2032: The future of cardiovascular disease prevention with machine learning and digital health technology

Construction of an Assisted Model Based on Natural Language Processing for Automatic Early Diagnosis of Autoimmune Encephalitis

TLDR
Compared with previous diagnostic criteria, the assisted diagnostic model effectively increased early diagnostic sensitivity for AE. It can assist physicians in establishing the diagnosis of AE automatically after they input the history of present illness (HPI) and the results of standard paraclinical tests according to their own narrative habits for describing symptoms, avoiding misdiagnosis and allowing prompt initiation of specific treatment.
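
As a rough illustration of the kind of pipeline this TLDR describes, free-text HPI combined with structured paraclinical results and mapped to a diagnostic label, here is a minimal sketch using scikit-learn; every feature, record, and label below is a hypothetical placeholder, not the paper's actual model:

```python
# Hypothetical sketch only: a generic text-plus-labs classifier in the spirit
# of the assisted diagnostic model described above. The features, labels, and
# records are invented placeholders; the paper's actual NLP model differs.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Invented records: free-text HPI plus one structured paraclinical result,
# labeled 1 for autoimmune encephalitis (AE) and 0 otherwise.
df = pd.DataFrame({
    "hpi": ["subacute memory deficits and new focal seizures",
            "chronic headache, no cognitive or psychiatric symptoms",
            "psychiatric symptoms with decreased consciousness",
            "episodic vertigo, normal mental status"],
    "csf_pleocytosis": [1, 0, 1, 0],
    "ae": [1, 0, 1, 0],
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "hpi"),            # narrative symptoms
    ("labs", "passthrough", ["csf_pleocytosis"]),  # paraclinical tests
])
clf = Pipeline([("features", features),
                ("model", LogisticRegression(max_iter=1000))])
clf.fit(df[["hpi", "csf_pleocytosis"]], df["ae"])

new_case = pd.DataFrame({"hpi": ["new-onset seizures and working-memory loss"],
                         "csf_pleocytosis": [1]})
print(clf.predict_proba(new_case)[0, 1])  # estimated probability of AE
```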

Artificial Intelligence Analysis of Gene Expression Predicted the Overall Survival of Mantle Cell Lymphoma and a Large Pan-Cancer Series

TLDR
Artificial intelligence analysis predicted the overall survival of MCL with high accuracy, and highlighted genes that predicted the survival of a large pan-cancer series.

References

SHOWING 1-10 OF 69 REFERENCES

What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use

TLDR
This work surveys clinicians from two distinct acute care specialties to characterize when explainability helps to improve clinicians' trust in ML models, and identifies the classes of explanations that clinicians deem most relevant and crucial for effective translation to clinical practice.

Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability.

  • A. London
  • Medicine
  • The Hastings Center Report
  • 2019
TLDR
It is argued that opaque decisions are more common in medicine than critics realize, and the view that ceding medical decision-making to black box systems contravenes the profound moral responsibilities of clinicians is called into question.

Ethical considerations about artificial intelligence for prognostication in intensive care

TLDR
A pathway to ethical implementation of AI-based prognostication is proposed, including a checklist for new AI models that covers medical and technical topics as well as patient- and system-centered issues.

A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI

TLDR
A review of the interpretability methods suggested by different research works is provided and the methods are categorized, in the hope that deeper insight into interpretability will emerge from greater consideration of medical practice; initiatives to push forward data-based, mathematically and technically grounded medical education are encouraged.

Identifying Ethical Considerations for Machine Learning Healthcare Applications

TLDR
A systematic approach to identifying the ethical concerns of ML-HCAs is outlined, starting with a conceptual model of the pipeline of conception, development, and implementation of ML-HCAs, and the parallel pipeline of evaluation and oversight tasks at each stage.

What do we need to build explainable AI systems for the medical domain?

TLDR
It is argued that research in explainable AI would generally help to facilitate the implementation of AI/ML in the medical domain, and specifically would help to facilitate transparency and trust.

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

  • C. Rudin
  • Computer Science
  • Nat. Mach. Intell.
  • 2019
TLDR
This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.
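
To make the distinction concrete, here is a minimal sketch under assumed stand-in models: a shallow decision tree (inherently interpretable) trained alongside a gradient-boosted ensemble (a black box) on the same tabular data. This is a generic illustration of the black-box/glass-box contrast, not the rule-list or scoring-system methods the paper itself discusses:

```python
# Generic illustration only: an inherently interpretable model (depth-3
# decision tree) can often match a black box (gradient-boosted ensemble)
# on tabular data while its decision rules remain fully inspectable.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
glass_box = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    X_train, y_train)

print("black box accuracy:", black_box.score(X_test, y_test))
print("glass box accuracy:", glass_box.score(X_test, y_test))
# The interpretable model's entire decision process can be printed as rules:
print(export_text(glass_box, feature_names=list(data.feature_names)))
```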

Mechanistic understanding in clinical practice: complementing evidence-based medicine with personalized medicine.

TLDR
It is concluded that clinicians can expect their responsibility to increase as they deal with diverse, but equally compelling, ways of reasoning and deciding about which intervention qualifies as the 'best one' in each individual case.

Explainable Artificial Intelligence for Safe Intraoperative Decision Support.

TLDR
The authors are currently working in surgical XAI, using laparoscopic videos to warn surgeons about upcoming bleeding events in the operating room and to explain this risk in terms of patient and surgical factors, with the aim of reducing operative times and improving outcomes for patients.

The ethics of AI in health care: A mapping review.

...