Believing in Black Boxes: Machine Learning for Healthcare Does Not Need Explainability to be Evidence-Based.

@article{McCoy2021BelievingIB,
  title={Believing in Black Boxes: Machine Learning for Healthcare Does Not Need Explainability to be Evidence-Based.},
  author={Liam G. McCoy and Connor T.A. Brenna and Stacy Chen and Karina Vold and Sunit Das},
  journal={Journal of clinical epidemiology},
  year={2021}
}

Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review

The INTRPRT guideline is introduced: a design directive for transparent ML systems in medical image analysis that suggests human-centered design principles and increases the likelihood that the algorithms afford transparency and enable stakeholders to capitalize on the benefits of transparent ML.

INTRPRT: A Systematic Review of and Guidelines for Designing and Validating Transparent AI in Medical Image Analysis

The INTRPRT guideline is introduced: a systematic design directive for transparent ML systems in medical image analysis that bridges the disconnect between ML system designers and end users and increases the likelihood that the algorithms afford transparency and enable stakeholders to capitalize on the benefits of transparent ML.

Clinical deployment environments: Five pillars of translational machine learning for health

This paper proposes a design pattern called a Clinical Deployment Environment (CDE), intended to answer the same requirements that biomedicine articulated in establishing the translational medicine domain, and envisions a transition from “real-world” data to “real-world” development.

The Need for a More Human-Centered Approach to Designing and Validating Transparent AI in Medical Image Analysis - Guidelines and Evidence from a Systematic Review

The INTRPRT guideline is introduced: a systematic design directive for transparent ML systems in medical image analysis that suggests human-centered design principles and strongly recommends formative user research as the first step of transparent model design, in order to understand user needs and domain requirements.

State-of-the-Art Explainability Methods with Focus on Visual Analytics Showcased by Glioma Classification

A comparison of 11 Python libraries that complement the better-known SHAP and LIME libraries for visualizing the explainability and interpretability of AI model outputs is presented.

Medicine 2032: The future of cardiovascular disease prevention with machine learning and digital health technology

Medical Deep Learning - A systematic Meta-Review

Construction of an Assisted Model Based on Natural Language Processing for Automatic Early Diagnosis of Autoimmune Encephalitis

Compared with previous diagnostic criteria, the assisted diagnostic model could effectively increase early diagnostic sensitivity for AE and help physicians establish the diagnosis automatically once the history of present illness (HPI) and the results of standard paraclinical tests are entered in the physicians' usual narrative style, avoiding misdiagnosis and allowing prompt initiation of specific treatment.

Automatic Assessment of Speech Intelligibility using Consonant Similarity for Head and Neck Cancer

This paper investigates a method to predict speech intelligibility from consonant phonetic similarity: a Siamese network computes similarity scores between healthy and pathological phonemes, and intelligibility values are regressed from the combination of those scores.

References

Showing 1-10 of 69 references

What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use

This work surveys clinicians from two distinct acute care specialties to characterize when explainability helps to improve clinicians' trust in ML models, and identifies the classes of explanations that clinicians consider most relevant and crucial for effective translation to clinical practice.

Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability.

A. London, The Hastings Center Report, 2019
It is argued that opaque decisions are more common in medicine than critics realize, and that the view that ceding medical decision-making to black box systems contravenes the profound moral responsibilities of clinicians should be reconsidered.

A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI

A review and categorization of the interpretability approaches suggested by different research works is provided, in the hope that interpretability will be considered with greater attention to medical practice and that initiatives to push forward data-based, mathematically grounded, and technically grounded medical education will be encouraged.

Identifying Ethical Considerations for Machine Learning Healthcare Applications

A systematic approach to identifying ML-HCA ethical concerns is outlined, starting with a conceptual model of the pipeline of conception, development, and implementation of ML-HCAs, along with the parallel pipeline of evaluation and oversight tasks at each stage.

What do we need to build explainable AI systems for the medical domain?

It is argued that research in explainable AI would generally help to facilitate the implementation of AI/ML in the medical domain, and specifically would help to facilitate transparency and trust.

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

C. Rudin, Nat. Mach. Intell., 2019
This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.

Mechanistic understanding in clinical practice: complementing evidence-based medicine with personalized medicine.

It is concluded that clinicians may expect to see their responsibility increase as they deal with diverse, but equally compelling, ways of reasoning and deciding which intervention qualifies as the 'best one' in each individual case.

Explainable Artificial Intelligence for Safe Intraoperative Decision Support.

Ongoing work in surgical XAI uses laparoscopic videos to warn surgeons about upcoming bleeding events in the operating room and to explain this risk in terms of patient and surgical factors, with the goal of reducing operative times and improving outcomes for patients.

The ethics of AI in health care: A mapping review.

Explainable AI for Healthcare: From Black Box to Interpretable Models

This paper reflects on recent investigations about the interpretability and explainability of artificial intelligence methods and discusses their impact on medicine and healthcare.