Interpretable Machine Learning for COVID-19: An Empirical Study on Severity Prediction Task

  • Han Wu, Wenjie Ruan, Jiangtao Wang, Dingchang Zheng, Shaolin Li, Jian Chen, Kunwei Li, Xiangfei Chai, Abdelsalam Helal
The black-box nature of many high-accuracy models hinders their deployment in medical diagnosis: it is risky to put one's life in the hands of models that medical researchers do not trust. At the same time, machine learning models may catch important symptoms of a new virus such as COVID-19 that medical practitioners miss amid the surge of infected patients during a pandemic. In this work, the interpretation of machine learning models reveals that a high C-reactive…
A Comprehensive Survey of COVID-19 Detection Using Medical Images
This survey explores and analyzes data sets, preprocessing techniques, segmentation methods, feature extraction, classification, and experimental results, which can be helpful for finding future research directions in the domain of automatic diagnosis of COVID-19 disease using AI-based frameworks.
Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications
This tutorial aims to provide a comprehensive overall picture about this emerging direction and enable the community to be aware of the urgency and importance of designing robust deep learning models in safety-critical data analytical applications, ultimately enabling the end-users to trust deep learning classifiers.
COVID-HEART: Development and Validation of a Multi-Variable Model for Real-Time Prediction of Cardiovascular Complications in Hospitalized Patients with COVID-19
The COVID-HEART predictor is developed and validated, a novel continuously-updating risk prediction technology to forecast CV complications in hospitalized patients with COVID-19, and is anticipated to provide tangible clinical decision support in triaging patients and optimizing resource utilization.
Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review
  • F. Giuste, Wenqi Shi, +6 authors May D. Wang
  • Computer Science
  • 2021
It is found that successful use of XAI can improve model performance, instill trust in the end-user, and provide the value needed to affect user decision-making to overcome barriers to real-world success.
Simple hemogram to support the decision-making of COVID-19 diagnosis using clusters analysis with self-organizing maps neural network
A non-supervised clustering analysis with self-organizing map (SOM) neural networks is presented as a decision-making strategy to identify potential variables in routine blood tests that can support clinician decision-making during COVID-19 diagnosis at hospital admission, facilitating rapid medical intervention.


An interpretable mortality prediction model for COVID-19 patients
Overall, this Article suggests a simple and operable decision rule to quickly predict patients at the highest risk, allowing them to be prioritized and potentially reducing the mortality rate.
Generating Contrastive Explanations with Monotonic Attribute Functions
This paper proposes a method that can generate contrastive explanations for deep neural networks where aspects that are in themselves sufficient to justify the classification by the deep model are highlighted, but also new aspects which if added will change the classification.
A Unified Approach to Interpreting Model Predictions
A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
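The Shapley-value attribution that SHAP builds on can be sketched in pure Python by enumerating feature coalitions; the toy "severity score" model, feature values, and baseline below are illustrative, not from the cited work:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    Features absent from a coalition are set to their baseline value,
    mimicking how SHAP marginalizes out missing features.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Toy "severity score" model: linear, so Shapley values equal each
# coefficient times the feature's deviation from baseline.
model = lambda v: 2.0 * v[0] + 1.0 * v[1] - 0.5 * v[2]
x, base = [3.0, 1.0, 2.0], [0.0, 0.0, 0.0]
phi = shapley_values(model, x, base)
# Local accuracy: contributions sum to model(x) - model(baseline).
assert abs(sum(phi) - (model(x) - model(base))) < 1e-9
```

Exact enumeration costs O(2^n) model calls, which is why SHAP's practical estimators approximate this sum.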
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
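The local-surrogate idea can be sketched for a single numeric feature: perturb the input, weight samples by proximity, and fit a weighted linear model in closed form. The kernel width, sample count, and black-box function below are illustrative choices, not LIME's actual defaults:

```python
import math
import random

def local_linear_surrogate(f, x0, kernel_width=0.5, n_samples=500, seed=0):
    """LIME-style local explanation for a 1-D black-box model f.

    Samples perturbations near x0, weights them with an exponential
    proximity kernel, and solves weighted least squares in closed form.
    Returns (intercept, slope) of the local linear surrogate.
    """
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    slope = cov / var
    return my - slope * mx, slope

# For f(x) = x^2 around x0 = 1, the local slope should approximate f'(1) = 2.
intercept, slope = local_linear_surrogate(lambda x: x * x, x0=1.0)
```

The slope is the "explanation": the locally faithful linear effect of the feature near the instance being explained.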
Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation
Individual conditional expectation (ICE) plots are presented, a tool for visualizing the model estimated by any supervised learning algorithm and highlight the variation in the fitted values across the range of a covariate, suggesting where and to what extent heterogeneities might exist.
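The ICE construction is simple enough to sketch directly: sweep one covariate over a grid for every instance and keep one prediction curve per row; the partial dependence (PD) curve is their pointwise average. The toy interaction model and data below are illustrative:

```python
def ice_curves(model, X, feature, grid):
    """Individual conditional expectation: one prediction curve per row.

    For each instance, sweep `feature` over `grid` while holding the
    other covariates fixed, exposing heterogeneity that the partial
    dependence (PD) curve, the pointwise average, would hide.
    """
    curves = []
    for row in X:
        curve = []
        for g in grid:
            x = list(row)
            x[feature] = g
            curve.append(model(x))
        curves.append(curve)
    pd_curve = [sum(c[k] for c in curves) / len(curves) for k in range(len(grid))]
    return curves, pd_curve

# Toy model with an interaction: the effect of x0 flips sign with x1,
# so the ICE curves diverge while the averaged PD curve stays flat.
model = lambda v: v[0] * v[1]
X = [[0.0, 1.0], [0.0, -1.0]]
grid = [0.0, 1.0, 2.0]
curves, pd_curve = ice_curves(model, X, 0, grid)
# curves: [[0.0, 1.0, 2.0], [0.0, -1.0, -2.0]]; pd_curve: [0.0, 0.0, 0.0]
```

The flat PD curve with diverging ICE curves is exactly the heterogeneity the cited paper argues ICE plots make visible.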
All Models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously
Model class reliance (MCR) is proposed as the range of variable importance (VI) values across all well-performing models in a prespecified class, which gives a more comprehensive description of importance by accounting for the fact that many prediction models, possibly of different parametric forms, may fit the data well.
Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models
  • D. Apley
  • Computer Science, Mathematics
  • 2016
Accumulated local effects (ALE) plots are presented, which inherit the desirable characteristics of PD and M plots without their shortcomings, and are far less computationally expensive than PD plots.
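A minimal first-order ALE sketch, assuming equal-width bins and a list-of-lists dataset (the bin count and toy linear model below are illustrative): average the model's finite differences within each bin of the feature, then accumulate and center them.

```python
def ale_1d(model, X, feature, n_bins):
    """First-order ALE for one feature with equal-width bins.

    Within each bin, average the model change when the feature moves
    from the bin's lower to upper edge (other covariates fixed), then
    accumulate these local effects and center the result.
    """
    xs = [row[feature] for row in X]
    lo, hi = min(xs), max(xs)
    edges = [lo + (hi - lo) * k / n_bins for k in range(n_bins + 1)]
    ale = [0.0]
    for k in range(n_bins):
        a, b = edges[k], edges[k + 1]
        if k == 0:
            members = [r for r in X if a <= r[feature] <= b]
        else:
            members = [r for r in X if a < r[feature] <= b]
        diffs = []
        for r in members:
            up, down = list(r), list(r)
            up[feature], down[feature] = b, a
            diffs.append(model(up) - model(down))
        effect = sum(diffs) / len(diffs) if diffs else 0.0
        ale.append(ale[-1] + effect)
    mean = sum(ale) / len(ale)
    return edges, [v - mean for v in ale]

# For a linear model 2*x0 + x1, the ALE curve for x0 has slope 2 everywhere.
model = lambda v: 2.0 * v[0] + v[1]
X = [[0.0, 1.0], [1.0, 0.0], [2.0, -1.0], [3.0, 2.0]]
edges, ale = ale_1d(model, X, 0, n_bins=3)
```

Because only local differences within each bin are used, correlated covariates never get evaluated at unrealistic combinations, which is the advantage over PD plots the abstract refers to.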
Greedy function approximation: A gradient boosting machine.
Function estimation/approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization.
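The function-space steepest-descent view can be illustrated with a minimal boosting loop over regression stumps. Under squared loss the negative gradient is simply the residual, so each round fits a stump to the residuals and takes a small step; the data, round count, and learning rate below are illustrative:

```python
def fit_stump(xs, ys):
    """Best single-split regression stump under squared error."""
    best = None
    for split in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= split]
        right = [y for x, y in zip(xs, ys) if x > split]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - lm) ** 2 for y in left) + sum((y - rm) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, split, lm, rm)
    _, s, lm, rm = best
    return lambda x: lm if x <= s else rm

def gradient_boost(xs, ys, n_rounds=50, lr=0.1):
    """Stagewise additive expansion: each stump fits the negative
    gradient of squared loss (the residuals), i.e. a steepest-descent
    step in function space."""
    base = sum(ys) / len(ys)
    pred = [base] * len(xs)
    stumps = []
    for _ in range(n_rounds):
        resid = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, resid)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

# Fit a step function; predictions converge toward the targets.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
f = gradient_boost(xs, ys)
```

Swapping in a different loss only changes which "residual" (negative gradient) each stump is fitted to, which is the generality Friedman's formulation provides.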
A Deep Learning Interpretable Model for Novel Coronavirus Disease (COVID-19) Screening with Chest CT Images
The proposed ResNet-50 based convolutional neural network model could provide a promising computerized toolkit to help radiologists and serve as a second eye for them to classify COVID-19 in CT scan screening examinations.
A Review of Coronavirus Disease-2019 (COVID-19)
  • T. Singhal
  • Medicine
    The Indian Journal of Pediatrics
  • 2020
The disease is mild in most people; in some (usually the elderly and those with comorbidities), it may progress to pneumonia, acute respiratory distress syndrome (ARDS), and multi-organ dysfunction, and many people are asymptomatic.