Corpus ID: 237532503

Interpretable Additive Recurrent Neural Networks For Multivariate Clinical Time Series

Asif Rahman, Yale Chang, Jonathan Rubin
Time series models based on recurrent neural networks (RNNs) can achieve high accuracy but are difficult to interpret because of feature interactions, temporal interactions, and nonlinear transformations. Interpretability is important in domains like healthcare, where models must provide insight into the relationships they have learned before their predictions can be validated and trusted. We want accurate time series models that are interpretable, where users can…
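The additive structure the abstract alludes to can be sketched in a few lines: each feature is processed by its own small recurrent unit, and the prediction is the sum of per-feature contributions, so each contribution can be read off directly. This is a minimal illustration of the general idea, not the authors' architecture; all weights and data below are hypothetical.

```python
import numpy as np

def feature_rnn(x, w_in, w_rec, w_out):
    """Run a single-unit RNN over one feature's time series and
    return that feature's scalar contribution to the prediction."""
    h = 0.0
    for x_t in x:
        h = np.tanh(w_in * x_t + w_rec * h)
    return w_out * h

def additive_predict(X, params):
    """Sum of per-feature contributions. Because features never
    interact inside the model, each contribution is interpretable
    on its own."""
    contribs = [feature_rnn(X[:, j], *params[j]) for j in range(X.shape[1])]
    return sum(contribs), contribs

# Five time steps, two features (hypothetical data and weights).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
params = [(0.5, 0.8, 1.0), (0.3, 0.6, -1.0)]
y_hat, contribs = additive_predict(X, params)
assert np.isclose(y_hat, sum(contribs))  # prediction decomposes exactly
```

Because the output is a plain sum, the per-feature contributions serve as exact attributions, which is precisely what nonlinear feature interactions in a standard RNN destroy.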


Modelling EHR timeseries by restricting feature interaction
A recurrent neural network model is proposed that reduces overfitting to noisy observations by limiting interactions between features and results in an improvement on mortality, ICD-9 and AKI prediction from observational values on the Medical Information Mart for Intensive Care III dataset.
Exploring Interpretable LSTM Neural Networks over Multi-Variable Data
The structure of LSTM recurrent neural networks is explored to learn variable-wise hidden states, with the aim of capturing the different dynamics in multivariate time series and distinguishing each variable's contribution to the prediction.
Leveraging Clinical Time-Series Data for Prediction: A Cautionary Tale
This paper considers two clinical prediction tasks, in-hospital mortality and hypokalemia, and demonstrates the necessity of evaluating models using an outcome-independent reference point, since choosing the time of prediction relative to the event can result in unrealistically high performance.
Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission
This work presents two case studies where high-performance generalized additive models with pairwise interactions (GA2Ms) are applied to real healthcare problems yielding intelligible models with state-of-the-art accuracy.
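The GA2M idea is easy to show concretely: the prediction is a sum of univariate shape functions plus a small number of pairwise interaction terms, each low-dimensional enough to plot and inspect. The shape functions, features, and thresholds below are hypothetical toy examples, not the models from the paper.

```python
# Hypothetical shape functions for a toy GA2M-style risk score.
def f_age(age):
    return 0.03 * (age - 50)            # risk rises linearly with age

def f_bp(bp):
    return 0.01 * max(bp - 120, 0)      # penalty above 120 mmHg

def f_age_bp(age, bp):                  # one pairwise interaction term
    return 0.005 * (age > 65) * max(bp - 140, 0)

def ga2m_risk(age, bp):
    """Prediction = sum of univariate terms + pairwise terms; every
    term is a low-dimensional function a clinician can inspect."""
    terms = {"f(age)": f_age(age), "f(bp)": f_bp(bp),
             "f(age,bp)": f_age_bp(age, bp)}
    return sum(terms.values()), terms

score, terms = ga2m_risk(age=70, bp=150)
# score == 0.6 + 0.3 + 0.05 == 0.95, and each term explains its share
```

Restricting interactions to pairs keeps every component visualizable as a curve or a heatmap, which is the source of the "intelligibility" the entry refers to.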
Interpretability of deep learning models: A survey of results
  • Supriyo Chakraborty, Richard J. Tomsett, and 12 other authors, with Prudhvi K. Gurram
  • Computer Science
    2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI)
Some of the dimensions useful for model interpretability are outlined, and prior work along those dimensions is categorized, as part of a gap analysis of what needs to be done to improve model interpretability.
Explaining an increase in predicted risk for clinical alerts
Methods are developed to lift static attribution techniques to the dynamical setting, identifying and addressing challenges specific to dynamics.
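One way to picture lifting a static attribution to the dynamic setting is to attribute the *change* in predicted risk between two observation times to per-feature changes. The sketch below uses a linear scorer as a stand-in for a real clinical model (for a linear model the decomposition is exact); the feature names and weights are hypothetical.

```python
# Hypothetical feature weights for a linear risk scorer.
WEIGHTS = {"heart_rate": 0.002, "lactate": 0.30, "creatinine": 0.15}

def risk(obs):
    """Predicted risk for one observation (dict of feature values)."""
    return sum(WEIGHTS[k] * v for k, v in obs.items())

def delta_attribution(obs_then, obs_now):
    """Per-feature contribution to the change in risk between two
    times. For a linear scorer the deltas sum exactly to the change
    in the prediction, explaining why an alert fired."""
    return {k: WEIGHTS[k] * (obs_now[k] - obs_then[k]) for k in WEIGHTS}

then = {"heart_rate": 80, "lactate": 1.0, "creatinine": 1.1}
now  = {"heart_rate": 95, "lactate": 2.4, "creatinine": 1.1}
deltas = delta_attribution(then, now)
assert abs(sum(deltas.values()) - (risk(now) - risk(then))) < 1e-9
```

For nonlinear models this exactness breaks down, which is one of the dynamics-specific challenges the paper's methods are designed to address.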
Patient specific predictions in the intensive care unit using a Bayesian ensemble
The proposed prediction model performs favourably on both the provided and hidden data sets (set A and set B), and has the potential to be used effectively for patient-specific predictions.
Development and Validation of a Deep Learning Algorithm for Mortality Prediction in Selecting Patients With Dementia for Earlier Palliative Care Interventions
Deep learning appears to show promising results in mortality risk stratification in patients with dementia.
Early Prediction of Sepsis From Clinical Data: The PhysioNet/Computing in Cardiology Challenge 2019
Diverse computational approaches predict the onset of sepsis several hours before clinical recognition, but generalizability to different hospital systems remains a challenge.
Assessment of a Deep Learning Model Based on Electronic Health Record Data to Forecast Clinical Outcomes in Patients With Rheumatoid Arthritis
  • 2019