Corpus ID: 208076245

Modelling EHR timeseries by restricting feature interaction

@article{Zhang2019ModellingET,
  title={Modelling EHR timeseries by restricting feature interaction},
  author={Kun Zhang and Yuan Xue and Gerardo Flores and Alvin Rajkomar and Claire Cui and Andrew M. Dai},
  journal={ArXiv},
  year={2019},
  volume={abs/1911.06410}
}
Time series data are prevalent in electronic health records, mostly in the form of physiological parameters such as vital signs and lab tests. The patterns of these values may be significant indicators of patients' clinical states, and there might be patterns that are unknown to clinicians but are highly predictive of some outcomes. Many of these values are also missing, which makes it difficult to apply existing methods like decision trees. We propose a recurrent neural network model that… 
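The truncated abstract does not say how feature interaction is restricted, so the following is only a minimal sketch of the generic setup it describes, assuming Keras-style layer names: multivariate vital-sign/lab time series with an explicit missingness mask fed to a recurrent classifier. It is not the paper's actual architecture.

import tensorflow as tf

num_steps, num_features = 48, 20  # e.g. 48 hourly steps, 20 vital/lab channels (illustrative)
values = tf.keras.Input(shape=(num_steps, num_features))   # observed values, zero-filled where missing
mask = tf.keras.Input(shape=(num_steps, num_features))     # 1 = observed, 0 = missing

x = tf.keras.layers.Concatenate()([values, mask])           # let the RNN see what was actually measured
h = tf.keras.layers.LSTM(64)(x)
risk = tf.keras.layers.Dense(1, activation="sigmoid")(h)    # e.g. a binary outcome such as mortality

model = tf.keras.Model(inputs=[values, mask], outputs=risk)
model.compile(optimizer="adam", loss="binary_crossentropy")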
Interpretable Additive Recurrent Neural Networks For Multivariate Clinical Time Series
TLDR
The Interpretable-RNN (I-RNN), which balances model complexity and accuracy by forcing the relationships between variables in the model to be additive, is presented; experimental results on real-world clinical datasets refute the myth that there is a tradeoff between accuracy and interpretability.
Identification of pediatric respiratory diseases using a fine-grained diagnosis system
TLDR
A pediatric fine-grained diagnosis-assistant system is proposed to provide prompt and precise diagnosis using solely clinical notes upon admission, which would assist clinicians without changing the diagnostic process.
Preparing a Clinical Support Model for Silent Mode in General Internal Medicine
TLDR
The aim is to eliminate unexpected deaths in the general internal medicine (GIM) ward, promptly transfer patients who require escalated care to the intensive care unit, and proactively respond to patients who need intensive care.
LIFE: Learning Individual Features for Multivariate Time Series Prediction with Missing Values
TLDR
This paper proposes a Learning Individual Features (LIFE) framework, which provides a new paradigm for multivariate time series (MTS) prediction with missing values, and generates reliable features for prediction by using the correlated dimensions as auxiliary information and suppressing the interference from uncorrelated dimensions with missing values.

References

Scalable and accurate deep learning with electronic health records
TLDR
A representation of patients’ entire raw EHR records based on the Fast Healthcare Interoperability Resources (FHIR) format is proposed, and it is demonstrated that deep learning methods using this representation are capable of accurately predicting multiple medical events from multiple centers without site-specific data harmonization.
Recurrent Neural Networks for Multivariate Time Series with Missing Values
TLDR
Novel deep learning models are developed based on Gated Recurrent Unit, a state-of-the-art recurrent neural network that takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results.
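As a concrete illustration of the two representations this summary mentions, here is a small sketch (assumed shapes and names, not the authors' reference code) that derives the binary mask and the time interval since each variable was last observed from a raw time series containing NaNs:

import numpy as np

def mask_and_interval(x, timestamps):
    """x: (T, D) array with NaN where a variable is missing; timestamps: (T,) observation times."""
    mask = (~np.isnan(x)).astype(float)          # mask[t, d] = 1 if x[t, d] was observed
    delta = np.zeros_like(mask)                  # time since each variable was last observed
    for t in range(1, x.shape[0]):
        gap = timestamps[t] - timestamps[t - 1]
        # restart the interval where the previous value was observed, otherwise keep accumulating
        delta[t] = np.where(mask[t - 1] == 1, gap, gap + delta[t - 1])
    return np.nan_to_num(x), mask, delta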
MIMIC-III, a freely accessible critical care database
MIMIC-III (‘Medical Information Mart for Intensive Care’) is a large, single-center database comprising information relating to patients admitted to critical care units at a large tertiary care hospital.
Acute Physiology and Chronic Health Evaluation (APACHE) IV: Hospital mortality assessment for today’s critically ill patients*
TLDR
APACHE IV predictions of hospital mortality have good discrimination and calibration and should be useful for benchmarking performance in U.S. ICUs; aggregate mortality was systematically overestimated as model age increased.
Missing data: our view of the state of the art.
TLDR
Two general approaches that come highly recommended, maximum likelihood (ML) and Bayesian multiple imputation (MI), are presented, along with newer developments that may eventually extend the ML and MI methods that currently represent the state of the art.
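For readers unfamiliar with multiple imputation, the sketch below shows one generic way to approximate MI in Python: chained-equation imputation run several times with posterior sampling, producing several completed datasets whose downstream results are then pooled. This is an illustrative example, not the procedure from the cited review.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (enables IterativeImputer)
from sklearn.impute import IterativeImputer

x = np.array([[1.0, 2.0], [np.nan, 3.0], [4.0, np.nan], [5.0, 6.0]])
imputed_datasets = [
    IterativeImputer(sample_posterior=True, random_state=seed).fit_transform(x)
    for seed in range(5)                         # five imputations; analyses are run on each and pooled
]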
Adam: A Method for Stochastic Optimization
TLDR
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
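The update the summary describes fits in a few lines; this is a minimal NumPy sketch of a single Adam step (standard default hyperparameters, illustrative function name):

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad           # estimate of the first moment (mean) of the gradient
    v = beta2 * v + (1 - beta2) * grad ** 2      # estimate of the second (uncentered) moment
    m_hat = m / (1 - beta1 ** t)                 # bias correction for zero initialization
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v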
Long Short-Term Memory
TLDR
A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
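To make the "constant error carousel" concrete, here is a compact sketch of one LSTM step in NumPy (gate stacking order and weight shapes are illustrative assumptions): the cell state is updated additively, which is what lets gradients survive long time lags.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # W: (4H, D), U: (4H, H), b: (4H,), stacked for the input, forget, output and candidate gates
    z = W @ x + U @ h + b
    H = h.shape[0]
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2 * H]), sigmoid(z[2 * H:3 * H])
    g = np.tanh(z[3 * H:])
    c_new = f * c + i * g          # additive cell-state update: the constant error carousel
    h_new = o * np.tanh(c_new)
    return h_new, c_new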
Dropout: a simple way to prevent neural networks from overfitting
TLDR
It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
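The mechanism is simple enough to show directly; below is a minimal sketch of inverted dropout (illustrative code, not the authors' implementation): units are zeroed at random during training and the survivors are rescaled so expected activations match test time.

import numpy as np

def dropout(activations, rate=0.5, training=True, rng=None):
    if not training or rate == 0.0:
        return activations                        # identity at test time
    rng = np.random.default_rng() if rng is None else rng
    keep = (rng.random(activations.shape) >= rate).astype(activations.dtype)
    return activations * keep / (1.0 - rate)      # rescale so the expectation is unchanged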
A Theoretically Grounded Application of Dropout in Recurrent Neural Networks
TLDR
This work applies a new variational inference based dropout technique in LSTM and GRU models, which outperforms existing techniques, and to the best of the knowledge improves on the single model state-of-the-art in language modelling with the Penn Treebank.
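The core idea is that the dropout mask is sampled once per sequence and reused at every time step, rather than resampled per step. The sketch below illustrates this on the hidden state only; step_fn is a hypothetical user-supplied recurrent cell, and a full variational treatment also ties the input and recurrent masks.

import numpy as np

def rnn_with_tied_dropout(inputs, step_fn, h0, rate=0.25, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    mask = (rng.random(h0.shape) >= rate) / (1.0 - rate)   # sampled once per sequence
    h, outputs = h0, []
    for x_t in inputs:                                      # the same mask is applied at every step
        h = step_fn(x_t, h * mask)
        outputs.append(h)
    return outputs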
TensorFlow: A system for large-scale machine learning
TLDR
The TensorFlow dataflow model is described, and the compelling performance that TensorFlow achieves for several real-world applications is demonstrated.