• Corpus ID: 224818455

Explaining black-box text classifiers for disease-treatment information extraction

@article{Moradi2020ExplainingBT,
  title={Explaining black-box text classifiers for disease-treatment information extraction},
  author={Milad Moradi and Matthias Samwald},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.10873}
}
Deep neural networks and other intricate Artificial Intelligence (AI) models have reached high levels of accuracy on many biomedical natural language processing tasks. However, their applicability in real-world use cases may be limited due to their opaque inner workings and decision logic. A post-hoc explanation method can approximate the behavior of a black-box AI model by extracting relationships between feature values and outcomes. In this paper, we introduce a post-hoc explanation method that… 

Explaining Black-Box Models for Biomedical Text Classification

TLDR
Results of evaluations on various biomedical text classification tasks and black-box models demonstrated that BioCIE can outperform perturbation-based and decision set methods in terms of producing concise, accurate, and interpretable explanations.

References

Showing 1–10 of 16 references

Post-hoc explanation of black-box classifiers using confident itemsets

Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives

TLDR
The results indicate that CNNs are a valid alternative to existing approaches in patient phenotyping and cohort identification and should be further investigated. The deep learning approach presented in this paper can be used to assist clinicians during chart review or to support the extraction of billing codes from text by identifying and highlighting relevant phrases for various medical conditions.

An Interpretable Classification Framework for Information Extraction from Online Healthcare Forums

TLDR
An effective and interpretable OHF post classification framework is proposed that classifies sentences into three classes: medication, symptom, and background, and each sentence is projected into an interpretable feature space consisting of labeled sequential patterns, UMLS semantic types, and other heuristic features.

BioBERT: a pre-trained biomedical language representation model for biomedical text mining

TLDR
This article introduces BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), a domain-specific language representation model pre-trained on large-scale biomedical corpora that largely outperforms BERT and previous state-of-the-art models on a variety of biomedical text mining tasks.

Deep learning in clinical natural language processing: a methodical review

TLDR
Deep learning has not yet fully penetrated clinical NLP but is growing rapidly, with increasing acceptance of deep learning as a baseline for NLP research and of DL-based NLP in the medical community.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
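The "interpretable model learned locally around the prediction" can be illustrated with a rough toy (a hypothetical sketch, not the reference LIME implementation): sample random perturbations of an instance, query the black box on each, then fit a simple interpretable surrogate. Here the "fit" is a crude one, where each token's weight is the average black-box score with the token present minus the average with it absent; the `black_box` function and its weights are invented for illustration.

```python
import random

def black_box(tokens):
    # Toy stand-in for an arbitrary classifier's scoring function.
    weights = {"treats": 0.5, "aspirin": 0.3}
    return sum(weights.get(t, 0.0) for t in tokens)

def local_surrogate(tokens, n_samples=500, seed=0):
    # Sample random token-level perturbations of the instance and
    # record the black-box score for each perturbed version.
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in tokens]
        kept = [t for t, m in zip(tokens, mask) if m]
        samples.append((mask, black_box(kept)))
    # Crude local fit: a token's weight is the mean score difference
    # between samples that keep it and samples that drop it.
    surrogate = {}
    for i, tok in enumerate(tokens):
        on = [y for m, y in samples if m[i]]
        off = [y for m, y in samples if not m[i]]
        surrogate[tok] = sum(on) / len(on) - sum(off) / len(off)
    return surrogate

print(local_surrogate("aspirin treats headache in patients".split()))
```

With enough samples, the surrogate weights for "treats" and "aspirin" converge near the hidden model weights, while irrelevant tokens stay near zero, which is the locally-faithful behavior LIME aims for.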

A Survey of Methods for Explaining Black Box Models

TLDR
A classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black-box system is provided, helping researchers find the proposals most useful for their own work.

Classifying Semantic Relations in Bioscience Texts

TLDR
This work examines the problem of distinguishing among seven relation types that can occur between the entities "treatment" and "disease" in bioscience text, and finds that the latter help achieve high classification accuracy.