Publications
An Effective Transition-based Model for Discontinuous NER
TLDR
This work proposes a simple transition-based model with generic neural encoding for discontinuous NER that can effectively recognize discontinuous mentions without sacrificing accuracy on continuous mentions.
NNE: A Dataset for Nested Named Entity Recognition in English Newswire
TLDR
This work describes NNE—a fine-grained, nested named entity dataset over the full Wall Street Journal portion of the Penn Treebank, which comprises 279,795 mentions of 114 entity types with up to 6 layers of nesting.
An Analysis of Simple Data Augmentation for Named Entity Recognition
TLDR
It is shown that simple augmentation can boost performance for both recurrent and transformer-based models, especially for small training sets, through experiments on two data sets from the biomedical and materials science domains.
Using Similarity Measures to Select Pretraining Data for NER
TLDR
Three cost-effective measures to quantify different aspects of similarity between source pretraining and target task data are proposed and demonstrate that these measures are good predictors of the usefulness of pretrained models for Named Entity Recognition (NER) over 30 data pairs.
Automatic Diagnosis Coding of Radiology Reports: A Comparison of Deep Learning and Conventional Classification Methods
TLDR
This work investigates the applicability of deep learning to the autocoding of radiology reports using the International Classification of Diseases (ICD), and identifies optimal parameters for setting up a convolutional neural network for autocoding, with results comparable to those of conventional methods.
Recognizing Complex Entity Mentions: A Review and Future Directions
  • Xiang Dai
  • Computer Science, Philosophy
    ACL
  • 1 July 2018
TLDR
This work reviews existing methods that have been revised to tackle complex entity mentions, categorizing them as token-level and sentence-level approaches, identifies the research gap, and discusses some promising directions for future exploration.
Cost-effective Selection of Pretraining Data: A Case Study of Pretraining BERT on Social Media
TLDR
This work pretrains two models on tweets and forum text respectively, empirically demonstrates the effectiveness of these two resources, and investigates how similarity measures can be used to nominate in-domain pretraining data.
Shot Or Not: Comparison of NLP Approaches for Vaccination Behaviour Detection
TLDR
This work presents an ensemble of statistical classifiers with task-specific features derived using lexicons, language processing tools and word embeddings, together with an LSTM classifier with pre-trained language models, for the vaccination behaviour detection shared task.
Medication and Adverse Event Extraction from Noisy Text
TLDR
The problem of extracting mentions of medications and adverse drug events using sequence labelling and non-sequence labelling methods is investigated; the findings can guide studies in choosing between methods based on the complexity of the named entities involved, in particular in text mining for pharmacovigilance.
Extracting Family History Information From Electronic Health Records: Natural Language Processing Analysis
TLDR
The approach, which leverages a state-of-the-art named entity recognition model for disease mention detection coupled with a hybrid method for FM mention detection, achieved an effectiveness that was close to that of the top 3 systems participating in the 2019 N2C2 FH extraction challenge, with only the top system convincingly outperforming the approach in terms of precision.