Semi-supervised sequence tagging with bidirectional language models


Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context-sensitive representations is trained on relatively little labeled data. In this paper, we demonstrate a general semi-supervised approach for adding pre-trained context embeddings from bidirectional language models to NLP systems and apply it to sequence labeling tasks. We evaluate our model on two standard datasets for named entity recognition (NER) and chunking, and in both cases achieve state-of-the-art results, surpassing previous systems that use other forms of transfer or joint learning with additional labeled data and task-specific gazetteers.

DOI: 10.18653/v1/P17-1161
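The core mechanism the abstract describes is concatenating context embeddings from a pre-trained bidirectional language model with a tagger's usual token representations before the supervised tagging layers. A minimal sketch of that concatenation step, using random arrays as stand-ins for the pre-trained embeddings and hypothetical dimensions (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
seq_len, word_dim, lm_dim = 5, 4, 3

# Stand-in for the tagger's pre-trained word embeddings of one sentence.
word_embs = rng.normal(size=(seq_len, word_dim))

# Stand-ins for per-token hidden states from a pre-trained forward LM
# and a pre-trained backward LM (the "bidirectional language model").
fwd_lm = rng.normal(size=(seq_len, lm_dim))
bwd_lm = rng.normal(size=(seq_len, lm_dim))

# Form the biLM context embedding by concatenating the two directions,
# then append it to each token's word representation; the result is
# what the supervised sequence-tagging RNN would consume.
lm_embs = np.concatenate([fwd_lm, bwd_lm], axis=1)        # (seq_len, 2*lm_dim)
tagger_input = np.concatenate([word_embs, lm_embs], axis=1)

print(tagger_input.shape)  # (5, 10)
```

Because the language model is trained separately on unlabeled text, this augmentation adds context information without requiring any extra labeled data for the tagging task itself.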

