Domain Adaptation for Named Entity Recognition in Online Media with Word Embeddings


Content on the Internet is heterogeneous and arises from various domains such as News, Entertainment, Finance, and Technology. Understanding such content requires identifying named entities (persons, places, and organizations) as one of the key steps. Traditionally, Named Entity Recognition (NER) systems have been built using available annotated datasets (such as CoNLL and MUC) and demonstrate excellent performance. However, these models fail to generalize to other domains, such as Sports and Finance, where conventions and language use can differ significantly. Furthermore, several domains lack the large amounts of annotated data needed to train robust NER models. A key step towards meeting this challenge is to adapt models learned on domains where large amounts of annotated training data are available to domains where annotated data is scarce. In this paper, we propose methods to effectively adapt models learned on one domain to other domains using distributed word representations. First, we analyze the linguistic variation present across domains to identify key linguistic insights that can boost performance across domains. We then propose methods to capture domain-specific semantics of word usage in addition to global semantics. Finally, we demonstrate how to effectively use such domain-specific knowledge to learn NER models that outperform previous baselines in the domain adaptation setting.

∗This work was done when the author was a research intern at Yahoo.
∗© 2016. This is the authors' draft of the work. It is posted here for your personal use. Not for redistribution.
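The abstract's idea of combining domain-specific semantics with global semantics can be illustrated with a minimal, hypothetical sketch: per-token feature vectors built by concatenating a global embedding with a domain-specific embedding. The toy vectors, vocabularies, and the `concat_features` helper below are illustrative stand-ins, not the paper's actual method; a real system would train the embeddings on large corpora (e.g. with word2vec) and feed the features to a sequence tagger.

```python
# Hypothetical sketch: combining global and domain-specific word
# embeddings as NER features via vector concatenation. All names and
# vectors here are toy stand-ins for illustration only.

def concat_features(token, global_emb, domain_emb, dim=3):
    """Concatenate global and domain-specific vectors for a token.

    Out-of-vocabulary tokens fall back to a zero vector, so every
    token yields a feature vector of fixed length 2 * dim.
    """
    zero = [0.0] * dim
    return global_emb.get(token, zero) + domain_emb.get(token, zero)

# Toy embeddings: "apple" leans towards the fruit sense globally but
# towards the company sense in a Technology-domain corpus; the
# concatenated features preserve both.
GLOBAL = {"apple": [0.9, 0.1, 0.0], "bank": [0.2, 0.8, 0.1]}
TECH = {"apple": [0.0, 0.1, 0.9]}

features = [concat_features(t, GLOBAL, TECH) for t in ["apple", "bank"]]
print(features[0])  # → [0.9, 0.1, 0.0, 0.0, 0.1, 0.9]
print(len(features[0]), len(features[1]))  # → 6 6
```

One design choice worth noting: concatenation keeps the global and domain signals in separate dimensions, letting a downstream tagger weight them independently, whereas averaging the two vectors would blend the senses together.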

10 Figures and Tables

Cite this paper

@article{Kulkarni2016DomainAF,
  title   = {Domain Adaptation for Named Entity Recognition in Online Media with Word Embeddings},
  author  = {Vivek Kulkarni and Yashar Mehdad and Troy Chevalier},
  journal = {CoRR},
  volume  = {abs/1612.00148},
  year    = {2016}
}