Word Sense Disambiguation with LSTM: Do We Really Need 100 Billion Words?

@article{Le2017WordSD,
  title={Word Sense Disambiguation with LSTM: Do We Really Need 100 Billion Words?},
  author={Minh Le and Marten Postma and Jacopo Urbani},
  journal={CoRR},
  year={2017},
  volume={abs/1712.03376}
}
Recently, Yuan et al. (2016) have shown the effectiveness of using Long Short-Term Memory (LSTM) networks for Word Sense Disambiguation (WSD). Their proposed technique outperformed the previous state-of-the-art on several benchmarks, but neither the training data nor the source code was released. This paper presents the results of a reproduction study of this technique using only openly available datasets (GigaWord, SemCor, OMSTI) and software (TensorFlow). From them, it emerged that…
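
For readers unfamiliar with the approach being reproduced, Yuan et al.'s technique is commonly summarized as follows: an LSTM language model is trained on a large unlabeled corpus to produce a context vector for each occurrence of a target word; the context vectors of sense-annotated occurrences (e.g. from SemCor or OMSTI) are averaged into sense embeddings; and a new occurrence is assigned the sense whose embedding is nearest to its context vector. The sketch below illustrates only that final nearest-neighbor step, assuming the context vectors have already been extracted from an LSTM; the function names and data layout are hypothetical and are not the paper's code (which, as the abstract notes, was never released).

import numpy as np

def cosine(a, b):
    # Cosine similarity between two context vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_sense_embeddings(annotated_contexts):
    # annotated_contexts: dict mapping a sense id to the list of LSTM
    # context vectors of its sense-tagged occurrences (e.g. from SemCor).
    # Each sense embedding is the mean of those context vectors.
    return {sense: np.mean(vectors, axis=0)
            for sense, vectors in annotated_contexts.items()}

def disambiguate(context_vector, sense_embeddings):
    # Assign the sense whose embedding is closest (by cosine similarity)
    # to the context vector of the occurrence being disambiguated.
    return max(sense_embeddings,
               key=lambda s: cosine(context_vector, sense_embeddings[s]))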