Learning Term Embeddings for Hypernymy Identification

@inproceedings{Yu2015LearningTE,
  title={Learning Term Embeddings for Hypernymy Identification},
  author={Zheng Yu and Haixun Wang and Xuemin Lin and Min Wang},
  booktitle={IJCAI},
  year={2015}
}
Hypernymy identification aims at detecting whether an isA relationship holds between two words or phrases. Most previous methods are based on lexical patterns or the Distributional Inclusion Hypothesis, and the accuracy of such methods is not ideal. In this paper, we propose a simple yet effective supervision framework to identify hypernymy relations using distributed term representations (a.k.a. term embeddings). First, we design a distance-margin neural network to learn term embeddings based on some…
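The abstract describes a distance-margin neural network that learns term embeddings from pre-extracted hypernymy pairs. The paper's exact architecture is not reproduced on this page; the sketch below only illustrates the general idea of a margin-based embedding objective over (hyponym, hypernym) pairs with corrupted negatives. The class name, dimensions, and sampling scheme are illustrative assumptions, not the authors' formulation.

# Minimal sketch of a distance-margin embedding objective (assumed setup,
# not the paper's exact model): true (hyponym, hypernym) pairs should be
# closer in embedding space than corrupted pairs by at least a margin.
import torch
import torch.nn as nn

class DistanceMarginModel(nn.Module):
    def __init__(self, vocab_size, dim=100, margin=1.0):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)  # one vector per term
        self.margin = margin

    def forward(self, hypo, hyper, neg_hyper):
        # Euclidean distances to the true and to the corrupted hypernym
        d_pos = torch.norm(self.emb(hypo) - self.emb(hyper), dim=-1)
        d_neg = torch.norm(self.emb(hypo) - self.emb(neg_hyper), dim=-1)
        # Hinge loss: penalize when the true pair is not closer by the margin
        return torch.clamp(self.margin + d_pos - d_neg, min=0).mean()

# Usage sketch: integer term ids for positive pairs plus randomly corrupted
# hypernyms; minimize the loss with any standard optimizer (e.g. Adam).
model = DistanceMarginModel(vocab_size=10000)
hypo = torch.randint(0, 10000, (32,))
hyper = torch.randint(0, 10000, (32,))
neg = torch.randint(0, 10000, (32,))
loss = model(hypo, hyper, neg)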
This paper has 34 citations.


