Context-Specific and Multi-Prototype Character Representations

@inproceedings{Zheng2016ContextSpecificAM,
  title={Context-Specific and Multi-Prototype Character Representations},
  author={Xiaoqing Zheng and Jiangtao Feng and Mengxiao Lin and Wenqiang Zhang},
  booktitle={IJCAI},
  year={2016}
}
Unsupervised word representations have demonstrated improvements in predictive generalization on various NLP tasks. Much effort has been devoted to learning word embeddings effectively, but little attention has been given to distributed character representations, although such character-level representations could be very useful for a variety of NLP applications in intrinsically "character-based" languages (e.g., Chinese and Japanese). On the other hand, most existing models create a single…
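Although the abstract is truncated, the core idea it points to, keeping several prototype vectors per character and selecting among them by context, can be made concrete with a small sketch. The Python/NumPy snippet below is an illustrative assumption in the spirit of the multi-sense embedding method of Neelakantan et al. (2014), which this paper builds on; the toy vocabulary, the fixed number of prototypes per character, and the nearest-cluster selection rule are all hypothetical and do not reproduce the authors' actual model.

import numpy as np

DIM, K = 50, 3  # embedding dimension, prototypes per character (assumed)
rng = np.random.default_rng(0)

# Toy character vocabulary; each character keeps K prototype vectors plus
# K context-cluster centers used to pick a prototype at lookup time.
vocab = ["中", "国", "人"]
prototypes = {c: rng.normal(size=(K, DIM)) for c in vocab}
centers = {c: rng.normal(size=(K, DIM)) for c in vocab}

def context_vector(context_chars):
    # Crude context summary: average the first prototype of each neighbor.
    # Real models would use sense-weighted or predicted context vectors.
    if not context_chars:
        return np.zeros(DIM)
    return np.mean([prototypes[c][0] for c in context_chars], axis=0)

def lookup(char, context_chars):
    # Return the prototype of `char` whose cluster center is most similar
    # (by cosine) to the context vector, i.e. a context-specific embedding.
    v = context_vector(context_chars)
    sims = centers[char] @ v / (
        np.linalg.norm(centers[char], axis=1) * np.linalg.norm(v) + 1e-8)
    return prototypes[char][int(np.argmax(sims))]

# The embedding chosen for "国" now depends on its neighbors:
print(lookup("国", ["中", "人"]).shape)  # (50,)

In training, the cluster centers would be updated from observed contexts (e.g., by online k-means over context vectors) so that each prototype specializes to one usage of the character; the sketch above shows only the lookup side.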
