Determining Gains Acquired from Word Embedding Quantitatively Using Discrete Distribution Clustering

Abstract

Word embeddings have become widely used in document analysis. While a large number of models for mapping words to vector spaces have been developed, it remains undetermined how much net gain can be achieved over traditional approaches based on bag-of-words. In this paper, we propose a new document clustering approach by combining any word embedding with a state-of-the-art algorithm for clustering empirical distributions. By using the Wasserstein distance between distributions, the word-to-word semantic relationship is taken into account in a principled way. The new clustering method is easy to use and consistently outperforms other methods on a variety of data sets. More importantly, the method provides an effective framework for determining when and how much word embeddings contribute to document analysis. Experimental results with multiple embedding models are reported.
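As an illustration of the core ingredient described above, the sketch below computes the Wasserstein (earth mover's) distance between two documents, each represented as a discrete distribution over word-embedding vectors, by solving the transport linear program with a generic solver. The function name, the randomly generated placeholder embeddings, and the uniform word weights are all assumptions for demonstration; this is not the authors' D2-clustering implementation.

# Minimal sketch: Wasserstein distance between two documents viewed as
# discrete distributions over word vectors. Embeddings here are random
# placeholders standing in for vectors from any word-embedding model.
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def wasserstein_distance(X, a, Y, b):
    """Earth mover's distance between (X, a) and (Y, b), where X and Y
    hold word vectors row-wise and a, b are the word weight vectors."""
    M = cdist(X, Y, metric="euclidean")      # pairwise ground costs
    m, n = M.shape
    A_eq, b_eq = [], []
    for i in range(m):                        # row sums of the plan equal a
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
        A_eq.append(row); b_eq.append(a[i])
    for j in range(n):                        # column sums of the plan equal b
        col = np.zeros(m * n); col[j::n] = 1
        A_eq.append(col); b_eq.append(b[j])
    res = linprog(M.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun

# Hypothetical 50-dimensional embeddings for two short documents with
# uniform word weights (each weight vector sums to one).
rng = np.random.default_rng(0)
doc1_vecs, doc1_wts = rng.normal(size=(4, 50)), np.full(4, 0.25)
doc2_vecs, doc2_wts = rng.normal(size=(5, 50)), np.full(5, 0.20)
print(wasserstein_distance(doc1_vecs, doc1_wts, doc2_vecs, doc2_wts))

In practice, the placeholder vectors would be replaced by embeddings from the chosen word-embedding model and the uniform weights by, for example, normalized word frequencies; clustering then operates on these pairwise distances.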

DOI: 10.18653/v1/P17-1169


Cite this paper

@inproceedings{Ye2017DeterminingGA,
  title     = {Determining Gains Acquired from Word Embedding Quantitatively Using Discrete Distribution Clustering},
  author    = {Jianbo Ye and Yanran Li and Zhaohui Wu and James Zijun Wang and Wenjie Li and Jia Li},
  booktitle = {ACL},
  year      = {2017}
}