Corpus ID: 30921994

Desiderata for Vector-Space Word Representations

@article{Derczynski2016DesiderataFV,
  title={Desiderata for Vector-Space Word Representations},
  author={Leon Derczynski},
  journal={ArXiv},
  year={2016},
  volume={abs/1608.02094}
}
A plethora of vector-space representations for words is currently available, and the range continues to grow. These consist of fixed-length vectors of real values, each of which represents a word. The result is a representation to which the power of many conventional information processing and data mining techniques can be applied, as long as the representations are designed with some forethought and fit certain constraints. This paper details desiderata for the design of vector space representations…
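As a concrete illustration of this setting, the minimal sketch below maps each word to a fixed-length vector of real values and applies an off-the-shelf numerical operation (cosine similarity via NumPy) directly to those vectors. The 4-dimensional toy vectors are invented placeholders, not values from the paper.

```python
# Minimal sketch: words as fixed-length real-valued vectors, processed with
# conventional numerical tooling. The toy vectors are illustrative only.
import numpy as np

embeddings = {
    "king":  np.array([0.62, 0.10, -0.41, 0.33]),
    "queen": np.array([0.58, 0.15, -0.39, 0.30]),
    "apple": np.array([-0.20, 0.71, 0.05, -0.12]),
}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Standard cosine similarity between two word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine(embeddings["king"], embeddings["apple"]))  # low: unrelated words
```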

References

Evaluation of Word Vector Representations by Subspace Alignment
TLDR: QVEC is presented, a computationally inexpensive intrinsic evaluation measure of the quality of word embeddings based on alignment to a matrix of features extracted from manually crafted lexical resources; it obtains strong correlation with the performance of the vectors in a battery of downstream semantic evaluation tasks.
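The alignment idea can be sketched roughly as follows: correlate each embedding dimension with each column of a matrix of hand-crafted lexical features, then score the embedding by the best total correlation achievable under a matching. This is a loose one-to-one approximation for illustration, not the exact QVEC definition (which uses a many-to-one alignment), and the matrices below are random stand-ins rather than real data.

```python
# Simplified alignment-based scoring in the spirit of QVEC (not the exact measure).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))   # word embeddings: 1000 words x 50 dimensions
S = rng.normal(size=(1000, 20))   # lexical-resource features: 1000 words x 20 columns

# Pearson correlation between every embedding dimension and every feature column.
Xc = (X - X.mean(0)) / X.std(0)
Sc = (S - S.mean(0)) / S.std(0)
corr = (Xc.T @ Sc) / X.shape[0]   # shape: (50, 20)

# Maximise total correlation under a one-to-one assignment (cost = -correlation).
rows, cols = linear_sum_assignment(-corr)
score = corr[rows, cols].sum()
print(f"alignment score: {score:.3f}")
```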
Distributed Representations of Words and Phrases and their Compositionality
TLDR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
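A hedged sketch of the negative-sampling idea: for a (centre, context) pair, the dot product of their vectors is pushed up while dot products with k randomly drawn "negative" words are pushed down. The tiny vocabulary, random vectors, and uniform negative sampling below are simplifications for illustration only.

```python
# Skip-gram negative-sampling loss for a single training pair (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim, k = 100, 16, 5
W_in = rng.normal(scale=0.1, size=(vocab_size, dim))   # centre-word vectors
W_out = rng.normal(scale=0.1, size=(vocab_size, dim))  # context-word vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_loss(centre: int, context: int) -> float:
    """Negative-sampling loss for one (centre, context) pair with k negatives."""
    negatives = rng.integers(0, vocab_size, size=k)  # uniform draw; the paper uses a unigram-based distribution
    pos = np.log(sigmoid(W_out[context] @ W_in[centre]))
    neg = np.log(sigmoid(-W_out[negatives] @ W_in[centre])).sum()
    return -(pos + neg)

print(sgns_loss(centre=3, context=7))
```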
Evaluation methods for unsupervised word embeddings
TLDR: A comprehensive study of evaluation methods for unsupervised embedding techniques that obtain meaningful representations of words from text; it calls into question the common assumption that there is one single optimal vector representation.
A Spectral Algorithm for Learning Class-Based n-gram Models of Natural Language
TLDR: A new algorithm for clustering under the Brown et al. model is presented; it uses canonical correlation analysis to derive a low-dimensional representation of words, followed by bottom-up hierarchical clustering over these representations, and is an order of magnitude more efficient.
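The two-stage structure can be illustrated with a simplified stand-in: derive a low-dimensional representation of words from co-occurrence statistics (plain truncated SVD here, in place of the canonical correlation analysis step), then run bottom-up hierarchical clustering over those representations. The toy corpus and parameters are invented for illustration.

```python
# Simplified two-stage sketch: low-dimensional word representations, then
# bottom-up hierarchical clustering (SVD stands in for the CCA step).
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["the cat sat on the mat", "the dog sat on the rug",
          "stocks fell on monday", "stocks rose on friday"]

# Word-by-document counts as a crude stand-in for word-context co-occurrences.
vec = CountVectorizer()
counts = vec.fit_transform(corpus)                    # documents x words
word_doc = counts.T.toarray().astype(float)           # words x documents

low_dim = TruncatedSVD(n_components=3, random_state=0).fit_transform(word_doc)
labels = AgglomerativeClustering(n_clusters=4).fit_predict(low_dim)

for word, label in zip(vec.get_feature_names_out(), labels):
    print(word, label)
```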
Generalised Brown Clustering and Roll-Up Feature Generation
TLDR: A subtle but profound generalisation of Brown clustering is presented; it improves overall quality by decoupling the number of output classes from the computational active set size, and permits a novel approach to feature selection from Brown clusters.
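Roll-up feature generation can be sketched as follows: each word's position in the hierarchical (Brown-style) merge tree is a bit string, and prefixes of several lengths are emitted as features at different granularities. The bit string and prefix lengths below are illustrative placeholders, not values from the paper.

```python
# Roll-up features from a Brown-style cluster path (illustrative sketch).
def rollup_features(bitstring: str, prefix_lengths=(2, 4, 6)) -> list[str]:
    """Emit cluster-path prefixes of several lengths as string features."""
    return [f"brown_{n}={bitstring[:n]}" for n in prefix_lengths if len(bitstring) >= n]

print(rollup_features("110100"))  # ['brown_2=11', 'brown_4=1101', 'brown_6=110100']
```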
David Mimno and Thorsten Joachims. 2015.
Guillaume Lample and Chris Dyer. 2015.
Greg S. Corrado and Jeff Dean. 2013.