Corpus ID: 18453589

Learning Distributed Representations of Phrases

@inproceedings{Lopyrev2014LearningDR,
  title={Learning Distributed Representations of Phrases},
  author={Konstantin Lopyrev},
  year={2014}
}
Recent work in Natural Language Processing has focused on learning distributed representations of words, phrases, sentences, paragraphs and even whole documents. In such representations, text is represented using multi-dimensional vectors and similarity between pieces of text can be measured using similarity between such vectors. In this project I focus my attention on learning representations of phrases - sequences of two or more words that can function as a single unit in a sentence. 
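As a concrete illustration of the similarity measure mentioned in the abstract, the following minimal Python sketch (not taken from the paper; the vectors are toy values) computes cosine similarity between hypothetical phrase vectors:

import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors; values near 1.0 mean very similar.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical low-dimensional phrase vectors; real models use hundreds of dimensions.
phrase_vectors = {
    "new york":    np.array([0.9, 0.1, 0.3, 0.0]),
    "los angeles": np.array([0.8, 0.2, 0.4, 0.1]),
    "ice cream":   np.array([0.0, 0.9, 0.1, 0.7]),
}

print(cosine_similarity(phrase_vectors["new york"], phrase_vectors["los angeles"]))  # high
print(cosine_similarity(phrase_vectors["new york"], phrase_vectors["ice cream"]))    # lower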

Citations

Russian Named Entities Recognition and Classification Using Distributed Word and Phrase Representations
TLDR
It is shown that a word's or an expression's context vector is an efficient feature for predicting the type of a named entity, and that equivalent variants of a named entity can be extracted.
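As an illustration of the idea in that summary, here is a hedged sketch (the window size, embeddings, and downstream classifier are assumptions, not the paper's exact setup) of building a context vector for a mention by averaging the embeddings of its surrounding words:

import numpy as np

def context_vector(tokens, start, end, embeddings, window=3, dim=100):
    # Average the embeddings of up to `window` tokens on each side of the
    # mention tokens[start:end]; the result is used as a classifier feature.
    left = tokens[max(0, start - window):start]
    right = tokens[end:end + window]
    vectors = [embeddings[w] for w in left + right if w in embeddings]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

# The resulting feature vector can then be fed to any standard classifier
# (e.g. logistic regression) to predict the entity type.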
Learning to Rank for Coordination Detection
TLDR
A novel model based on the long short-term memory network is developed that formulates the detection of coordinations as a ranking problem, exploiting the differences between training examples to remedy the problem of data imbalance.
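The following PyTorch sketch is only an assumed illustration of that ranking formulation, not the paper's actual architecture: an LSTM scores candidate coordination spans, and a pairwise margin loss pushes correct candidates above incorrect ones.

import torch
import torch.nn as nn

class CandidateScorer(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        _, (h, _) = self.lstm(self.emb(token_ids))
        return self.score(h[-1]).squeeze(-1)       # one scalar score per candidate

model = CandidateScorer(vocab_size=1000)
loss_fn = nn.MarginRankingLoss(margin=1.0)
pos = model(torch.randint(0, 1000, (4, 10)))       # toy "correct" candidates
neg = model(torch.randint(0, 1000, (4, 10)))       # toy "incorrect" candidates
loss = loss_fn(pos, neg, torch.ones_like(pos))     # target 1: pos should outrank neg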

References

Efficient Estimation of Word Representations in Vector Space
TLDR
Two novel model architectures for computing continuous vector representations of words from very large data sets are proposed, and it is shown that these vectors provide state-of-the-art performance on the authors' test set for measuring syntactic and semantic word similarities.
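A short usage sketch of training such word vectors with the gensim library (assuming gensim 4.x, where the dimensionality argument is named vector_size; earlier versions call it size):

from gensim.models import Word2Vec

# Toy corpus: in practice this would be millions of tokenized sentences.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
]

# sg=1 selects the skip-gram architecture; sg=0 would select CBOW.
model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, sg=1)

vector = model.wv["cat"]              # the learned 50-dimensional vector for "cat"
print(model.wv.most_similar("cat"))   # nearest neighbours by cosine similarity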
Distributed Representations of Words and Phrases and their Compositionality
TLDR
This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
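The phrase-finding method summarized there scores each bigram by how much more often the two words occur together than apart, score = (count(ab) - delta) / (count(a) * count(b)); the sketch below follows that scoring scheme, with the discount delta and the threshold treated as corpus-dependent tuning choices.

from collections import Counter

def find_phrases(sentences, delta=5, threshold=1e-4):
    # Count unigrams and adjacent word pairs over the tokenized corpus.
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        unigrams.update(sent)
        bigrams.update(zip(sent, sent[1:]))
    phrases = set()
    for (a, b), n_ab in bigrams.items():
        # High scores mean a and b co-occur far more often than chance;
        # delta discounts very infrequent pairs.
        score = (n_ab - delta) / (unigrams[a] * unigrams[b])
        if score > threshold:
            phrases.add((a, b))
    return phrases

# Detected pairs such as ("new", "york") can then be rewritten as a single token
# "new_york" before training vectors, and the pass can be repeated for longer phrases.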
GloVe: Global Vectors for Word Representation
TLDR
A new global log-bilinear regression model is proposed that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
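For reference, the weighted least-squares objective that this log-bilinear model minimizes, in the paper's notation (X_{ij} is the word-word co-occurrence count):

J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^\top \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2,
\qquad
f(x) = \begin{cases} (x / x_{\max})^{\alpha} & x < x_{\max} \\ 1 & \text{otherwise} \end{cases}

The paper reports alpha = 3/4 working well in practice.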
Better Word Representations with Recursive Neural Networks for Morphology
TLDR
This paper combines recursive neural networks, where each morpheme is a basic unit, with neural language models to consider contextual information in learning morphologically aware word representations, and proposes a novel model capable of building representations for morphologically complex words from their morphemes.
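A minimal numpy sketch of the general recipe summarized there (an assumed illustration, not the paper's exact model): morpheme vectors are combined bottom-up by a single learned composition layer to build a vector for the whole word.

import numpy as np

rng = np.random.default_rng(0)
dim = 50
W = rng.normal(scale=0.1, size=(dim, 2 * dim))   # composition weights (learned in practice)
b = np.zeros(dim)

def compose(morpheme_vectors):
    # Fold morpheme vectors left to right: parent = tanh(W [parent; child] + b).
    parent = morpheme_vectors[0]
    for child in morpheme_vectors[1:]:
        parent = np.tanh(W @ np.concatenate([parent, child]) + b)
    return parent

# "unfortunately" ~ un + fortunate + ly, here with toy random morpheme vectors.
morphemes = [rng.normal(size=dim) for _ in range(3)]
word_vector = compose(morphemes)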
Composition in Distributional Models of Semantics
TLDR
This article proposes a framework for representing the meaning of word combinations in vector space in terms of additive and multiplicative functions, and introduces a wide range of composition models that are evaluated empirically on a phrase similarity task.
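The two simplest members of that family of composition models are the additive and the component-wise multiplicative functions; a toy numpy illustration (the vectors are made-up values):

import numpy as np

u = np.array([0.2, 0.7, 0.1])   # toy vector for the first word
v = np.array([0.5, 0.3, 0.4])   # toy vector for the second word

additive = u + v         # additive model: p = u + v
multiplicative = u * v   # multiplicative model: p = u * v, element-wise
print(additive, multiplicative)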