Corpus ID: 227239431

Intrinsic analysis for dual word embedding space models

@article{Mayank2020IntrinsicAF,
  title={Intrinsic analysis for dual word embedding space models},
  author={Mohit Mayank},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.00728}
}
Recent word embedding techniques represent words in a continuous vector space, moving away from the atomic and sparse representations of the past. Each such technique can further create multiple varieties of embeddings based on different settings of hyper-parameters like embedding dimension size, context window size and training method. A further variety appears when we consider the Dual embedding space techniques, which generate not one but two word embeddings as output. This…
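
As a concrete illustration of the "two embeddings" point, the sketch below (a minimal Python/gensim example of mine, not code from the paper) trains a small skip-gram model with negative sampling and reads out both the input (IN/word) and output (OUT/context) embedding matrices.

    # Minimal sketch (not from the paper): a skip-gram model with negative
    # sampling learns two embedding matrices, one for words and one for contexts.
    from gensim.models import Word2Vec

    sentences = [["the", "cat", "sat", "on", "the", "mat"],
                 ["the", "dog", "sat", "on", "the", "rug"]]

    model = Word2Vec(sentences, vector_size=50, window=2, sg=1, negative=5,
                     min_count=1, epochs=50, seed=0)

    in_vectors = model.wv.vectors   # IN / word embeddings, shape (vocab, 50)
    out_vectors = model.syn1neg     # OUT / context embeddings, same shape
    print(in_vectors.shape, out_vectors.shape)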

Citations

DEAP-FAKED: Knowledge Graph based Approach for Fake News Detection

TLDR
The proposed DEAP-FAKED framework combines NLP and GNN techniques, encoding both the news content and the Knowledge Graph, and obtains F1-scores of 88% and 78% on the two datasets, which shows the effectiveness of the approach.

References

Showing 1-10 of 30 references

Word Embeddings through Hellinger PCA

TLDR
This work proposes to drastically simplify the computation of word embeddings through a Hellinger PCA of the word co-occurrence matrix and shows that it can provide an easy way to adapt embeddings to specific tasks.
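
A minimal sketch of the idea (my reconstruction on toy counts, not the authors' code): turn each row of a word-context co-occurrence matrix into a probability distribution, take the element-wise square root so that Euclidean distance between rows matches Hellinger distance, and reduce the result with a truncated SVD/PCA.

    # Sketch of Hellinger PCA embeddings on toy co-occurrence counts (illustrative).
    import numpy as np
    from sklearn.decomposition import TruncatedSVD

    rng = np.random.default_rng(0)
    counts = rng.integers(0, 10, size=(1000, 5000)).astype(float)  # toy word-context counts

    probs = counts / counts.sum(axis=1, keepdims=True)  # each row is P(context | word)
    hellinger = np.sqrt(probs)                          # sqrt maps L2 distance to Hellinger

    svd = TruncatedSVD(n_components=100, random_state=0)
    embeddings = svd.fit_transform(hellinger)           # (vocab, 100) word embeddings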

SWOW-8500: Word Association task for Intrinsic Evaluation of Word Embeddings

TLDR
A novel intrinsic evaluation task employing large word association datasets (particularly the Small World of Words dataset) is proposed, and correlations are reported not just between performance on SWOW-8500 and previously proposed intrinsic tasks of word similarity prediction, but also with downstream tasks (e.g., Text Classification and Natural Language Inference).
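
As a rough illustration of how such a task can score an embedding (toy cue-response data and scores of my own, not the SWOW-8500 set): correlate human association strength with the model's cue-response cosine similarity.

    # Illustrative word-association evaluation on toy data (not SWOW-8500).
    import numpy as np
    from scipy.stats import spearmanr

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # (cue, response, hypothetical human association strength)
    pairs = [("coffee", "cup", 0.41), ("coffee", "tea", 0.23), ("coffee", "bitter", 0.08)]

    rng = np.random.default_rng(0)
    words = {w for cue, resp, _ in pairs for w in (cue, resp)}
    vectors = {w: rng.normal(size=50) for w in words}    # stand-in for real embeddings

    human = [score for _, _, score in pairs]
    model = [cosine(vectors[cue], vectors[resp]) for cue, resp, _ in pairs]
    print(spearmanr(human, model).correlation)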

GloVe: Global Vectors for Word Representation

TLDR
A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
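
For reference, the weighted least-squares objective behind this model, as given in the GloVe paper, is:

    J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^\top \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2,
    \qquad
    f(x) = \begin{cases} (x / x_{\max})^{\alpha} & \text{if } x < x_{\max} \\ 1 & \text{otherwise} \end{cases}

where X_{ij} counts co-occurrences of words i and j, w and \tilde{w} are the two sets of word vectors, and b, \tilde{b} are bias terms; the paper uses x_max = 100 and alpha = 3/4.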

Querying Word Embeddings for Similarity and Relatedness

TLDR
The usefulness of context embeddings is demonstrated in predicting asymmetric association between words from a recently published dataset of production norms, and it is suggested that humans respond with words closer to the cue within the context embedding space (rather than the word embedding space) when asked to generate thematically related words.
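
A small sketch of the quantity being queried (my illustration, assuming a gensim skip-gram model with negative sampling such as the one in the earlier sketch): IN-IN cosine behaves like similarity, while IN-OUT cosine behaves more like (asymmetric) relatedness.

    # Sketch: IN-IN vs IN-OUT similarity for a cue/candidate pair (illustrative).
    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def in_in(model, cue, cand):
        return cosine(model.wv[cue], model.wv[cand])        # word-word: similarity-like

    def in_out(model, cue, cand):
        idx = model.wv.key_to_index[cand]
        return cosine(model.wv[cue], model.syn1neg[idx])    # word-context: relatedness-like

    # Note: in_out(model, "coffee", "cup") need not equal in_out(model, "cup", "coffee"),
    # which is what makes it a candidate for modelling asymmetric association norms.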

Enriching Word Vectors with Subword Information

TLDR
A new approach based on the skip-gram model, where each word is represented as a bag of character n-grams and the word vector is the sum of these n-gram representations, which achieves state-of-the-art performance on word similarity and analogy tasks.
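
The core idea can be sketched in a few lines (my illustration with made-up n-gram vectors, not the fastText implementation): split a word into character n-grams with boundary markers and take the word vector to be the sum of the n-gram vectors.

    # Illustrative sketch of the subword idea (not the actual fastText code).
    import numpy as np

    def char_ngrams(word, n_min=3, n_max=6):
        padded = f"<{word}>"                       # boundary markers, as in the paper
        return [padded[i:i + n] for n in range(n_min, n_max + 1)
                for i in range(len(padded) - n + 1)]

    rng = np.random.default_rng(0)
    ngram_vectors = {}                             # stand-in for learned n-gram embeddings

    def word_vector(word, dim=50):
        grams = char_ngrams(word) + [f"<{word}>"]  # n-grams plus the full word token
        for g in grams:
            ngram_vectors.setdefault(g, rng.normal(size=dim))
        return sum(ngram_vectors[g] for g in grams)

    print(char_ngrams("where")[:5])   # ['<wh', 'whe', 'her', 'ere', 're>']
    print(word_vector("where").shape)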

Improving Distributional Similarity with Lessons Learned from Word Embeddings

TLDR
It is revealed that much of the performance gain of word embeddings is due to certain system design choices and hyperparameter optimizations rather than to the embedding algorithms themselves, and that these modifications can be transferred to traditional distributional models, yielding similar gains.
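
Two of the transferable design choices discussed, context distribution smoothing and the negative-sampling shift, can be applied directly to a count matrix; the sketch below is my own illustration of shifted, smoothed PPMI rather than the authors' implementation.

    # Sketch: shifted PPMI with context distribution smoothing (illustrative).
    import numpy as np

    def shifted_ppmi(counts, alpha=0.75, k=5):
        """counts[i, j]: co-occurrence count of word i with context j."""
        total = counts.sum()
        p_w = counts.sum(axis=1, keepdims=True) / total
        p_c = counts.sum(axis=0, keepdims=True) ** alpha    # smoothed context distribution
        p_c = p_c / p_c.sum()
        p_wc = counts / total
        with np.errstate(divide="ignore"):
            pmi = np.log(p_wc / (p_w * p_c))
        pmi -= np.log(k)                                    # shift by log of #negatives
        return np.maximum(pmi, 0.0)                         # keep only the positive part

    counts = np.random.default_rng(0).integers(0, 5, size=(100, 200)).astype(float)
    sppmi = shifted_ppmi(counts)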

A Dual Embedding Space Model for Document Ranking

TLDR
The proposed Dual Embedding Space Model (DESM) captures evidence on whether a document is about a query term in addition to what is modelled by traditional term-frequency based approaches, and shows that the DESM can re-rank top documents returned by a commercial Web search engine, like Bing, better than a term-matching based signal like TF-IDF.
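
The ranking signal itself is compact; the sketch below is a reconstruction from the paper's description (not the authors' code) of the IN-OUT variant: average cosine similarity between each query term's IN vector and the normalized centroid of the document terms' OUT vectors.

    # Sketch of DESM (IN-OUT) scoring, reconstructed from the paper's description.
    import numpy as np

    def desm_in_out(query_in_vectors, doc_out_vectors):
        """query_in_vectors: IN vectors of query terms; doc_out_vectors: OUT vectors of doc terms."""
        doc = np.mean([v / np.linalg.norm(v) for v in doc_out_vectors], axis=0)
        doc = doc / np.linalg.norm(doc)                     # normalized document centroid
        sims = [(q @ doc) / np.linalg.norm(q) for q in query_in_vectors]
        return float(np.mean(sims))                         # average query-term cosine

    rng = np.random.default_rng(0)
    score = desm_in_out([rng.normal(size=50) for _ in range(3)],    # toy query of 3 terms
                        [rng.normal(size=50) for _ in range(40)])   # toy document of 40 terms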

Linguistic Regularities in Sparse and Explicit Word Representations

TLDR
It is demonstrated that analogy recovery is not restricted to neural word embeddings, and that a similar amount of relational similarities can be recovered from traditional distributional word representations.

Analogy-based detection of morphological and semantic relations with word embeddings: what works and what doesn’t.

TLDR
This study applies the widely used vector offset method to 4 types of linguistic relations: inflectional and derivational morphology, and lexicographic and encyclopedic semantics, and systematically examines how accuracy for different categories is affected by window size and dimensionality of the SVD-based word embeddings.
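
For concreteness, the vector offset (3CosAdd) rule in question can be sketched as follows (my illustration, assuming a dict mapping words to unit-normalized numpy vectors):

    # Sketch of the vector offset (3CosAdd) analogy method (illustrative).
    import numpy as np

    def analogy(vectors, a, b, c, topn=1):
        """Answer 'a is to b as c is to ?' over unit-normalized word vectors."""
        target = vectors[b] - vectors[a] + vectors[c]
        target = target / np.linalg.norm(target)
        scores = {w: float(v @ target) for w, v in vectors.items() if w not in (a, b, c)}
        return sorted(scores, key=scores.get, reverse=True)[:topn]

    # e.g. analogy(vectors, "man", "king", "woman") is expected to return ["queen"]
    # when the embedding captures the relevant regularity.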

word2vec Parameter Learning Explained

TLDR
Detailed derivations and explanations of the parameter update equations of the word2vec models are provided, covering the original continuous bag-of-words (CBOW) and skip-gram (SG) models as well as advanced optimization techniques, including hierarchical softmax and negative sampling.
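
As a pointer to what those derivations yield, here is a minimal sketch (my own, following the standard skip-gram negative-sampling updates rather than any code from the paper) of one training step for a (center, context) pair:

    # One skip-gram negative-sampling (SGNS) update step (illustrative sketch).
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sgns_step(W_in, W_out, center, context, negatives, lr=0.025):
        """W_in, W_out: (vocab, dim) IN and OUT matrices; center/context/negatives are indices."""
        v = W_in[center]
        grad_v = np.zeros_like(v)
        for idx, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
            u = W_out[idx]
            g = sigmoid(v @ u) - label      # gradient of the log-loss w.r.t. the score
            grad_v += g * u
            W_out[idx] = u - lr * g * v     # update the OUT (context) vector
        W_in[center] = v - lr * grad_v      # update the IN (word) vector

    rng = np.random.default_rng(0)
    W_in, W_out = rng.normal(size=(100, 50)), rng.normal(size=(100, 50))
    sgns_step(W_in, W_out, center=3, context=17, negatives=[5, 42, 7])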