Corpus ID: 9098191

Category Enhanced Word Embedding

@article{Zhou2015CategoryEW,
  title={Category Enhanced Word Embedding},
  author={Chunting Zhou and Chonglin Sun and Zhiyuan Liu and F. Lau},
  journal={ArXiv},
  year={2015},
  volume={abs/1511.08629}
}
Distributed word representations have been demonstrated to be effective in capturing semantic and syntactic regularities. Unsupervised representation learning from large unlabeled corpora can learn similar representations for words that exhibit similar co-occurrence statistics. Besides local co-occurrence statistics, global topical information is also important knowledge that may help discriminate one word from another. In this paper, we incorporate category information of documents in the…
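
The abstract is cut off before the model details, but the stated idea (pairing local co-occurrence learning with document-level category information) is naturally formalized as a joint objective. This is a hedged sketch only, since the paper's exact formulation is not shown here; λ and the category term are assumed notation:

```latex
% Hedged sketch: skip-gram likelihood plus a document-category term.
% \lambda and p(y_{d(t)} \mid w_t) are assumed notation, not the paper's own.
J = \sum_{t=1}^{T} \sum_{\substack{-c \le j \le c \\ j \ne 0}} \log p(w_{t+j} \mid w_t)
    \;+\; \lambda \sum_{t=1}^{T} \log p\big(y_{d(t)} \mid w_t\big)
```

Here y_{d(t)} denotes the category label of the document containing token w_t, so each word is trained to predict both its local context and the topical category of its document.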

Citations

Learning Word Representations for Sentiment Analysis

TLDR
It is shown that incorporating prior sentiment knowledge into the embedding process can yield better representations for sentiment analysis.

Sentiment-Aware Word Embedding for Emotion Classification

TLDR
This work proposes sentiment-aware word embeddings for emotion classification, integrating sentiment evidence into the emotional embedding component of a term vector.

Word Embedding for Understanding Natural Language: A Survey

TLDR
This survey introduces the motivation and background of word embedding, the methods of text representation as preliminaries, as well as some existing word embedding approaches such as the Neural Network Language Model and the Sparse Coding Approach, along with their evaluation metrics.

Word Embeddings, Analogies, and Machine Learning: Beyond king - man + woman = queen

TLDR
It is shown that simple averaging over multiple word pairs improves over the state-of-the-art, and a further improvement in accuracy is achieved by combining cosine similarity with an estimation of the extent to which a candidate answer belongs to the correct word class.
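
As a concrete illustration of the averaging idea, here is a minimal NumPy sketch; the embedding table is a placeholder, and the paper's exact scoring (including the word-class estimate) is not reproduced:

```python
import numpy as np

def solve_analogy_avg(emb, pairs, query):
    """Answer `query : ? = a : b` by averaging the (b - a) offset
    over several example pairs from the same relation.

    emb:   dict mapping word -> unit-normalized vector (placeholder)
    pairs: list of (a, b) pairs, e.g. [("man", "king"), ("woman", "queen")]
    query: source word whose analogical counterpart we want
    """
    # Average the relation offset over all example pairs.
    offset = np.mean([emb[b] - emb[a] for a, b in pairs], axis=0)
    target = emb[query] + offset
    target /= np.linalg.norm(target)
    # Rank all other words by cosine similarity to the shifted vector.
    scores = {w: float(v @ target) for w, v in emb.items() if w != query}
    return max(scores, key=scores.get)
```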

DPWord2Vec: Better Representation of Design Patterns in Semantics

TLDR
This study builds a corpus containing more than 400 thousand documents extracted from design pattern books, Wikipedia, and Stack Overflow, redefines the concept of a context window to associate design patterns with words, and proposes DPWord2Vec, which embeds design patterns and natural language words into vectors simultaneously.

Aspect-level text sentiment analysis method combining Bi-GRU and AlBERT

TLDR
The proposed ATAE-AlBERT-BiGRU model is significantly more accurate than the baseline models on the aspect-level sentiment analysis task, while also reducing computational load.

Climate Event Detection Algorithm Based on Climate Category Word Embedding

TLDR
This method combines climate-category word embeddings with typical factors of climate events in a document representation model that expresses detailed information in climate documents and detects climate events efficiently and accurately.

References

Showing 1-10 of 29 references

Improving Word Representations via Global Context and Multiple Word Prototypes

TLDR
A new neural network architecture is presented which learns word embeddings that better capture the semantics of words by incorporating both local and global document context, and accounts for homonymy and polysemy by learning multiple embeddings per word.
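
A hedged sketch of the multi-prototype idea follows: represent each occurrence of a word by its averaged context vector, cluster those representations, and relabel the corpus so that each cluster gets its own embedding on retraining. The windowing and averaging choices here are illustrative, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

def assign_senses(contexts, base_emb, n_prototypes=3):
    """Cluster the contexts of one target word into sense groups.

    contexts: list of token lists, one per occurrence of the target word
    base_emb: dict token -> vector from a single-prototype model
    Returns one cluster id per occurrence; tokens can then be rewritten
    (e.g. "bank#2") and the embedding model retrained on the relabeled text.
    """
    # Represent each occurrence by the mean vector of its context tokens
    # (assumes every context contains at least one known token).
    X = np.stack([
        np.mean([base_emb[t] for t in ctx if t in base_emb], axis=0)
        for ctx in contexts
    ])
    return KMeans(n_clusters=n_prototypes, n_init=10).fit_predict(X)
```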

GloVe: Global Vectors for Word Representation

TLDR
A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
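
For reference, the weighted least-squares objective the summary describes, in the paper's notation, where X_{ij} counts co-occurrences of words i and j over the corpus:

```latex
J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2,
\qquad
f(x) = \begin{cases} (x / x_{\max})^{\alpha} & \text{if } x < x_{\max} \\ 1 & \text{otherwise} \end{cases}
```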

Co-learning of Word Representations and Morpheme Representations

TLDR
This paper introduces morphological knowledge as both an additional input representation and auxiliary supervision for the neural network framework, producing morpheme representations that can be further employed to infer the representations of rare or unknown words from their morphological structure.

Learning Word Vectors for Sentiment Analysis

TLDR
This work presents a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term-document information as well as rich sentiment content, and finds it outperforms several previously introduced methods for sentiment classification.

Lexicon Infused Phrase Embeddings for Named Entity Resolution

TLDR
A new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations is presented, along with the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and OntoNotes NER.

Efficient Estimation of Word Representations in Vector Space

TLDR
Two novel model architectures for computing continuous vector representations of words from very large data sets are proposed and it is shown that these vectors provide state-of-the-art performance on the authors' test set for measuring syntactic and semantic word similarities.
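
For a concrete feel of the two architectures, here is a minimal training call against the gensim implementation of this model; the toy corpus and hyperparameters are illustrative only:

```python
from gensim.models import Word2Vec

# Toy corpus; in practice this is an iterator over a large text collection.
sentences = [
    ["distributed", "representations", "capture", "semantic", "regularities"],
    ["word", "vectors", "capture", "syntactic", "regularities"],
]

# sg=1 selects skip-gram, sg=0 selects CBOW; both architectures are
# the ones proposed in this paper.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

print(model.wv.most_similar("semantic", topn=3))
```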

Better Word Representations with Recursive Neural Networks for Morphology

TLDR
This paper combines recursive neural networks, where each morpheme is a basic unit, with neural language models to consider contextual information in learning morphologically aware word representations, and proposes a novel model capable of building representations for morphologically complex words from their morphemes.
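
The core composition step can be sketched in a few lines: a word vector is built bottom-up from morpheme vectors with a single learned matrix and a tanh nonlinearity. The parameters below are random stand-ins for trained ones:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50                                      # morpheme/word vector dimensionality
W = rng.normal(scale=0.1, size=(d, 2 * d))  # composition matrix (trained in practice)
b = np.zeros(d)

def compose(node):
    """Recursively build a word vector from a binary morpheme tree.
    A leaf is a d-dimensional morpheme vector; an internal node is a
    (left, right) pair combined as tanh(W [left; right] + b)."""
    if isinstance(node, np.ndarray):
        return node
    left, right = node
    return np.tanh(W @ np.concatenate([compose(left), compose(right)]) + b)

# e.g. "unfortunately" ~ ((un, fortunate), ly), with stand-in morpheme vectors
un, fortunate, ly = (rng.normal(size=d) for _ in range(3))
word_vec = compose(((un, fortunate), ly))
```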

Distributed Representations of Sentences and Documents

TLDR
Paragraph Vector is an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents, and its construction gives the algorithm the potential to overcome the weaknesses of bag-of-words models.
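
The gensim implementation of Paragraph Vector gives a compact usage sketch; the toy documents and hyperparameters are illustrative only:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [
    TaggedDocument(words=["word", "vectors", "for", "sentences"], tags=["d0"]),
    TaggedDocument(words=["fixed", "length", "features", "for", "documents"], tags=["d1"]),
]

# dm=1 is the distributed-memory variant (PV-DM); dm=0 is PV-DBOW.
model = Doc2Vec(docs, vector_size=50, window=2, min_count=1, dm=1, epochs=50)

print(model.dv["d0"][:5])                                 # learned paragraph vector
print(model.infer_vector(["new", "unseen", "text"])[:5])  # vector for unseen text
```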

Knowledge-Powered Deep Learning for Word Embedding

TLDR
This study explores the capacity of leveraging morphological, syntactic, and semantic knowledge to achieve high-quality word embeddings, using these types of knowledge to define a new basis for word representation, provide additional input information, and serve as auxiliary supervision in deep learning.

Distributed Representations of Words and Phrases and their Compositionality

TLDR
This paper presents a simple method for finding phrases in text, and shows that learning good vector representations for millions of phrases is possible and describes a simple alternative to the hierarchical softmax called negative sampling.
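
The negative-sampling objective mentioned in the summary, in the paper's notation, replaces the full softmax with k sampled binary discriminations:

```latex
\log \sigma\!\left( {v'_{w_O}}^{\top} v_{w_I} \right)
+ \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}
  \left[ \log \sigma\!\left( -{v'_{w_i}}^{\top} v_{w_I} \right) \right]
```

Here w_I and w_O are the input and output (context) words, k is the number of negative samples, and P_n(w) is the noise distribution (the unigram distribution raised to the 3/4 power in the paper).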