Representing Affect Information in Word Embeddings

@article{Zhang2022RepresentingAI,
  title={Representing Affect Information in Word Embeddings},
  author={Yuhan Zhang and Wenqi Chen and Ruihan Zhang and Xiajie Zhang},
  journal={ArXiv},
  year={2022},
  volume={abs/2209.10583}
}
A growing body of research in natural language processing (NLP) and natural language understanding (NLU) is investigating human-like knowledge learned or encoded in the word embeddings from large language models. This is a step towards understanding what knowledge language models capture that resembles human understanding of language and communication. Here, we investigated whether and how the affect meaning of a word (i.e., valence, arousal, dominance) is encoded in word embeddings pre-trained… 
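
As a concrete illustration of how such affect encoding can be probed, the sketch below fits a linear regressor from word embeddings to valence/arousal/dominance ratings and reports held-out correlation. This is a minimal, assumed setup with placeholder data, not necessarily the exact protocol used in the paper.

```python
# Minimal probing sketch (assumed setup, with placeholder data): fit a linear
# map from pre-trained word embeddings to human valence/arousal/dominance
# (VAD) ratings and report held-out correlation per affect dimension.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, dim = 1000, 300
embeddings = rng.normal(size=(n_words, dim))          # stand-in for real embeddings
vad_ratings = rng.uniform(1, 9, size=(n_words, 3))    # stand-in for human VAD norms

X_tr, X_te, y_tr, y_te = train_test_split(
    embeddings, vad_ratings, test_size=0.2, random_state=0)
probe = Ridge(alpha=1.0).fit(X_tr, y_tr)
pred = probe.predict(X_te)
for i, name in enumerate(["valence", "arousal", "dominance"]):
    r, _ = pearsonr(pred[:, i], y_te[:, i])
    print(f"{name}: held-out Pearson r = {r:.2f}")
```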

References

Showing 1-10 of 41 references

Contextualized Word Embeddings Encode Aspects of Human-Like Word Sense Knowledge

This work investigates whether recent advances in NLP, specifically contextualized word embeddings, capture human-like distinctions between English word senses, such as polysemy and homonymy, and finds that participants’ judgments of the relatedness between senses are correlated with distances between senses in the BERT embedding space.
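
A minimal sketch of this kind of analysis, assuming sense-level vectors have already been extracted (e.g., averaged BERT embeddings per sense); the vectors and judgments below are placeholders, not the study's actual stimuli.

```python
# Assumed, simplified analysis: correlate human relatedness judgments for
# word-sense pairs with cosine distances between the corresponding
# (pre-extracted) contextualized sense embeddings.
import numpy as np
from scipy.spatial.distance import cosine
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
sense_a = rng.normal(size=(50, 768))            # placeholder sense vectors
sense_b = rng.normal(size=(50, 768))            # e.g. mean BERT vector per sense
human_relatedness = rng.uniform(0, 1, size=50)  # placeholder judgments

distances = np.array([cosine(a, b) for a, b in zip(sense_a, sense_b)])
rho, p = spearmanr(human_relatedness, distances)
print(f"Spearman rho = {rho:.2f}")  # expect negative: related senses sit closer
```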

Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings

A simple but effective approach to WSD using nearest neighbor classification on contextualized word embeddings (CWEs) is presented, and it is shown that the pre-trained BERT model is able to place polysemous words into distinct 'sense' regions of the embedding space, while ELMo and Flair NLP do not seem to possess this ability.
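
A hedged sketch of such nearest-neighbor sense classification over pre-extracted contextualized embeddings; the vectors and sense labels below are illustrative placeholders.

```python
# Hedged sketch: classify an unseen occurrence of a polysemous word by the
# sense of its most cosine-similar labeled occurrence. Vectors are assumed
# to be contextualized embeddings extracted beforehand (e.g. from BERT).
import numpy as np

def nearest_sense(query_vec, labeled_vecs, labels):
    """Return the sense label of the most cosine-similar labeled vector."""
    labeled = np.asarray(labeled_vecs)
    sims = labeled @ query_vec / (
        np.linalg.norm(labeled, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    return labels[int(np.argmax(sims))]

rng = np.random.default_rng(2)
train_vecs = rng.normal(size=(20, 768))            # placeholder CWEs of "bank"
train_senses = ["river"] * 10 + ["finance"] * 10   # placeholder sense labels
new_occurrence = rng.normal(size=768)              # CWE of an unseen usage
print(nearest_sense(new_occurrence, train_vecs, train_senses))
```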

Spying on Your Neighbors: Fine-grained Probing of Contextual Embeddings for Information about Surrounding Words

A suite of probing tasks that enables fine-grained testing of contextual embeddings for the encoding of information about surrounding words is introduced, and it is found that each of the tested information types is indeed encoded as contextual information across tokens, often with near-perfect recoverability.
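
An illustrative probe in this spirit, assuming contextual token vectors and a property of the neighboring token are already available; the binary labels here are placeholders rather than the paper's actual probing tasks.

```python
# Illustrative probe (assumed setup): predict a property of the *next* token
# (here a binary placeholder label) from the contextual embedding of the
# current token, and measure how recoverable that information is.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
token_vecs = rng.normal(size=(2000, 768))            # placeholder contextual vectors
next_token_property = rng.integers(0, 2, size=2000)  # placeholder label of token i+1

X_tr, X_te, y_tr, y_te = train_test_split(
    token_vecs, next_token_property, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out probe accuracy: {probe.score(X_te, y_te):.2f}")
```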

How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings

It is found that in all layers of ELMo, BERT, and GPT-2, on average, less than 5% of the variance in a word’s contextualized representations can be explained by a static embedding for that word, providing some justification for the success of contextualized representations.
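
A small sketch of one way to compute such a measure, assuming contextualized vectors for a single word across many contexts are available: the share of variance captured by the first principal component stands in for the explanatory power of a single static embedding.

```python
# Sketch of the variance measure (assumed formulation): the share of variance
# in one word's contextualized vectors captured by their first principal
# component, i.e. by a single static direction.
import numpy as np

rng = np.random.default_rng(4)
contextual_vecs = rng.normal(size=(200, 768))  # placeholder: one word, 200 contexts

centered = contextual_vecs - contextual_vecs.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values ** 2 / np.sum(singular_values ** 2)
print(f"variance explained by the first principal component: {explained[0]:.1%}")
```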

Sentiment-Aware Word Embedding for Emotion Classification

This work proposes sentiment-aware word embeddings for emotion classification, integrating sentiment evidence into the emotional embedding component of each term vector.

Enriching Word Vectors with Subword Information

A new approach based on the skipgram model is proposed, in which each word is represented as a bag of character n-grams and a word vector is the sum of these n-gram representations; it achieves state-of-the-art performance on word similarity and analogy tasks.
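
A minimal sketch of the subword idea, not the original fastText implementation: extract character n-grams with boundary markers and sum their (here randomly initialized, hashed) vectors to obtain a word vector.

```python
# Minimal sketch of the subword idea (not the original fastText code):
# a word vector is the sum of vectors for its character n-grams, with
# "<" and ">" marking word boundaries and a hashed lookup table of n-grams.
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    padded = f"<{word}>"
    return [padded[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

rng = np.random.default_rng(5)
dim, buckets = 100, 100_000                 # real fastText uses far more hash buckets
ngram_table = rng.normal(size=(buckets, dim))

def word_vector(word):
    rows = [hash(g) % buckets for g in char_ngrams(word)]
    return ngram_table[rows].sum(axis=0)

print(char_ngrams("where"))         # ['<wh', 'whe', 'her', 'ere', 're>', ...]
print(word_vector("where").shape)   # (100,)
```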

Emergent linguistic structure in artificial neural networks trained by self-supervision

Methods for identifying linguistic hierarchical structure emergent in artificial neural networks are developed, and it is shown that components in these models focus on syntactic grammatical relationships and anaphoric coreference, allowing approximate reconstruction of the sentence tree structures normally assumed by linguists.

Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank

A Sentiment Treebank is introduced that includes fine-grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality; the Recursive Neural Tensor Network is introduced to address them.

Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies

It is concluded that LSTMs can capture a non-trivial amount of grammatical structure given targeted supervision, but stronger architectures may be required to further reduce errors; furthermore, the language modeling signal is insufficient for capturing syntax-sensitive dependencies, and should be supplemented with more direct supervision if such dependencies need to be captured.

Inferring Affective Meanings of Words from Word Embedding

This work proposes a regression-based method that automatically infers multi-dimensional affective representations of words from their word embeddings, starting from a set of seed words; the method exploits the rich semantic information captured in word embeddings to extract meanings in a specific semantic space.
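
A hedged sketch of a regression-based inference step of this kind, assuming embeddings and affect ratings for a set of seed words are available; the data and hyperparameters below are placeholders.

```python
# Hedged sketch of regression-based affect inference (placeholder data and
# hyperparameters): learn a map from seed-word embeddings to their known
# VAD ratings, then predict ratings for words without annotations.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
dim = 300
seed_embeddings = rng.normal(size=(500, dim))    # embeddings of seed words
seed_vad = rng.uniform(1, 9, size=(500, 3))      # their known VAD ratings

regressor = Ridge(alpha=1.0).fit(seed_embeddings, seed_vad)

unrated_embeddings = rng.normal(size=(10, dim))  # embeddings of unrated words
predicted_vad = regressor.predict(unrated_embeddings)
print(predicted_vad.shape)  # (10, 3): valence, arousal, dominance per word
```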