Corpus ID: 219530865

ValNorm: A New Word Embedding Intrinsic Evaluation Method Reveals Valence Biases are Consistent Across Languages and Over Decades

@article{Toney2020ValNormAN,
  title={ValNorm: A New Word Embedding Intrinsic Evaluation Method Reveals Valence Biases are Consistent Across Languages and Over Decades},
  author={Autumn Toney and Aylin Caliskan},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.03950}
}
Word embeddings learn implicit biases from linguistic regularities captured by word co-occurrence information. As a result, statistical methods can detect and quantify social biases as well as widely shared associations imbibed by the corpus the word embeddings are trained on. By extending methods that quantify human-like biases in word embeddings, we introduce ValNorm, a new word embedding intrinsic evaluation task, and the first unsupervised method that estimates the affective meaning of…
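The abstract describes measuring the affective meaning (valence) of words directly from embedding geometry. A minimal, hypothetical sketch of that idea is below: it is not the paper's code, and the function name, the toy vectors, and the pleasant/unpleasant attribute sets are all illustrative assumptions. It computes a single-category association score (in the style of WEAT-derived methods): how much closer a word vector sits to "pleasant" attribute vectors than to "unpleasant" ones, normalized by the spread over all attribute similarities.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def valence_association(w, pleasant, unpleasant):
    """Illustrative single-category association score: mean cosine
    similarity to pleasant attributes minus mean similarity to
    unpleasant attributes, scaled by the std. dev. over all of them."""
    sims_p = [cosine(w, a) for a in pleasant]
    sims_u = [cosine(w, a) for a in unpleasant]
    return (np.mean(sims_p) - np.mean(sims_u)) / np.std(sims_p + sims_u)

# Toy 3-d vectors standing in for trained word embeddings (assumption:
# real use would load, e.g., fastText or GloVe vectors instead).
pleasant = [np.array([1.0, 0.1, 0.0]), np.array([0.9, 0.2, 0.1])]
unpleasant = [np.array([-1.0, 0.1, 0.0]), np.array([-0.9, 0.0, 0.2])]
word = np.array([0.8, 0.0, 0.1])  # a word we expect to score as positive

score = valence_association(word, pleasant, unpleasant)
```

In an evaluation task of this kind, such scores would be computed for a lexicon of words and correlated with human-rated valence norms; a higher correlation indicates the embeddings encode valence more faithfully.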