Semantics derived automatically from language corpora contain human-like biases

  title={Semantics derived automatically from language corpora contain human-like biases},
  author={A. Caliskan and J. Bryson and A. Narayanan},
  journal={Science},
  year={2017},
  pages={183--186}
  • Published 2017
  • Computer Science, Medicine
  • Science
  • Machines learn what people know implicitly. AlphaGo has demonstrated that a machine can learn how to do things that people spend many years of concentrated study learning, and it can rapidly learn how to do them better than any human can. Caliskan et al. now show that machines can learn word associations from written texts and that these associations mirror those learned by humans, as measured by the Implicit Association Test (IAT) (see the Perspective by Greenwald). Why does this matter…
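The association the paper measures with its Word Embedding Association Test (WEAT) can be sketched with plain cosine similarity: a word's association score is its mean cosine similarity to one attribute set minus its mean similarity to another. A minimal sketch, using hypothetical toy 2-D vectors (the paper runs the test on pretrained embeddings such as GloVe):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

def association(w, A, B):
    """WEAT per-word score s(w, A, B): mean cosine similarity of w to
    attribute set A minus mean cosine similarity of w to attribute set B."""
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

# Toy vectors, invented for illustration only: "pleasant" words cluster
# near one axis, "unpleasant" words near the other.
pleasant = [(1.0, 0.1), (0.9, 0.2)]
unpleasant = [(0.1, 1.0), (0.2, 0.9)]
flower = (1.0, 0.0)
insect = (0.0, 1.0)

print(association(flower, pleasant, unpleasant))  # positive: leans "pleasant"
print(association(insect, pleasant, unpleasant))  # negative: leans "unpleasant"
```

With real embeddings trained on web text, the same score reproduces IAT-style biases: flower names score toward pleasant attributes, insect names toward unpleasant ones.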
    793 Citations (selected)
    • Corpus-based Comparison of Distributional Models of Language and Knowledge Graphs
    • ValNorm Quantifies Semantics to Reveal Consistent Valence Biases Across Languages and Over Centuries
    • What are the Biases in My Word Embedding?
    • Discovering and Interpreting Conceptual Biases in Online Communities

