Semantics derived automatically from language corpora contain human-like biases

@article{Caliskan2017SemanticsDA,
  title={Semantics derived automatically from language corpora contain human-like biases},
  author={A. Caliskan and J. Bryson and A. Narayanan},
  journal={Science},
  year={2017},
  volume={356},
  pages={183--186}
}
Editor's summary (Science): Machines learn what people know implicitly. AlphaGo has demonstrated that a machine can learn how to do things that people spend many years of concentrated study learning, and it can rapidly learn how to do them better than any human can. Caliskan et al. now show that machines can learn word associations from written texts, and that these associations mirror those learned by humans, as measured by the Implicit Association Test (IAT) (see the Perspective by Greenwald). Why does this matter…
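The association measure the summary alludes to (the paper's Word Embedding Association Test, an embedding analogue of the IAT) compares how strongly two sets of target words associate with two sets of attribute words via cosine similarity. A minimal sketch, using hypothetical 2-D toy vectors in place of real word embeddings:

```python
import math

def cos(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def assoc(w, A, B):
    """s(w, A, B): mean similarity of w to attribute set A minus to attribute set B."""
    return sum(cos(w, a) for a in A) / len(A) - sum(cos(w, b) for b in B) / len(B)

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: difference in mean association of targets X vs. Y,
    normalized by the std. dev. of associations over all targets."""
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    all_s = sx + sy
    mean_all = sum(all_s) / len(all_s)
    var = sum((s - mean_all) ** 2 for s in all_s) / (len(all_s) - 1)
    return (sum(sx) / len(sx) - sum(sy) / len(sy)) / math.sqrt(var)

# Toy vectors (hypothetical, not real embeddings): "flower" targets lie near
# the "pleasant" attribute direction, "insect" targets near "unpleasant".
pleasant   = [(1.0, 0.0)]
unpleasant = [(0.0, 1.0)]
flowers    = [(0.9, 0.1), (0.8, 0.2)]
insects    = [(0.1, 0.9), (0.2, 0.8)]

effect = weat_effect_size(flowers, insects, pleasant, unpleasant)
# A positive effect size indicates flowers associate with pleasant words,
# mirroring the human IAT result the paper reports.
```

In the paper the same statistic is computed over GloVe vectors for the word lists used in published IAT studies; the toy vectors here only illustrate the arithmetic.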
