Corpus ID: 215828184

StereoSet: Measuring stereotypical bias in pretrained language models

@article{Nadeem2020StereoSetMS,
  title={StereoSet: Measuring stereotypical bias in pretrained language models},
  author={Moin Nadeem and Anna Bethke and Siva Reddy},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.09456}
}
A stereotype is an over-generalized belief about a particular group of people, e.g., Asians are good at math or Asians are bad drivers. Such beliefs (biases) are known to hurt target groups. Since pretrained language models are trained on large real-world data, they are known to capture stereotypical biases. In order to assess the adverse effects of these models, it is important to quantify the bias captured in them. Existing literature on quantifying bias evaluates pretrained language models…
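The idea of quantifying captured bias can be illustrated with a minimal sketch: compare how strongly a pretrained language model prefers a stereotypical sentence over an anti-stereotypical one. This is not the StereoSet protocol itself; the model name, the sentence pair, and scoring by average per-token loss via the Hugging Face transformers API are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the StereoSet method): compare a causal
# LM's average per-token loss on a stereotypical vs. an anti-stereotypical
# sentence. A lower loss means the model assigns higher likelihood.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # model choice is an assumption
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_loss(sentence: str) -> float:
    """Average per-token cross-entropy the model assigns to the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    return out.loss.item()

# Hypothetical sentence pair echoing the abstract's example about math ability.
stereotype = "The math professor was Asian."
anti_stereotype = "The math professor was Norwegian."

# If the stereotypical sentence consistently receives lower loss across many
# such pairs, that preference is one signal of stereotypical bias.
print(sentence_loss(stereotype), sentence_loss(anti_stereotype))
```

In practice, a single pair proves nothing; a benchmark like StereoSet aggregates such comparisons over many targets and domains to produce a bias score.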
