A Transparent Framework for Evaluating Unintended Demographic Bias in Word Embeddings

@inproceedings{Sweeney2019ATF,
  title={A Transparent Framework for Evaluating Unintended Demographic Bias in Word Embeddings},
  author={Chris Sweeney and Maryam Najafian},
  booktitle={ACL},
  year={2019}
}
Word embedding models have gained a lot of traction in the Natural Language Processing community; however, they suffer from unintended demographic biases. Most approaches to evaluating these biases rely on vector-space metrics such as the Word Embedding Association Test (WEAT). While these approaches offer geometric insight into unintended bias in the embedding vector space, they fail to offer an interpretable account of how the embeddings could cause discrimination in downstream NLP applications.
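For context, the WEAT metric the abstract contrasts against measures the differential cosine association between two target word sets (e.g., career vs. family terms) and two attribute sets (e.g., male vs. female names), following Caliskan et al. (2017). Below is a minimal Python/NumPy sketch, assuming each word has already been mapped to its embedding vector; the function names are illustrative, not from the paper:

import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): how much more similar w is, on average,
    # to attribute set A than to attribute set B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Cohen's-d-style effect size of the differential association
    # between target sets X, Y and attribute sets A, B.
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

A larger-magnitude effect size indicates a stronger differential association in the vector space; this purely geometric score is the kind of metric the paper argues lacks a direct interpretation in terms of downstream discrimination.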
