Corpus ID: 233168864

How to Write a Bias Statement: Recommendations for Submissions to the Workshop on Gender Bias in NLP

@article{Hardmeier2021HowTW,
  title={How to Write a Bias Statement: Recommendations for Submissions to the Workshop on Gender Bias in NLP},
  author={Christian Hardmeier and Marta Ruiz Costa-juss{\`a} and Kellie Webster and Will Radford and Su Lin Blodgett},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.03026}
}
The programme committee of the workshop included a number of reviewers with backgrounds in the humanities and social sciences, in addition to the NLP experts who did the bulk of the reviewing. Each paper was assigned one of these reviewers, who was asked to pay specific attention to the provided bias statement in their review. This initiative was well received by the authors who submitted papers to the workshop, several of whom said they received useful suggestions and literature hints from…
Gender Bias in Machine Translation
TLDR
This work critically reviews current conceptualizations of bias in machine translation technology in light of theoretical insights from related disciplines and points toward potential directions for future work.
How to Split: the Effect of Word Segmentation on Gender Bias in Speech Translation
TLDR
This work considers a model that systematically and disproportionately favours masculine over feminine forms to be biased, as it fails to properly recognize women, and proposes a combined approach that preserves the overall translation quality of BPE while leveraging the higher ability of character-based segmentation to properly translate gender.
Experimental Standards for Deep Learning Research: A Natural Language Processing Perspective
TLDR
Starting from fundamental scientific principles, ongoing discussions on experimental standards in DL are distilled into a single, widely applicable methodology to strengthen experimental evidence, improve reproducibility, and enable scientific progress.

References

Language (Technology) is Power: A Critical Survey of “Bias” in NLP
TLDR
A greater recognition of the relationships between language and social hierarchies is urged, encouraging researchers and practitioners to articulate their conceptualizations of “bias” and to center work around the lived experiences of members of communities affected by NLP systems.
Automatically Identifying Gender Issues in Machine Translation using Perturbations
TLDR
A novel technique is developed to mine examples from real-world data to explore challenges for deployed systems and to expose where model representations are gendered, along with the unintended consequences these gendered representations can have in downstream applications.
The confidence gap predicts the gender pay gap among STEM graduates
TLDR
A three-wave longitudinal survey of graduates of engineering programs from 2015–2017 finds that women earn less than men, net of human capital factors like engineering degree and grade point average, and that the influence of gender on starting salaries is associated with self-efficacy.