• Corpus ID: 236950797

Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models

@inproceedings{Kirk2021BiasOA,
  title={Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models},
  author={Hannah Rose Kirk and Yennie Jun and Haider Iqbal and Elias Benussi and Filippo Volpin and Fr{\'e}d{\'e}ric A. Dreyer and Aleksandar Shtedritski and Yuki M. Asano},
  booktitle={NeurIPS},
  year={2021}
}
The capabilities of natural language models trained on large-scale data have increased immensely over the past few years. Open source libraries such as HuggingFace have made these models easily available and accessible. While prior research has identified biases in large language models, this paper considers biases contained in the most popular versions of these models when applied ‘out-of-the-box’ for downstream tasks. We focus on generative language models as they are well-suited for… 
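
As a rough illustration of this 'out-of-the-box' probing setup, one can sample continuations of templated prompts from a HuggingFace generative model and inspect the occupations the model suggests. The template, model choice, and sample counts below are illustrative assumptions for a minimal sketch, not the paper's exact experimental protocol:

# Minimal sketch of prompt-based occupation probing with an off-the-shelf
# generative model via the HuggingFace pipeline API. Template wording and
# sampling settings are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

templates = [
    "The woman worked as a",
    "The man worked as a",
]

for prompt in templates:
    outputs = generator(
        prompt,
        max_new_tokens=10,
        num_return_sequences=5,
        do_sample=True,
    )
    for out in outputs:
        # The continuation after the prompt typically names an occupation,
        # which can then be tallied across many samples per template.
        print(repr(out["generated_text"]))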

Towards an Enhanced Understanding of Bias in Pre-trained Neural Language Models: A Survey with Special Emphasis on Affective Bias

TLDR
This survey attempts to draw a comprehensive view of bias in pre-trained language models; its exploration of affective bias in particular will be highly beneficial to researchers interested in this evolving field.

Looking for a Handsome Carpenter! Debiasing GPT-3 Job Advertisements

TLDR
Prompt-engineering with diversity-encouraging prompts is found to give no significant improvement in either bias or realism, whereas fine-tuning, especially on unbiased real advertisements, can improve realism and reduce bias.

Textinator: an Internationalized Tool for Annotation and Human Evaluation in Natural Language Processing and Generation

TLDR
An internationalized annotation and human evaluation bundle, called Textinator, is released along with documentation and video tutorials, and a thorough systematic comparison of Textinator to previously published annotation tools along 9 different axes is presented.

Extracting Age-Related Stereotypes from Social Media Texts

Age-related stereotypes are pervasive in our society, and yet have been under-studied in the NLP community. Here, we present a method for extracting age-related stereotypes from English-language social media texts.

Individual Fairness Guarantees for Neural Networks

TLDR
A method is proposed that overapproximates the resulting optimisation problem using piecewise-linear functions to lower- and upper-bound the NN's non-linearities globally over the input space; empirically, this approach yields NNs that are orders of magnitude fairer than state-of-the-art methods.
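
For reference, the property being certified can be stated in a generic epsilon-delta form (the exact fairness metric and thresholds used in the paper may differ from this textbook formulation): inputs that are close under a fairness metric must receive close outputs.

% Generic individual fairness property for a network f:
% inputs similar under the fairness metric d_fair map to similar outputs.
\forall x, x' :\quad d_{\mathrm{fair}}(x, x') \le \delta
\;\Longrightarrow\;
\lVert f(x) - f(x') \rVert \le \epsilon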

Handling and Presenting Harmful Text

TLDR
Practical advice is provided on how textual harms should be handled, presented, and discussed, and HarmCheck, a resource for reflecting on research into textual harms, is introduced to encourage ethical, responsible, and respectful research in the NLP community.

A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning

TLDR
This paper evaluates different bias measures and proposes applying retrieval metrics to image-text representations via a bias-measuring framework; investigating debiasing methods, it shows that optimizing an adversarial loss via learnable token embeddings minimizes various bias measures without substantially degrading feature representations.
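
The adversarial idea can be sketched as a minimax game: a small adversary tries to predict a protected attribute from image-text similarity scores, while learnable prompt embeddings are trained to defeat it. The toy below uses random tensors as stand-ins for frozen encoder features; all dimensions, names, and the pooling scheme are assumptions for illustration, not the paper's implementation (which works with real vision-language encoders such as CLIP):

# Toy sketch of adversarial debiasing with learnable token embeddings.
import torch
import torch.nn as nn

d = 64                          # embedding dimension (illustrative)
n_prompt, n_samples = 4, 256

# Stand-ins for frozen image/text features from a pretrained encoder.
img_feats = torch.randn(n_samples, d)
txt_feats = torch.randn(n_samples, d)
protected = torch.randint(0, 2, (n_samples,))    # e.g. a binary attribute

prompt = nn.Parameter(torch.randn(n_prompt, d) * 0.02)  # learnable tokens
adversary = nn.Linear(1, 2)        # predicts the attribute from a score

opt_prompt = torch.optim.Adam([prompt], lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss()

for step in range(100):
    # Debiased text feature: original feature shifted by pooled prompt tokens.
    debiased_txt = txt_feats + prompt.mean(dim=0)
    sim = (img_feats * debiased_txt).sum(dim=1, keepdim=True)  # similarity score

    # 1) Train the adversary to predict the protected attribute from scores.
    adv_loss = ce(adversary(sim.detach()), protected)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Train the prompt tokens to *maximize* the adversary's loss.
    fool_loss = -ce(adversary(sim), protected)
    opt_prompt.zero_grad()
    fool_loss.backward()
    opt_prompt.step()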

Expressive Communication: Evaluating Developments in Generative Models and Steering Interfaces for Music Creation

There is increasing interest from the ML and HCI communities in empowering creators with better generative models and more intuitive interfaces with which to control them. In music, ML researchers…

A Survey on Gender Bias in Natural Language Processing

TLDR
A survey of 304 papers on gender bias in natural language processing finds that research on gender bias suffers from four core limitations, and sees overcoming these limitations as a necessary development in future research.

References

SHOWING 1-10 OF 55 REFERENCES

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

TLDR
Recommendations are provided, including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, and carrying out pre-development exercises that evaluate how the planned approach fits into research and development goals and supports stakeholder values.

XLNet: Generalized Autoregressive Pretraining for Language Understanding

TLDR
XLNet is proposed, a generalized autoregressive pretraining method that enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order, and overcomes the limitations of BERT thanks to its autoregressive formulation.
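
The permutation objective can be written compactly: with Z_T denoting the set of all permutations of a length-T index sequence, XLNet maximizes the expected autoregressive likelihood over factorization orders:

% XLNet's permutation language modeling objective (from the paper):
% expectation over factorization orders z of the autoregressive likelihood.
\max_{\theta}\; \mathbb{E}_{z \sim \mathcal{Z}_T}
\left[ \sum_{t=1}^{T} \log p_{\theta}\!\left( x_{z_t} \mid \mathbf{x}_{z_{<t}} \right) \right]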

The Stanford CoreNLP Natural Language Processing Toolkit

TLDR
The design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis, is described; it is suggested that its success follows from a simple, approachable design, straightforward interfaces, the inclusion of robust and good-quality analysis components, and not requiring use of a large amount of associated baggage.
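
One common way to drive the toolkit from Python is via the stanza client, which starts and talks to a local CoreNLP server; this is a minimal sketch assuming a CoreNLP distribution is installed (CORENLP_HOME set), and the annotator list is illustrative:

# Hedged sketch: querying a Stanford CoreNLP server through stanza's client.
from stanza.server import CoreNLPClient

with CoreNLPClient(annotators=["tokenize", "ssplit", "pos", "lemma", "ner"],
                   timeout=30000, memory="4G") as client:
    ann = client.annotate("Stanford CoreNLP provides a simple, extensible NLP pipeline.")
    for sentence in ann.sentence:
        for token in sentence.token:
            # Each token carries the annotations produced by the pipeline.
            print(token.word, token.pos, token.lemma)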

Explaining the Gender Wage Gap

• URL https://www.americanprogress.org/issues/economy/reports/2014/05/19/90039/explaining-the-gender-wage-gap/
  • 2014

Where women work-an analysis by industry and occupation

  • Monthly Lab. Rev.,
  • 1974

Bad Seeds: Evaluating Lexical Methods for Bias Measurement

TLDR
This paper enumerates the different types of social biases and linguistic features that, once encoded in the seeds, can affect subsequent bias measurements.

Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fairness Benchmark Datasets

TLDR
It is found that benchmark datasets consisting of pairs of contrastive sentences frequently lack clear articulations of what is being measured, and a range of ambiguities and unstated assumptions that affect how these benchmarks conceptualize and operationalize stereotyping are highlighted.

Investigating Gender Bias in BERT

TLDR
This paper focuses on a popular contextualized language model, BERT, and proposes an algorithm that finds fine-grained gender directions, i.e., one primary direction per BERT layer, which obviates the need to realize a gender subspace in multiple dimensions and prevents other crucial information from being omitted.
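
A per-layer direction of this kind can be approximated by taking, at each layer, the first principal component of hidden-state differences over gendered word pairs. The word-pair list and mean-pooling below are illustrative assumptions, not the cited paper's exact algorithm:

# Hedged sketch: one "gender direction" per BERT layer from paired words.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

pairs = [("he", "she"), ("man", "woman"), ("father", "mother"), ("king", "queen")]

diffs_per_layer = None
with torch.no_grad():
    for w_m, w_f in pairs:
        h_m = model(**tok(w_m, return_tensors="pt")).hidden_states
        h_f = model(**tok(w_f, return_tensors="pt")).hidden_states
        # Mean-pool token positions, then take the pairwise difference per layer.
        d = [hm.mean(dim=1) - hf.mean(dim=1) for hm, hf in zip(h_m, h_f)]
        if diffs_per_layer is None:
            diffs_per_layer = [[x] for x in d]
        else:
            for store, x in zip(diffs_per_layer, d):
                store.append(x)

directions = []
for layer_diffs in diffs_per_layer:
    X = torch.cat(layer_diffs, dim=0)            # (n_pairs, hidden_size)
    # First right-singular vector = primary direction for this layer.
    _, _, Vh = torch.linalg.svd(X - X.mean(dim=0), full_matrices=False)
    directions.append(Vh[0])

print(len(directions), directions[0].shape)      # one unit direction per layer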

DeBERTa: Decoding-enhanced BERT with Disentangled Attention

TLDR
A new model architecture, DeBERTa (Decoding-enhanced BERT with disentangled attention), is proposed that improves the BERT and RoBERTa models using two novel techniques which significantly improve the efficiency of model pre-training and the performance of downstream tasks.

Language (Technology) is Power: A Critical Survey of “Bias” in NLP

TLDR
A greater recognition of the relationships between language and social hierarchies is urged, encouraging researchers and practitioners to articulate their conceptualizations of “bias” and to center work around the lived experiences of members of communities affected by NLP systems.
...