Corpus ID: 247025605

Speciesist bias in AI - How AI applications perpetuate discrimination and unfair outcomes against animals

@article{Hagendorff2022SpeciesistBI,
  title={Speciesist bias in AI - How AI applications perpetuate discrimination and unfair outcomes against animals},
  author={Thilo Hagendorff and Leonie Bossert and Tse Yip Fai and Peter Singer},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.10848}
}
Massive efforts are made to reduce biases in both data and algorithms in order to render AI applications fair. These efforts are propelled by various high-profile cases where biased algorithmic decision-making caused harm to women, people of color, minorities, etc. However, the AI fairness field still succumbs to a blind spot, namely its insensitivity to discrimination against animals. This paper is the first to describe the ‘speciesist bias’ and investigate it in several different AI systems… 

Citations

Why we need biased AI - How including cognitive and ethical machine biases can enhance AI systems
This paper stresses the importance of biases in the field of artificial intelligence (AI) in two regards. First, in order to foster efficient algorithmic decision-making in complex, unstable, and …
Speciesist Language and Nonhuman Animal Bias in English Masked Language Models
TLDR
This paper analyzes bias against nonhuman animals, i.e., speciesist bias, inherent in English masked language models, using template-based and corpus-extracted sentences that contain speciesist (or non-speciesist) language, and shows that these models tend to associate harmful words with nonhuman animals.
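
As a rough illustration of the template-based probing approach described in that paper, the sketch below queries a masked language model for its top completions of sentences whose subject is a human versus a nonhuman animal. The templates and the choice of bert-base-uncased are assumptions made for this example, not the authors' exact setup.

```python
# Minimal sketch of template-based bias probing with a masked language model.
# The templates and model choice are illustrative; the cited paper's exact
# templates and scoring procedure may differ.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Compare which words the model ranks highly for human vs. nonhuman subjects.
templates = [
    "The man was [MASK] by the workers.",
    "The pig was [MASK] by the workers.",
]

for template in templates:
    predictions = fill_mask(template, top_k=5)
    tokens = [p["token_str"] for p in predictions]
    print(f"{template} -> {tokens}")
```

Comparing the ranked completions across otherwise identical templates is one simple way to surface the harmful-word associations the paper reports.
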

References

Showing 1-10 of 169 references
A Survey on Bias and Fairness in Machine Learning
TLDR
This survey investigates real-world applications that have exhibited biases in various ways and creates a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid bias in AI systems.
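
To make one entry of such a taxonomy concrete, the sketch below computes the equal opportunity difference (the gap in true positive rates between groups) on synthetic data; the definition is standard, but the data and group labels are invented for illustration.

```python
# Toy illustration of the equal opportunity criterion (equal true positive
# rates across groups). All data below are synthetic.
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])   # ground-truth labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0, 0, 0])   # model predictions
group  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])   # 1 = privileged group

def tpr(y_t, y_p):
    """True positive rate: fraction of actual positives predicted positive."""
    positives = y_t == 1
    return (y_p[positives] == 1).mean()

tpr_priv   = tpr(y_true[group == 1], y_pred[group == 1])
tpr_unpriv = tpr(y_true[group == 0], y_pred[group == 0])

# A nonzero gap means the classifier violates equal opportunity.
print("Equal opportunity difference:", tpr_unpriv - tpr_priv)
```
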
Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data
TLDR
Three potential approaches are presented and discussed for dealing with the knowledge and information deficits around fairness issues that are emergent properties of complex sociotechnical systems.
AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias
TLDR
AI Fairness 360 (AIF360), a new open-source Python toolkit for algorithmic fairness released under an Apache v2.0 license, is presented to help facilitate the transition of fairness research algorithms into industrial settings and to provide a common framework for fairness researchers to share and evaluate algorithms.
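
A minimal sketch of how one of AIF360's dataset-level metrics might be queried is shown below; the dataframe columns, group definitions, and toy values are assumptions chosen for the example, not anything prescribed by the toolkit.

```python
# Minimal sketch of querying a group fairness metric with AIF360.
# The dataframe columns and group definitions are invented for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "feature": [0.2, 0.7, 0.5, 0.9, 0.1, 0.4],
    "sex":     [0, 0, 0, 1, 1, 1],   # protected attribute
    "label":   [0, 1, 0, 1, 1, 1],   # favorable outcome = 1
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())
```
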
Delphi: Towards Machine Ethics and Norms
TLDR
The first major attempt to computationally explore the vast space of moral implications in real-world settings is conducted, with Delphi, a unified model of descriptive ethics empowered by diverse data of people's moral judgments from the COMMONSENSE NORM BANK.
Blind spots in AI ethics
This paper critically discusses blind spots in AI ethics. AI ethics discourses typically stick to a certain set of topics concerning principles evolving mainly around explainability, fairness, and …
StereoSet: Measuring stereotypical bias in pretrained language models
TLDR
StereoSet, a large-scale natural English dataset to measure stereotypical biases in four domains: gender, profession, race, and religion, is presented, and it is shown that popular models like BERT, GPT-2, RoBERTa, and XLNet exhibit strong stereotypical biases.
Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency
TLDR
An extensive analysis using formalized group fairness metrics finds systematic disparities in cropping and identifies contributing factors, including the fact that cropping based on the single most salient point can amplify disparities because of an effect the authors term argmax bias.
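
The argmax bias effect can be illustrated with a toy simulation: when a crop is always centered on the single most salient point, even a small average saliency gap between two people translates into one of them being centered far more than half the time. All numbers below are invented for illustration.

```python
# Toy illustration of how argmax-based crop selection can amplify small
# saliency differences into a systematic disparity. Values are invented.
import numpy as np

rng = np.random.default_rng(0)
n_images = 10_000

# Saliency scores for two people in each image; person A scores only
# slightly higher on average than person B.
saliency_a = rng.normal(loc=0.55, scale=0.1, size=n_images)
saliency_b = rng.normal(loc=0.50, scale=0.1, size=n_images)

# The crop is centered on the single most salient point (the argmax).
crop_on_a = saliency_a > saliency_b

print("Mean saliency gap:", (saliency_a - saliency_b).mean())  # ~0.05
print("Fraction of crops centered on A:", crop_on_a.mean())    # noticeably above 0.5
```
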
Diagnosing Gender Bias in Image Recognition Systems
TLDR
This article evaluates potential gender biases of commercial image recognition platforms using photographs of U.S. members of Congress and a large number of Twitter images posted by these politicians, finding that images of women received three times more annotations related to physical appearance.
Bias in machine learning - what is it good for?
TLDR
It is concluded that there is a complex relation between bias occurring in the machine learning pipeline that leads to a model, and the eventual bias of the model (which is typically related to social discrimination).
...