Why we need biased AI - How including cognitive and ethical machine biases can enhance AI systems

Sarah Fabi and Thilo Hagendorff
This paper stresses the importance of biases in the field of artificial intelligence (AI) in two regards. First, to foster efficient algorithmic decision-making in complex, unstable, and uncertain real-world environments, we argue for the structure-wise implementation of human cognitive biases in learning algorithms. Second, we argue that in order to achieve ethical machine behavior, filter mechanisms have to be applied for selecting biased training stimuli that represent social or…

Exploring the Racial Bias in Pain Detection with a Computer Vision Model

People detect painful expressions more easily in members of their racial ingroup than outgroup. Here, we wanted to investigate this racial bias with a machine learning model trained to detect

Speciesist bias in AI - How AI applications perpetuate discrimination and unfair outcomes against animals

It is found that speciesist biases are solidified by many mainstream AI applications, especially in the fields of computer vision and natural language processing; this work is the first to describe this ‘speciesist bias’ and to investigate it in several different AI systems.

Adaptive rationality: An evolutionary perspective on cognitive bias

A casual look at the literature in social cognition reveals a vast collection of biases, errors, violations of rational choice, and failures to maximize utility. One is tempted to draw the conclusion

A Neural Network Framework for Cognitive Bias

A neural network framework for cognitive biases is proposed, which explains why the human brain systematically tends to default to heuristic (‘Type 1’) decision making and provides a unifying and binding framework for many cognitive bias phenomena.

A Survey on Bias and Fairness in Machine Learning

This survey investigated different real-world applications that have shown biases in various ways, and created a taxonomy for fairness definitions that machine learning researchers have defined to avoid the existing bias in AI systems.

Inductive Biases for Deep Learning of Higher-Level Cognition

This work considers a larger list of inductive biases that humans and animals exploit, focusing on those which concern mostly higher-level and sequential conscious processing, and suggests they could potentially help build AI systems benefiting from humans' abilities in terms of flexible out-of-distribution and systematic generalization.

Homo Heuristicus: Why Biased Minds Make Better Inferences

The study of heuristics shows that less information, computation, and time can in fact improve accuracy, in contrast to the widely held view that less processing reduces accuracy.
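The heuristics-can-beat-more-information claim can be made concrete with the "take-the-best" heuristic associated with this line of research: compare two options on cues ordered by validity and decide on the first cue that discriminates, ignoring everything else. The sketch below is illustrative only; the cue names and values are invented, not taken from the paper.

```python
def take_the_best(option_a, option_b, cues):
    """Return 'A', 'B', or 'guess' based on the first discriminating cue.

    option_a, option_b: dicts mapping cue name -> 1 (positive) or 0 (negative).
    cues: cue names sorted from highest to lowest validity.
    """
    for cue in cues:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:                      # first cue that discriminates decides
            return "A" if a > b else "B"
    return "guess"                      # no cue discriminates

# Which of two cities is larger? (hypothetical cue values, not real data)
cues_by_validity = ["national_capital", "has_major_airport", "has_university"]
city_a = {"national_capital": 0, "has_major_airport": 1, "has_university": 1}
city_b = {"national_capital": 0, "has_major_airport": 0, "has_university": 1}
print(take_the_best(city_a, city_b, cues_by_validity))  # -> A
```

Note how the decision uses only one cue and no weighting or integration, which is exactly the sense in which "less information and computation" is meant.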

Models that learn how humans learn: The case of decision-making and its disorders

An alternative method using recurrent neural networks (RNNs) to generate a flexible family of models that have sufficient capacity to represent the complex learning and decision-making strategies used by humans is suggested.
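The core idea can be sketched as a recurrent network that maps a history of (action, reward) pairs to next-action probabilities, e.g. in a two-armed bandit task. This is a minimal NumPy sketch of that setup, not the authors' implementation: the weights here are random, whereas in the modeling approach described they would be fit to behavioral data.

```python
import numpy as np

rng = np.random.default_rng(0)
H, I, A = 8, 3, 2           # hidden units, input size (action one-hot + reward), actions
W_in = rng.normal(0, 0.5, (H, I))
W_rec = rng.normal(0, 0.5, (H, H))
W_out = rng.normal(0, 0.5, (A, H))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def next_action_probs(history):
    """history: list of (action in {0,1}, reward in {0,1}) pairs."""
    h = np.zeros(H)
    for action, reward in history:
        x = np.array([action == 0, action == 1, reward], dtype=float)
        h = np.tanh(W_in @ x + W_rec @ h)   # vanilla RNN state update
    return softmax(W_out @ h)               # policy over the two actions

probs = next_action_probs([(0, 1), (0, 1), (1, 0)])
print(probs)                # two probabilities summing to 1
```

Because the hidden state is unconstrained, the same architecture can in principle represent both normative strategies and the biased or disordered ones the abstract alludes to, which is the appeal of this model family.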

Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources

It is demonstrated that resource-rational models can reconcile the mind's most impressive cognitive skills with people's ostensible irrationality, and that the approach provides a new way to connect psychological theory more deeply with artificial intelligence, economics, neuroscience, and linguistics.
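One common way to formalize the resource-rational idea (the notation below is illustrative, not quoted from the paper) is to pick the strategy that maximizes expected utility net of computational cost, rather than the strategy with the best output alone:

\[
s^{*} = \arg\max_{s \in S} \Big( \mathbb{E}\big[U(\text{result of } s)\big] - \text{cost}(s) \Big)
\]

where \(S\) is the set of strategies the mind can execute and \(\text{cost}(s)\) prices the time and computation \(s\) consumes. Behavior that looks irrational under \(\mathbb{E}[U]\) alone can be optimal once the cost term is included.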

How to Make Cognitive Illusions Disappear: Beyond “Heuristics and Biases”

Most so-called "errors" in probabilistic reasoning are in fact not violations of probability theory. Examples of such "errors" include overconfidence bias, conjunction fallacy, and base-rate