Cultural Incongruencies in Artificial Intelligence

Vinodkumar Prabhakaran, Rida Qadri, Benton C. Hutchinson
Artificial intelligence (AI) systems attempt to imitate human behavior. How well they perform this imitation is often used to assess their utility and to attribute human-like (or artificial) intelligence to them. However, most work on AI refers to and relies on human intelligence without accounting for the fact that people's behavior is inherently shaped by the cultural contexts they are embedded in, the values and beliefs they hold, and the social practices they follow. Additionally, since AI…

The Importance of Modeling Social Factors of Language: Theory and Practice

It is shown that current NLP systems systematically break down when faced with interpreting the social factors of language, which limits applications to a subset of information-related tasks and prevents NLP from reaching human-level performance.

Evaluation Gaps in Machine Learning Practice

The evaluation gaps between the idealized breadth of evaluation concerns and the observed narrow focus of actual evaluations are examined, pointing the way towards more contextualized evaluation methodologies for robustly examining the trustworthiness of ML models.

On the genealogy of machine learning datasets: A critical history of ImageNet

A critical history of ImageNet is presented as an exemplar, using critical discourse analysis of major texts around ImageNet's creation and impact. It is found that assumptions around ImageNet, and large computer vision datasets more generally, rely on three themes: the aggregation and accumulation of more data, the computational construction of meaning, and making certain types of data labor invisible.

LaMDA: Language Models for Dialog Applications

It is demonstrated that fine-tuning with annotated data and enabling the model to consult external knowledge sources can lead to significant improvements towards the two key challenges of safety and factual grounding.

Anticipating Safety Issues in E2E Conversational AI: Framework and Tooling

This paper surveys the problem landscape for safety for end-to-end conversational AI, highlights tensions between values, potential positive impact and potential harms, and provides a framework for making decisions about whether and how to release these models, following the tenets of value-sensitive design.

Studying Those Who Study Us: An Anthropologist in the World of Artificial Intelligence

Studying Those Who Study Us: An Anthropologist in the World of Artificial Intelligence. Diana E. Forsythe. Edited, with an introduction, by David Hess. Stanford: Stanford University Press, 2001. xxix.

Challenges and Strategies in Cross-Cultural NLP

Various efforts in the Natural Language Processing (NLP) community have been made to accommodate linguistic diversity and serve speakers of many different languages. However, it is important to…

Detecting Cross-Geographic Biases in Toxicity Modeling on Social Media

A weakly supervised method to robustly detect lexical biases in broader geo-cultural contexts is introduced and it is demonstrated that these groupings reflect human judgments of offensive and inoffensive language in those geographic contexts.

The order of things : an archaeology of the human sciences

Publisher's Note, Foreword to the English Edition, Preface. Part I: 1. Las Meninas; 2. The Prose of the World: I. The Four Similitudes, II. Signatures, III. The Limits of the World, IV. The Writing of Things, …

Cultural Indicators: The Case of Violence in Television Drama

The cultural transformation of our time stems from the extension of the industrial-technological revolution into the sphere of message-production. The mass production and rapid distribution of…