Alexa, Google, Siri: What are Your Pronouns? Gender and Anthropomorphism in the Design and Perception of Conversational Assistants

Gavin Abercrombie, Amanda Cercas Curry, Mugdha Pandya and Verena Rieser
Technology companies have produced varied responses to concerns about the effects of the design of their conversational AI systems. Some have claimed that their voice assistants are in fact not gendered or human-like—despite design features suggesting the contrary. We compare these claims to user perceptions by analysing the pronouns they use when referring to AI assistants. We also examine systems’ responses and the extent to which they generate output which is gendered and anthropomorphic. We… 


From Assistants to Friends: Investigating Emotional Intelligence of IPAs in Hindi and English
This study measures the emotional intelligence (EI) displayed by IPAs in English and Hindi, and proposes a quantitative and qualitative evaluation scheme encompassing new criteria from social science perspectives as well as IPA-specific features.
”I Like You, as a Friend”: Voice Assistants’ Response Strategies to Sexual Harassment and Their Relation to Gender
Sexual harassment towards voice assistants continues to be prevalent, with up to 10% of interactions being abusive, often with sexual overtones. Voice assistants are predominantly modeled as female.
The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems
The Moral Integrity Corpus (MIC) is a resource that captures the moral assumptions of 38k prompt-reply pairs using 99k distinct Rules of Thumb (RoTs); the authors suggest that MIC will be useful for understanding language models’ implicit moral assumptions and for flexibly benchmarking the integrity of conversational agents.
Deconstructing NLG Evaluation: Evaluation Practices, Assumptions, and Their Implications
There are many ways to express similar things in text, which makes evaluating natural language generation (NLG) systems difficult. Compounding this difficulty is the need to assess varying quality criteria.
Theories of “Gender” in NLP Bias Research
The rise of concern around Natural Language Processing (NLP) technologies containing and perpetuating social biases has led to a rich and rapidly growing area of research. Gender bias is one of the most studied of these biases.
SafetyKit: First Aid for Measuring Safety in Open-domain Conversational Systems
This position paper surveys the problem of safety for end-to-end conversational AI, introducing a taxonomy of three observed phenomena: the Instigator, Yea-Sayer, and Impostor effects. It empirically assesses the extent to which current tools can measure these effects and current systems display them.
How artificiality and intelligence affect voice assistant evaluations
Widespread and growing use of artificial intelligence (AI)–enabled voice assistants (VAs) creates a pressing need to understand what drives VA evaluations. This article proposes a new framework.
Owning Mistakes Sincerely: Strategies for Mitigating AI Errors
Interactive AI systems such as voice assistants are bound to make errors because of imperfect sensing and reasoning. Prior human-AI interaction research has illustrated the importance of various error-handling strategies.
Anticipating Safety Issues in E2E Conversational AI: Framework and Tooling
This paper surveys the problem landscape for safety for end-to-end conversational AI, highlights tensions between values, potential positive impact and potential harms, and provides a framework for making decisions about whether and how to release these models, following the tenets of value-sensitive design.


Gender Ambiguous, not Genderless: Designing Gender in Voice User Interfaces (VUIs) with Sensitivity
This paper outlines how gender is not inherent in voice (listeners assign gender to a voice) and highlights that gender is constructed through a multitude of resources.
Gender Bias in Chatbot Design
It is argued that there is a gender bias in the design of chatbots in the wild, particularly evident in three application domains (i.e., branded conversations, customer service, and sales).
Personification of the Amazon Alexa: BFF or a Mindless Companion
A study of Amazon Alexa usage that explored the manifestations and possible correlates of users' personification of Alexa, to understand whether expressions of personification stem from users' emotional attachment or from skepticism about the technology's intelligence.
Chameleons in Imagined Conversations: A New Approach to Understanding Coordination of Linguistic Style in Dialogs
It is argued that fictional dialogs offer a way to study conversations whose authors create them without receiving their social benefits (the imagined characters do), and significant coordination is found across many families of function words in a large movie-script corpus.
Conversational Assistants and Gender Stereotypes: Public Perceptions and Desiderata for Voice Personas
Conversational voice assistants are rapidly developing from purely transactional systems to social companions with “personality”. UNESCO recently stated that the female and submissive personality of many voice assistants reinforces gender stereotypes.
Genie in the Bottle: Anthropomorphized Perceptions of Conversational Agents
It is demonstrated that anthropomorphized behavioral and visual perceptions of agents yield structural consistency, and the paper discusses how these perceptions are linked with each other and with system features.
Revealing Persona Biases in Dialogue Systems
It is observed that adopting personas can actually decrease harmful responses, compared to not using any personas, and it is found that persona choices can affect the degree of harms in generated responses and thus should be systematically evaluated before deployment.
Personalizing Dialogue Agents: I have a dog, do you have pets too?
This work collects data and trains models to condition on their given profile information and on information about the person they are talking to, resulting in improved dialogues as measured by next-utterance prediction.
Voice-Based Agents as Personified Things: Assimilation and Accommodation as Equilibration of Doubt
We aim to investigate the nature of doubt regarding voice-based agents by referring to Piaget’s ontological object–subject classification of “thing” and “person” and its associated equilibration processes.
These are not the Stereotypes You are Looking For: Bias and Fairness in Authorial Gender Attribution
This work explores the issue of author gender in two datasets of Dutch literary novels using commonly used descriptive and predictive methods, and shows the importance of controlling for variables in the corpus.