Mental Models of AI Agents in a Cooperative Game Setting

@article{Gero2020MentalMO,
  title={Mental Models of AI Agents in a Cooperative Game Setting},
  author={K. Gero and Zahra Ashktorab and Casey Dugan and Qian Pan and James Johnson and Werner Geyer and Maria Ruiz and Sarah Miller and David R. Millen and Murray Campbell and Sadhana Kumaravel and Wei Zhang},
  journal={Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
  year={2020}
}
  • Published 21 April 2020
  • Computer Science
As more and more forms of AI become prevalent, it becomes increasingly important to understand how people develop mental models of these systems. In this work we study people's mental models of AI in a cooperative word guessing game. We run think-aloud studies in which people play the game with an AI agent; through thematic analysis we identify features of the mental models developed by participants. In a large-scale study we have participants play the game with the AI agent online and use a… 

Citations

Human-AI Collaboration in a Cooperative Game Setting
TLDR
This paper investigates human-AI collaboration in the context of a collaborative AI-driven word association game with partially observable information and finds that when participants believed their partners were human, they rated their partners as more likeable, intelligent, and creative, reported more rapport, and used more positive words to describe their partners' attributes.
Effects of Communication Directionality and AI Agent Differences in Human-AI Interaction
TLDR
This paper investigates social perceptions of AI agents with various directions of communication in a cooperative game setting and finds that the bias against the AI varies with the direction of the communication and with the AI agent.
Exploring Data-Driven Components of Socially Intelligent AI through Cooperative Game Paradigms
The development of new approaches for creating more "life-like" artificial intelligence (AI) capable of natural social interaction is of interest to a number of scientific fields, from virtual…
Player-AI Interaction: What Neural Network Games Reveal About AI as Play
TLDR
It is argued that games are an ideal domain for studying and experimenting with how humans interact with AI, and that game and UX designers should use flow to structure the learning curve of human-AI interaction and incorporate discovery-based learning that lets players experiment with the AI and observe the consequences.
Understanding Mental Models of AI through Player-AI Interaction
TLDR
This work presents the position that AI-based games, particularly their player-AI interaction component, offer an ideal domain for studying how mental models evolve, and provides a case study illustrating the benefits of this approach for explainable AI.
Towards Mutual Theory of Mind in Human-AI Interaction: How Language Reflects What Students Perceive About a Virtual Teaching Assistant
TLDR
It is found that students' perception of Jill Watson's anthropomorphism and intelligence changed significantly over time, and regression analyses reveal that linguistic verbosity, readability, sentiment, diversity, and adaptability reflect students' perception of JW.
Play for Real(ism) - Using Games to Predict Human-AI interactions in the Real World
TLDR
This paper describes the design of the Human-AI Decision Evaluation System (HADES), a test harness capable of interfacing with a game environment, simulating the behavior of an AI-enabled decision support system, and collecting the results of human decision making based on such a system's predictions.
Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies
TLDR
The need to develop common frameworks to account for the design and research spaces of human-AI decision making is highlighted, so that researchers can make rigorous choices in study design, and the research community can build on each other’s work and produce generalizable scientific knowledge.
The Design and Development of Games with a Purpose for AI Systems
TLDR
This paper discusses the design and development of two games with a purpose, Guess the Word and Fool the AI, built to collect data from both crowdworkers and domain experts for two very different machine learning problems.
NPCAMSD-agent: a prospective agent model
TLDR
The implementation of the model and accompanying experiments show that the mental model has a low contract-violation rate, yields high profits in cooperation, and works well for diverse contemporary business cooperation scenarios.

References

SHOWING 1-10 OF 30 REFERENCES
Tell me more?: the effects of mental model soundness on personalizing an intelligent agent
TLDR
The results suggest that by helping end users understand a system's reasoning, intelligent agents may elicit more and better feedback, thus more closely aligning their output with each user's intentions.
Evaluating Visual Conversational Agents via Cooperative Human-AI Games
TLDR
A cooperative game, GuessWhich, is designed to measure human-AI team performance in the specific context of the AI being a visual conversational agent, and a counterintuitive trend is found: although one version of the agent outperforms the other when paired with an AI questioner bot, this improvement in AI-AI performance does not translate into improved human-AI performance.
Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance
TLDR
This work highlights two key properties of an AI’s error boundary, parsimony and stochasticity, and a property of the task, dimensionality, and shows experimentally how these properties affect humans’ mental models of AI capabilities and the resulting team performance.
Too much, too little, or just right? Ways explanations impact end users' mental models
TLDR
It is suggested that completeness is more important than soundness: increasing completeness via certain information types helped participants' mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations.
Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff
TLDR
It is shown that updates that increase AI performance may actually hurt team performance, and a re-training objective is proposed to improve the compatibility of an update by penalizing new errors.
Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols
TLDR
This work studies a setting in which two agents play a referential game and must develop a communication protocol to succeed, with the requirement that the messages they exchange take the form of a language.
Explanation in Artificial Intelligence: Insights from the Social Sciences
Comparing Models of Associative Meaning: An Empirical Investigation of Reference in Simple Language Games
TLDR
It is found that listeners’ behavior reflects direct bigram collocational associations more strongly than word-embedding or semantic knowledge graph-based associations and that there is little evidence for pragmatically sophisticated behavior on the part of either speakers or listeners in this simplified version of the popular game Codenames.
Explaining Explanations in AI
TLDR
This work contrasts the different schools of thought on what makes an explanation in philosophy and sociology, and suggests that machine learning might benefit from viewing the problem more broadly.
I Drive - You Trust: Explaining Driving Behavior Of Autonomous Cars
TLDR
This work evaluates the mental models that expert and non-expert users have of autonomous driving, in order to explain a vehicle's past driving behavior, and identifies a target mental model that enhances the user's mental model with key components drawn from the experts' mental model.