Corpus ID: 3454285

Machine Theory of Mind

@article{Rabinowitz2018MachineTO,
  title={Machine Theory of Mind},
  author={Neil C. Rabinowitz and Frank Perbet and H. Francis Song and Chiyuan Zhang and S. M. Ali Eslami and Matthew M. Botvinick},
  journal={ArXiv},
  year={2018},
  volume={abs/1802.07740}
}
Theory of mind (ToM; Premack & Woodruff, 1978) broadly refers to humans' ability to represent the mental states of others, including their desires, beliefs, and intentions. We propose to train a machine to build such models too. We design a Theory of Mind neural network -- a ToMnet -- which uses meta-learning to build models of the agents it encounters, from observations of their behaviour alone. Through this process, it acquires a strong prior model for agents' behaviour, as well as the… 
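The abstract describes a concrete meta-learning setup: embed an agent's past behaviour, then predict its future actions. The sketch below renders that idea in PyTorch under heavy assumptions (one-hot gridworld observations, discrete actions, random stand-in data); the module names, sizes, and training step are illustrative, not the paper's actual architecture.

```python
# A minimal ToMnet-flavoured predictor (PyTorch). CharacterNet embeds an
# agent's past trajectories into an embedding e_char; PredictionNet predicts
# the next action from the current observation and e_char.
import torch
import torch.nn as nn

class CharacterNet(nn.Module):
    """Summarize past (observation, action) sequences into e_char."""
    def __init__(self, obs_dim, act_dim, embed_dim=8):
        super().__init__()
        self.rnn = nn.LSTM(obs_dim + act_dim, embed_dim, batch_first=True)

    def forward(self, past_traj):                # (B, T, obs_dim + act_dim)
        _, (h, _) = self.rnn(past_traj)
        return h[-1]                             # (B, embed_dim)

class PredictionNet(nn.Module):
    """Predict action logits from the current observation and e_char."""
    def __init__(self, obs_dim, act_dim, embed_dim=8):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(obs_dim + embed_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim))

    def forward(self, obs, e_char):
        return self.head(torch.cat([obs, e_char], dim=-1))

# One meta-learning step: random tensors stand in for a sampled agent's data.
obs_dim, act_dim = 25, 5
char_net = CharacterNet(obs_dim, act_dim)
pred_net = PredictionNet(obs_dim, act_dim)
opt = torch.optim.Adam(
    list(char_net.parameters()) + list(pred_net.parameters()), lr=1e-3)
past = torch.randn(32, 10, obs_dim + act_dim)    # past behaviour of the agent
query_obs = torch.randn(32, obs_dim)             # a new situation
target_act = torch.randint(0, act_dim, (32,))    # what the agent actually did
loss = nn.functional.cross_entropy(
    pred_net(query_obs, char_net(past)), target_act)
opt.zero_grad(); loss.backward(); opt.step()
```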
Theory of Mind From Observation in Cognitive Models and Humans.
TLDR
A cognitive ToM framework is proposed that uses a well-known theory of decisions from experience to construct a computational representation of ToM, and the potential of the IBL observer model to improve human-machine interactions is discussed.
Cognitive Machine Theory of Mind
TLDR
A theoretically grounded, pre-existent cognitive model is used to demonstrate the development of ToM from observation of other agents’ behavior, and the IBL observer is able to infer the agent’s false belief and pass a classic ToM test commonly used in humans.
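Both IBL entries above build on Instance-Based Learning Theory, in which stored instances are weighted by an ACT-R-style activation and blended into a value estimate. Here is a minimal sketch of that machinery, using conventional decay and noise defaults rather than the papers' fitted parameters.

```python
# Minimal Instance-Based Learning value estimate: each stored instance gets an
# activation (recency plus logistic noise), and outcomes are blended with
# softmax weights over activations.
import math
import random

rng = random.Random(0)

def activation(t_now, uses, decay=0.5, sigma=0.25):
    """Activation from the instance's use history plus logistic noise."""
    base = math.log(sum((t_now - t) ** (-decay) for t in uses))
    u = rng.uniform(1e-6, 1 - 1e-6)
    return base + sigma * math.log((1 - u) / u)

def blended_value(t_now, instances):
    """Blend stored outcomes, weighting each instance by its activation."""
    tau = 0.25 * math.sqrt(2)                 # conventional IBLT temperature
    acts = [activation(t_now, uses) for _, uses in instances]
    top = max(acts)
    ws = [math.exp((a - top) / tau) for a in acts]
    z = sum(ws)
    return sum(w / z * outcome for w, (outcome, _) in zip(ws, instances))

# Two remembered outcomes for one option: a recent payoff of 10, an old 0.
# Recency pulls the blended estimate toward 10.
print(blended_value(t_now=20, instances=[(10.0, [18, 19]), (0.0, [1])]))
```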
Multi-Agent Reinforcement Learning
Humans are capable of attributing latent mental contents, such as beliefs or intentions, to others. This social skill is critical in everyday life for reasoning about the potential consequences of their…
A Brain-Inspired Model of Theory of Mind
TLDR
A Brain-inspired Model of Theory of Mind (Brain-ToM model) is proposed, and the model is applied to a humanoid robot to challenge the false belief tasks, two classical tasks designed to understand the mechanisms of ToM from Cognitive Psychology.
Modeling Theory of Mind in Multi-Agent Games Using Adaptive Feedback Control
TLDR
This work proposes embodied and situated agent models based on distributed adaptive control theory to predict actions of other agents in five different game-theoretic tasks, and shows that, compared to pure reinforcement-based strategies, probabilistic learning agents modeled on rational, predictive and other's-model phenotypes perform better on game-theoretic metrics across tasks.
Experiments in Artificial Theory of Mind: From Safety to Story-Telling
  • A. Winfield
  • Medicine, Computer Science
    Front. Robot. AI
  • 2018
TLDR
This paper proposes equipping a robot with a simulation-based internal model of itself and its environment, including other dynamic actors, which can test the robot's next possible actions and hence anticipate the likely consequences of those actions both for itself and others.
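The internal-model idea lends itself to a compact sketch: simulate each candidate action, discard those predicted to cause harm. The toy below assumes a 1-D corridor world with one other actor and a collision-based safety rule, none of which comes from the paper.

```python
# A toy "consequence engine": before acting, simulate each candidate action in
# an internal model of the world (robot plus one other actor) and filter out
# actions predicted to cause harm.
def simulate(world, action):
    """Internal model: predicted next state if the robot takes `action`."""
    robot, human = world
    return (robot + action, human - 1)           # the human keeps walking left

def safe(world):
    robot, human = world
    return robot != human                        # sharing a cell counts as harm

def choose_action(world, candidates=(-1, 0, +1)):
    safe_actions = [a for a in candidates if safe(simulate(world, a))]
    return safe_actions[0] if safe_actions else 0

# Robot at cell 2, human at cell 4: stepping right (+1) would collide after
# the human's move, so the first surviving candidate (-1) is chosen.
print(choose_action((2, 4)))
```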
Deep Interpretable Models of Theory of Mind
TLDR
This work develops an interpretable modular neural framework for modeling the intentions of other observed entities and demonstrates the efficacy of the approach with experiments on data from human participants on a search and rescue task in Minecraft.
Probabilistic Recursive Reasoning for Multi-Agent Reinforcement Learning
TLDR
Under the PR2 framework, decentralized-training-decentralized-execution algorithms are developed and proved to converge in the self-play scenario when there is one Nash equilibrium, and experiments show that it is critical to reason about what the opponents believe about what the agent believes.
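Full PR2 performs variational recursive reasoning, but its starting point can be illustrated with a level-1 simplification: infer the opponent's strategy from observed play, then best-respond to that model. The payoff matrix and observation counts below are invented for illustration.

```python
# Level-1 opponent modelling in a 2x2 game: estimate the opponent's mixed
# strategy from observed actions, then best-respond to it. A strong
# simplification of PR2's recursive scheme.
payoff = [[3.0, 0.0],                            # row player's payoffs
          [5.0, 1.0]]                            # rows: us, columns: opponent

def opponent_model(observed):
    """Smoothed empirical frequencies of the opponent's past actions."""
    counts = [observed.count(a) + 1 for a in (0, 1)]   # +1 Laplace smoothing
    total = sum(counts)
    return [c / total for c in counts]

def best_response(pi_opp):
    """Action maximizing expected payoff against the modelled opponent."""
    values = [sum(p * payoff[a][b] for b, p in enumerate(pi_opp))
              for a in (0, 1)]
    return max(range(2), key=lambda a: values[a]), values

pi = opponent_model([0, 0, 1, 0])     # opponent mostly plays column 0
print(best_response(pi))              # -> action 1; expected payoffs [2.0, ~3.67]
```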
Deep Interpretable Models of Theory of Mind For Human-Agent Teaming
TLDR
This work develops an interpretable modular neural framework for modeling the intentions of other observed entities and demonstrates the efficacy of the approach with experiments on data from human participants on a search and rescue task in Minecraft.
Theory of Minds: Understanding Behavior in Groups Through Inverse Planning
TLDR
This work develops a generative model of multi-agent action understanding based on a novel representation for these latent relationships called Composable Team Hierarchies (CTH), grounded in the formalism of stochastic games and multi-agent reinforcement learning.

References

Showing 1–10 of 79 references
Rational quantitative attribution of beliefs, desires and percepts in human mentalizing
Social cognition depends on our capacity for ‘mentalizing’, or explaining an agent’s behaviour in terms of their mental states. The development and neural substrates of mentalizing are well-studied…
Bayesian Theory of Mind: Modeling Joint Belief-Desire Attribution
TLDR
This work presents a computational framework for understanding Theory of Mind (ToM): the human capacity for reasoning about agents’ mental states such as beliefs and desires, and expresses the predictive model of belief- and desire-dependent action at the heart of ToM as a partially observable Markov decision process (POMDP), and reconstructs an agent’s joint belief state and reward state using Bayesian inference.
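The core computation can be illustrated in a few lines once the POMDP is collapsed to a fully observed 1-D world: score each candidate desire by how well a softmax-rational (Boltzmann) planner with that goal explains the observed actions, then normalize. The world, goals, and rationality parameter beta are assumptions for this toy, not the paper's model.

```python
# Toy belief-desire inference: a softmax-rational agent walks on a line; we
# invert its action likelihoods to get a posterior over goals.
import math

ACTIONS = (-1, +1)                               # step left / step right

def q_value(state, action, goal):
    """Being closer to the goal after acting is better."""
    return -abs((state + action) - goal)

def action_likelihood(state, action, goal, beta=3.0):
    """Boltzmann-rational choice probability given a goal."""
    z = sum(math.exp(beta * q_value(state, a, goal)) for a in ACTIONS)
    return math.exp(beta * q_value(state, action, goal)) / z

def infer_goal(trajectory, goals):
    """Posterior over goals from (state, action) pairs, flat prior."""
    post = {g: 1.0 / len(goals) for g in goals}
    for state, action in trajectory:
        for g in goals:
            post[g] *= action_likelihood(state, action, g)
    norm = sum(post.values())
    return {g: p / norm for g, p in post.items()}

# An agent starting at 0 that keeps stepping right is almost surely heading
# for +3 rather than -3.
print(infer_goal([(0, +1), (1, +1), (2, +1)], goals=[-3, +3]))
```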
Mirror neurons and the simulation theory of mind-reading
TLDR
The activity of mirror neurons, and the fact that observers undergo motor facilitation in the same muscular groups as those utilized by target agents, are findings that accord well with simulation theory but would not be predicted by theory theory.
Psychological Reasoning in Infancy.
TLDR
This evidence indicates that when infants observe an agent act in a simple scene, they infer the agent's mental states and then use these mental states, together with a principle of rationality (and its corollaries of efficiency and consistency), to predict and interpret the agent's subsequent actions and to guide their own actions toward the agent.
Building machines that learn and think like people
TLDR
It is argued that truly human-like learning and thinking machines should build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems, and harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations.
Game Theory of Mind
TLDR
It is shown that it is possible to deduce whether players make inferences about each other and to quantify their sophistication on the basis of choices in sequential games, and that exactly the same sophisticated behaviour can be achieved by optimising the utility function itself (through prosocial utility), producing unsophisticated but apparently altruistic agents.
The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology
We propose that human social cognition is structured around a basic understanding of ourselves and others as intuitive utility maximizers: from a young age, humans implicitly assume that agents…
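As a worked example of that assumption, inverting utility = reward − cost lets an observer bound an agent's rewards from its choices; the step costs below are made up for illustration.

```python
# Inverting utility = reward - cost: if a maximizer paid more to reach B than
# it would have paid to reach A, B's reward must exceed A's by the difference.
def min_reward_gap(cost_chosen, cost_forgone):
    """Lower bound on reward(chosen) - reward(forgone) for a maximizer."""
    return cost_chosen - cost_forgone

# Walking 5 steps to B instead of 1 step to A implies reward(B) >= reward(A) + 4.
print(min_reward_gap(cost_chosen=5, cost_forgone=1))
```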
Help or Hinder: Bayesian Models of Social Goal Inference
TLDR
A model for how people can infer social goals from actions, based on inverse planning in multiagent Markov decision problems (MDPs), is proposed and behavioral evidence is presented in support of this model over a simpler, perceptual cue-based alternative.
Modeling Human Understanding of Complex Intentional Action with a Bayesian Nonparametric Subgoal Model
TLDR
This work models how humans infer subgoals from observations of complex action sequences using a nonparametric Bayesian model, which assumes that observed actions are generated by approximately rational planning over unknown subgoal sequences.
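The nonparametric ingredient is a prior that lets the number of distinct subgoals grow with the data, typically a Chinese Restaurant Process. A minimal CRP sampler is sketched below; the concentration parameter alpha and the segment-level framing are illustrative, not the paper's full model.

```python
# A Chinese Restaurant Process prior over subgoal labels: each new segment
# reuses an existing subgoal with probability proportional to its count, or
# opens a new one with probability proportional to alpha.
import random

def crp_sequence(n, alpha=1.0, seed=0):
    """Draw subgoal labels for n segments from a CRP(alpha) prior."""
    rng = random.Random(seed)
    counts = []                 # counts[k] = segments assigned to subgoal k
    labels = []
    for _ in range(n):
        r = rng.uniform(0, sum(counts) + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:         # reuse subgoal k, p proportional to its count
                counts[k] += 1
                labels.append(k)
                break
        else:                   # open a new subgoal, p proportional to alpha
            counts.append(1)
            labels.append(len(counts) - 1)
    return labels

print(crp_sequence(10))         # e.g. [0, 0, 1, 0, ...]
```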
Learning the Preferences of Ignorant, Inconsistent Agents
TLDR
A behavioral experiment is presented in which human subjects perform preference inference given the same observations of choices as the model; the results show that human subjects explain choices in terms of systematic deviations from optimal behavior, suggesting that they take such deviations into account when inferring preferences.
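The paper's move can be caricatured in a few lines: jointly score hypotheses about what the agent wants and how noisily it decides, so that an odd-looking choice can implicate either. The options, reward hypotheses, and noise levels below are invented for illustration.

```python
# Jointly infer "what does the agent want" and "how noisily does it choose":
# mostly-B choices with an occasional A can be explained by a B-preference,
# by noise, or both.
import math
from itertools import product

choices = ["B", "B", "A", "B"]                         # observed picks
reward_hyps = [{"A": 1.0, "B": 0.0}, {"A": 0.0, "B": 1.0}]
beta_hyps = [0.5, 5.0]                                 # noisy vs. near-optimal

def choice_prob(choice, rewards, beta):
    """Softmax choice probability under a reward/noise hypothesis."""
    z = sum(math.exp(beta * r) for r in rewards.values())
    return math.exp(beta * rewards[choice]) / z

post = {}
for rewards, beta in product(reward_hyps, beta_hyps):
    lik = 1.0
    for c in choices:
        lik *= choice_prob(c, rewards, beta)
    post[(tuple(rewards.items()), beta)] = lik
norm = sum(post.values())
for (rewards, beta), p in sorted(post.items(), key=lambda kv: -kv[1]):
    print(dict(rewards), "beta =", beta, "posterior =", round(p / norm, 3))
```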