Overcoming Referential Ambiguity in Language-Guided Goal-Conditioned Reinforcement Learning

Hugo Caselles-Dupré, Olivier Sigaud, Mohamed Chetouani
Teaching an agent to perform new tasks using natural language can easily be hindered by ambiguities in interpretation. When a teacher provides an instruction to a learner about an object by referring to its features, the learner can misunderstand the teacher's intentions, for instance if the instruction ambiguously refers to features of the object, a phenomenon called referential ambiguity. We study how two concepts derived from cognitive sciences can help resolve those referential…




Learning from natural instructions

This work suggests viewing the process of learning a decision function as a natural language lesson interpretation problem, as opposed to learning from labeled examples.

Reinforcement Learning for Mapping Instructions to Actions

This paper presents a reinforcement learning approach for mapping natural language instructions to sequences of executable actions, and uses a policy gradient algorithm to estimate the parameters of a log-linear model for action selection.

Inferring Rewards from Language in Context

On a new interactive flight-booking task with natural language, the model more accurately infers rewards and predicts optimal actions in unseen environments, in comparison to past work that first maps language to actions and then maps actions to rewards.

Pedagogical Demonstrations and Pragmatic Learning in Artificial Tutor-Learner Interactions

This paper introduces pedagogical teaching and pragmatic learning mechanisms that are general enough to be applied to any artificial agent policy learning scenario (multi-armed bandit, evolutionary strategies, reinforcement learning) and shows substantial improvements over standard learning from demonstrations.

Help Me Explore: Minimal Social Interventions for Graph-Based Autotelic Agents

In the quest for autonomous agents learning open-ended repertoires of skills, most works take a Piagetian perspective: learning trajectories are the results of interactions between developmental…

Inferential social learning: cognitive foundations of human social learning and teaching

Towards Teachable Autonomous Agents

This paper elucidates the key obstacles standing in the way of designing teachable and autonomous agents, focusing on autotelic agents, i.e., agents equipped with forms of intrinsic motivation that enable them to represent, self-generate, and pursue their own goals.

Intrinsically Motivated Goal-Conditioned Reinforcement Learning: a Short Survey

This survey proposes a typology of methods at the intersection of deep RL and developmental approaches, in which deep RL algorithms are trained to tackle the developmental robotics problem of autonomously acquiring open-ended repertoires of skills.

Grounding Language to Autonomously-Acquired Skills via Goal Generation

This work proposes a new conceptual approach to language-conditioned RL: the Language-Goal-Behavior architecture (LGB), which decouples skill learning and language grounding via an intermediate semantic representation of the world.