Exploration Based Language Learning for Text-Based Games

@inproceedings{Madotto2020ExplorationBL,
  title={Exploration Based Language Learning for Text-Based Games},
  author={Andrea Madotto and Mahdi Namazifar and Joost Huizinga and Piero Molino and Adrien Ecoffet and Huaixiu Zheng and Alexandros Papangelis and Dian Yu and Chandra Khatri and Gokhan Tur},
  booktitle={International Joint Conference on Artificial Intelligence},
  year={2020}
}
This work presents an exploration- and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games. These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents. Moreover, they provide a learning setting in which these skills can be acquired through interactions with an environment rather than using fixed corpora. One aspect that makes these games…
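
Concretely (per the abstract), the agent first explores to collect high-reward trajectories and then imitates them. Below is a minimal behavior-cloning sketch of the imitation phase; the bag-of-words featurizer, the fixed command inventory, and all sizes are illustrative assumptions, not the paper's actual seq2seq text-generation policy.

```python
# Illustrative imitation (behavior-cloning) step on trajectories found by
# exploration. The bag-of-words featurizer and fixed command inventory are
# simplifying assumptions; the paper's agent generates commands as text.
import torch
import torch.nn as nn

VOCAB, N_COMMANDS = 1000, 50  # assumed sizes

def featurize(obs_token_ids):
    """Bag-of-words vector over observation token ids (illustrative)."""
    x = torch.zeros(VOCAB)
    x[obs_token_ids] = 1.0
    return x

policy = nn.Sequential(nn.Linear(VOCAB, 256), nn.ReLU(),
                       nn.Linear(256, N_COMMANDS))
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)

def imitation_step(pairs):
    """One supervised update on (obs_token_ids, command_id) pairs."""
    xs = torch.stack([featurize(obs) for obs, _ in pairs])
    ys = torch.tensor([cmd for _, cmd in pairs])
    loss = nn.functional.cross_entropy(policy(xs), ys)
    optim.zero_grad(); loss.backward(); optim.step()
    return loss.item()
```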

Citations

A Survey of Text Games for Reinforcement Learning Informed by Natural Language

This survey introduces the challenges of reinforcement-learning problems in text games, outlines the tools for generating text games and the environments built with them, and compares the agent architectures currently applied, providing a systematic review of benchmark methodologies and opportunities for future researchers.

Conceptual Reinforcement Learning for Language-Conditioned Tasks

A conceptual reinforcement learning (CRL) framework is proposed to learn concept-like joint representations for language-conditioned policies, building on the idea that concepts in human cognition are compact, invariant representations extracted from similarities across numerous real-world instances.

A Song of Ice and Fire: Analyzing Textual Autotelic Agents in ScienceWorld

This work shows the importance of selectivity in the social peer's feedback, that experience replay needs to over-sample examples of rare goals, and that following self-generated goal sequences on which the agent's competence is intermediate leads to significant improvements in final performance.

Proceedings of the 3rd Wordplay: When Language Meets Games Workshop (Wordplay 2022)

Novel techniques for generating text in a particular style are described, providing an approach to generating engaging, naturalistic conversational responses using knowledge produced by pre-trained language models, given their recent success across a multitude of NLP tasks.

Generative Personas That Behave and Experience Like Humans

Using artificial intelligence (AI) to automatically test a game remains a critical challenge for the development of richer and more complex game worlds and for the advancement of AI at large. One of…

TextWorldExpress: Simulating Text Games at One Million Steps Per Second

This work presents TextWorldExpress, a high-performance simulator that includes implementations of three common text-game benchmarks and increases simulation throughput by approximately three orders of magnitude, reaching over one million steps per second on common desktop hardware.
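
As a rough illustration of how such a throughput figure can be measured, here is a steps-per-second loop against a hypothetical Gym-style environment with string commands; this is not TextWorldExpress's actual API, for which the project documentation should be consulted.

```python
# Steps-per-second measurement sketch. The reset/step interface and the
# valid_commands list are hypothetical, not TextWorldExpress's actual API.
import time, random

def steps_per_second(env, valid_commands, n_steps=100_000):
    env.reset()
    start = time.perf_counter()
    for _ in range(n_steps):
        _, _, done, _ = env.step(random.choice(valid_commands))
        if done:
            env.reset()
    return n_steps / (time.perf_counter() - start)
```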

Automatic Exploration of Textual Environments with Language-Conditioned Autotelic Agents

This extended abstract identifies desirable properties of textual environments that make them a good testbed for autotelic agents, and lists drivers of exploration that would allow such agents to achieve large repertoires of skills in these environments, enabling them to be repurposed for solving the benchmarks implemented in textual environments.

Affordance Extraction with an External Knowledge Database for Text-Based Simulated Environments

The paper illustrates that, despite some challenges, external databases can in principle be used for affordance extraction.

Vygotskian Autotelic Artificial Intelligence: Language and Culture Internalization for Human-Like AI

Building autonomous artificial agents able to grow open-ended repertoires of skills across their lives is one of the fundamental goals of AI. To that end, a promising developmental approach recommends…

Learning Object-Oriented Dynamics for Planning from Text

This work proposes an Object-Oriented Text Dynamics (OOTD) model that enables planning algorithms to solve decision-making problems in text domains, and develops variational objectives under the object-supervised and self-supervised settings to model the stochasticity of predicted dynamics.

References

Showing 1–10 of 54 references

Go-Explore: a New Approach for Hard-Exploration Problems

A new algorithm called Go-Explore exploits the following principles: remember previously visited states, solve simulated environments through any available means, and robustify via imitation learning; this results in a dramatic performance improvement on hard-exploration problems.
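
The exploration phase can be sketched as an archive keyed by a downsampled "cell" representation of the state: the agent returns to a stored snapshot and explores onward from it. The get_state/set_state snapshot interface and the cell function below are hypothetical assumptions.

```python
# Minimal Go-Explore exploration-phase sketch. Assumes a hypothetical
# deterministic env exposing get_state()/set_state() snapshots and a
# cell() function that downsamples an observation into an archive key.
import random

def go_explore(env, cell, n_iters=1000, explore_len=20):
    archive = {}  # cell key -> (snapshot, trajectory, score)
    obs = env.reset()
    archive[cell(obs)] = (env.get_state(), [], 0.0)
    for _ in range(n_iters):
        snapshot, traj, score = random.choice(list(archive.values()))
        env.set_state(snapshot)              # "go": return to a stored state
        for _ in range(explore_len):         # "explore": act from there
            action = random.choice(env.valid_actions())
            obs, reward, done, _ = env.step(action)
            traj, score = traj + [action], score + reward
            key = cell(obs)
            # Keep only the best-scoring entry per cell.
            if key not in archive or score > archive[key][2]:
                archive[key] = (env.get_state(), traj, score)
            if done:
                break
    return archive  # high-scoring trajectories, later robustified by imitation
```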

Deep Reinforcement Learning with a Natural Language Action Space

This paper introduces a novel architecture for reinforcement learning with deep neural networks, designed to handle state and action spaces characterized by natural language, as found in text-based games.
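
In this architecture (the Deep Reinforcement Relevance Network, DRRN), the state text and each candidate action text are embedded separately, and Q(s, a) is their interaction, a dot product in the simplest form. A minimal sketch with assumed feature sizes:

```python
# DRRN-style Q-function sketch: separate encoders for state text and
# action text; Q(s, a) is their inner product. Feature sizes are assumed,
# and the bag-of-words inputs stand in for a proper text encoder.
import torch
import torch.nn as nn

class DRRN(nn.Module):
    def __init__(self, state_dim=500, action_dim=100, hidden=128):
        super().__init__()
        self.state_net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.action_net = nn.Sequential(nn.Linear(action_dim, hidden), nn.ReLU())

    def forward(self, state_feats, action_feats):
        # state_feats: (state_dim,); action_feats: (n_actions, action_dim)
        s = self.state_net(state_feats)      # (hidden,)
        a = self.action_net(action_feats)    # (n_actions, hidden)
        return a @ s                         # one Q-value per candidate action
```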

Counting to Explore and Generalize in Text-based Games

A recurrent RL agent with an episodic exploration mechanism that helps discover good policies in text-based game environments is presented; the agent is observed to learn policies that generalize to unseen games of greater difficulty.
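
An episodic count bonus of this kind can be sketched in a few lines; the 1/sqrt(count) shaping below is a common choice and an assumption here, not necessarily the paper's exact formulation.

```python
# Episodic count-based exploration bonus sketch. Counts reset each episode;
# the 1/sqrt(count) shaping is a common, assumed choice.
from collections import Counter
from math import sqrt

class EpisodicBonus:
    def __init__(self, beta=1.0):
        self.beta, self.counts = beta, Counter()

    def reset(self):                  # call at the start of each episode
        self.counts.clear()

    def __call__(self, state_hash):  # any hashable state representation
        self.counts[state_hash] += 1
        return self.beta / sqrt(self.counts[state_hash])
```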

Sequence to Sequence Learning with Neural Networks

This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions about sequence structure, and finds that reversing the order of the words in all source sentences markedly improved the LSTM's performance, because doing so introduced many short-term dependencies between the source and target sentences that made the optimization problem easier (sketched below).

In NeurIPS, pages 3104–3112, 2014.
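
The reversal trick itself is a one-line preprocessing step: the source token order is flipped while the target is left intact, so the beginning of the source ends up adjacent to the beginning of the target.

```python
# Source-reversal preprocessing (Sutskever et al., 2014): reverse the
# source token order, leave the target untouched.
def make_pair(src_tokens, tgt_tokens):
    return src_tokens[::-1], tgt_tokens

src, tgt = make_pair(["the", "cat", "sat"], ["le", "chat", "est", "assis"])
# src == ["sat", "cat", "the"]; "the" is now close to the first target token,
# creating the short-term dependencies that ease optimization.
```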

First TextWorld Problems: A Reinforcement and Language Learning Challenge

In NeurIPS Workshop, 2018.

Action Assembly: Sparse Imitation Learning for Text Based Games with Combinatorial Action Spaces

A new compressed sensing algorithm named IK-OMP, an extension of Orthogonal Matching Pursuit (OMP), is introduced and incorporated into a supervised imitation learning setting; it solves the entire text-based game of Zork1, with an action space of approximately 10 million actions, given both perfect and noisy demonstrations.
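
For context, plain Orthogonal Matching Pursuit, which IK-OMP extends, greedily picks the dictionary atom most correlated with the current residual and re-fits by least squares. A minimal numpy sketch (IK-OMP itself is not reproduced here):

```python
# Plain Orthogonal Matching Pursuit (OMP) sketch; IK-OMP is the paper's
# extension and is not reproduced here.
import numpy as np

def omp(D, y, k):
    """Approximate y as a k-sparse combination of the columns of D."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares re-fit on the chosen support.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```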

Learn What Not to Learn: Action Elimination with Deep Reinforcement Learning

This work proposes the Action-Elimination Deep Q-Network (AE-DQN) architecture, which combines a deep RL algorithm with an Action Elimination Network (AEN) that eliminates sub-optimal actions.
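
The central mechanism is a mask: the elimination network predicts which actions are likely invalid, and the greedy action is chosen only among the survivors. A minimal sketch; the 0.5 threshold and the network shapes are assumptions for illustration.

```python
# Action-elimination sketch: mask out actions the elimination network
# flags as invalid before taking the greedy action. The threshold and
# network shapes are assumptions.
import torch

def act(q_net, elim_net, state, threshold=0.5):
    q_values = q_net(state)                     # (n_actions,)
    p_invalid = torch.sigmoid(elim_net(state))  # (n_actions,) in [0, 1]
    q_masked = q_values.masked_fill(p_invalid > threshold, float("-inf"))
    return int(q_masked.argmax())
```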

Learning from delayed rewards

B. Kröse. Robotics Auton. Syst., 1995.
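
"Learning from delayed rewards" is the classic framing of Q-learning, which propagates delayed rewards by bootstrapping through the max over next-state values. The tabular update, for reference:

```python
# Tabular Q-learning update (Watkins-style): propagate delayed rewards
# through the bootstrapped max over next-state action values.
from collections import defaultdict

Q = defaultdict(float)  # (state, action) -> value estimate

def q_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
```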

...