Corpus ID: 210164606

Exploiting Language Instructions for Interpretable and Compositional Reinforcement Learning

  • M. V. D. Meer, M. Pirotta, Elia Bruni
  • Published 2020
  • Computer Science
  • ArXiv
  • In this work, we present an alternative approach to making an agent compositional through the use of a diagnostic classifier. Motivated by the need for explainable agents in automated decision processes, we attempt to interpret the latent space of an RL agent to identify its current objective within a complex language instruction. Results show that the classification process causes changes in the hidden states which make them more easily interpretable, but also causes a shift in zero-shot…
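
The diagnostic-classifier idea described in the abstract can be sketched as a linear probe trained on an agent's recorded hidden states to predict which sub-goal of the instruction it is currently pursuing. Everything below (the synthetic hidden states, shapes, learning rate, and sub-goal labels) is an illustrative assumption, not the paper's actual model or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 600 recorded hidden states of dimension 32, each
# collected while the agent pursued one of 3 sub-goals of an instruction.
n_samples, hidden_dim, n_subgoals = 600, 32, 3

# Synthetic stand-in for the agent's hidden states: each sub-goal shifts
# the state along a different random direction, plus Gaussian noise.
directions = rng.normal(size=(n_subgoals, hidden_dim))
labels = rng.integers(0, n_subgoals, size=n_samples)
hidden = directions[labels] + 0.5 * rng.normal(size=(n_samples, hidden_dim))

# Linear softmax probe (the "diagnostic classifier") trained with
# plain gradient descent on the cross-entropy loss.
W = np.zeros((hidden_dim, n_subgoals))
b = np.zeros(n_subgoals)
onehot = np.eye(n_subgoals)[labels]
for _ in range(300):
    logits = hidden @ W + b
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    grad = (probs - onehot) / n_samples   # d(loss)/d(logits)
    W -= 0.5 * hidden.T @ grad
    b -= 0.5 * grad.sum(axis=0)

# High accuracy suggests the current sub-goal is linearly decodable
# from the hidden state, i.e. the representation is interpretable.
accuracy = (np.argmax(hidden @ W + b, axis=1) == labels).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

In the paper's setting the probe would be trained on states logged from the actual policy network rather than synthetic vectors; the probe's gradients can also be fed back into the agent, which is what the abstract reports as changing the hidden states themselves.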
