Corpus ID: 238531780

Graph Convolutional Memory using Topological Priors

Steven D. Morad, Stephan Liwicki, Ryan Kortvelesy, Roberto Mecca, Amanda Prorok
Solving partially observable Markov decision processes (POMDPs) is critical when applying reinforcement learning to real-world problems, where agents have an incomplete view of the world. We present graph convolutional memory (GCM), the first hybrid memory model for solving POMDPs with reinforcement learning. GCM uses either human-defined or data-driven topological priors to form graph neighborhoods, combining them into a larger network topology using dynamic programming. We query the graph…
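The abstract only sketches the mechanism, so the toy Python below illustrates the general idea of a graph-structured memory built from a topological prior, not the paper's actual algorithm. `GraphMemory`, `write`, `read`, and the `temporal` prior are all hypothetical names, and a single numpy graph-convolution layer with random weights stands in for GCM's learned GNN.

```python
import numpy as np

def normalized_adjacency(A):
    # Symmetric GCN normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

class GraphMemory:
    """Hypothetical sketch: store each past observation as a graph node,
    link nodes with a topological prior, and summarize via one graph-conv layer."""
    def __init__(self, obs_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(obs_dim, hidden_dim))
        self.nodes = []          # one feature vector per past observation
        self.edges = set()       # undirected edges chosen by the prior

    def write(self, obs, prior):
        """prior(i, j, nodes) -> bool decides whether to connect nodes i and j."""
        i = len(self.nodes)
        self.nodes.append(np.asarray(obs, dtype=float))
        for j in range(i):
            if prior(i, j, self.nodes):
                self.edges.add((i, j))

    def read(self):
        """One GCN layer over the memory graph; return the newest node's embedding."""
        n = len(self.nodes)
        X = np.stack(self.nodes)
        A = np.zeros((n, n))
        for i, j in self.edges:
            A[i, j] = A[j, i] = 1.0
        H = np.tanh(normalized_adjacency(A) @ X @ self.W)
        return H[-1]

# Example human-defined prior: connect each observation to its predecessor.
temporal = lambda i, j, nodes: j == i - 1

mem = GraphMemory(obs_dim=4, hidden_dim=8)
for t in range(5):
    mem.write(np.ones(4) * t, temporal)
summary = mem.read()   # embedding of the newest observation, shape (8,)
```

Swapping `temporal` for, say, a spatial-proximity predicate changes the memory topology without touching the read path, which is the flexibility the human-defined vs. data-driven prior distinction points at.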



Neural Algorithms for Graph Navigation

This work presents a framework for graph meta-learning and proposes an agent equipped with external memory and local action priors adapted to the underlying graphs, showing substantial improvement in one-shot performance over baseline agents.

Graph Attention Memory for Visual Navigation

Experimental results show that the GAM-based navigation system significantly improves learning efficiency and outperforms all baselines in average success rate.

Sparse Graphical Memory for Robust Planning

This work introduces SGM, a new data structure that stores states and feasible transitions in a sparse memory, which significantly outperforms state-of-the-art methods on long-horizon, sparse-reward visual navigation tasks.
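As a rough illustration of the sparse-memory idea (not SGM's actual two-way consistency criterion), the sketch below merges perceptually close states under a hypothetical distance threshold `tau` and records feasible transitions between consecutively visited retained states; `SparseMemory` and its members are invented names.

```python
import numpy as np

class SparseMemory:
    """Hypothetical sketch of a sparse graphical memory: keep a state only if
    no stored state is interchangeable with it (here: within `tau` in feature
    space), and record feasible transitions between the kept states."""
    def __init__(self, tau):
        self.tau = tau
        self.states = []          # retained landmark states
        self.transitions = set()  # (i, j) feasible-transition edges
        self._last = None         # index of the previously visited landmark

    def observe(self, s):
        s = np.asarray(s, dtype=float)
        # Merge with an existing landmark if one is close enough.
        for i, t in enumerate(self.states):
            if np.linalg.norm(s - t) <= self.tau:
                idx = i
                break
        else:
            idx = len(self.states)
            self.states.append(s)
        # Visiting two landmarks consecutively implies a feasible transition.
        if self._last is not None and self._last != idx:
            self.transitions.add((self._last, idx))
        self._last = idx
        return idx

mem = SparseMemory(tau=0.5)
for s in [[0.0], [0.1], [1.0], [2.0], [1.1]]:
    mem.observe(s)
# Nearby states are merged, so only three landmarks survive the trajectory.
```

The sparsity comes entirely from the merge test: memory growth tracks the number of distinguishable regions visited, not the number of raw observations.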

Neural Map: Structured Memory for Deep Reinforcement Learning

This paper develops a memory system with an adaptable write operator that is customized to the sorts of 3D environments that DRL agents typically interact with and demonstrates empirically that the Neural Map surpasses previous DRL memories on a set of challenging 2D and 3D maze environments.

Semi-parametric Topological Memory for Navigation

This work proposes a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals: a (non-parametric) graph whose nodes correspond to locations in the environment, paired with a deep network that retrieves nodes from the graph based on observations.
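The two components described above — a landmark graph plus a retrieval step — can be sketched minimally as follows, assuming an L2 nearest-neighbor stand-in for the learned retrieval network and plain BFS for waypoint planning; `localize` and `next_waypoint` are hypothetical helpers, not the paper's API.

```python
from collections import deque
import numpy as np

def localize(obs, nodes):
    """Stand-in for the retrieval network: nearest stored node by L2 distance
    (the actual system uses a learned similarity; this is a hypothetical proxy)."""
    d = [np.linalg.norm(np.asarray(obs) - n) for n in nodes]
    return int(np.argmin(d))

def next_waypoint(graph, start, goal):
    """BFS over the landmark graph; return the first node on a shortest path."""
    parent = {start: None}
    q = deque([start])
    while q:
        u = q.popleft()
        if u == goal:
            break
        for v in graph.get(u, []):
            if v not in parent:
                parent[v] = u
                q.append(v)
    if goal not in parent:
        return None          # goal unreachable from start
    node = goal
    while parent[node] is not None and parent[node] != start:
        node = parent[node]  # walk back toward the start node
    return node if node != start else goal

# Tiny chain-shaped landmark graph: 0 -- 1 -- 2.
nodes = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
graph = {0: [1], 1: [0, 2], 2: [1]}

here = localize([0.2], nodes)       # nearest landmark to the observation
wp = next_waypoint(graph, here, 2)  # intermediate landmark to head toward
```

A low-level controller would then be asked to reach `wp`, which is the semi-parametric split: the graph handles long-horizon structure while a learned policy handles local motion.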

Autonomous Exploration Under Uncertainty via Deep Reinforcement Learning on Graphs

This work proposes a novel approach that uses graph neural networks (GNNs) in conjunction with deep reinforcement learning (DRL), enabling decision-making over graphs containing exploration information to predict a robot's optimal sensing action in belief space.

Bayesian Relational Memory for Semantic Visual Navigation

We introduce a new memory architecture, Bayesian Relational Memory (BRM), to improve the generalization ability of semantic visual navigation agents in unseen environments, where an agent is given a…

A Behavioral Approach to Visual Navigation with Graph Localization Networks

A behavioral approach to visual navigation with topological maps, which uses graph neural networks to localize the agent in the map and decomposes the action space into primitive behaviors implemented as convolutional or recurrent neural networks.

Control of Memory, Active Perception, and Action in Minecraft

These tasks are designed to emphasize, in a controllable manner, issues that pose challenges for RL methods, including partial observability, delayed rewards, high-dimensional visual observations, and the need to use active perception correctly in order to perform well.

Memory-based control with recurrent neural networks

This work extends two related, model-free algorithms for continuous control to partially observed domains using recurrent neural networks trained with backpropagation through time. It finds that recurrent deterministic and stochastic policies learn similarly good solutions to these tasks, including a water maze where the agent must learn effective search strategies.