Corpus ID: 17473440

Learning to Communicate to Solve Riddles with Deep Distributed Recurrent Q-Networks

@article{Foerster2016LearningTC,
  title={Learning to Communicate to Solve Riddles with Deep Distributed Recurrent Q-Networks},
  author={Jakob N. Foerster and Yannis Assael and Nando de Freitas and Shimon Whiteson},
  journal={ArXiv},
  year={2016},
  volume={abs/1602.02672}
}
We propose deep distributed recurrent Q-networks (DDRQN), which enable teams of agents to learn to solve communication-based coordination tasks. […] In addition, we present ablation experiments that confirm that each of the main components of the DDRQN architecture is critical to its success.
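
Below is a minimal sketch of what such a recurrent Q-network agent can look like, assuming the components reported for DDRQN (last-action inputs, inter-agent weight sharing conditioned on an agent index, and disabling experience replay during training); layer sizes and names are illustrative, not the authors' code.

```python
# Minimal sketch of a DDRQN-style recurrent Q-network (PyTorch).
# One network is shared by all agents; each agent feeds in its own
# observation, its last action, and its agent index. Experience replay
# would be disabled in the training loop, per the paper's third component.
import torch
import torch.nn as nn

class RecurrentQNet(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, n_agents: int, hidden: int = 128):
        super().__init__()
        # obs + last action (one-hot) + agent id (one-hot)
        in_dim = obs_dim + n_actions + n_agents
        self.rnn = nn.LSTM(in_dim, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, obs, last_action, agent_id, state=None):
        # obs: (batch, time, obs_dim); last_action, agent_id: one-hot, same leading dims
        x = torch.cat([obs, last_action, agent_id], dim=-1)
        h, state = self.rnn(x, state)
        return self.q_head(h), state  # Q-values per timestep, plus recurrent state
```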

Learning to Communicate with Deep Multi-Agent Reinforcement Learning

By embracing deep neural networks, this work is able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability.

Learning to Communicate in Multi-Agent Reinforcement Learning : A Review

This work considers the issue of multiple agents learning to communicate through reinforcement learning within partially observable environments, with a focus on the cost of information asymmetry, and introduces an experimental setup that exposes this cost in a cooperative-competitive game.

R-MADDPG for Partially Observable Environments and Limited Communication

A deep recurrent multiagent actor-critic framework (R-MADDPG) for handling multiagent coordination under partially observable settings and limited communication and demonstrates that the resulting framework learns time dependencies for sharing missing observations, handling resource limitations, and developing different communication patterns among agents.
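
As a rough illustration of the recurrent ingredient, the sketch below shows a centralized critic with an LSTM in a MADDPG-style setup, so missing observations can be bridged over time; all shapes and names are assumptions rather than the authors' implementation.

```python
# Illustrative recurrent centralized critic in the spirit of R-MADDPG (PyTorch).
# The critic sees all agents' observations and actions and keeps an LSTM
# state across timesteps. Sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class RecurrentCritic(nn.Module):
    def __init__(self, joint_obs_dim: int, joint_act_dim: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(joint_obs_dim + joint_act_dim, hidden, batch_first=True)
        self.value_head = nn.Linear(hidden, 1)

    def forward(self, joint_obs, joint_actions, state=None):
        # joint_obs: (batch, time, joint_obs_dim); joint_actions likewise
        h, state = self.rnn(torch.cat([joint_obs, joint_actions], dim=-1), state)
        return self.value_head(h), state  # Q(o_1..N, a_1..N) per timestep
```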

Multiagent Learning and Coordination with Clustered Deep Q-Network

A multiagent, multi-level solution named Clustered Deep Q-Network (CDQN) is proposed to overcome scalability issues caused by the number of agents involved in deep reinforcement learning methods.

Learning when to Communicate at Scale in Multiagent Cooperative and Competitive Tasks

This paper presents the Individualized Controlled Continuous Communication Model (IC3Net), which has better training efficiency than a simple continuous communication model and can be applied to semi-cooperative and competitive settings as well as cooperative ones.
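
A hedged sketch of the gating idea follows: each agent emits a binary gate deciding whether its hidden state is broadcast to the others, and incoming messages are averaged over the gating agents. The Bernoulli sampling here stands in for the paper's discrete gating action (trained with a policy gradient) and is illustrative only.

```python
# Sketch of IC3Net-style gated communication (PyTorch). A per-agent gate
# zeroes out the outgoing message of "silent" agents; each agent receives
# the mean of the other agents' gated messages.
import torch
import torch.nn as nn

class GatedComm(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.gate_head = nn.Linear(hidden, 1)     # per-agent gate logit
        self.msg_proj = nn.Linear(hidden, hidden) # message transform

    def forward(self, h):
        # h: (n_agents, hidden), each agent's recurrent hidden state
        gate = torch.bernoulli(torch.sigmoid(self.gate_head(h)))  # (n_agents, 1)
        msgs = gate * self.msg_proj(h)                            # silence gated-off agents
        n = h.size(0)
        # message to agent i: mean of all other agents' gated messages
        comm = (msgs.sum(0, keepdim=True) - msgs) / max(n - 1, 1)
        return comm  # fed into each agent's next recurrent step
```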

Partially Observable Multi-Agent RL with Enhanced Deep Distributed Recurrent Q-Network

This article proposes a method based on importance sampling that addresses DDRQN's inability to use experience replay; results on the SC2LE environment confirm that this method significantly improves performance compared to the original DDRQN.

Multi-agent Double Deep Q-Networks

This work proposes the Multi-agent Double Deep Q-Networks algorithm, an extension of Deep Q-Networks to the multi-agent paradigm, and demonstrates how it can generalize to similar tasks and to larger teams, owing to the strength of deep-learning techniques and their viability for transfer learning approaches.

Learning Multiagent Communication with Backpropagation

A simple neural model called CommNet, which uses continuous communication for fully cooperative tasks, is explored; the agents' ability to learn to communicate amongst themselves is demonstrated, yielding improved performance over non-communicative agents and baselines.
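
The core CommNet step is easy to state: each agent's hidden vector is updated from its own state plus the mean of the other agents' hidden states, which serves as the continuous communication channel. A minimal sketch, with illustrative sizes:

```python
# One CommNet-style communication step (PyTorch): the communication vector
# for agent i is the mean of the other agents' hidden states, mixed into
# agent i's next hidden state through learned linear maps.
import torch
import torch.nn as nn

class CommNetStep(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.W_h = nn.Linear(hidden, hidden, bias=False)  # self transform
        self.W_c = nn.Linear(hidden, hidden, bias=False)  # communication transform

    def forward(self, h):
        # h: (n_agents, hidden)
        n = h.size(0)
        c = (h.sum(0, keepdim=True) - h) / max(n - 1, 1)  # mean of the others
        return torch.tanh(self.W_h(h) + self.W_c(c))
```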

Multiagent cooperation and competition with deep reinforcement learning

The present work shows that Deep Q-Networks can become a useful tool for studying decentralized learning of multiagent systems coping with high-dimensional environments and describes the progression from competitive to collaborative behavior when the incentive to cooperate is increased.

Communication and Cooperation in Decentralized Multi-Agent Reinforcement Learning

This thesis tests the ability of an existing deep multi-agent actor-critic algorithm to cope with partially observable scenarios and proposes to adapt this algorithm to use recurrent neural networks, which enables using information from past steps to improve action-value predictions.
...

References

SHOWING 1-10 OF 45 REFERENCES

Multiagent cooperation and competition with deep reinforcement learning

The present work shows that Deep Q-Networks can become a useful tool for studying decentralized learning of multiagent systems coping with high-dimensional environments and describes the progression from competitive to collaborative behavior when the incentive to cooperate is increased.

Massively Parallel Methods for Deep Reinforcement Learning

This work presents the first massively distributed architecture for deep reinforcement learning, using a distributed neural network to represent the value function or behaviour policy, and a distributed store of experience to implement the Deep Q-Network algorithm.

Deep Recurrent Q-Learning for Partially Observable MDPs

The effects of adding recurrence to a Deep Q-Network are investigated by replacing the first post-convolutional fully-connected layer with a recurrent LSTM, which successfully integrates information through time and replicates DQN's performance on standard Atari games and on partially observed equivalents featuring flickering game screens.
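
A minimal sketch of that modification, assuming the standard Atari DQN convolutional stack and single 84x84 frames per timestep (rather than a frame stack); sizes follow the usual DQN layout and are not copied from the paper's code.

```python
# DRQN-style network (PyTorch): the DQN conv stack is kept, but the first
# post-convolutional fully-connected layer is replaced with an LSTM so that
# information integrates across timesteps.
import torch
import torch.nn as nn

class DRQN(nn.Module):
    def __init__(self, n_actions: int, hidden: int = 512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
        )
        self.rnn = nn.LSTM(64 * 7 * 7, hidden, batch_first=True)  # replaces the FC layer
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, frames, state=None):
        # frames: (batch, time, 1, 84, 84), one frame per step instead of a stack
        b, t = frames.shape[:2]
        z = self.conv(frames.flatten(0, 1)).flatten(1).view(b, t, -1)
        h, state = self.rnn(z, state)
        return self.q_head(h), state
```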

QueryPOMDP: POMDP-Based Communication in Multiagent Systems

The experimental results show that the approach successfully exploits sparse interactions: it can effectively identify the situations in which communication is beneficial, and trade off the cost of communication against overall task performance.

Dueling Network Architectures for Deep Reinforcement Learning

This paper presents a new neural network architecture for model-free reinforcement learning that leads to better policy evaluation in the presence of many similar-valued actions and enables the RL agent to outperform the state-of-the-art on the Atari 2600 domain.
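
The dueling idea reduces to one line: Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a'), with separate value and advantage streams on top of a shared encoder; the mean subtraction is the identifiability fix used in the paper. A minimal sketch of the head:

```python
# Dueling head (PyTorch): separate V(s) and A(s, a) streams, combined with
# mean-subtracted advantages to give Q(s, a).
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    def __init__(self, feat_dim: int, n_actions: int):
        super().__init__()
        self.value = nn.Linear(feat_dim, 1)              # V(s) stream
        self.advantage = nn.Linear(feat_dim, n_actions)  # A(s, a) stream

    def forward(self, features):
        v = self.value(features)                     # (batch, 1)
        a = self.advantage(features)                 # (batch, n_actions)
        return v + a - a.mean(dim=-1, keepdim=True)  # Q(s, a)
```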

Prioritized Experience Replay

A framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently, in Deep Q-Networks, a reinforcement learning algorithm that achieved human-level performance across many Atari games.
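
The proportional variant of prioritization can be sketched in a few lines: sample transitions with probability proportional to |TD error| raised to alpha, and correct the induced bias with importance-sampling weights. This O(N) sketch is for clarity; a production version would use the paper's sum-tree data structure.

```python
# Proportional prioritized sampling with importance-sampling correction (NumPy).
import numpy as np

def sample_prioritized(td_errors, batch_size, alpha=0.6, beta=0.4, eps=1e-6):
    p = (np.abs(td_errors) + eps) ** alpha
    probs = p / p.sum()                    # P(i) = p_i^alpha / sum_k p_k^alpha
    idx = np.random.choice(len(td_errors), batch_size, p=probs)
    n = len(td_errors)
    weights = (n * probs[idx]) ** (-beta)  # importance-sampling correction
    return idx, weights / weights.max()    # normalize weights for stability
```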

Deep Reinforcement Learning with Double Q-Learning

This paper proposes a specific adaptation to the DQN algorithm and shows that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.
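
The adaptation is essentially a one-line change to the bootstrap target: the online network selects the greedy next action while the target network evaluates it, which decouples selection from evaluation and reduces overestimation. A minimal sketch:

```python
# Double DQN target (PyTorch): select the action with the online network,
# evaluate it with the target network.
import torch

@torch.no_grad()
def double_dqn_target(online_net, target_net, rewards, next_obs, dones, gamma=0.99):
    best_actions = online_net(next_obs).argmax(dim=1, keepdim=True)   # selection
    next_q = target_net(next_obs).gather(1, best_actions).squeeze(1)  # evaluation
    return rewards + gamma * (1.0 - dones.float()) * next_q
```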

Coordinating multi-agent reinforcement learning with limited communication

This paper develops a learning approach that generalizes previous coordinated MARL approaches that use DCOP algorithms and enables MARL to be conducted over a spectrum from independent learning (without communication) to fully coordinated learning depending on agents' communication bandwidth.

Recurrent Reinforcement Learning: A Hybrid Approach

This work investigates a deep-learning approach to learning the representation of states in partially observable tasks, with minimal prior knowledge of the domain, and proposes a new family of hybrid models that combines the strengths of both supervised learning and reinforcement learning, trained in a joint fashion.

Cooperative Multi-Agent Learning: The State of the Art

This survey attempts to draw from multi-agent learning work in a spectrum of areas, including RL, evolutionary computation, game theory, complex systems, agent modeling, and robotics, and finds that this broad view leads to a division of the work into two categories.