Distributed Multi-Agent Deep Reinforcement Learning for Robust Coordination against Noise

@article{Motokawa2022DistributedMD,
  title={Distributed Multi-Agent Deep Reinforcement Learning for Robust Coordination against Noise},
  author={Yoshinari Motokawa and Toshiharu Sugawara},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.09705}
}
In multi-agent systems, noise reduction techniques are important for improving the overall system reliability, as agents are required to rely on limited environmental information to develop cooperative and coordinated behaviors with the surrounding agents. However, previous studies have often applied centralized noise reduction methods to build robust and versatile coordination in noisy multi-agent environments, while distributed and decentralized autonomous agents are more plausible for real…

References

Multi-Agent Actor-Critic with Hierarchical Graph Attention Network

TLDR
This work proposes a model that conducts both representation learning for multiple agents using a hierarchical graph attention network and policy learning using a multi-agent actor-critic, and demonstrates that the proposed model outperforms existing methods in several mixed cooperative and competitive tasks.
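
To make this concrete, below is a minimal single-head attention over per-agent embeddings in PyTorch. It sketches the kind of inter-agent weighting a graph attention layer performs; the paper's two-level hierarchy and the actor-critic heads are omitted, and all names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AgentAttention(nn.Module):
    """Single-head attention over agent embeddings (a simplified stand-in
    for one level of a hierarchical graph attention network)."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, agent_embeddings):  # shape: (num_agents, dim)
        q = self.query(agent_embeddings)
        k = self.key(agent_embeddings)
        v = self.value(agent_embeddings)
        # Each agent attends to every agent; weights reflect relevance.
        attn = F.softmax(q @ k.t() / k.shape[-1] ** 0.5, dim=-1)
        return attn @ v  # aggregated representation per agent
```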

NROWAN-DQN: A Stable Noisy Network with Noise Reduction and Online Weight Adjustment for Exploration

Noisy Networks for Exploration

TLDR
It is found that replacing the conventional exploration heuristics for A3C, DQN, and dueling agents with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub-human to super-human performance.
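
As a concrete sketch, NoisyNet-style exploration can be obtained by swapping the Q-network's nn.Linear layers for noisy ones whose weight-noise scales are learned. The factorised-Gaussian variant below follows the paper's general recipe, but the initialisation constants and names are assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Linear layer with learned, factorised Gaussian weight noise."""
    def __init__(self, in_features, out_features, sigma0=0.5):
        super().__init__()
        bound = in_features ** -0.5
        self.mu_w = nn.Parameter(torch.empty(out_features, in_features).uniform_(-bound, bound))
        self.sigma_w = nn.Parameter(torch.full((out_features, in_features), sigma0 * bound))
        self.mu_b = nn.Parameter(torch.zeros(out_features))
        self.sigma_b = nn.Parameter(torch.full((out_features,), sigma0 * bound))

    @staticmethod
    def _f(x):  # signed-sqrt transform used by factorised noise
        return x.sign() * x.abs().sqrt()

    def forward(self, x):
        # Fresh factorised noise each forward pass: eps_w = f(eps_out) f(eps_in)^T
        eps_in = self._f(torch.randn(self.mu_w.shape[1], device=x.device))
        eps_out = self._f(torch.randn(self.mu_w.shape[0], device=x.device))
        w = self.mu_w + self.sigma_w * torch.outer(eps_out, eps_in)
        b = self.mu_b + self.sigma_b * eps_out
        return F.linear(x, w, b)
```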

Independent reinforcement learners in cooperative Markov games: a survey regarding coordination problems

TLDR
This paper identifies several challenges responsible for the non-coordination of independent agents: Pareto-selection, non-stationarity, stochasticity, alter-exploration, and shadowed equilibria; the survey can serve as a basis for choosing the appropriate algorithm for a new domain.

Reinforcement Learning with Perturbed Rewards

TLDR
This work develops a robust RL framework that enables agents to learn in noisy environments where only perturbed rewards are observed, and shows that policies trained on the estimated surrogate reward achieve higher expected rewards and converge faster than existing baselines.
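
A worked sketch of the surrogate-reward idea, under two illustrative assumptions: rewards take known discrete levels, and the perturbation is captured by an estimated confusion matrix C with C[i, j] = P(observe level j | true level i). Solving C r̂ = r makes the surrogate unbiased under that noise model, since E[r̂ | true level i] = (C r̂)[i] = r_i.

```python
import numpy as np

true_levels = np.array([-1.0, 1.0])   # discrete reward support (assumption)
C = np.array([[0.8, 0.2],
              [0.3, 0.7]])            # estimated reward confusion matrix (assumption)

# Surrogate values: train on r_hat[j] whenever level j is observed.
r_hat = np.linalg.solve(C, true_levels)

def surrogate(observed_level_idx):
    # Replace each observed (perturbed) reward with its surrogate value.
    return float(r_hat[observed_level_idx])
```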

Exploring Parameter Space with Structured Noise for Meta-Reinforcement Learning

TLDR
ESNPS utilizes meta-learning and directly uses meta-policy parameters, which contain prior knowledge, as structured noise to perturb the base model for effective exploration in new tasks; experimental results demonstrate the superiority of ESNPS over a number of competitive baselines.
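
A rough sketch of that structured-noise idea, under stated assumptions: rather than adding isotropic Gaussian noise, the base policy's parameters are nudged toward parameters sampled from a pool of meta-learned policies, so exploration is biased by prior knowledge. The mixing coefficient, names, and sampling rule below are illustrative, not the paper's exact update.

```python
import numpy as np

def structured_perturbation(theta_base, meta_thetas, alpha=0.1, rng=None):
    """Perturb base parameters using a sampled meta-policy as structured noise."""
    if rng is None:
        rng = np.random.default_rng()
    # The offset toward one meta-policy's parameters acts as prior-informed noise.
    theta_meta = meta_thetas[rng.integers(len(meta_thetas))]
    return theta_base + alpha * (theta_meta - theta_base)
```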

Parameter Space Noise for Exploration

TLDR
This work demonstrates that RL with parameter noise learns more efficiently than both traditional RL with action-space noise and evolution strategies, as shown through experimental comparisons of DQN, DDPG, and TRPO on high-dimensional discrete-action environments as well as continuous control tasks.
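
A minimal sketch of the mechanism: weights are perturbed once per episode (so exploration is temporally consistent), and the noise scale is adapted to keep the perturbed policy a target distance from the unperturbed one in action space. The adaptive rule mirrors the paper's; the threshold, factor, and names are assumptions.

```python
import numpy as np

def perturb_parameters(theta, sigma, rng):
    # One draw per episode: the same parameter noise shapes the whole rollout.
    return theta + sigma * rng.standard_normal(theta.shape)

def adapt_sigma(sigma, action_distance, target=0.1, factor=1.01):
    # Grow sigma when perturbed and unperturbed actions are too similar,
    # shrink it when they diverge too much.
    return sigma * factor if action_distance < target else sigma / factor
```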

Dueling Network Architectures for Deep Reinforcement Learning

TLDR
This paper presents a new neural network architecture for model-free reinforcement learning that leads to better policy evaluation in the presence of many similar-valued actions and enables the RL agent to outperform the state-of-the-art on the Atari 2600 domain.
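
The architecture's aggregating module combines a scalar state-value stream with a mean-subtracted advantage stream, which is what sharpens policy evaluation when many actions have similar values. A compact PyTorch head (layer sizes are placeholders):

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
    def __init__(self, hidden_dim, num_actions):
        super().__init__()
        self.value = nn.Linear(hidden_dim, 1)
        self.advantage = nn.Linear(hidden_dim, num_actions)

    def forward(self, features):
        v = self.value(features)                    # (B, 1)
        a = self.advantage(features)                # (B, num_actions)
        # Mean-subtraction keeps V and A identifiable in the sum.
        return v + a - a.mean(dim=1, keepdim=True)  # Q-values, (B, num_actions)
```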

Deep Reinforcement Learning with Double Q-Learning

TLDR
This paper proposes a specific adaptation to the DQN algorithm and shows that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.
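
The adaptation is small enough to show inline: the online network selects the next action while the target network evaluates it, and this decoupling is what curbs the overestimation of standard Q-learning targets. A sketch, assuming conventional online/target Q-networks:

```python
import torch

def double_dqn_target(reward, done, next_obs, gamma, online_net, target_net):
    with torch.no_grad():
        # Selection by the online net, evaluation by the target net.
        best_next = online_net(next_obs).argmax(dim=1, keepdim=True)   # (B, 1)
        q_next = target_net(next_obs).gather(1, best_next).squeeze(1)  # (B,)
    return reward + gamma * (1.0 - done) * q_next
```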

Playing Atari with Deep Reinforcement Learning

TLDR
This work presents the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning, which outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
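
The ingredients usually credited for that result are a convolutional Q-network trained from raw frames, epsilon-greedy control, and experience replay to decorrelate updates. A minimal sketch of the latter two (network and environment objects are assumed to exist; the buffer capacity is an assumption):

```python
import random
from collections import deque

import torch

replay = deque(maxlen=100_000)  # experience replay buffer

def act(q_net, obs, epsilon, num_actions):
    # Epsilon-greedy over learned Q-values.
    if random.random() < epsilon:
        return random.randrange(num_actions)
    with torch.no_grad():
        return int(q_net(obs.unsqueeze(0)).argmax(dim=1).item())

def store(obs, action, reward, next_obs, done):
    replay.append((obs, action, reward, next_obs, done))
```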