Corpus ID: 219636339

Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning

@article{Christianos2020SharedEA,
  title={Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning},
  author={Filippos Christianos and Lukas Sch{\"a}fer and Stefano V. Albrecht},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.07169}
}
Exploration in multi-agent reinforcement learning is a challenging problem, especially in environments with sparse rewards. We propose a general method for efficient exploration by sharing experience amongst agents. Our proposed algorithm, called Shared Experience Actor-Critic (SEAC), applies experience sharing in an actor-critic framework. We evaluate SEAC in a collection of sparse-reward multi-agent environments and find that it consistently outperforms two baselines and two state-of-the-art…
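The abstract's core idea, experience sharing in an actor-critic framework, can be sketched as a per-agent policy loss that combines the agent's own on-policy term with importance-weighted terms computed from the other agents' trajectories. The sketch below is a minimal NumPy illustration of that assumed form, not the authors' implementation; the function name `seac_policy_loss`, the argument names, and the weighting coefficient `lam` are all hypothetical.

```python
import numpy as np

def seac_policy_loss(logp_own, adv_own,
                     logp_own_on_shared, logp_behaviour, adv_shared,
                     lam=1.0):
    """Hypothetical sketch of a SEAC-style policy loss for one agent.

    logp_own:            log pi_i(a | o) on the agent's own transitions
    adv_own:             advantage estimates for those transitions
    logp_own_on_shared:  log pi_i(a | o) evaluated on other agents' transitions
    logp_behaviour:      log-probability of those actions under the agent that
                         actually collected them (the behaviour policy)
    adv_shared:          advantage estimates for the shared transitions
    lam:                 weight on the shared-experience term (assumption)
    """
    # Standard on-policy actor-critic term on the agent's own experience.
    own_term = -(logp_own * adv_own).mean()

    # Importance weight pi_i(a|o) / pi_k(a|o) corrects for the fact that the
    # shared transitions were collected by a different agent's policy.
    importance_weight = np.exp(logp_own_on_shared - logp_behaviour)

    # Off-policy corrected term built from the other agents' experience.
    shared_term = -(lam * importance_weight
                    * logp_own_on_shared * adv_shared).mean()

    return own_term + shared_term
```

With `lam=0` this reduces to plain on-policy actor-critic; the shared term is what lets each agent learn from transitions it did not itself collect.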
