Corpus ID: 202539518

Bi-level Actor-Critic for Multi-agent Coordination

@article{Zhang2019BilevelAF,
  title={Bi-level Actor-Critic for Multi-agent Coordination},
  author={Haifeng Zhang and Weizhe Chen and Zeren Huang and Minne Li and Yaodong Yang and Weinan Zhang and Jianfeng Wang},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.03510}
}
  • Published in AAAI 2019
  • Computer Science
  • ArXiv
  • Coordination is one of the essential problems in multi-agent systems. Typically, multi-agent reinforcement learning (MARL) methods treat agents equally, and the goal is to solve the Markov game to an arbitrary Nash equilibrium (NE) when multiple equilibria exist, thus lacking a solution for NE selection. In this paper, we treat agents \emph{unequally} and consider Stackelberg equilibrium as a potentially better convergence point than Nash equilibrium in terms of Pareto superiority, especially in…
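The equilibrium-selection idea in the abstract can be sketched on a toy bimatrix game: with two pure Nash equilibria, letting one agent commit first (the Stackelberg leader) while the other best-responds picks out a single equilibrium. The payoff matrices and function names below are purely illustrative assumptions, not the paper's bi-level actor-critic algorithm:

```python
import numpy as np

# Hypothetical 2x2 coordination game (illustrative only).
# Rows index the leader's actions; columns index the follower's actions.
# It has two pure Nash equilibria, (0, 0) and (1, 1), so an arbitrary
# NE solver could return either; the leader prefers (0, 0).
leader_payoff = np.array([[3.0, 0.0],
                          [0.0, 2.0]])
follower_payoff = np.array([[2.0, 0.0],
                            [0.0, 3.0]])

def follower_best_response(a_leader):
    """Follower observes the leader's action and best-responds to it."""
    return int(np.argmax(follower_payoff[a_leader]))

def stackelberg_equilibrium():
    """Leader commits first, anticipating the follower's best response."""
    best = None
    for a in range(leader_payoff.shape[0]):
        b = follower_best_response(a)
        value = leader_payoff[a, b]
        if best is None or value > best[2]:
            best = (a, b, value)
    return best[:2]

def pure_nash_equilibria():
    """Enumerate pure-strategy profiles that are mutual best responses."""
    eqs = []
    for a in range(2):
        for b in range(2):
            if (leader_payoff[a, b] >= leader_payoff[1 - a, b] and
                    follower_payoff[a, b] >= follower_payoff[a, 1 - b]):
                eqs.append((a, b))
    return eqs
```

Here `pure_nash_equilibria()` returns both `(0, 0)` and `(1, 1)`, while `stackelberg_equilibrium()` resolves the ambiguity to the single profile `(0, 0)` — a minimal instance of the selection problem the bi-level formulation targets.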


    References

    Publications referenced by this paper.
    SHOWING 1-10 OF 56 REFERENCES

    Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments

    • R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, and I. Mordatch
    • 2017
    VIEW 9 EXCERPTS
    HIGHLY INFLUENTIAL

    Human-level control through deep reinforcement learning

    • V. Mnih, K. Kavukcuoglu, +6 authors, K. Fidjeland, G. Ostrovski, et al.
    • 2015
    VIEW 5 EXCERPTS
    HIGHLY INFLUENTIAL

    Asymmetric multiagent reinforcement learning

    VIEW 6 EXCERPTS
    HIGHLY INFLUENTIAL

    • J. Hu and M. P. Wellman
    • 2003
    VIEW 9 EXCERPTS
    HIGHLY INFLUENTIAL

    Markov Games as a Framework for Multi-Agent Reinforcement Learning

    VIEW 4 EXCERPTS
    HIGHLY INFLUENTIAL

    Bilevel optimization: theory, algorithms and applications

    • S. Dempe
    • 2018
    VIEW 3 EXCERPTS
    HIGHLY INFLUENTIAL

    Nash Q-Learning for General-Sum Stochastic Games

    VIEW 4 EXCERPTS
    HIGHLY INFLUENTIAL