Corpus ID: 222134134

A Sharp Analysis of Model-based Reinforcement Learning with Self-Play

@article{Liu2020ASA,
  title={A Sharp Analysis of Model-based Reinforcement Learning with Self-Play},
  author={Qinghua Liu and Tiancheng Yu and Yu Bai and C. Jin},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.01604}
}
  • Qinghua Liu, Tiancheng Yu, Yu Bai, C. Jin
  • Published 2020
  • Computer Science, Mathematics
  • ArXiv
  • Model-based algorithms---algorithms that decouple learning of the model and planning given the model---are widely used in reinforcement learning practice and theoretically shown to achieve optimal sample efficiency for single-agent reinforcement learning in Markov Decision Processes (MDPs). However, for multi-agent reinforcement learning in Markov games, the current best known sample complexity for model-based algorithms is rather suboptimal and compares unfavorably against recent model-free…
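
The decoupling described in the abstract, first estimating the model and then planning against the estimate, can be made concrete with a short sketch. The code below is not the algorithm analyzed in the paper; it is a generic model-based pipeline for a tabular two-player zero-sum Markov game, written only for illustration: estimate transitions and rewards from logged self-play data, then run Nash value iteration against the estimated model, solving each stage matrix game with a linear program. All function and variable names (estimate_model, matrix_game_value, plan, n_states, horizon, ...) are assumptions of this sketch.

```python
# Illustrative sketch (not the paper's algorithm): a model-based pipeline for a
# tabular two-player zero-sum Markov game.  Phase 1 estimates transitions and
# rewards from logged self-play data; Phase 2 plans against the estimated model
# with Nash value iteration, solving each stage matrix game by linear programming.
import numpy as np
from scipy.optimize import linprog


def estimate_model(transitions, n_states, n_a, n_b, horizon):
    """Empirical transitions and mean rewards from (h, s, a, b, r, s_next) tuples."""
    counts = np.zeros((horizon, n_states, n_a, n_b, n_states))
    reward_sum = np.zeros((horizon, n_states, n_a, n_b))
    for h, s, a, b, r, s_next in transitions:
        counts[h, s, a, b, s_next] += 1
        reward_sum[h, s, a, b] += r
    visits = counts.sum(axis=-1)                      # N_h(s, a, b)
    p_hat = np.where(visits[..., None] > 0,
                     counts / np.maximum(visits[..., None], 1),
                     1.0 / n_states)                  # uniform guess for unvisited triples
    r_hat = reward_sum / np.maximum(visits, 1)
    return p_hat, r_hat


def matrix_game_value(Q):
    """Nash value of a zero-sum matrix game (row player maximizes) via an LP."""
    n_a, n_b = Q.shape
    # Variables: row-player mixed strategy x (n_a entries) followed by the value v.
    c = np.zeros(n_a + 1)
    c[-1] = -1.0                                      # maximize v  <=>  minimize -v
    A_ub = np.hstack([-Q.T, np.ones((n_b, 1))])       # v - x^T Q[:, b] <= 0 for every column b
    b_ub = np.zeros(n_b)
    A_eq = np.zeros((1, n_a + 1))
    A_eq[0, :n_a] = 1.0                               # x sums to one
    bounds = [(0, None)] * n_a + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds, method="highs")
    return res.x[-1], res.x[:n_a]


def plan(p_hat, r_hat):
    """Nash value iteration (backward induction) against the estimated model."""
    horizon, n_states, n_a, n_b, _ = p_hat.shape
    V = np.zeros((horizon + 1, n_states))
    policies = []
    for h in reversed(range(horizon)):
        step_policies = []
        for s in range(n_states):
            Q = r_hat[h, s] + p_hat[h, s] @ V[h + 1]  # Q_h(s, a, b)
            V[h, s], x = matrix_game_value(Q)
            step_policies.append(x)
        policies.append(step_policies)
    return V, policies[::-1]
```

The sketch takes the exploration data as given; the paper's contribution concerns how self-play should collect that data so the estimated model is accurate where it matters, and how to do so with a sharp sample complexity.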
    1 Citation

    Provably Efficient Online Agnostic Learning in Markov Games
