• Computer Science, Mathematics
  • Published in ArXiv 2019

Attraction-Repulsion Actor-Critic for Continuous Control Reinforcement Learning

@article{Doan2019AttractionRepulsionAF,
  title={Attraction-Repulsion Actor-Critic for Continuous Control Reinforcement Learning},
  author={Thang Doan and Bogdan Mazoure and Audrey Durand and Joelle Pineau and R. Devon Hjelm},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.07543}
}
Continuous control tasks in reinforcement learning are important because they provide a framework for learning in high-dimensional state spaces with deceptive rewards, where the agent can easily become trapped in suboptimal solutions. One way to avoid local optima is to use a population of agents to ensure coverage of the policy space, yet learning a population with the "best" coverage remains an open problem. In this work, we present a novel approach to population-based RL in…
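
The attraction-repulsion idea in the title suggests an auxiliary objective that pulls a learner's policy toward, or pushes it away from, other members of the population. Below is a minimal, hypothetical sketch of such a term as a KL-divergence penalty against an archive of Gaussian policies; the names (`ar_loss`, `archive`, `beta`, `repulsive`) are illustrative assumptions and are not taken from the paper, whose actual objective may differ.

```python
# Hypothetical attraction-repulsion auxiliary loss for a population of
# Gaussian policies. Illustrative only; not the paper's exact formulation.
import torch
from torch.distributions import Normal, kl_divergence

def ar_loss(mean, std, archive, repulsive=True):
    """KL-based attraction-repulsion term.

    mean, std -- current policy's action-distribution parameters (tensors)
    archive   -- list of (mean, std) pairs from previously stored policies
    repulsive -- if True, push away from the archive; else pull toward it
    """
    current = Normal(mean, std)
    kls = torch.stack([kl_divergence(current, Normal(m, s)).sum(-1)
                       for m, s in archive])
    # Repulsion rewards divergence from the archive (negative loss term);
    # attraction penalizes it.
    sign = -1.0 if repulsive else 1.0
    return sign * kls.mean()

if __name__ == "__main__":
    # Toy usage: augment a (placeholder) actor loss with the auxiliary term,
    # weighted by a hypothetical coefficient beta.
    mean = torch.zeros(6, requires_grad=True)
    std = torch.ones(6)
    archive = [(torch.randn(6), torch.ones(6)) for _ in range(5)]
    beta = 0.1
    actor_loss = torch.tensor(0.0)  # stands in for the usual actor-critic loss
    total_loss = actor_loss + beta * ar_loss(mean, std, archive, repulsive=True)
    total_loss.backward()
```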
