Asynchronous Episodic Deep Deterministic Policy Gradient: Towards Continuous Control in Computationally Complex Environments

@article{Zhang2019AsynchronousED,
  title={Asynchronous Episodic Deep Deterministic Policy Gradient: Towards Continuous Control in Computationally Complex Environments},
  author={Zhizheng Zhang and J. Chen and Zhibo Chen and W. Li},
  journal={IEEE Transactions on Cybernetics},
  year={2019}
}
Deep deterministic policy gradient (DDPG) has proved to be a successful reinforcement learning (RL) algorithm for continuous control tasks. However, DDPG still suffers from data insufficiency and training inefficiency, especially in computationally complex environments. In this article, we propose asynchronous episodic DDPG (AE-DDPG), an extension of DDPG that achieves more effective learning with less training time. First, we design a modified scheme for data collection…
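The abstract's data-collection scheme is elided, but the DDPG machinery it builds on is standard: transitions gathered by the actor are stored in an experience replay buffer, and target networks track the online networks via soft (Polyak) updates. The sketch below is illustrative only and not the paper's AE-DDPG implementation; the class and function names are my own.

```python
import random
from collections import deque

import numpy as np


class ReplayBuffer:
    """Uniform experience replay, as used in vanilla DDPG."""

    def __init__(self, capacity, seed=0):
        self.buffer = deque(maxlen=capacity)  # oldest transitions evicted first
        self.rng = random.Random(seed)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling without replacement from the stored transitions.
        batch = self.rng.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = map(np.array, zip(*batch))
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)


def polyak_update(target_params, online_params, tau=0.005):
    """Soft target-network update: theta_target <- tau*theta + (1-tau)*theta_target."""
    return [(1.0 - tau) * t + tau * o for t, o in zip(target_params, online_params)]
```

In asynchronous variants like AE-DDPG, multiple environment workers would push transitions into such a buffer concurrently while a learner process samples from it, decoupling (slow) environment simulation from gradient updates.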