End-Effect Exploration Drive for Effective Motor Learning

  • E. Daucé
  • Published 2020
  • Computer Science, Biology
  • ArXiv
  • Abstract: End-effect drives are proposed here as an effective way to implement goal-directed motor learning in the absence of an explicit forward model. An end-effect model relies on a simple statistical recording of the effect of the current policy, used here as a substitute for the more resource-demanding forward models. When combined with a reward structure, it forms the core of a lightweight variational free energy minimization setup. The main difficulty lies in the maintenance of this simplified…
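The abstract describes the end-effect model as "a simple statistical recording of the effect of the current policy." A minimal sketch of that idea, with all names (`EndEffectModel`, `record`, `prob`) and the toy 30%-success policy being illustrative assumptions rather than the paper's actual implementation:

```python
import random
from collections import defaultdict

class EndEffectModel:
    """Hypothetical sketch: empirically records the distribution of end effects
    produced by the current policy, as a lightweight stand-in for a learned
    forward model (the substitution the abstract describes)."""

    def __init__(self):
        self.counts = defaultdict(int)  # effect -> number of times observed
        self.total = 0                  # total effects recorded

    def record(self, effect):
        # Statistical recording of one observed end effect of the policy.
        self.counts[effect] += 1
        self.total += 1

    def prob(self, effect):
        # Empirical probability of producing `effect` under the current policy.
        return self.counts[effect] / self.total if self.total else 0.0

# Toy rollout: an assumed policy that reaches the goal ~30% of the time.
random.seed(0)  # for reproducibility of the sketch
model = EndEffectModel()
for _ in range(1000):
    model.record("goal" if random.random() < 0.3 else "miss")
```

In a full setup along the lines the abstract sketches, these empirical effect probabilities would be combined with a reward structure inside a variational free energy objective; that step is not shown here.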
