CPG-ACTOR: Reinforcement Learning for Central Pattern Generators

@inproceedings{Campanaro2021CPGACTORRL,
  title={CPG-ACTOR: Reinforcement Learning for Central Pattern Generators},
  author={Luigi Campanaro and Siddhant Gangapurwala and Daniele De Martini and Wolfgang Xaver Merkt and Ioannis Havoutis},
  booktitle={TAROS},
  year={2021}
}
Central Pattern Generators (CPGs) have several properties desirable for locomotion: they generate smooth trajectories, are robust to perturbations and are simple to implement. Although conceptually promising, we argue that the full potential of CPGs has so far been limited by insufficient sensory-feedback information. This paper proposes a new methodology that allows tuning CPG controllers through gradient-based optimisation in a Reinforcement Learning (RL) setting. To the best of our knowledge… 
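
The abstract describes tuning CPG parameters by gradient-based optimisation inside an RL loop. As a minimal, hypothetical sketch of that general idea (not the authors' implementation), the snippet below writes a single Hopf-oscillator CPG as a differentiable PyTorch module, so its amplitude and frequency parameters receive gradients through an unrolled rollout; the class name HopfCPG, the additive feedback term, and the toy amplitude objective are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): a differentiable Hopf-oscillator
# CPG whose parameters can be tuned by backpropagating through its dynamics.
import math
import torch
import torch.nn as nn


class HopfCPG(nn.Module):
    """One Hopf oscillator integrated with explicit Euler steps."""

    def __init__(self, mu: float = 1.0, omega: float = 2.0 * math.pi, dt: float = 0.01):
        super().__init__()
        # Learnable limit-cycle amplitude (mu = r^2) and angular frequency.
        self.mu = nn.Parameter(torch.tensor(mu))
        self.omega = nn.Parameter(torch.tensor(omega))
        self.dt = dt

    def forward(self, x: torch.Tensor, y: torch.Tensor, feedback: torch.Tensor):
        # Standard Hopf dynamics plus an additive feedback term on x
        # (e.g. the output of a sensory-feedback network).
        r2 = x * x + y * y
        dx = (self.mu - r2) * x - self.omega * y + feedback
        dy = (self.mu - r2) * y + self.omega * x
        return x + self.dt * dx, y + self.dt * dy


if __name__ == "__main__":
    cpg = HopfCPG()
    x, y = torch.tensor(0.1), torch.tensor(0.0)
    outputs = []
    # Unroll a short rollout; gradients flow back through every Euler step.
    for _ in range(200):
        x, y = cpg(x, y, feedback=torch.tensor(0.0))
        outputs.append(x)
    # Toy objective: push the peak oscillation amplitude towards 0.5.
    loss = (torch.stack(outputs).abs().max() - 0.5) ** 2
    loss.backward()
    print(cpg.mu.grad, cpg.omega.grad)
```

In an RL setting, the same unrolling would let policy-gradient updates reach the CPG parameters directly rather than treating the oscillator as a fixed, hand-tuned trajectory generator.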

References

Showing 1-10 of 29 references
Reinforcement learning for a biped robot based on a CPG-actor-critic method
Hierarchical reinforcement learning and central pattern generators for modeling the development of rhythmic manipulation skills
TLDR: A computational bio-inspired model, based on a hierarchical reinforcement-learning actor-critic architecture that searches the parameters of a set of central pattern generators of varying sophistication, is proposed to investigate the development of functional rhythmic hand skills from initially unstructured movements.
RLOC: Terrain-Aware Legged Locomotion using Reinforcement Learning and Optimal Control
TLDR: A unified model-based and data-driven approach for quadrupedal planning and control to achieve dynamic locomotion over uneven terrain using on-board proprioceptive and exteroceptive feedback and a reinforcement learning policy trained over a wide range of procedurally generated terrains.
Learning robot gait stability using neural networks as sensory feedback function for Central Pattern Generators
TLDR: This paper uses a neural network to represent sensory feedback inside the CPG dynamics to learn a model-free feedback controller for locomotion and balance control of a compliant quadruped robot walking on rough terrain.
Learning quadrupedal locomotion over challenging terrain
TLDR: The presented work indicates that robust locomotion in natural environments can be achieved by training in simple domains.
Adaptation to environmental change using reinforcement learning for robotic salamander
TLDR: It is verified that the robotic salamander can smoothly move toward a desired target by adapting to the environmental change from firm ground to mud, and the gradual improvement in the stability of the learning algorithm is confirmed through simulations.
Learning agile and dynamic motor skills for legged robots
TLDR: This work introduces a method for training a neural network policy in simulation and transferring it to a state-of-the-art legged system, thereby leveraging fast, automated, and cost-effective data generation schemes.
Reinforcement learning of single legged locomotion
TLDR: This paper presents the application of reinforcement learning to improve the performance of highly dynamic single-legged locomotion with compliant series elastic actuators, and presents a method to learn time-independent control policies, applying it to improve the energetic efficiency of periodic hopping.
Learning, planning, and control for quadruped locomotion over challenging terrain
TLDR: A floating-base inverse dynamics controller that allows for robust, compliant locomotion over unperceived obstacles; the generalization ability of this controller is demonstrated through tests performed by an independent external test team on terrain never shown to the authors.
…