Corpus ID: 14350396

Sparse Latent Space Policy Search

K. Luck, J. Pajarinen, E. Berger, V. Kyrki, and H. B. Amor. "Sparse Latent Space Policy Search."
Computational agents often need to learn policies that involve many control variables; a robot, for example, must control several joints simultaneously. Learning a policy with a large number of parameters, however, usually requires a large number of training samples. We introduce a reinforcement learning method for sample-efficient policy search that exploits correlations between control variables. Such correlations are particularly frequent in motor skill learning tasks. The introduced method uses…
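The core idea from the abstract, exploring in a low-dimensional latent space so that perturbations are correlated across many control variables, can be sketched as follows. This is an illustrative sketch, not the paper's actual algorithm; the names (`W`, `mu`, `latent_dim`, `n_params`) and the linear decoder are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_params = 20   # parameters spanning several joints (hypothetical size)
latent_dim = 3  # low-dimensional latent exploration space (hypothetical size)

# Linear map from latent space to policy parameters; correlations between
# control variables arise because they share the columns of W.
W = rng.normal(size=(n_params, latent_dim))
mu = np.zeros(n_params)

def sample_policy_params():
    """Sample policy parameters by perturbing only the latent coordinates."""
    z = rng.normal(size=latent_dim)
    return mu + W @ z

theta = sample_policy_params()

# Exploration noise is confined to a rank-`latent_dim` subspace, so the
# effective search space is much smaller than the full parameter space.
cov = W @ W.T  # covariance of theta - mu
print(theta.shape)
print(np.linalg.matrix_rank(cov))
```

Because all sampled parameter vectors lie in a 3-dimensional subspace of the 20-dimensional parameter space, far fewer rollouts are needed to identify promising directions than with independent per-parameter noise.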
Variational Policy Search using Sparse Gaussian Process Priors for Learning Multimodal Optimal Actions
Multimodal Policy Search using Overlapping Mixtures of Sparse Gaussian Process Prior
Latent Space Reinforcement Learning for Steering Angle Prediction
Motor Synergy Development in High-Performing Deep Reinforcement Learning Algorithms
Extracting bimanual synergies with reinforcement learning
  • K. Luck, H. B. Amor
  • Computer Science
  • 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
  • 2017
From the Lab to the Desert: Fast Prototyping and Learning of Robot Locomotion
Bi-manual Learning for a Basketball Playing Robot


References
Latent space policy search for robotics
Using dimensionality reduction to exploit constraints in reinforcement learning
Policy search for motor primitives in robotics
Variational Inference for Policy Search in changing situations
Learning omnidirectional path following using dimensionality reduction
Natural Actor-Critic
Towards Motor Skill Learning for Robotics
Probabilistic inference for solving discrete and continuous state Markov Decision Processes
Robot trajectory optimization using approximate inference
Black-Box Policy Search with Probabilistic Programs