Publications
Probabilistic Movement Primitives
TLDR
This work presents a probabilistic formulation of the movement primitive (MP) concept that maintains a distribution over trajectories and analytically derives a stochastic feedback controller that reproduces the given trajectory distribution for robot movement control.
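A minimal sketch of the trajectory-distribution idea, under assumptions not taken from the paper (synthetic sine-shaped demonstrations, a radial-basis-function feature representation, and the hypothetical `rbf_features` helper): each demonstration is projected onto basis-function weights, and a Gaussian over those weights induces a distribution over trajectories.

```python
import numpy as np

# Illustrative ProMP-style encoding: trajectories as weights over radial
# basis functions, with a Gaussian distribution fitted over the weights.

def rbf_features(t, n_basis=10, width=0.02):
    """Normalized radial basis features over phase t in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

# Synthetic demonstrations: noisy sine-shaped trajectories (toy data).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
demos = [np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
         for _ in range(20)]

# Fit per-demonstration weights by least squares, then a Gaussian over weights.
Phi = rbf_features(t)
W = np.stack([np.linalg.lstsq(Phi, y, rcond=None)[0] for y in demos])
mu_w, Sigma_w = W.mean(axis=0), np.cov(W.T)

# The induced distribution over trajectories: mean and pointwise variance.
mean_traj = Phi @ mu_w
var_traj = np.einsum('tb,bc,tc->t', Phi, Sigma_w, Phi)
print(mean_traj[:5], np.sqrt(var_traj[:5]))
```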
Probabilistic Recurrent State-Space Models
TLDR
This work proposes a novel model formulation and a scalable training algorithm, based on doubly stochastic variational inference and Gaussian processes, that allow the latent-state temporal correlations in state-space models to be fully captured.
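A heavily simplified sketch of the doubly stochastic rollout idea, not the paper's method: the placeholder `transition_mean` and `transition_var` functions stand in for a learned sparse GP posterior, latent trajectories are sampled forward through the stochastic transition, and a Monte Carlo estimate of the expected log-likelihood term is computed from the samples.

```python
import numpy as np

rng = np.random.default_rng(1)

def transition_mean(x):
    # Hypothetical learned transition mean (placeholder dynamics).
    return 0.9 * x + 0.1 * np.sin(x)

def transition_var(x):
    # Hypothetical predictive variance of the GP transition.
    return 0.01 + 0.001 * x ** 2

def rollout(x0, T, n_samples=32):
    """Sample latent trajectories by propagating transition uncertainty."""
    X = np.full(n_samples, x0, dtype=float)
    traj = [X.copy()]
    for _ in range(T - 1):
        X = transition_mean(X) + np.sqrt(transition_var(X)) * rng.standard_normal(n_samples)
        traj.append(X.copy())
    return np.stack(traj)              # shape (T, n_samples)

def expected_log_lik(y, traj, obs_noise=0.1):
    """Monte Carlo estimate of E_q[log p(y_t | x_t)] under the sampled states."""
    log_p = -0.5 * ((y[:, None] - traj) ** 2 / obs_noise ** 2
                    + np.log(2 * np.pi * obs_noise ** 2))
    return log_p.mean(axis=1).sum()

y = np.sin(0.3 * np.arange(50))        # toy observation sequence
print(expected_log_lik(y, rollout(x0=0.0, T=50)))
```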
Hierarchical Relative Entropy Policy Search
TLDR
This work defines the problem of learning sub-policies in continuous state-action spaces as finding a hierarchical policy composed of a high-level gating policy that selects low-level sub-policies for execution by the agent; treating the sub-policies as latent variables allows the update information to be distributed between them.
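An illustrative sketch of a gating policy over sub-policies treated as latent variables; this is an EM-style, reward-weighted mixture update on made-up toy data, not the HiREPS information-theoretic update. Responsibilities softly assign samples to Gaussian sub-policies, and each sub-policy and the gate are refit from their weighted share of the data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dataset of (action, reward) samples drawn from two distinct solutions.
actions = np.concatenate([rng.normal(-2.0, 0.3, 200), rng.normal(2.0, 0.3, 200)])
rewards = -np.abs(np.abs(actions) - 2.0)          # both modes are good

K = 2
means, stds, gate = np.array([-1.0, 1.0]), np.ones(K), np.full(K, 1.0 / K)

for _ in range(30):
    # E-step: responsibilities p(option | action), weighted by exp(reward).
    log_p = (-0.5 * ((actions[:, None] - means) / stds) ** 2
             - np.log(stds) + np.log(gate))
    resp = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    w = resp * np.exp(rewards)[:, None]           # reward-weighted assignment

    # M-step: refit each sub-policy and the gating probabilities.
    means = (w * actions[:, None]).sum(0) / w.sum(0)
    stds = np.sqrt((w * (actions[:, None] - means) ** 2).sum(0) / w.sum(0)) + 1e-3
    gate = w.sum(0) / w.sum()

print(means, stds, gate)                          # two sub-policies, one per mode
```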
Using probabilistic movement primitives in robotics
TLDR
A stochastic feedback controller is derived that reproduces the encoded variability of the movement and the coupling of the degrees of freedom of the robot by using a probabilistic representation.
Learning Step Size Controllers for Robust Neural Network Training
TLDR
This work investigates algorithms that automatically adapt the learning rate of neural networks (NNs) and shows how an adaptive controller can adjust the learning rate without prior knowledge of the learning problem at hand.
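A toy sketch of step-size control on a quadratic objective. The paper's controller is learned; here a simple hand-written sign-agreement rule (in the spirit of Rprop/delta-bar-delta) stands in, growing the step size when successive gradients agree in sign and shrinking it when they disagree.

```python
def grad(x):
    return 2.0 * (x - 3.0)            # gradient of (x - 3)^2

x, lr, prev_g = 10.0, 0.01, 0.0
for step in range(100):
    g = grad(x)
    # Adapt the step size from the agreement of consecutive gradient signs.
    lr *= 1.2 if g * prev_g > 0 else 0.5 if g * prev_g < 0 else 1.0
    x -= lr * g
    prev_g = g

print(f"x = {x:.4f}, final lr = {lr:.4f}")   # x should approach 3.0
```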
Towards learning hierarchical skills for multi-phase manipulation tasks
TLDR
This paper presents an approach that exploits the phase structure of tasks to learn manipulation skills more efficiently; the approach was successfully evaluated on a real robot performing a bimanual grasping task.
Active Reward Learning
TLDR
This work introduces a framework in which a traditional learning algorithm interacts with the reward learning component, such that the evolution of the action learner guides the queries of the reward learner.
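An illustrative loop in the spirit of coupling an action learner with a reward learner; the kernel-regression reward model, the uncertainty proxy, the thresholds, and the toy objective are all invented for this sketch. The expensive reward query is issued only where the model is uncertain about the currently preferred parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def true_reward(theta):               # stands in for costly expert ratings
    return -(theta - 1.5) ** 2

queried_thetas, queried_rewards = [], []

def reward_model(theta, bandwidth=0.5):
    """Nadaraya-Watson estimate plus a crude uncertainty proxy."""
    if not queried_thetas:
        return 0.0, np.inf
    d = np.array(queried_thetas) - theta
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    estimate = float(w @ np.array(queried_rewards) / (w.sum() + 1e-9))
    return estimate, 1.0 / (w.sum() + 1e-9)

theta_mean, theta_std = 0.0, 1.0
for it in range(30):
    candidates = theta_mean + theta_std * rng.standard_normal(10)
    estimates, uncertainties = zip(*(reward_model(c) for c in candidates))
    best_idx = int(np.argmax(estimates))
    best = candidates[best_idx]
    # Query the expensive reward only where the reward model is uncertain.
    if uncertainties[best_idx] > 2.0:
        queried_thetas.append(best)
        queried_rewards.append(true_reward(best))
    # Simple policy update toward the best estimated candidate.
    theta_mean += 0.3 * (best - theta_mean)
    theta_std *= 0.95

print(f"theta_mean = {theta_mean:.3f}, expert queries = {len(queried_thetas)}")
```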
Probabilistic inference for determining options in reinforcement learning
TLDR
The proposed approach is based on parametric option representations and works well in combination with current policy search methods, which are particularly well suited for continuous real-world tasks.
Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization
TLDR
This work proposes a novel transfer learning method to obtain customized optimizers within the well-established framework of Bayesian optimization, allowing the algorithm to utilize the proven generalization capabilities of Gaussian processes.
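A sketch of a Bayesian optimization loop with a GP surrogate that marks the slot a meta-learned acquisition function would occupy; here a hand-set UCB acquisition is used instead, and the objective, RBF kernel, and hyperparameters are illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def objective(x):                      # unknown function to maximize
    return np.sin(3 * x) - 0.5 * x ** 2

def gp_posterior(X, y, Xs, length=0.5, noise=1e-4):
    """Exact GP regression with an RBF kernel."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks, Kss = k(X, Xs), k(Xs, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(np.diag(Kss) - np.einsum('ij,ij->j', Ks, sol), 1e-12, None)
    return mu, np.sqrt(var)

def acquisition(mu, sigma, beta=2.0):  # the replaceable / learnable component
    return mu + beta * sigma           # upper confidence bound

X = rng.uniform(-2, 2, 3)
y = objective(X)
grid = np.linspace(-2, 2, 200)
for _ in range(15):
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[int(np.argmax(acquisition(mu, sigma)))]
    X, y = np.append(X, x_next), np.append(y, objective(x_next))

print(f"best x = {X[np.argmax(y)]:.3f}, best value = {y.max():.3f}")
```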
Learning concurrent motor skills in versatile solution spaces
TLDR
This paper presents a complete framework that learns different solution strategies for a real-robot tetherball task; because multiple distinct solutions to the same task are learned simultaneously, a partial degeneration of this solution space does not prevent successful completion of the task.
...