Probabilistic Movement Primitives
This work analytically derives a stochastic feedback controller that reproduces a given trajectory distribution for robot movement control, and presents a probabilistic formulation of the movement primitive (MP) concept that maintains a distribution over trajectories.
Probabilistic Recurrent State-Space Models
This work proposes a novel model formulation and a scalable training algorithm based on doubly stochastic variational inference and Gaussian processes that allows one to fully capture the latent state temporal correlations in state-space models.
Hierarchical Relative Entropy Policy Search
This work defines the problem of learning sub-policies in continuous state-action spaces as finding a hierarchical policy composed of a high-level gating policy that selects low-level sub-policies for execution by the agent, and treats the sub-policies as latent variables, which allows update information to be distributed among them.
Using probabilistic movement primitives in robotics
A stochastic feedback controller is derived that reproduces the encoded variability of the movement and the coupling of the degrees of freedom of the robot by using a probabilistic representation.
Learning Step Size Controllers for Robust Neural Network Training
Algorithms that automatically adapt the learning rate of neural networks (NNs) are investigated, and it is shown how an adaptive controller can adjust the learning rate without prior knowledge of the learning problem at hand.
Towards learning hierarchical skills for multi-phase manipulation tasks
- Oliver Kroemer, Christian Daniel, G. Neumann, H. V. Hoof, Jan Peters
- Computer Science, IEEE International Conference on Robotics and…
- 26 May 2015
This paper presents an approach for exploiting the phase structure of tasks in order to learn manipulation skills more efficiently and was successfully evaluated on a real robot performing a bimanual grasping task.
Active Reward Learning
- Christian Daniel, Malte Viering, Jan Metz, Oliver Kroemer, Jan Peters
- Computer Science, Robotics: Science and Systems
- 12 July 2014
This work introduces a framework, wherein a traditional learning algorithm interplays with the reward learning component, such that the evolution of the action learner guides the queries of the reward learner.
Probabilistic inference for determining options in reinforcement learning
- Christian Daniel, H. V. Hoof, Jan Peters, G. Neumann
- Computer Science, Machine Learning
- 1 September 2016
The proposed approach is based on parametric option representations and works well in combination with current policy search methods, which are particularly well suited for continuous real-world tasks.
Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization
This work proposes a novel transfer learning method to obtain customized optimizers within the well-established framework of Bayesian optimization, allowing the algorithm to utilize the proven generalization capabilities of Gaussian processes.
Learning concurrent motor skills in versatile solution spaces
- Christian Daniel, G. Neumann, Jan Peters
- Computer Science, IEEE/RSJ International Conference on Intelligent…
- 24 December 2012
This paper presents a complete framework capable of learning different solution strategies for a real-robot Tetherball task; it simultaneously learns multiple distinct solutions for the same task, so that partial degeneration of the solution space does not prevent successful completion of the task.