Publications
Probabilistic Movement Primitives
TLDR
This work presents a probabilistic formulation of the movement primitive (MP) concept that maintains a distribution over trajectories and analytically derives a stochastic feedback controller that reproduces the given trajectory distribution for robot movement control.
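The core ProMP representation behind this entry can be summarized in a few lines: a trajectory is y(t) = phi(t)^T w with Gaussian weights w ~ N(mu_w, Sigma_w), so each time step has a Gaussian marginal. The sketch below is only an illustration under assumed basis widths and a hypothetical weight distribution; in practice mu_w and Sigma_w are estimated from demonstrations, and none of the names here come from the paper's code.

```python
import numpy as np

def rbf_basis(t, n_basis=10, width=0.02):
    """Normalized Gaussian radial basis functions over the phase t in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-0.5 * (t[:, None] - centers[None, :]) ** 2 / width)
    return phi / phi.sum(axis=1, keepdims=True)

t = np.linspace(0.0, 1.0, 100)
Phi = rbf_basis(t)                                      # (T, K) basis matrix

# Hypothetical weight distribution; normally fitted from demonstrated trajectories.
mu_w = np.sin(np.linspace(0.0, np.pi, Phi.shape[1]))
Sigma_w = 0.05 * np.eye(Phi.shape[1])

mean_traj = Phi @ mu_w                                  # mean trajectory
var_traj = np.einsum('ti,ij,tj->t', Phi, Sigma_w, Phi)  # per-step variance
samples = Phi @ np.random.multivariate_normal(mu_w, Sigma_w, size=5).T  # (T, 5)
```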
Using probabilistic movement primitives in robotics
TLDR
Using a probabilistic representation, a stochastic feedback controller is derived that reproduces the encoded variability of the movement and the coupling of the robot's degrees of freedom.
A probabilistic approach to robot trajectory generation
TLDR
A probabilistic movement primitive approach that overcomes the limitations of existing approaches and enables new operations: a product of distributions for the co-activation of MPs, and conditioning for generalizing the MP to different desired targets.
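Both operations named in this entry reduce to standard Gaussian algebra on the weight distribution. The sketch below is illustrative only: the function names and the one-dimensional via-point are assumptions made for brevity, not the paper's code.

```python
import numpy as np

def condition_on_viapoint(mu_w, Sigma_w, phi_t, y_star, sigma_y=1e-4):
    """Gaussian conditioning of the weight distribution on a desired value
    y_star at one time step (basis vector phi_t): generalizing to a new target."""
    phi_t = phi_t.reshape(-1, 1)                        # (K, 1)
    s = sigma_y + float(phi_t.T @ Sigma_w @ phi_t)      # innovation variance
    gain = (Sigma_w @ phi_t) / s                        # (K, 1)
    mu_new = mu_w + gain.ravel() * (y_star - float(phi_t.T @ mu_w))
    Sigma_new = Sigma_w - gain @ (phi_t.T @ Sigma_w)
    return mu_new, Sigma_new

def coactivate(mu1, S1, mu2, S2):
    """Product of two Gaussian weight distributions (co-activation of MPs)."""
    P1, P2 = np.linalg.inv(S1), np.linalg.inv(S2)
    S = np.linalg.inv(P1 + P2)
    return S @ (P1 @ mu1 + P2 @ mu2), S
```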
Model-based imitation learning by probabilistic trajectory matching
TLDR
This paper proposes to learn probabilistic forward models to compute a probability distribution over trajectories, compares the approach to model-based reinforcement learning methods with hand-crafted cost functions, and evaluates the method in experiments on a real compliant robot.
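The trajectory-matching idea can be phrased as minimizing a divergence between Gaussian trajectory distributions. The snippet below shows only the per-time-step KL term as a hedged sketch; how the predictive distribution is obtained from the learned forward model is omitted, and the function name is an assumption.

```python
import numpy as np

def gaussian_kl(mu_p, Sigma_p, mu_q, Sigma_q):
    """KL(p || q) between multivariate Gaussians. Summing this over time steps
    gives a matching objective between the trajectory distribution predicted
    by a learned forward model (p) and the demonstrated one (q)."""
    d = mu_p.size
    Sq_inv = np.linalg.inv(Sigma_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(Sq_inv @ Sigma_p) + diff @ Sq_inv @ diff - d
                  + np.log(np.linalg.det(Sigma_q) / np.linalg.det(Sigma_p)))
```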
Probabilistic model-based imitation learning
TLDR
This work proposes to learn a probabilistic model of the system, which is exploited for mental rehearsal of the current controller by making predictions about future trajectories, and learns a robot-specific controller that directly matches robot trajectories with observed ones.
Probabilistic movement primitives under unknown system dynamics
TLDR
This work presents a reformulation of the ProMPs that allows accurate reproduction of the skill without modeling the system dynamics and derives a variable-stiffness controller in closed form that reproduces the trajectory distribution and the interaction forces present in the demonstrations.
Extracting low-dimensional control variables for movement primitives
TLDR
This paper uses hierarchical Bayesian models (HBMs) to estimate a low-dimensional latent variable model for probabilistic movement primitives (ProMPs), a recent movement primitive representation, and extends the HBM with a mixture model so that it can capture different movement types in the same dataset.
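As a much-simplified stand-in for the hierarchical Bayesian model described here, one can cluster per-demonstration ProMP weight vectors with an EM-fitted Gaussian mixture so that different movement types land in different components. The paper instead performs inference over a low-dimensional latent-variable HBM; the synthetic weights and the two-component setup below are purely hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical per-demonstration ProMP weight vectors from two movement types.
rng = np.random.default_rng(0)
weights_type_a = rng.normal(0.0, 0.1, size=(30, 10))
weights_type_b = rng.normal(1.0, 0.1, size=(30, 10))
W = np.vstack([weights_type_a, weights_type_b])

# EM-fitted mixture over the weight space; each component models one movement type.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(W)
movement_type = gmm.predict(W)   # component label per demonstration
```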
Sample-based information-theoretic stochastic optimal control
TLDR
This work takes inspiration from the reinforcement learning community and relaxes the greedy operator used in SOC with an information-theoretic bound that limits the 'distance' between two subsequent trajectory distributions in a policy update, which ensures a smooth and stable policy update.
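The information-theoretic bound mentioned here is the same device used in relative entropy policy search (REPS). The following is a hedged sketch of the sample-reweighting step under such a bound; the dual formulation and the function name are assumptions drawn from the REPS literature, not this paper's exact algorithm.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_bounded_weights(returns, kl_bound=0.5):
    """Reweight sampled trajectories by exp(R_i / eta), with the temperature
    eta chosen by minimizing the dual so that the implied KL step between the
    old and new trajectory distributions stays within kl_bound."""
    R = returns - returns.max()                        # shift for numerical stability
    def dual(eta):
        return eta * kl_bound + eta * np.log(np.mean(np.exp(R / eta)))
    eta = minimize_scalar(dual, bounds=(1e-6, 1e6), method="bounded").x
    w = np.exp(R / eta)
    return w / w.sum()

# Usage: the new policy parameters are a weighted average of the sampled
# parameters under these weights, e.g. new_mean = weights @ sampled_params.
```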
Reinforcement learning vs human programming in tetherball robot games
TLDR
This paper creates a motor learning framework consisting of state-of-the-art components in motor skill learning and compares it to a manually designed program on the task of robot tetherball.
Learning modular policies for robotics
TLDR
This paper introduces new policy search algorithms that are based on information-theoretic principles and are able to learn to select, adapt, and sequence the building blocks, and develops a new representation for the individual building blocks that supports co-activation and principled ways of adapting the movement.