Publications
Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors
TLDR
Dynamical movement primitives are presented, a line of research for modeling attractor behaviors of autonomous nonlinear dynamical systems with the help of statistical learning techniques, and their properties are evaluated in motor control and robotics.
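As orientation for this entry, a commonly cited form of a discrete dynamical movement primitive couples a spring-damper transformation system with a learned forcing term driven by a canonical phase system; the formulation below is a standard sketch using conventional symbol names (alpha_z, beta_z, alpha_x, psi_i, w_i), not text taken from the paper.

```latex
% Transformation system: attractor toward the goal g, modulated by the forcing term f
\tau \dot{z} = \alpha_z \bigl( \beta_z (g - y) - z \bigr) + f(x), \qquad \tau \dot{y} = z
% Canonical system: phase variable x decays monotonically, so f vanishes at convergence
\tau \dot{x} = -\alpha_x x
% Forcing term: normalized Gaussian basis functions \psi_i with learned weights w_i
f(x) = \frac{\sum_i \psi_i(x)\, w_i}{\sum_i \psi_i(x)}\, x\, (g - y_0)
```

Learning then reduces to fitting the weights w_i of the forcing term, typically by locally weighted regression, while stability is inherited from the linear attractor dynamics.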
Locally Weighted Learning
TLDR
The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, and applications of locally weighted learning.
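A minimal, hypothetical Python sketch of the core recipe surveyed here: a Gaussian distance kernel supplies the weights, and a ridge-regularized weighted least-squares fit yields a local linear prediction at the query point. Function and parameter names are illustrative, not drawn from the survey.

```python
import numpy as np

def lwr_predict(X, y, x_query, bandwidth=1.0, ridge=1e-8):
    """Locally weighted linear regression at a single query point (sketch)."""
    # Gaussian weighting kernel: training points near the query get larger weights.
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)

    # Augment with a bias column so the local model is affine.
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    xq = np.append(x_query, 1.0)

    # Weighted least squares with a small ridge term for regularization.
    A = Xa.T @ (w[:, None] * Xa) + ridge * np.eye(Xa.shape[1])
    b = Xa.T @ (w * y)
    beta = np.linalg.solve(A, b)
    return xq @ beta

# Example: predict a noisy sine at a query point.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
print(lwr_predict(X, y, np.array([0.5]), bandwidth=0.5))
```

The bandwidth and ridge term are exactly the kinds of smoothing and regularization parameters whose tuning the survey discusses.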
Natural Actor-Critic
This paper investigates a novel model-free reinforcement learning architecture, the Natural Actor-Critic. The actor updates are based on stochastic policy gradients employing Amari's natural gradient.
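For reference, the natural gradient mentioned here preconditions the ordinary policy gradient with the inverse Fisher information matrix of the policy; the statement below is the standard textbook form with conventional symbols, a sketch rather than a quotation from the paper.

```latex
% Natural policy gradient (standard form)
\widetilde{\nabla}_\theta J(\theta) = F_\theta^{-1}\, \nabla_\theta J(\theta),
\qquad
F_\theta = \mathbb{E}_{s,a \sim \pi_\theta}\!\left[
  \nabla_\theta \log \pi_\theta(a \mid s)\,
  \nabla_\theta \log \pi_\theta(a \mid s)^{\top}
\right]
```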
Learning Attractor Landscapes for Learning Motor Primitives
TLDR
By nonlinearly transforming the canonical attractor dynamics using techniques from nonparametric regression, almost arbitrary new nonlinear policies can be generated without losing the stability properties of the canonical system.
Incremental Online Learning in High Dimensions
TLDR
To the authors' knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
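LWPR's incremental updates are built from spatially localized linear models; the hypothetical sketch below updates a single receptive field with weighted recursive least squares and a forgetting factor. It is a deliberate simplification: LWPR itself uses incremental partial least squares inside each receptive field (and adapts the distance metric) to stay efficient in very high dimensions. All names are illustrative.

```python
import numpy as np

class ReceptiveField:
    """One spatially localized linear model, updated incrementally (simplified sketch)."""

    def __init__(self, center, dim, metric=1.0, forget=0.999):
        self.center = np.asarray(center, dtype=float)  # center of the Gaussian receptive field
        self.D = metric * np.eye(dim)    # distance metric (fixed here; adapted in LWPR)
        self.lam = forget                # forgetting factor for nonstationary data
        self.beta = np.zeros(dim + 1)    # local linear model (with bias term)
        self.P = 1e3 * np.eye(dim + 1)   # inverse covariance estimate

    def activation(self, x):
        d = x - self.center
        return max(float(np.exp(-0.5 * d @ self.D @ d)), 1e-12)

    def update(self, x, y):
        w = self.activation(x)                    # how responsible this model is for x
        xt = np.append(x - self.center, 1.0)      # local coordinates plus bias
        err = y - self.beta @ xt
        # Weighted recursive least squares with forgetting.
        denom = self.lam / w + xt @ self.P @ xt
        self.P = (self.P - np.outer(self.P @ xt, xt @ self.P) / denom) / self.lam
        self.beta = self.beta + w * (self.P @ xt) * err
        return err

    def predict(self, x):
        return self.beta @ np.append(x - self.center, 1.0)
```

In the full algorithm, many such receptive fields are created, pruned, and blended by their activations to form the global prediction.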
Movement imitation with nonlinear dynamical systems in humanoid robots
TLDR
The results demonstrate that multi-joint human movements can be encoded successfully by the control policies (CPs), that a learned movement policy can readily be reused to produce robust trajectories towards different targets, and that the parameter space which encodes a policy is suitable for measuring to what extent two trajectories are qualitatively similar.
STOMP: Stochastic trajectory optimization for motion planning
TLDR
It is experimentally shown that the stochastic nature of STOMP allows it to overcome local minima that gradient-based methods like CHOMP can get stuck in.
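A hedged sketch of the stochastic update that lets this family of methods escape local minima: sample noisy copies of the current trajectory, weight them by exponentiated negative cost, and step along the weighted average of the noise. STOMP additionally shapes the exploration noise with a smoothness-based covariance and uses per-timestep cost weighting, which this simplified example omits; all names here are illustrative.

```python
import numpy as np

def stochastic_trajectory_update(theta, cost_fn, n_samples=20, noise_std=0.1,
                                 temperature=10.0, rng=None):
    """One gradient-free trajectory update in the spirit of STOMP (sketch).

    theta   : current trajectory, array of shape (T, dofs)
    cost_fn : user-supplied function mapping a trajectory to a scalar cost
    """
    rng = np.random.default_rng() if rng is None else rng

    # Roll out noisy copies of the current trajectory.
    eps = noise_std * rng.normal(size=(n_samples,) + theta.shape)
    costs = np.array([cost_fn(theta + e) for e in eps])

    # Map costs to probabilities: lower cost -> larger weight.
    s = (costs - costs.min()) / (costs.max() - costs.min() + 1e-12)
    p = np.exp(-temperature * s)
    p /= p.sum()

    # The update is the probability-weighted average of the sampled noise.
    return theta + np.tensordot(p, eps, axes=1)

# Example: pull a 2-D trajectory toward a short path with endpoints near targets.
def cost_fn(traj):
    length = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    ends = np.linalg.norm(traj[0]) + np.linalg.norm(traj[-1] - np.array([1.0, 1.0]))
    return length + 10.0 * ends

theta = np.zeros((20, 2))
for _ in range(200):
    theta = stochastic_trajectory_update(theta, cost_fn)
```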
Reinforcement learning of motor skills with policy gradients
TLDR
This paper examines learning of complex motor skills with human-like limbs, and combines the idea of modular motor control by means of motor primitives as a suitable way to generate parameterized control policies for reinforcement learning with the theory of stochastic policy gradient learning.
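For orientation, the quantity such methods estimate is the standard likelihood-ratio (REINFORCE-style) policy gradient, shown below with a baseline; this is a textbook form, not a quotation from the paper.

```latex
% Likelihood-ratio policy gradient with baseline b(s_t); R_t is the return from time t
\nabla_\theta J(\theta) =
\mathbb{E}_{\tau \sim \pi_\theta}\!\left[
  \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)
  \bigl( R_t - b(s_t) \bigr)
\right]
```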
Constructive Incremental Learning from Only Local Information
TLDR
A constructive, incremental learning system for regression problems is introduced; it models data by means of spatially localized linear models and can allocate resources as needed while dealing with the bias-variance dilemma in a principled way.
A Generalized Path Integral Control Approach to Reinforcement Learning
TLDR
The framework of stochastic optimal control with path integrals is used to derive a novel approach to RL with parameterized policies; the approach demonstrates interesting similarities with previous RL research in the framework of probability matching and provides intuition for why the slightly heuristically motivated probability matching approach can actually perform well.
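The probability-matching intuition can be made concrete with the standard path-integral weighting: each sampled rollout is weighted by its exponentiated negative cost-to-go, and the parameter update is the weighted average of the exploration noise. The formulation below is a conventional sketch with assumed symbols, not text from the paper.

```latex
% S(\tau_i): cost-to-go of rollout i;  \lambda: temperature;  \epsilon_i: exploration noise
P(\tau_i) = \frac{\exp\!\bigl(-S(\tau_i)/\lambda\bigr)}
                 {\sum_j \exp\!\bigl(-S(\tau_j)/\lambda\bigr)},
\qquad
\delta\theta = \sum_i P(\tau_i)\, \epsilon_i
```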