Learning and generalization of motor skills by learning from demonstration
- P. Pastor, Heiko Hoffmann, T. Asfour, S. Schaal
- Computer Science · IEEE International Conference on Robotics and…
- 12 May 2009
A general approach for learning robotic motor skills from human demonstration is presented, showing how the framework extends to the control of gripper orientation and finger position; the feasibility of the approach is demonstrated.
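The movement representation underlying this line of work is the dynamic movement primitive (DMP): a spring-damper system driven by a learned forcing term that decays with a phase variable, so the trajectory always converges to the goal. A minimal one-dimensional sketch, with illustrative gains and basis-function parameters (not the paper's exact values):

```python
import numpy as np

def dmp_rollout(x0, g, weights, centers, widths, tau=1.0, dt=0.01, steps=300):
    # Minimal discrete DMP sketch. The forcing term f(s) is a weighted sum
    # of Gaussian basis functions gated by the phase s, which decays to zero,
    # so the spring-damper term guarantees convergence to the goal g.
    K, D = 100.0, 20.0          # illustrative spring-damper gains (critically damped)
    alpha = 4.0                 # phase decay rate (assumption)
    x, v, s = x0, 0.0, 1.0
    for _ in range(steps):
        psi = np.exp(-widths * (s - centers) ** 2)
        f = (psi @ weights) / (psi.sum() + 1e-10) * s
        a = (K * (g - x) - D * v + (g - x0) * f) / tau
        v += a * dt
        x += v * dt
        s += (-alpha * s / tau) * dt
    return x

centers = np.linspace(0.0, 1.0, 10)
widths = np.full(10, 50.0)
weights = np.zeros(10)          # zero forcing: behaves as a pure point attractor
x_final = dmp_rollout(x0=0.0, g=1.0, weights=weights, centers=centers, widths=widths)
```

Fitting `weights` to a demonstrated trajectory is what "learning from demonstration" means here; with zero weights the system simply converges to the goal.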
Parameter Space Noise for Exploration
This work demonstrates, through experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete-action environments as well as continuous control tasks, that RL with parameter noise learns more efficiently than both traditional RL with action-space noise and evolutionary strategies.
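The core idea is to inject exploration noise into the policy's parameters rather than its output actions, which yields temporally consistent, state-dependent exploration. A minimal sketch with a linear policy (names and the noise scale are illustrative assumptions):

```python
import numpy as np

def linear_policy(theta, obs):
    """Deterministic linear policy: action = theta @ obs."""
    return theta @ obs

def act_with_action_noise(theta, obs, sigma, rng):
    # Traditional exploration: perturb the action itself at every step.
    return linear_policy(theta, obs) + rng.normal(0.0, sigma, size=theta.shape[0])

def perturb_parameters(theta, sigma, rng):
    # Parameter-space noise: perturb the policy weights once (e.g. per episode);
    # the perturbed policy then acts deterministically.
    return theta + rng.normal(0.0, sigma, size=theta.shape)

rng = np.random.default_rng(0)
theta = np.zeros((1, 3))            # 1-D action, 3-D observation
obs = np.array([1.0, -0.5, 2.0])

noisy_theta = perturb_parameters(theta, sigma=0.1, rng=rng)
a1 = linear_policy(noisy_theta, obs)
a2 = linear_policy(noisy_theta, obs)  # same state, same perturbation -> same action
```

Unlike action noise, repeated visits to the same state under one perturbation produce the same action, which is the consistency property the paper exploits.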
ProMP: Proximal Meta-Policy Search
A novel meta-learning algorithm is developed that overcomes both the issue of poor credit assignment and previous difficulties in estimating meta-policy gradients; it leads to superior pre-adaptation policy behavior and consistently outperforms previous meta-RL algorithms in sample efficiency, wall-clock time, and asymptotic performance.
Data-Driven Grasp Synthesis—A Survey
- J. Bohg, A. Morales, T. Asfour, D. Kragic
- Computer Science · IEEE Transactions on Robotics
- 10 September 2013
A review of the work on data-driven grasp synthesis is provided, covering the methodologies for sampling and ranking candidate grasps, and drawing a parallel to the classical approaches that rely on analytic formulations.
ARMAR-III: An Integrated Humanoid Platform for Sensory-Motor Control
- T. Asfour, K. Regenstein, R. Dillmann
- Computer Science · 6th IEEE-RAS International Conference on Humanoid…
- 1 December 2006
The goal of the work is to provide reliable, highly integrated humanoid platforms that allow, on the one hand, the implementation and testing of various research activities and, on the other, the realization of service tasks in a household scenario.
Manipulation Planning Among Movable Obstacles
- M. Stilman, Jan-Ullrich Schamburek, J. Kuffner, T. Asfour
- Computer Science · Proceedings IEEE International Conference on…
- 10 April 2007
This paper presents the resolve spatial constraints (RSC) algorithm for manipulation planning in a domain with movable obstacles, and identifies methods for sampling object surfaces and generating connecting paths between grasps and placements to improve efficiency.
Task-Specific Generalization of Discrete and Periodic Dynamic Movement Primitives
3-D vision on humanoid robots with complex oculomotor systems is often difficult due to the modeling uncertainties, but it is shown that these uncertainties can be accounted for by the proposed approach.
Design of the TUAT/Karlsruhe humanoid hand
- N. Fukaya, S. Toyama, T. Asfour, R. Dillmann
- Computer Science · Proceedings. IEEE/RSJ International Conference…
- 31 October 2000
The mechanism and design of a new humanoid-type hand with human-like manipulation abilities is discussed. The hand is designed for the humanoid robot ARMAR, which has to work autonomously or interactively in cooperation with humans, and for an artificial lightweight arm for handicapped persons.
The KIT whole-body human motion database
- Christian Mandery, Ömer Terlemez, Martin Do, N. Vahrenkamp, T. Asfour
- Computer Science, Biology · International Conference on Advanced Robotics…
- 27 July 2015
We present a large-scale whole-body human motion database consisting of captured raw motion data as well as the corresponding post-processed motions. This database serves as a key element for a wide…
Model-Based Reinforcement Learning via Meta-Policy Optimization
- I. Clavera, Jonas Rothfuss, J. Schulman, Yasuhiro Fujita, T. Asfour, P. Abbeel
- Computer Science · CoRL
- 14 September 2018
This work proposes Model-Based Meta-Policy-Optimization (MB-MPO), an approach that forgoes the strong reliance on accurate learned dynamics models: using an ensemble of learned dynamics models, it meta-learns a policy that can quickly adapt to any model in the ensemble with one policy gradient step.
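The MB-MPO idea described above can be caricatured in a few lines: maintain an ensemble of learned dynamics models, and adapt the (meta-)policy to each model with a single gradient step on an imagined rollout. The following is a hypothetical toy sketch, not the paper's implementation; the random linear "learned" models, the quadratic tracking cost, and the finite-difference gradient are all illustrative assumptions:

```python
import numpy as np

def make_ensemble(rng, n_models=3):
    # Stand-ins for learned dynamics models: next_state = A @ state + action,
    # each A a slightly different perturbation of the identity.
    return [np.eye(2) + rng.normal(0.0, 0.1, size=(2, 2)) for _ in range(n_models)]

def rollout_cost(k, A, state, horizon=5):
    # Imagined rollout under model A with linear policy action = k * state,
    # penalizing distance from the origin.
    cost = 0.0
    for _ in range(horizon):
        state = A @ state + k * state
        cost += float(state @ state)
    return cost

def adapt(k, A, state, eps=1e-4, lr=0.01):
    # One gradient step per model (finite-difference gradient for simplicity;
    # the paper uses policy gradients on sampled trajectories).
    grad = (rollout_cost(k + eps, A, state) - rollout_cost(k - eps, A, state)) / (2 * eps)
    return k - lr * grad

rng = np.random.default_rng(1)
models = make_ensemble(rng)
state = np.array([1.0, -1.0])
meta_k = 0.0
adapted = [adapt(meta_k, A, state) for A in models]  # one adapted policy per model
```

The meta-objective in the paper is the post-adaptation performance averaged over the ensemble, so model discrepancy acts as a regularizer rather than a bias.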