Visuomotor Coordination in Reach-To-Grasp Tasks: From Humans to Humanoids and Vice Versa

Abstract

Understanding the principles involved in visually based coordinated motor control is one of the most fundamental and most intriguing research problems across a number of areas, including psychology, neuroscience, computer vision and robotics. Humans perform visually driven actions such as looking at, reaching for, and grasping a morning cup of coffee on a daily basis, without much effort and yet very reliably. Still, little is known about the computational functions that the central nervous system performs in order to support visually driven reaching and grasping. Additionally, in spite of several decades of advances in the field, the abilities of humanoid robots to perform similar tasks remain modest, particularly when they must operate in unstructured, unpredictable and dynamically changing environments. In this thesis, we are interested in studying the principles behind the transformations from the retinotopic target encoding to the representations that are used to generate eye-head and arm movements. Next, we study how the movements of the eyes, arm and hand are generated and coordinated in reach-to-grasp tasks. In addition, we investigate the tailoring of visual resources with respect to the spatio-temporal requirements of the motor system. We start by studying the visuomotor principles in humans and monkeys and then proceed to investigate how they can be useful in robotic applications. Once we have created our computational models, we can work in the reverse direction, from robotics back to neuroscience, by providing hypotheses and predictions regarding the functions of the central nervous system. More specifically, our first focus is understanding the principles involved in human visuomotor coordination. Few behavioral studies have considered visuomotor coordination in natural, unrestricted, head-free movements in complex scenarios such as obstacle avoidance. To fill this gap, we provide an assessment of visuomotor coordination when humans perform prehensile tasks with obstacle avoidance, an issue that has received far less attention. Namely, we quantify the relationships between the gaze and arm-hand systems, so as to inform robotic models, and we investigate how the presence of an obstacle modulates this pattern of correlations. Second, to complement these observations, we provide a robotic model of visuomotor coordination, with and without the presence of obstacles in the workspace. The parameters of the controller are estimated solely from the motion capture data collected in our human study. This controller has a number of interesting properties. It provides an efficient way to control the gaze, arm and hand movements in a stable and coordinated manner. When facing perturbations while reaching and grasping, our controller adapts its behavior almost instantly, while preserving coordination between the gaze, arm, and hand.
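To make the coordination idea concrete, the following is a minimal, illustrative Python sketch of a coupled dynamical-systems controller in which the arm is slaved to the current gaze fixation and the hand aperture is slaved to the arm's remaining distance to the target. The coupling structure, the `step` function, and the gains `k_gaze`, `k_arm`, `k_hand`, `beta` are assumptions for illustration only; the thesis estimates the actual controller parameters from human motion capture data, which this sketch does not reproduce.

```python
import numpy as np

# Illustrative coupled dynamical-systems controller for
# gaze -> arm -> hand coordination. All gains and coupling
# functions are placeholders, NOT the parameters estimated
# from motion capture in the thesis.

def step(gaze, arm, hand, target, aperture_target, dt=0.01,
         k_gaze=4.0, k_arm=2.0, k_hand=3.0, beta=2.0):
    """Advance the gaze, arm and hand states by one time step.

    Each subsystem is a stable linear attractor toward its goal;
    the arm is drawn toward the current gaze fixation and the hand
    aperture closes only as the arm nears the target, so a
    perturbation of one subsystem slows the others and coordination
    is preserved.
    """
    # Gaze converges to the (possibly moving) target.
    gaze = gaze + dt * k_gaze * (target - gaze)

    # Arm is attracted to the point the gaze currently fixates,
    # so it never outruns the visual system.
    arm = arm + dt * k_arm * (gaze - arm)

    # Hand aperture closes as the arm approaches the target;
    # the exp(-beta * distance) coupling keeps the hand open
    # while the arm is still far away.
    dist = np.linalg.norm(target - arm)
    aperture_goal = aperture_target + (1.0 - np.exp(-beta * dist))
    hand = hand + dt * k_hand * (aperture_goal - hand)

    return gaze, arm, hand

# Usage: simulate a reach with a mid-trajectory target jump (perturbation).
gaze = np.zeros(3)
arm = np.zeros(3)
hand = 1.0                               # fully open aperture
target = np.array([0.4, 0.2, 0.3])
for t in range(400):
    if t == 200:                         # perturbation: target jumps
        target = np.array([0.2, 0.5, 0.25])
    gaze, arm, hand = step(gaze, arm, hand, target, aperture_target=0.05)
```

Because the arm tracks the gaze and the hand tracks the arm's distance to the target, a sudden target jump automatically propagates through all three subsystems without any explicit re-planning, which mirrors the near-instant, coordination-preserving adaptation described above.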


Cite this paper

@inproceedings{Lukic2015VisuomotorCI,
  title={Visuomotor Coordination in Reach-To-Grasp Tasks: From Humans to Humanoids and Vice Versa},
  author={Luka Lukic and Aude Billard and Jos{\'e} Rosado},
  year={2015}
}