Learning Optimal Gaze Decomposition


When the head is free to move, subjects frequently engage in coordinated head and eye movements to bring a target object to the fovea. Freedman and Sparks [2] found that the relative contributions of head and eye movements to the total gaze shift are non-linear functions of the initial eye position and the total gaze displacement. Freedman [1] and Wang and colleagues [8], [9] have recently proposed descriptive mathematical models for the decomposition of the total gaze shift into head and eye movements. It is, however, an open question (a) whether and how this decomposition can be seen as resulting from an optimality principle, (b) whether this decomposition strategy is learned, and (c) if so, what learning mechanisms are responsible for its acquisition. We show that the rather complex behaviorally observed gaze decomposition can be understood as the result of optimizing a simple cost function. We propose a simple model for the simultaneous learning of the calibration of goal-directed head/eye movements and the optimal gaze shift decomposition, based on a reinforcement learning mechanism [7]. In our model, the cerebellum plays a key role in learning a gaze shift decomposition that accurately brings the desired target to the fovea while at the same time minimizing this cost function. Our model is roughly consistent with the known anatomy of oculomotor control systems. The model learns gaze shift decompositions observed experimentally and makes a number of testable predictions. The model is also implemented and tested in an anthropomorphic robot head that autonomously learns to calibrate its gaze shifts.
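To make the optimality idea concrete, the following is a minimal sketch of how such a decomposition could emerge from cost minimization. The particular cost function (foveation error plus quadratic penalties on final eye eccentricity and head movement), the weights, and the perturbation-based learning rule are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def cost(eye_amp, head_amp, gaze_target, w_ecc=0.2, w_head=0.05):
    """Hypothetical cost of a gaze shift decomposed into eye and head components.

    Penalizes (1) failure to bring the target to the fovea, (2) an eccentric
    final eye-in-head position, and (3) head movement effort.  The weights
    w_ecc and w_head are arbitrary illustrative values.
    """
    foveation_error = (eye_amp + head_amp - gaze_target) ** 2
    eccentricity_penalty = w_ecc * eye_amp ** 2
    head_effort_penalty = w_head * head_amp ** 2
    return foveation_error + eccentricity_penalty + head_effort_penalty

def learn_split(gaze_target, n_trials=2000, noise=1.0, seed=0):
    """Crude reinforcement-style learning of the eye/head split:
    randomly perturb the decomposition and keep changes that lower the cost
    (a stochastic hill-climber standing in for the paper's learning mechanism)."""
    rng = np.random.default_rng(seed)
    eye = head = gaze_target / 2.0          # start from an even split
    best = cost(eye, head, gaze_target)
    for _ in range(n_trials):
        e = eye + noise * rng.standard_normal()
        h = head + noise * rng.standard_normal()
        c = cost(e, h, gaze_target)
        if c < best:                         # reward = lower cost
            eye, head, best = e, h, c
    return eye, head

eye, head = learn_split(30.0)                # a 30-degree gaze shift
```

Under this toy cost, the learned split trades a small residual foveation error against keeping the eye near a centered position, qualitatively mirroring the idea that the observed decomposition reflects an underlying optimality principle.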

Cite this paper

@inproceedings{Wiewiora2003LearningOG,
  title  = {Learning Optimal Gaze Decomposition},
  author = {Eric Wiewiora and Jochen Triesch and Tomonori Hashiyama},
  year   = {2003}
}