Establishing a coherent internal reference frame for visuospatial representation and maintaining the integrity of this frame during eye movements are thought to be crucial for both perception and motor control. A stable headcentric representation could be constructed by internally comparing retinal signals with eye position. Alternatively, visual memory…
The aim of this study was to: (1) quantify errors in open-loop pointing toward a spatially central (but retinally peripheral) visual target with gaze maintained in various eccentric horizontal, vertical, and oblique directions; and (2) determine the computational source of these errors. Eye and arm orientations were measured with the use of search coils…
Eye-hand coordination requires the brain to integrate visual information with the continuous changes in eye, head, and arm positions. This is a geometrically complex process because the eyes, head, and shoulder have different centers of rotation. As a result, head rotation causes the eye to translate with respect to the shoulder. The present study examines…
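The geometric point above — that a pure head rotation translates the eye relative to the shoulder — follows directly from the offset between the two centers of rotation. A minimal numerical sketch, with hypothetical segment offsets chosen only for illustration (not the study's measured anatomy):

```python
import numpy as np

def rot_yaw(theta):
    """Rotation about the vertical (y) axis; x = right, y = up, z = forward."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

# Hypothetical offsets in cm (illustrative values, not measured data):
eye_in_head = np.array([3.0, 10.0, 8.0])        # eye relative to the head's rotation center
head_in_shoulder = np.array([-15.0, 25.0, 0.0])  # head rotation center relative to the shoulder

def eye_in_shoulder(head_yaw):
    # The head rotates about its own center, not about the eye, so the
    # eye's *position* in shoulder coordinates changes with head angle.
    return head_in_shoulder + rot_yaw(head_yaw) @ eye_in_head

straight = eye_in_shoulder(0.0)
turned = eye_in_shoulder(np.radians(30.0))
translation = turned - straight  # nonzero: rotating the head translated the eye
```

Because `translation` is several centimetres for a 30° head turn, a reach computation that treated the eye as fixed in shoulder coordinates would misestimate target direction, especially for near targets.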
This study addressed the question of how the three-dimensional (3-D) control strategy for the upper arm depends on what the forearm is doing. Subjects were instructed to point a laser (attached in line with the upper arm) toward various visual targets, such that two-dimensional (2-D) pointing directions of the upper arm were held constant across different…
The saccade generator updates memorized target representations for saccades during eye and head movements. Here, we tested whether proprioceptive feedback from the arm can also update handheld object locations for saccades, and what intrinsic coordinate system(s) is used in this transformation. We measured radial saccades beginning from a central light-emitting…
Eye-hand coordination is geometrically complex. To compute the location of a visual target relative to the hand, the brain must consider every anatomical link in the chain from retinas to fingertips. Here we focus on the first three links, studying how the brain handles information about the angles of the two eyes and the head. It is known that people, even…
Most models of spatial vision and visuomotor control reconstruct visual space by adding a vector representing the site of retinal stimulation to another vector representing gaze angle. However, this scheme fails to account for the curvatures in retinal projection produced by rotatory displacements in eye orientation. In particular, our simulations…
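The failure of the vector-addition scheme can be shown numerically: summing retinal and gaze angles component-wise disagrees with rotating the retinal direction by the actual eye orientation whenever the two are oblique to each other. A sketch under assumed conventions (x = right, y = up, z = forward; illustrative angles, not the paper's simulations):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

# Retinal stimulus 20 deg right of the fovea; gaze directed 30 deg up.
ret_h, gaze_v = np.radians(20.0), np.radians(30.0)

# (a) Vector-addition scheme: simply sum the component angles.
add_h, add_v = 20.0, 30.0

# (b) Rotation scheme: rotate the retinal direction by eye orientation.
d_ret = rot_y(ret_h) @ np.array([0.0, 0.0, 1.0])  # direction in eye coordinates
d_space = rot_x(-gaze_v) @ d_ret                  # rotate by eye-in-head orientation

# Recover the horizontal/vertical angles of the rotated direction.
true_h = np.degrees(np.arctan2(d_space[0], d_space[2]))
true_v = np.degrees(np.arctan2(d_space[1], np.hypot(d_space[0], d_space[2])))
```

For this oblique case `true_h` and `true_v` differ from the added angles by a couple of degrees each, so the additive reconstruction is systematically wrong away from the primary position.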
This research explored specific contextual cues that might facilitate human motor learning. Using a dual adaptation task, humans performed manual reaches to visual targets while experiencing a 30° clockwise or counterclockwise rotation of a seen cursor representing their unseen hand; the rotation direction alternated randomly between trials. Groups had different cues to…
To point or reach to a visual target, you need to know its direction relative to your shoulder. That direction can be computed from the retinal image, if the brain also knows the orientation of the eyeball, head, and clavicle. To be geometrically exact, the neural computation would have to involve rotary operations. When the eye was turned 30° up, for…
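One reason the exact computation must be rotary rather than additive is that 3-D rotations do not commute: composing the eye-in-head and head-on-trunk orientations in the wrong order yields a different target direction. A minimal sketch with assumed sign conventions and illustrative angles:

```python
import numpy as np

def rot(axis, deg):
    """Right-handed rotation matrix about 'x', 'y', or 'z' by deg degrees."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    if axis == 'x':
        return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
    if axis == 'y':
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical posture: eye rotated 30 deg up in the head, head turned
# 20 deg on the trunk (signs are assumed conventions, x = right, y = up,
# z = forward). A foveated target lies straight ahead in eye coordinates.
d_eye = np.array([0.0, 0.0, 1.0])
R_eye_in_head = rot('x', -30.0)   # eye 30 deg up
R_head_on_trunk = rot('y', -20.0)  # head 20 deg turned

# Correct chaining: eye-to-head first, then head-to-trunk.
d_shoulder = R_head_on_trunk @ R_eye_in_head @ d_eye

# Swapping the order gives a different direction: rotations do not
# commute, which is why a simple sum of angles cannot be exact.
d_wrong = R_eye_in_head @ R_head_on_trunk @ d_eye
```

Both results are unit vectors, but they point in measurably different directions, so the brain (or any model of it) must respect the order of the rotary operations in the eye-head-shoulder chain.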
Posterior parietal cortex (PPC) has been implicated in the integration of visual and proprioceptive information for the planning of action. We previously reported that single-pulse transcranial magnetic stimulation (TMS) over dorsal-lateral PPC perturbs the early stages of spatial processing for memory-guided reaching. However, our data did not distinguish…