Andrew E. Welchman

Expertise in recognizing objects in cluttered scenes is a critical skill for our interactions in complex environments and is thought to develop with learning. However, the neural implementation of object learning across stages of visual analysis in the human brain remains largely unknown. Using combined psychophysics and functional magnetic resonance …
Processing of binocular disparity is thought to be widespread throughout cortex, highlighting its importance for perception and action. Yet the computations and functional roles underlying this activity across areas remain largely unknown. Here, we trace the neural representations mediating depth perception across human brain areas using multivariate …
Binocular disparity, the slight differences between the images registered by our two eyes, provides an important cue when estimating the three-dimensional (3D) structure of the complex environment we inhabit. Sensitivity to binocular disparity is evident at multiple levels of the visual hierarchy in the primate brain, from early visual cortex to parietal …
Humans exploit a range of visual depth cues to estimate three-dimensional structure. For example, the slant of a nearby tabletop can be judged by combining information from binocular disparity, texture and perspective. Behavioral tests show humans combine cues near-optimally, a feat that could depend on discriminating the outputs from cue-specific …
Learning is thought to facilitate the recognition of objects by optimizing the tuning of visual neurons to behaviorally relevant features. However, the learning mechanisms that shape neural selectivity for visual forms in the human brain remain essentially unknown. Here, we combine behavioral and functional magnetic resonance imaging (fMRI) measurements to …
Determining the approach of a moving object is a vital survival skill that depends on the brain combining information about lateral translation and motion-in-depth. Given the importance of sensing motion for obstacle avoidance, it is surprising that humans make errors, reporting an object will miss them when it is on a collision course with their head. Here …
Our perception of the world's three-dimensional (3D) structure is critical for object recognition, navigation and planning actions. To accomplish this, the brain combines different types of visual information about depth structure, but at present, the neural architecture mediating this combination remains largely unknown. Here, we report neuroimaging …
The ability to synchronise actions with environmental events is a fundamental skill supporting a variety of group activities. In such situations, multiple sensory cues are usually available for synchronisation, yet previous studies have suggested that auditory cues dominate those from other modalities. We examine the control of rhythmic action on the basis …
Everyday behaviour involves a trade-off between planned actions and reaction to environmental events. Evidence from neurophysiology, neurology and functional brain imaging suggests different neural bases for the control of different movement types. Here we develop a behavioural paradigm to test movement dynamics for intentional versus reaction movements and …
In natural settings, our eyes tend to track approaching objects. To estimate motion, the brain should thus take account of eye movements, perhaps using retinal cues (retinal slip of static objects) or extra-retinal signals (motor commands). Previous work suggests that extra-retinal ocular vergence signals do not support the perceptual judgments. Here, we …