Andrew E. Welchman

Processing of binocular disparity is thought to be widespread throughout cortex, highlighting its importance for perception and action. Yet the computations and functional roles underlying this activity across areas remain largely unknown. Here, we trace the neural representations mediating depth perception across human brain areas using multivariate…
Expertise in recognizing objects in cluttered scenes is a critical skill for our interactions in complex environments and is thought to develop with learning. However, the neural implementation of object learning across stages of visual analysis in the human brain remains largely unknown. Using combined psychophysics and functional magnetic resonance…
Binocular disparity, the slight differences between the images registered by our two eyes, provides an important cue when estimating the three-dimensional (3D) structure of the complex environment we inhabit. Sensitivity to binocular disparity is evident at multiple levels of the visual hierarchy in the primate brain, from early visual cortex to parietal…
Synchronising movements with events in the surrounding environment is a ubiquitous aspect of everyday behaviour. Often, information about a stream of events is available across sensory modalities. While it is clear that we synchronise more accurately to auditory cues than to cues from other modalities, little is known about how the brain combines multisensory signals…
How do we decide whether an object approaching us will hit us? The optic array provides information sufficient for us to determine the approaching trajectory of a projectile. However, when using binocular information, observers report that trajectories near the mid-sagittal plane are wider than they actually are. Here we extend this work to consider stimuli…
Learning is thought to facilitate the recognition of objects by optimizing the tuning of visual neurons to behaviorally relevant features. However, the learning mechanisms that shape neural selectivity for visual forms in the human brain remain essentially unknown. Here, we combine behavioral and functional magnetic resonance imaging (fMRI) measurements to…
Our perception of the world's three-dimensional (3D) structure is critical for object recognition, navigation and planning actions. To accomplish this, the brain combines different types of visual information about depth structure, but at present, the neural architecture mediating this combination remains largely unknown. Here, we report neuroimaging…
Humans exploit a range of visual depth cues to estimate three-dimensional structure. For example, the slant of a nearby tabletop can be judged by combining information from binocular disparity, texture and perspective. Behavioral tests show humans combine cues near-optimally, a feat that could depend on discriminating the outputs from cue-specific…
Synchronising our actions with external events is a task we perform without apparent effort. It relies on accurate temporal control, which is widely held to operate in one of two modes: explicit timing for discrete actions and implicit timing for smooth continuous movements. Here we assess synchronisation performance for…
The ability to synchronise actions with environmental events is a fundamental skill supporting a variety of group activities. In such situations, multiple sensory cues are usually available for synchronisation, yet previous studies have suggested that auditory cues dominate those from other modalities. We examine the control of rhythmic action on the basis…