A sudden touch on one hand can improve vision near that hand, revealing crossmodal links in spatial attention. It is often assumed that such links involve only multimodal neural structures, but unimodal brain areas may also be affected. We tested the effect of simultaneous visuo-tactile stimulation on the activity of the human visual cortex. Tactile …
Speech perception can use not only auditory signals, but also visual information from seeing the speaker's mouth. The relative timing and relative location of auditory and visual inputs are both known to influence crossmodal integration psychologically, but previous imaging studies of audiovisual speech focused primarily on temporal aspects. Here we …
How do we perceive the visual motion of objects that are accelerated by gravity? We propose that, because vision is poorly sensitive to acceleration, an internal model that calculates the effects of gravity is derived from graviceptive information, is stored in the vestibular cortex, and is activated by visual motion that appears to be coherent with …
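To make the relevant kinematics concrete: motion coherent with gravity follows y(t) = y0 + v0*t - 0.5*g*t^2, so an acceleration fitted from the observed trajectory should sit near -g. A minimal Python sketch of such a coherence check follows; the function names and tolerance are illustrative assumptions, not the authors' internal model.

```python
# Minimal sketch: is an observed vertical trajectory coherent with gravity?
# Illustrative only; the criterion and tolerance are assumptions.
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def gravity_trajectory(y0, v0, t):
    """Height under constant gravity: y(t) = y0 + v0*t - 0.5*G*t**2."""
    return y0 + v0 * t - 0.5 * G * t**2

def coherent_with_gravity(t, y, tol=0.5):
    """Fit a quadratic to the observed positions and test whether the
    recovered acceleration (twice the leading coefficient) matches -G."""
    a2 = np.polyfit(t, y, 2)[0]   # y ~= a2*t**2 + a1*t + a0
    return abs(2.0 * a2 + G) < tol

t = np.linspace(0.0, 1.0, 50)
print(coherent_with_gravity(t, gravity_trajectory(10.0, 0.0, t)))  # True
print(coherent_with_gravity(t, 10.0 - 3.0 * t))  # constant velocity: False
```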
Incoming signals from different sensory modalities are initially processed in separate brain regions. But because these different signals can arise from common events or objects in the external world, integration between them can be useful. Such integration is subject to spatial and temporal constraints, presumably because a common source is more likely for …
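As a toy illustration of those constraints, two unisensory events can be bound only when they fall within both a temporal and a spatial window; the window sizes below are arbitrary placeholders, not values from the study.

```python
# Toy spatiotemporal binding rule for multisensory integration.
# Window sizes are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class SensoryEvent:
    t: float  # onset time in seconds
    x: float  # azimuth in degrees

def likely_common_source(a: SensoryEvent, b: SensoryEvent,
                         max_dt: float = 0.1, max_dx: float = 15.0) -> bool:
    """Treat two events as sharing a source only if they are close
    enough in both time and space."""
    return abs(a.t - b.t) <= max_dt and abs(a.x - b.x) <= max_dx

flash = SensoryEvent(t=0.00, x=10.0)
beep = SensoryEvent(t=0.05, x=12.0)
print(likely_common_source(flash, beep))  # True: within both windows
print(likely_common_source(flash, SensoryEvent(t=0.5, x=12.0)))  # False
```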
In everyday life, people untrained in formal logic draw simple deductive inferences from linguistic material (i.e., elementary propositional deductions). At present, little is known about the brain areas implicated when such conclusions are drawn. We used event-related fMRI to identify these brain areas. A set of multiple and independent criteria …
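An elementary propositional deduction of the kind at issue is, for example, modus ponens: if p then q; p; therefore q. The sketch below makes entailment explicit with a brute-force truth-table check; the encoding is illustrative, not the authors' stimulus material.

```python
# Truth-table check for elementary propositional deductions (illustrative).
from itertools import product

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

def entails(premises, conclusion, n_vars: int = 2) -> bool:
    """Premises entail the conclusion iff no truth assignment makes
    every premise true while the conclusion is false."""
    for vals in product([False, True], repeat=n_vars):
        if all(prem(*vals) for prem in premises) and not conclusion(*vals):
            return False
    return True

# "If p then q; p; therefore q" (modus ponens): valid
print(entails([lambda p, q: implies(p, q), lambda p, q: p],
              lambda p, q: q))  # True
# "If p then q; q; therefore p" (affirming the consequent): invalid
print(entails([lambda p, q: implies(p, q), lambda p, q: q],
              lambda p, q: p))  # False
```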
Deduction allows us to draw consequences from previous knowledge. Deductive reasoning can be applied to several types of problem, for example, conditional, syllogistic, and relational. It has been assumed that the same cognitive operations underlie solutions to them all; however, this hypothesis remains to be tested empirically. We used event-related fMRI, …
Perception of movement in acoustic space depends on comparison of the sound waveforms reaching the two ears (binaural cues) as well as spectrotemporal analysis of the waveform at each ear (monaural cues). The relative importance of these two cues is different for perception of vertical or horizontal motion, with spectrotemporal analysis likely to be more …
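The binaural comparison mentioned here is, at its simplest, an interaural time difference (ITD) estimate, obtainable as the lag that maximizes the cross-correlation of the two ear signals. A minimal sketch, assuming plain NumPy waveforms with illustrative parameter values:

```python
# Estimate the interaural time difference (ITD) from two ear signals.
# Signal parameters are illustrative.
import numpy as np

def estimate_itd(left: np.ndarray, right: np.ndarray, fs: float) -> float:
    """Return the lag (in seconds) maximizing the interaural
    cross-correlation; positive means the left ear lags."""
    xcorr = np.correlate(left, right, mode="full")
    lags = np.arange(-len(right) + 1, len(left))
    return lags[np.argmax(xcorr)] / fs

fs = 44100
t = np.arange(0, 0.05, 1.0 / fs)
sig = np.sin(2 * np.pi * 500 * t)            # 500 Hz tone
delay = int(0.0005 * fs)                     # 0.5 ms interaural delay
left = np.pad(sig, (delay, 0))[:len(sig)]    # left ear gets a delayed copy
print(f"ITD = {estimate_itd(left, sig, fs) * 1e3:.2f} ms")  # ~0.50 ms
```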
Event-related functional magnetic resonance imaging was used to identify brain areas involved in spatial attention and determine whether these operate unimodally or supramodally for vision and touch. On a trial-by-trial basis, a symbolic auditory cue indicated the most likely side for the subsequent target, thus directing covert attention to one side. A …
Our brain continuously receives complex combinations of sounds originating from different sources and relating to different events in the external world. Timing differences between the two ears can be used to localize sounds in space, but only when the inputs to the two ears have similar spectrotemporal profiles (high binaural coherence). We used fMRI to …
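Binaural coherence in this sense can be quantified as the peak of the normalized interaural cross-correlation: near 1 when the two ears receive the same waveform up to a time shift, near 0 for unrelated inputs. A minimal sketch with illustrative noise tokens rather than the actual stimuli:

```python
# Peak normalized interaural cross-correlation as a coherence index.
import numpy as np

def binaural_coherence(left: np.ndarray, right: np.ndarray) -> float:
    """1.0 means identical waveforms up to a pure time shift;
    values near 0 mean spectrotemporally unrelated inputs."""
    left = (left - left.mean()) / left.std()
    right = (right - right.mean()) / right.std()
    xcorr = np.correlate(left, right, mode="full") / len(left)
    return float(np.max(np.abs(xcorr)))

rng = np.random.default_rng(0)
noise = rng.standard_normal(4410)
print(binaural_coherence(noise, np.roll(noise, 10)))         # ~1.0: coherent
print(binaural_coherence(noise, rng.standard_normal(4410)))  # ~0.0: incoherent
```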
Two identical stimuli, such as a pair of electrical shocks to the skin, are readily perceived as two separate events in time provided the interval between them is sufficiently long. However, as they are presented progressively closer together, there comes a point when the two separate stimuli are perceived as a single stimulus. Damage to posterior parietal …
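This fusion phenomenon is commonly summarized by a psychometric function: the probability of reporting two events rises with the inter-stimulus interval. A toy cumulative-Gaussian model, with threshold and slope invented purely for illustration:

```python
# Toy psychometric function for two-versus-one temporal discrimination.
# Threshold and slope are invented illustrative values.
import math

def p_report_two(isi_ms: float, threshold_ms: float = 30.0,
                 sigma_ms: float = 10.0) -> float:
    """Cumulative-Gaussian probability of perceiving two stimuli."""
    z = (isi_ms - threshold_ms) / (sigma_ms * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

for isi in (5, 30, 80):
    print(f"ISI {isi:>2} ms -> P(two) = {p_report_two(isi):.2f}")
# ISI  5 ms -> P(two) = 0.01
# ISI 30 ms -> P(two) = 0.50
# ISI 80 ms -> P(two) = 1.00
```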