Using functional magnetic resonance imaging (fMRI), we found an area in the fusiform gyrus in 12 of the 15 subjects tested that was significantly more active when the subjects viewed faces than when they viewed assorted common objects. This face activation was used to define a specific region of interest individually for each subject, within which several …
Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, …
When 2 targets are presented among distractors in rapid serial visual presentation, correct identification of the 1st target results in a deficit for a 2nd target appearing within 200–500 ms. This attentional blink (AB; J. E. Raymond, K. L. Shapiro, & K. M. Arnell, 1992) was examined for categorically defined targets (letters among nonletters) in 7 …
Using visual information to guide behaviour requires storage in a temporary buffer, known as visual short-term memory (VSTM), that sustains attended information across saccades and other visual interruptions. There is growing debate over whether VSTM capacity is limited to a fixed number of objects or whether it is variable. Here we report four experiments …
The authors examined the organization of visual short-term memory (VSTM). Using a change-detection task, they showed that VSTM stores relational information between individual items. This relational processing is mediated by the organization of items into spatial configurations. The spatial configuration of visual objects is important for VSTM of spatial …
The role of the hippocampus and adjacent medial temporal lobe structures in memory systems has long been debated. Here we show in humans that these neural structures are important for encoding implicit contextual information from the environment. We used a contextual cuing task in which repeated visual context facilitates visual search for embedded target …
Learning and memory of novel spatial configurations aid behaviors such as visual search through an implicit process called contextual cuing (M. M. Chun & Y. Jiang, 1998). The present study provides rigorous tests of the implicit nature of contextual cuing. Experiment 1 used a recognition test that closely matched the learning task, confirming that memory …
Cognitive models of attention propose that visual perception is a product of two stages of visual processing: early operations permit rapid initial categorization of the visual world, while later, attention-demanding, capacity-limited stages are necessary for conscious report of the stimuli. Here we used the attentional blink paradigm and fMRI to neurally …
Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using a …