Automatic guidance of visual attention from verbal working memory.

Abstract

Previous studies have shown that visual attention can be captured by stimuli matching the contents of working memory (WM). Here, the authors assessed the nature of the representation that mediates the guidance of visual attention from WM. Observers were presented with either verbal or visual primes (to hold in memory, Experiment 1; to verbalize, Experiment 2; or merely to attend, Experiment 3) and subsequently were required to search for a target among different distractors, each embedded within a colored shape. In half of the trials, an object in the search array matched the prime, but this object never contained the target. Despite this, search was impaired relative to a neutral baseline in which the prime and search displays did not match. An interesting finding is that verbal primes were effective in generating the effects, and verbalization of visual primes elicited similar effects to those elicited when primes were held in WM. However, the effects were absent when primes were only attended. The data suggest that there is automatic encoding into WM when items are verbalized and that verbal as well as visual WM can guide visual attention.

[Chart: citations per year, 2007–2017]

Semantic Scholar estimates that this publication has 227 citations based on the available data.

Cite this paper

@article{Soto2007AutomaticGO,
  title   = {Automatic guidance of visual attention from verbal working memory.},
  author  = {David Soto and Glyn W. Humphreys},
  journal = {Journal of Experimental Psychology: Human Perception and Performance},
  year    = {2007},
  volume  = {33},
  number  = {3},
  pages   = {730--737}
}