Five factors that guide attention in visual search

@article{Wolfe2017FiveFT,
  title={Five factors that guide attention in visual search},
  author={J. Wolfe and T. Horowitz},
  journal={Nature Human Behaviour},
  year={2017},
  volume={1}
}
How do we find what we are looking for? Even when the desired target is in the current field of view, we need to search because fundamental limits on visual processing make it impossible to recognize everything at once. Searching involves directing attention to objects that might be the target. This deployment of attention is not random. It is guided to the most promising items and locations by five factors discussed here: bottom-up salience, top-down feature guidance, scene structure and meaning, the previous history of search, and the relative value of the targets and distractors.
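As a rough illustration of what "guidance" means computationally, the sketch below combines hypothetical per-factor guidance maps into a single priority map by weighted summation, in the spirit of Guided Search-style models. The factor names, weights, and random maps are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def priority_map(factor_maps, weights):
    """Combine per-factor guidance maps into a single priority map.

    factor_maps: dict mapping factor name -> 2-D array in [0, 1]
                 (e.g. bottom-up salience, top-down feature guidance,
                 scene-based priors, search history, target value).
    weights:     dict mapping factor name -> scalar weight.
    """
    names = list(factor_maps)
    priority = np.zeros_like(factor_maps[names[0]], dtype=float)
    for name in names:
        priority += weights.get(name, 0.0) * factor_maps[name]
    return priority

# Toy example: attention would be deployed to the peak of the combined map.
rng = np.random.default_rng(0)
maps = {name: rng.random((32, 32)) for name in
        ["salience", "feature_guidance", "scene_priors", "history", "value"]}
weights = {"salience": 1.0, "feature_guidance": 2.0, "scene_priors": 1.5,
           "history": 0.5, "value": 0.5}
p = priority_map(maps, weights)
print(np.unravel_index(np.argmax(p), p.shape))  # most promising location
```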

Citations

Prior target locations attract overt attention during search
Measurements of shifts of gaze suggest that memories of recent experience can powerfully influence attentional allocation, and that location priming effectively disrupted attentional guidance to the search target.
Guided Search 6.0: An updated model of visual search.
  • J. Wolfe
  • Psychology, Medicine
  • Psychonomic Bulletin & Review
  • 2021
This paper describes Guided Search 6.0 (GS6), a revised model of visual search. When we encounter a scene, we can see something everywhere. However, we cannot recognize more than a few items at a time.
Measuring the time course of selection during visual search
Results indicate that observers could not make instantaneous use of color information to guide the search, even when they knew which two colors would be appearing on every trial.
Dwelling on simple stimuli in visual search
Simple stimuli with varying degrees of target-distractor similarity were used; the results imply that visual search models should not treat dwelling and revisiting as constants across levels of search efficiency, and that behavioral search experiments are equivocal about the processing mechanisms underlying more versus less efficient search.
Visual search within working memory.
The mechanisms that select and update working memory are compared with those that guide attention during perception, such as in visual search, suggesting a common coding and selection scheme for working-memory and perceptual representations.
Memory shapes visual search strategies in large-scale environments
By manipulating target location, it is demonstrated that search depends on episodic spatial memory as well as learnt spatial priors, suggesting that spatial memory of the global structure allows a search strategy with efficient attention allocation based on the relevance of scene regions.
Temporal organization of color and shape processing during visual search
The mechanisms guiding visual attention are of great interest within cognitive and perceptual psychology. Many researchers have proposed models of these mechanisms, which serve to both formalize…
What pops out for you pops out for fish: Four common visual features.
It is confirmed that color, size, orientation, and motion efficiently guide attention in the archerfish in a manner comparable to humans, suggesting universality in the way visual search is carried out by animals despite very different brain anatomies and living environments.
Categorical templates are more useful when features are consistent: Evidence from eye movements during search for societally important vehicles
The results of this investigation suggest that when features of a category are consistent and predictable, searchers can create mental representations that allow for the efficient guidance and restriction of attention as well as swift object identification.
Learning efficient visual search for stimuli containing diagnostic spatial configurations and color-shape conjunctions
It is shown that both targets and distractors are learned, and that reversing learned target and distractor identities impairs performance, suggesting that conjunction learning involving such stimuli might be an emergent phenomenon reflecting multiple different learning processes.

References

Showing 1-10 of 166 references
Setting up the target template in visual search.
It is found that when the cue matches the target exactly, search speed increases and the slope of the response time × set size function decreases, and that the template set-up process uses detailed visual information, rather than schematic or semantic information, to find the target.
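The slope of the response time × set size function mentioned above is the standard measure of search efficiency. A minimal sketch of how such a slope is typically computed follows; the set sizes and reaction times are invented for illustration and are not data from the cited study.

```python
import numpy as np

# Hypothetical mean correct reaction times (ms) at each display set size.
set_sizes = np.array([4, 8, 12, 16])
rt_exact_picture_cue = np.array([560, 580, 605, 625])  # cue matches target exactly
rt_word_cue = np.array([620, 700, 775, 860])            # cue is only a verbal label

def search_slope(set_sizes, rts):
    """Least-squares slope (ms per item) of the RT x set-size function."""
    slope, _intercept = np.polyfit(set_sizes, rts, 1)
    return slope

print(search_slope(set_sizes, rt_exact_picture_cue))  # shallow slope: strong guidance
print(search_slope(set_sizes, rt_word_cue))           # steeper slope: weaker guidance
```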
How fast can you change your mind? The speed of top-down guidance in visual search
Most laboratory visual search tasks involve many searches for the same target, while in the real world we typically change our target with each search (e.g. find the coffee cup, then the sugar). How…
The role of memory for visual search in scenes
Work on how semantic memory, episodic scene memory, and implicit memory for repeated item configurations can facilitate search in artificial displays is reviewed, with special emphasis on the role of memory in guiding search in organized, real-world scenes.
Fur in the midst of the waters: visual search for material type is inefficient.
A limited set of attributes can guide visual selective attention. Thus, it is possible to deploy attention to an item defined by an appropriate color, size, or orientation, but not to a specific type of material.
Are summary statistics enough? Evidence for the importance of shape in guiding visual search
By asking whether search performance differed between targets and statistically matched visualizations of the same targets, it is concluded that summary statistics must include some global shape information to approximate the peripheral information used during search.
Combining top-down processes to guide eye movements during real-world scene search.
This work independently manipulated the specificity of the search target template and the usefulness of contextual constraint in an object search task to investigate how the visual system combines multiple types of top-down information to facilitate search.
A saliency-based search mechanism for overt and covert shifts of visual attention
A detailed computer implementation of a saliency-map scheme is described, focusing on the problem of combining information across modalities (here orientation, intensity, and color) in a purely stimulus-driven manner; the model is applied to common psychophysical stimuli as well as to a very demanding visual search task.
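Saliency-map schemes of this kind typically generate successive covert or overt shifts by attending the current maximum of the map and then suppressing it (inhibition of return). The sketch below is a toy version of that loop; the map and parameters are placeholders, not the cited implementation.

```python
import numpy as np

def scanpath(saliency, n_shifts=5, ior_radius=2):
    """Generate successive attention shifts from a saliency map.

    Toy winner-take-all loop: attend the current global maximum, then
    suppress a small neighbourhood around it (inhibition of return) so
    that attention moves on to the next most salient location.
    """
    s = saliency.astype(float).copy()
    shifts = []
    for _ in range(n_shifts):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        shifts.append((int(y), int(x)))
        s[max(0, y - ior_radius): y + ior_radius + 1,
          max(0, x - ior_radius): x + ior_radius + 1] = -np.inf
    return shifts

rng = np.random.default_rng(1)
print(scanpath(rng.random((16, 16)), n_shifts=3))
```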
Searching in the dark: Cognitive relevance drives attention in real-world scenes
A cognitive relevance framework is outlined to account for the control of attention and fixation in scenes, with participants much more likely to look at targets than at salient regions during search.
Does apparent size capture attention in visual search? Evidence from the Muller-Lyer illusion.
The present experiment has demonstrated for the first time that apparent size can capture attention and, thus, provide bottom-up guidance on the basis of perceived salience.
Top-Down Attentional Guidance Based on Implicit Learning of Visual Covariation
The visual environment is extremely rich and complex, producing information overload for the visual system. But the environment also embodies structure in the form of redundancies and regularities…