A saliency-based search mechanism for overt and covert shifts of visual attention

@article{Itti2000ASS,
  title={A saliency-based search mechanism for overt and covert shifts of visual attention},
  author={Laurent Itti and Christof Koch},
  journal={Vision Research},
  year={2000},
  volume={40},
  pages={1489-1506}
}
  • L. Itti, C. Koch
  • Published 1 June 2000
  • Psychology, Medicine
  • Vision Research
Most models of visual search, whether involving overt eye movements or covert shifts of attention, are based on the concept of a saliency map, that is, an explicit two-dimensional map that encodes the saliency or conspicuity of objects in the visual environment. Competition among neurons in this map gives rise to a single winning location that corresponds to the next attended target. Inhibiting this location automatically allows the system to attend to the next most salient location. We… 
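The attend-then-inhibit cycle described in the abstract is straightforward to make concrete. The following is a minimal sketch in Python/NumPy (an illustration under assumed parameters, not the authors' implementation): the most active location of a 2-D saliency map wins a winner-take-all competition, is reported as the attended target, and is then suppressed within a small neighborhood so that the next most salient location wins on the following iteration. The map, neighborhood radius, and number of shifts below are illustrative choices.

```python
import numpy as np

def attend_sequence(saliency, n_shifts=5, inhibition_radius=2):
    """Return the first n_shifts attended (row, col) locations of a saliency map."""
    s = saliency.astype(float)
    attended = []
    rows, cols = s.shape
    for _ in range(n_shifts):
        # Winner-take-all: the most salient remaining location wins.
        r, c = np.unravel_index(np.argmax(s), s.shape)
        attended.append((r, c))
        # Inhibition of return: suppress a neighborhood of the winner
        # so attention automatically shifts to the next most salient location.
        r0, r1 = max(0, r - inhibition_radius), min(rows, r + inhibition_radius + 1)
        c0, c1 = max(0, c - inhibition_radius), min(cols, c + inhibition_radius + 1)
        s[r0:r1, c0:c1] = -np.inf
    return attended

rng = np.random.default_rng(0)
print(attend_sequence(rng.random((16, 16))))
```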
Visual saliency and spike timing in the ventral visual pathway
  • R. VanRullen
  • Medicine, Psychology
    Journal of Physiology-Paris
  • 2003
TLDR
This work argues against the classical view that visual "bottom-up" saliency automatically recruits the attentional system prior to object recognition, and suggests instead that an implicit representation of saliency is best encoded in the relative times of the first spikes fired in a given neuronal population.
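As a toy illustration of the idea summarized above (an assumed monotonic latency code, not VanRullen's actual model), saliency can be read off implicitly from spike timing if each unit's first-spike latency decreases with the strength of its input; the rank order of first spikes then carries the saliency ranking without any explicit saliency map:

```python
import numpy as np

def first_spike_latencies(saliency, t_max=100.0):
    """Latency (ms) decreasing monotonically with saliency values in (0, 1]."""
    saliency = np.clip(np.asarray(saliency, dtype=float), 1e-6, 1.0)
    return t_max * (1.0 - saliency)

saliency = np.array([0.9, 0.2, 0.6, 0.4])
latencies = first_spike_latencies(saliency)
# Units sorted by firing time reproduce the saliency ranking (most salient first).
print(np.argsort(latencies))   # -> [0 2 3 1]
```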
A Computational Model of Saliency Map Read-Out during Visual Search
TLDR
A new computational model of inhibition of return is proposed, which examines the priority or saliency map in a manner consistent with psychophysical findings and can be considered a neural implementation of the episodic theory of attention.
A bottom up visual saliency map in the primary visual cortex, theory and its experimental tests
TLDR
This work presents the theoretical proposal that the primary visual cortex (V1) creates a saliency map of visual space, such that the receptive-field location of the V1 neuron most responsive to a scene is the location most likely to be selected for attentional processing.
Feature combination strategies for saliency-based visual attention systems
  • L. Itti, C. Koch
  • Mathematics, Computer Science
    J. Electronic Imaging
  • 2001
TLDR
Four combination strategies are compared using three databases of natural color images and it is found that strategy (4) and its simplified, computationally efficient approximation yielded significantly better performance than (1), with up to fourfold improvement, while preserving generality.
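As a rough sketch of what such a combination scheme can look like (a simplified, non-iterative normalization is assumed here for illustration; it is not the exact strategy (4) evaluated in the paper), each feature map is first rescaled to a common range and then weighted by how strongly its global peak stands out from its other local maxima before summation, so maps with one clear conspicuous location contribute more than maps with many similar peaks:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def normalize_map(m, local_size=5):
    """Rescale a feature map to [0, 1], then weight it by how much its global
    peak stands out from its other local maxima (peaky maps are promoted,
    near-uniform maps are suppressed)."""
    m = m.astype(float)
    m = (m - m.min()) / (m.max() - m.min() + 1e-12)
    local_max = maximum_filter(m, size=local_size)
    peaks = m[(m == local_max) & (m > 0)]
    if peaks.size == 0:                      # flat map: contributes nothing
        return np.zeros_like(m)
    global_peak = peaks.max()
    others = peaks[peaks < global_peak]
    mean_other = others.mean() if others.size else 0.0
    return m * (global_peak - mean_other) ** 2

def combine(feature_maps):
    """Sum the normalized maps into a single saliency map."""
    return sum(normalize_map(m) for m in feature_maps)

rng = np.random.default_rng(0)
saliency = combine([rng.random((32, 32)) for _ in range(3)])
```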
CHAPTER 65 – Specifying the Components of Attention in a Visual Search Task
Although commonly treated as a unitary process, attention is more likely a collection of task-related but separable operations. Three components of attention (set, selection, and movement) are …
Computational modelling of visual attention
TLDR
Five important trends have emerged from recent work on computational models of focal visual attention that emphasize the bottom-up, image-based control of attentional deployment, providing a framework for a computational and neurobiological understanding of visual attention.
Neural mechanisms of bottom-up selection during visual search
  • K. Thompson
  • Psychology
    2001 Conference Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society
  • 2001
Models of attention and saccade target selection propose that within the brain there is a topographic map of visual salience that selects, through a winner-take-all mechanism, locations for further …
A computational dynamical model of human visual cortex for visual search and feature-based attention
TLDR
The research hypothesis is that biased competition occurs independently for each cued feature, and is implemented by lateral inhibition between a feedforward and a feedback network through a cortical micro-circuit architecture.
A model of top-down attentional control during visual search in complex scenes.
TLDR
A top-down model of visual attention during search in complex scenes, based on the similarity between the target and regions of the search scene, is devised; the amount of attentional guidance across visual feature dimensions is predicted by a previously introduced informativeness measure.
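A hedged toy version of such target-similarity guidance (illustrative only; the function, feature choices, and cosine-similarity score below are assumptions, not the paper's model) scores every scene region by the similarity between its feature vector and the target's, so regions resembling the target receive more guidance:

```python
import numpy as np

def guidance_map(region_features, target_features):
    """region_features: (H, W, D) array of D-dimensional features per region;
    target_features: (D,) vector describing the search target.
    Returns an (H, W) map of cosine similarity to the target."""
    t = target_features / (np.linalg.norm(target_features) + 1e-12)
    norms = np.linalg.norm(region_features, axis=-1, keepdims=True) + 1e-12
    return (region_features / norms) @ t

rng = np.random.default_rng(1)
scene = rng.random((8, 8, 3))          # e.g. coarse color features per region
target = np.array([1.0, 0.1, 0.1])     # assumed "reddish" target description
g = guidance_map(scene, target)
print(np.unravel_index(np.argmax(g), g.shape))  # most target-like region
```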
What stands out in a scene? A study of human explicit saliency judgment
TLDR
It is concluded that fixations agree with saliency judgments and that classic bottom-up saliency models explain both; computational models specifically designed for fixation prediction slightly outperform models designed for salient object detection on both types of data.

References

SHOWING 1-10 OF 125 REFERENCES
Comparison of feature combination strategies for saliency-based visual attention systems
TLDR
This work studies the problem of combining feature maps, from different visual modalities and with unrelated dynamic ranges, into a unique saliency map, and indicates that strategy (4) and its simplified, computationally efficient approximation yielded significantly better performance than (1), with up to 4-fold improvement, while preserving generality.
Shifts in selective visual attention: towards the underlying neural circuitry.
TLDR
This study addresses the question of how simple networks of neuron-like elements can account for a variety of phenomena associated with shifts of selective visual attention, and suggests a possible role for the extensive back-projection from the visual cortex to the LGN.
Modeling Visual Attention via Selective Tuning
TLDR
This model is a hypothesis for primate visual attention, but it also outperforms existing computational solutions for attention in machine vision and is highly appropriate to solving the problem in a robot vision system.
Visual motion and attentional capture
TLDR
It is argued that when motion segregates a perceptual element from a perceptual group, a new perceptual object is created, and this event captures attention; motion as such does not capture attention, but the appearance of a new perceptual object does.
The guidance of eye movements during active visual search
TLDR
Monkey eye movements were analyzed in classic conjunction and feature search tasks; the saccade-targeting data suggest that color feature selection can block the distracting effects of color-unique distractors during search.
The representation of visual salience in monkey parietal cortex
TLDR
The results show that under ordinary circumstances the entire visual world is only weakly represented in LIP, with only the most salient or behaviourally relevant objects being strongly represented.
Control of Selective Visual Attention: Modeling the Where Pathway
TLDR
This work presents a model for the control of the focus of attention in primates, based on a saliency map, which is not only expected to model the functionality of biological vision but also to be essential for the understanding of complex scenes in machine vision.
Neural mechanisms of selective visual attention.
TLDR
The two basic phenomena that define the problem of visual attention can be illustrated in a simple example; selectivity, the ability to filter out unwanted information, is one of them.
Attention activates winner-take-all competition among visual filters
TLDR
This model predicts that the effects of attention on visual cortical neurons include increased contrast gain as well as sharper tuning to orientation and spatial frequency.
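The notion of increased contrast gain can be illustrated with a standard Naka-Rushton contrast-response function (an assumed textbook form for illustration, not the paper's specific equations), where attention is modelled as a multiplicative scaling of effective contrast that shifts the response curve toward lower contrasts:

```python
import numpy as np

def response(contrast, attention_gain=1.0, r_max=1.0, c50=0.3, n=2.0):
    """Naka-Rushton contrast-response function with an attentional gain
    applied to the effective contrast (higher gain = leftward shift)."""
    c = attention_gain * np.asarray(contrast, dtype=float)
    return r_max * c**n / (c**n + c50**n)

contrasts = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
print(response(contrasts))                      # unattended
print(response(contrasts, attention_gain=2.0))  # attended: same response reached at lower contrast
```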
A feature-integration theory of attention
TLDR
A new hypothesis about the role of focused attention is proposed, which offers a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.