What do saliency models predict?

@article{Koehler2014WhatDS,
  title={What do saliency models predict?},
  author={Kathryn Koehler and Fei Guo and Shenmin Zhang and Miguel P. Eckstein},
  journal={Journal of Vision},
  year={2014},
  volume={14},
  number={3},
  pages={14}
}
Saliency models have frequently been used to predict eye movements made during image viewing without a specified task (free viewing). However, a single image set has never been used to systematically compare free viewing to other tasks. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments… 
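Agreement between a saliency model and human fixations is conventionally scored with ROC-style metrics: pixels are ranked by predicted saliency, and one asks how well that ranking separates fixated from non-fixated locations. The sketch below is a minimal, illustrative version of this AUC evaluation; the papers listed here use more elaborate variants (e.g. shuffled AUC to control for center bias), and all names and parameters are this sketch's own, not taken from any of the cited works.

```python
import numpy as np

def saliency_auc(saliency_map, fixation_mask, n_thresholds=100):
    """ROC-style AUC: how well the saliency ranking separates fixated
    pixels from pixels in general (chance ~0.5, perfect = 1.0)."""
    sal = saliency_map.astype(float)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
    fix_vals = sal[fixation_mask > 0]   # saliency at fixated pixels
    all_vals = sal.ravel()              # saliency everywhere
    thresholds = np.linspace(0.0, 1.0, n_thresholds)
    # hit rate and false-positive rate at each saliency threshold,
    # reversed so FPR runs in ascending order for the area computation
    tpr = np.array([(fix_vals >= t).mean() for t in thresholds])[::-1]
    fpr = np.array([(all_vals >= t).mean() for t in thresholds])[::-1]
    # trapezoidal area under the ROC curve
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

# Toy check: a saliency peak placed on the "fixated" region scores near 1.
rng = np.random.default_rng(0)
sal = rng.random((32, 32))
sal[10:14, 10:14] += 5.0                # bright blob = model's prediction
fix = np.zeros((32, 32))
fix[11:13, 11:13] = 1                   # fixations land inside the blob
print(round(saliency_auc(sal, fix), 3))
```

Because the score depends only on the ranking of pixels, it is invariant to monotonic rescaling of the saliency map, which is why AUC-family metrics dominate comparisons across models with very different output ranges.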
Reconciling Saliency and Object Center-Bias Hypotheses in Explaining Free-Viewing Fixations
TLDR
A simple combined model of low-level saliency and object center bias that significantly outperforms each individual component on the authors' data, as well as on the Object and Semantic Images and Eye-tracking data set by Xu et al.
Visual Saliency Prediction and Evaluation across Different Perceptual Tasks
TLDR
Novel benchmarking results and methods are presented, a new performance baseline for perceptual tasks that provide an alternative window into visual saliency is established, and the capacity for saliency to serve in approximating human behaviour for one visual task given data from another is demonstrated.
Predicting Goal-Directed Human Attention Using Inverse Reinforcement Learning
TLDR
The first inverse reinforcement learning (IRL) model to learn the internal reward function and policy used by humans during visual search is proposed; it models the viewer's internal belief states as dynamic contextual belief maps of object locations and recovers distinctive target-dependent patterns of object prioritization.
What is a Salient Object? A Dataset and a Baseline Model for Salient Object Detection
  • A. Borji
  • Computer Science
    IEEE Transactions on Image Processing
  • 2015
TLDR
This work takes an in-depth look at the problem of salient object detection by studying the relationship between where people look in scenes and what they choose as the most salient object when they are explicitly asked, and suggests that the most salient object is the one that attracts the highest fraction of fixations.
Weighting the factors affecting attention guidance during free viewing and visual search: The unexpected role of object recognition uncertainty
The factors determining how attention is allocated during visual tasks have been studied for decades, but few studies have attempted to model the weighting of several of these factors within and
Oculomotor behavior during non-visual tasks: The role of visual saliency
TLDR
In the presence of a rich visual environment, visual exploration is evident even when there is no explicit instruction to explore, consistent with the view that the non-visual task is the equivalent of a dual-task: it combines the instructed task with an uninstructed, perhaps even mandatory, exploratory behavior.
Passive attention in artificial neural networks predicts human visual selectivity
TLDR
It is shown that passive attention techniques reveal a significant overlap with human visual selectivity estimates derived from 6 distinct behavioral tasks including visual discrimination, spatial localization, recognizability, free-viewing, cued-object search, and saliency search fixations.
Salience-based object prioritization during active viewing of naturalistic scenes in young and older adults
TLDR
Reconciling the salience view with the object view, it is suggested that visual salience contributes to prioritization among objects, and the data point towards an increasing relevance of object-bound information with increasing age.
Salient in space, salient in time: Fixation probability predicts fixation duration during natural scene viewing.
TLDR
It is demonstrated that fixation probability (empirical salience) predicts fixation duration across different observers and tasks, even if stimuli are deprived of low-level image features, as long as higher-level scene structure remains intact.
...

References

Showing 1–10 of 108 references
Objects predict fixations better than early saliency.
TLDR
The eye position of human observers is measured while they inspect photographs of common natural scenes; the results suggest that early saliency has only an indirect effect on attention, acting through recognized objects.
What stands out in a scene? A study of human explicit saliency judgment
Visual saliency and semantic incongruency influence eye movements when inspecting pictures
Models of low-level saliency predict that when we first look at a photograph our first few eye movements should be made towards visually conspicuous objects. Two experiments investigated this
What can saliency models predict about eye movements? Spatial and sequential aspects of fixations during encoding and recognition.
TLDR
The present data suggest that saliency cannot account for scanpaths and that incorporating these sequences could improve model predictions; similarity between scanpaths made at multiple viewings of the same stimulus suggests that repetitive scanpaths also contribute to where people look.
Predicting human gaze using low-level saliency combined with face detection
TLDR
It is demonstrated that a combined model of face detection and low-level saliency significantly outperforms a low-level model in predicting locations humans fixate on, based on eye-movement recordings of humans observing photographs of natural scenes, most of which contained at least one person.
A saliency-based search mechanism for overt and covert shifts of visual attention
Quantitative Analysis of Human-Model Agreement in Visual Saliency Modeling: A Comparative Study
TLDR
This study allows one to assess the state of the art in visual saliency modeling, helps to organize this rapidly growing field, and sets a unified comparison framework for gauging future efforts, similar to the PASCAL VOC challenge in the object recognition and detection domains.
Saccadic and perceptual performance in visual search tasks. I. Contrast detection and discrimination.
TLDR
The results demonstrate that the accuracy of the first saccade provides much information about the observer's perceptual state at the time of the saccadic decision and provide evidence that saccades and perception use similar visual processing mechanisms for contrast detection and discrimination.
Free viewing of dynamic stimuli by humans and monkeys.
TLDR
It is shown that while humans were highly consistent, monkeys were more heterogeneous and were best predicted by the saliency model, and strong similarities existed between both species, especially when focusing analysis onto high-interest targets.
...