Quantitative Analysis of Human-Model Agreement in Visual Saliency Modeling: A Comparative Study
Visual attention is a process that enables biological and machine vision systems to select the most relevant regions from a scene.
Analysis of Scores, Datasets, and Models in Visual Saliency Prediction
We quantitatively compare 32 state-of-the-art models (using the shuffled AUC score to discount center-bias) on 4 benchmark eye movement datasets, for prediction of human fixation locations and scan path sequence.
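The shuffled AUC named in this summary scores a saliency map by how well it ranks fixated locations on the test image above fixation locations borrowed from other images, so a generic center prior earns no credit. A minimal sketch of that computation follows; the function name, array layout, and tie handling are illustrative assumptions, not the benchmark's code:

    import numpy as np

    def shuffled_auc(saliency_map, fixations, other_fixations):
        # fixations: (N, 2) row/col indices of fixations on this image (positives).
        # other_fixations: (M, 2) fixation locations taken from other images (negatives),
        # which is what discounts center bias.
        pos = saliency_map[fixations[:, 0], fixations[:, 1]]
        neg = saliency_map[other_fixations[:, 0], other_fixations[:, 1]]
        scores = np.concatenate([pos, neg])
        labels = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
        # AUC via the rank-sum identity (ties broken arbitrarily in this sketch)
        ranks = np.empty(len(scores))
        ranks[scores.argsort()] = np.arange(1, len(scores) + 1)
        n_pos, n_neg = len(pos), len(neg)
        return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)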
Salient Object Detection: A Benchmark
We provide a quantitative comparison of 35 state-of-the-art saliency detection models and show that some models perform consistently better than the others.
What/Where to Look Next? Modeling Top-Down Visual Attention in Complex Interactive Environments
In this paper, we describe new task-dependent approaches for modeling top-down overt visual attention based on graphical models for probabilistic inference and reasoning.
Probabilistic learning of task-specific visual attention
We propose a unified Bayesian approach for modeling task-driven visual attention which can boost performance of several approaches in computer vision.
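Read generically, a Bayesian treatment of task-driven attention puts a posterior over the attended location x given the image evidence f and the current task t; the sketch below is an illustrative factorization under that reading, not necessarily the paper's exact model:

    p(x \mid f, t) \;\propto\; p(f \mid x, t)\, p(x \mid t)

Here p(x | t) acts as a task-specific prior over locations, p(f | x, t) scores how consistent the local features are with attending x under task t, and the posterior plays the role of a task-modulated saliency map.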
Adaptive object tracking by learning background context
A particle filter that adapts to background changes can efficiently track objects in natural scenes and achieves higher tracking accuracy than the basic approach.
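The adaptation described here sits on top of the standard particle-filter loop (predict, weight, resample). A compressed sketch of that loop is given below, in which the likelihood is assumed to compare each candidate against both a target model and a continually updated background model; the names and the likelihood interface are assumptions, not the paper's implementation:

    import numpy as np

    def track_step(particles, observe_likelihood, motion_std=5.0, rng=np.random):
        # particles: (N, 2) candidate object positions in the current frame.
        # Predict: diffuse the hypotheses with a random-walk motion model.
        particles = particles + rng.normal(scale=motion_std, size=particles.shape)
        # Weight: observe_likelihood(p) should score p higher when the window at p
        # matches the target appearance and differs from the learned background context.
        weights = np.array([observe_likelihood(p) for p in particles]) + 1e-12
        weights /= weights.sum()
        # Resample in proportion to the weights and report a point estimate.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], particles[idx].mean(axis=0)

After each step, the background model itself would be refreshed from regions away from the estimated object, which is the "learning background context" part of the title.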
What stands out in a scene? A study of human explicit saliency judgment
We evaluate whether humans have explicit and conscious access to the saliency computations believed to contribute to guiding attention and eye movements.
Objects do not predict fixations better than early saliency: a re-analysis of Einhäuser et al.'s data.
Einhäuser, Spain, and Perona (2008) explored an alternative hypothesis to saliency maps (i.e., spatial image outliers) and claimed that "objects predict fixations better than early saliency." To test ...
Computational Modeling of Top-down Visual Attention in Interactive Environments
Modeling how visual saliency guides the deployment of attention over visual scenes has attracted much interest recently, among both computer vision and experimental/computational researchers, since ...
An Object-Based Bayesian Framework for Top-Down Visual Attention
We introduce a new task-independent framework to model top-down overt visual attention based on graphical models for probabilistic inference and reasoning.