• Corpus ID: 235165919

A Psychophysically Oriented Saliency Map Prediction Model

  • Qiang Li
  • Published 8 November 2020
  • Computer Science
Visual attention is one of the most significant mechanisms for selecting and understanding the redundant external world. The human visual system cannot process all incoming information simultaneously because of an information bottleneck, so it focuses mainly on the dominant parts of a scene. Predicting where it will focus is commonly known as visual saliency map prediction. This paper proposed a new psychophysical saliency prediction architecture… 

Understanding Saliency Prediction with Deep Convolutional Neural Networks and Psychophysical Models

Convolutional neural networks (CNNs) have achieved great success in natural image saliency prediction. The primary goal of this study is to investigate the performance of saliency prediction in CNNs.

Saliency estimation using a non-parametric low-level vision model

It is shown that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models.

Predictive coding as a model of the V1 saliency map hypothesis

Saliency Detection: A Spectral Residual Approach

A simple method for visual saliency detection is presented that is independent of features, categories, or other prior knowledge of the objects, together with a fast method for constructing the corresponding saliency map in the spatial domain.
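The spectral residual recipe is compact enough to sketch directly. A minimal NumPy version, assuming a grayscale float image; the parameter name `avg_size` is illustrative, not from the paper:

```python
import numpy as np

def spectral_residual_saliency(image, avg_size=3):
    """Minimal sketch of spectral-residual saliency.

    `image` is a 2-D float array; no prior knowledge of the
    objects is used.
    """
    f = np.fft.fft2(image)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Spectral residual = log spectrum minus its local (box-filter) average.
    k = np.ones(avg_size) / avg_size
    smooth = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 0, log_amp)
    smooth = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, smooth)
    residual = log_amp - smooth
    # Reconstruct in the spatial domain with the original phase;
    # the squared magnitude gives the saliency map.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

The published method also smooths the final map with a Gaussian; that step is omitted here for brevity.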

Visual Saliency Based on Scale-Space Analysis in the Frequency Domain

A new bottom-up paradigm for detecting visual saliency is proposed, characterized by a scale-space analysis of the amplitude spectrum of natural images; it is shown that convolving the image's amplitude spectrum with a low-pass Gaussian kernel of an appropriate scale is equivalent to an image saliency detector.
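The core operation described above can be sketched at a single, fixed scale: smooth the amplitude spectrum with a Gaussian, keep the phase, and reconstruct. This is a minimal illustration of the idea, not the paper's full scale-space model, and `sigma` is an illustrative parameter:

```python
import numpy as np

def smoothed_spectrum_saliency(image, sigma=2.0):
    """Sketch: Gaussian-smoothed amplitude spectrum as a saliency detector."""
    f = np.fft.fft2(image)
    amp, phase = np.abs(f), np.angle(f)
    # 1-D Gaussian kernel, applied separably to the centred spectrum.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    a = np.fft.fftshift(amp)
    a = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 0, a)
    a = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, a)
    amp_s = np.fft.ifftshift(a)
    # Reconstruct with the smoothed amplitude and the original phase.
    sal = np.abs(np.fft.ifft2(amp_s * np.exp(1j * phase))) ** 2
    return sal / (sal.max() + 1e-12)
```

The paper's contribution is selecting the appropriate scale (kernel width) from a whole scale-space family; here a single `sigma` stands in for that choice.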

Selection of a best metric and evaluation of bottom-up visual saliency models

SID4VAM: A Benchmark Dataset With Synthetic Images for Visual Attention Modeling

This study reveals that state-of-the-art deep learning saliency models do not perform well on synthetic pattern images; instead, models with Spectral/Fourier inspiration outperform the others on saliency metrics and are more consistent with human psychophysical experimentation.

A Saliency Detection Model Using Low-Level Features Based on Wavelet Transform

A novel saliency detection model is introduced that utilizes low-level features obtained from the wavelet transform domain, modulating the local contrast at a location by a global saliency computed from the likelihood of the features.
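As a toy illustration of wavelet-domain low-level features, the sketch below takes one level of a Haar decomposition and uses the energy of the detail sub-bands as a local-contrast map. It illustrates the general idea only; the paper's actual model uses multiple scales and a likelihood-based global term:

```python
import numpy as np

def haar_detail_saliency(image):
    """One-level Haar detail energy as a crude local-contrast map.

    `image` is a 2-D float array with even height and width.
    """
    a = image[0::2, 0::2]
    b = image[0::2, 1::2]
    c = image[1::2, 0::2]
    d = image[1::2, 1::2]
    lh = (a - b + c - d) / 4.0  # horizontal detail
    hl = (a + b - c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    energy = lh**2 + hl**2 + hh**2
    # Upsample back to input resolution by pixel repetition.
    sal = np.kron(energy, np.ones((2, 2)))
    return sal / (sal.max() + 1e-12)
```

Locations where the image changes (edges, textures) get non-zero detail energy, while flat regions map to zero.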

Dynamic visual attention: searching for coding length increments

A dynamic visual attention model based on the rarity of features is proposed, introducing the Incremental Coding Length (ICL) to measure the entropy gain of each feature, with the goal of maximizing the entropy of the sampled visual features.

Information-theoretic model comparison unifies saliency metrics

This work brings saliency evaluation into the domain of information theory by framing fixation prediction models probabilistically and calculating information gain, and jointly optimizes the scale, the center bias, and the spatial blurring of all models within this framework.
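The information-gain metric itself is simple once models are probabilistic: it is the average log-likelihood ratio, in bits per fixation, between the model and a baseline at the fixated locations. A minimal sketch, assuming both densities are 2-D arrays that each sum to 1:

```python
import numpy as np

def information_gain(model_density, baseline_density, fixations):
    """Average log2 likelihood ratio (bits/fixation) of model over baseline.

    `fixations` is a list of (row, col) fixation coordinates.
    """
    rows, cols = zip(*fixations)
    lm = np.log2(model_density[rows, cols] + 1e-12)
    lb = np.log2(baseline_density[rows, cols] + 1e-12)
    return float(np.mean(lm - lb))
```

A positive value means the model assigns more probability to the observed fixations than the baseline does; the joint optimization of scale, center bias, and blur described above is not included in this sketch.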