An integration of bottom-up and top-down salient cues on RGB-D data: saliency from objectness versus non-objectness

@article{Imamoglu2018AnIO,
  title={An integration of bottom-up and top-down salient cues on RGB-D data: saliency from objectness versus non-objectness},
  author={Nevrez Imamoglu and Wataru Shimoda and Chi Zhang and Yuming Fang and Asako Kanezaki and Keiji Yanai and Yoshifumi Nishida},
  journal={Signal, Image and Video Processing},
  year={2018},
  volume={12},
  pages={307-314}
}
Bottom-up and top-down visual cues are two types of information that help visual saliency models. These salient cues can come from spatial distributions of features (space-based saliency) or from contextual/task-dependent features (object-based saliency). Saliency models generally incorporate salient cues in either a bottom-up or a top-down manner separately. In this work, we combine bottom-up and top-down cues from both space- and object-based salient features on RGB-D data. In addition, we also…
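The abstract's fusion of space- and object-based cues suggests a linear combination of normalized saliency maps; the sketch below is only an illustration of that general idea, with the function names, the two-cue setup, and the equal weights assumed rather than taken from the paper.

```python
import numpy as np

def normalize(s):
    """Rescale a saliency map to [0, 1], guarding against flat maps."""
    s = s.astype(np.float64)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def fuse_cues(space_based, object_based, weights=(0.5, 0.5)):
    """Linearly combine normalized space-based and object-based cue maps.
    The equal weighting is an assumption for illustration."""
    fused = weights[0] * normalize(space_based) + weights[1] * normalize(object_based)
    return normalize(fused)
```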

A Dynamic Bottom-Up Saliency Detection Method for Still Images

TLDR
An unsupervised algorithm that predicts the dynamic evolution of bottom-up saliency in images and predicts an image’s salient regions better than static methods, since saliency detection is inherently a dynamic process.

RGB-D salient object detection: A survey

TLDR
This paper provides a comprehensive survey of RGB-D based salient object detection models from various perspectives, reviews related benchmark datasets in detail, and investigates the ability of existing models to detect salient objects.

Spatio-temporal saliency detection using objectness measure

TLDR
A new approach is proposed based on the spatial and temporal information of the input video frames, aiming to find similar salient objects, and an objectness measure is applied to highlight regions that may contain the object of interest.

Exploring to learn visual saliency: The RL-IAC approach

Attention and behaviour on fashion retail websites: an eye-tracking study

Purpose: The purpose of this paper is to identify attention, cognitive and affective responses towards a fashion retailer's website and the behavioural outcomes when shopping

References

SHOWING 1-10 OF 47 REFERENCES

Boosting bottom-up and top-down visual features for saliency estimation

  • A. Borji
  • Computer Science
    2012 IEEE Conference on Computer Vision and Pattern Recognition
  • 2012
TLDR
The boosting model outperforms 27 state-of-the-art models, comes closest so far to human accuracy for fixation prediction, and successfully detects the most salient object in a scene without sophisticated image processing such as region segmentation.

An Integrated Model of Top-Down and Bottom-Up Attention for Optimizing Detection Speed

  • Vidhya Navalpakkam, L. Itti
  • Psychology
    2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)
  • 2006
TLDR
Testing on 750 artificial and natural scenes shows that the model’s predictions are consistent with a large body of available literature on human psychophysics of visual search, suggesting that it may provide good approximation of how humans combine bottom-up and top-down cues.
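The integrated model's central step is biasing bottom-up feature maps with top-down gains chosen to maximize the target's signal-to-noise ratio against distractors. A minimal sketch of a gain rule proportional to SNR follows; the normalization choice and array handling here are assumptions for illustration.

```python
import numpy as np

def topdown_gains(target_resp, distractor_resp, eps=1e-9):
    """Per-feature gain proportional to SNR (mean target response over
    mean distractor response), normalized to average 1 across features."""
    snr = np.asarray(target_resp, float) / (np.asarray(distractor_resp, float) + eps)
    return snr * len(snr) / snr.sum()

def biased_saliency(feature_maps, gains):
    """Sum bottom-up feature maps weighted by the top-down gains."""
    return sum(g * m for g, m in zip(gains, feature_maps))
```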

Spatial Visual Attention for Novelty Detection: A Space-based Saliency Model in 3D Using Spatial Memory

TLDR
Experimental results demonstrate that the proposed algorithm achieves high accuracy for novelty detection and reduces the computational time of existing state-of-the-art detection and tracking models.

Saliency Detection via Graph-Based Manifold Ranking

TLDR
This work considers both foreground and background cues in a distinct way, ranking the similarity of image elements to foreground or background cues via graph-based manifold ranking, with relevance defined against the given seeds or queries.
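Manifold ranking has a well-known closed form: given an affinity matrix W over image elements (e.g., superpixels) and a query indicator vector y, the ranking scores are f = (I − αS)⁻¹ y with S the symmetrically normalized affinity. A minimal sketch (graph construction over superpixels is omitted, and this is the canonical formulation rather than necessarily the paper's exact variant):

```python
import numpy as np

def manifold_rank(W, y, alpha=0.99):
    """Closed-form manifold ranking: f = (I - alpha * S)^(-1) y,
    where S = D^(-1/2) W D^(-1/2) is the normalized affinity matrix."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * np.outer(d_inv_sqrt, d_inv_sqrt)
    return np.linalg.solve(np.eye(len(W)) - alpha * S, y)
```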

RGBD Salient Object Detection: A Benchmark and Algorithms

TLDR
A simple fusion framework that combines existing RGB-produced saliency with new depth-induced saliency is proposed, along with a specialized multi-stage RGBD model that takes account of both depth and appearance cues derived from low-level feature contrast, mid-level region grouping, and high-level prior enhancement.
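The fusion framework's idea, blending an RGB saliency map with a depth-induced one, can be sketched compactly; the global depth-contrast definition and the linear blend below are illustrative assumptions, not the benchmark's exact pipeline.

```python
import numpy as np

def depth_contrast_saliency(depth):
    """Toy depth-induced saliency: absolute contrast of each pixel's depth
    against the frame mean, rescaled to [0, 1]."""
    d = depth.astype(np.float64)
    sal = np.abs(d - d.mean())
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

def fuse_rgbd(rgb_sal, depth_sal, w=0.5):
    """Blend an existing RGB saliency map with depth-induced saliency."""
    return w * rgb_sal + (1.0 - w) * depth_sal
```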

Visual saliency based on multiscale deep features

  • Guanbin Li, Yizhou Yu
  • Computer Science
    2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2015
TLDR
This paper discovers that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks.

Saliency detection by multi-context deep learning

TLDR
This paper proposes a multi-context deep learning framework for salient object detection that employs deep Convolutional Neural Networks to model saliency of objects in images and investigates different pre-training strategies to provide a better initialization for training the deep neural networks.

Models of bottom-up and top-down visual attention

TLDR
A detailed computational model of basic pattern vision in humans and its modulation by top-down attention is presented, able to quantitatively account for all observations by assuming that attention strengthens the non-linear cortical interactions among visual neurons.
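The underlying pattern-vision model is a divisive-normalization circuit in which attention strengthens the nonlinear interactions; a toy sketch of that idea follows, with the exponent values and the pooling chosen purely for illustration.

```python
import numpy as np

def normalized_response(drive, pool, gamma=2.0, delta=1.5, sigma=0.1):
    """Divisive normalization: a unit's excitatory drive raised to a power,
    divided by pooled activity; larger exponents sharpen competition."""
    return drive ** gamma / (sigma ** delta + np.sum(np.asarray(pool, float) ** delta))

# Attention modeled as strengthening the nonlinearity (larger exponents).
attended = normalized_response(1.0, [0.5, 0.4], gamma=2.5, delta=2.0)
unattended = normalized_response(1.0, [0.5, 0.4], gamma=2.0, delta=1.5)
```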

Deep Contrast Learning for Salient Object Detection

  • Guanbin Li, Yizhou Yu
  • Computer Science
    2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2016
TLDR
This paper proposes an end-to-end deep contrast network that significantly improves the state of the art in salient object detection, extracts segment-wise features very efficiently, and better models saliency discontinuities along object boundaries.

Top-down visual saliency via joint CRF and dictionary learning

TLDR
This paper proposes a novel top-down saliency model that jointly learns a Conditional Random Field (CRF) and a discriminative dictionary, using a max-margin approach to train the dictionary modulated by the CRF while simultaneously training the CRF with sparse coding.