An integration of bottom-up and top-down salient cues on RGB-D data: saliency from objectness versus non-objectness
  • Nevrez Imamoglu, Wataru Shimoda, Chi Zhang, Yuming Fang, Asako Kanezaki, Keiji Yanai, Yoshifumi Nishida
  • Signal, Image and Video Processing
Bottom-up and top-down visual cues are two types of information that help visual saliency models. These salient cues can come from spatial distributions of features (space-based saliency) or from contextual/task-dependent features (object-based saliency). Saliency models generally incorporate salient cues in either a bottom-up or a top-down manner, separately. In this work, we combine bottom-up and top-down cues from both space- and object-based salient features on RGB-D data. In addition, we also…
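As a toy illustration of combining complementary saliency maps: the paper's actual integration scheme is not specified in this excerpt, so a simple linear fusion of two normalized maps is assumed here (the function name and `alpha` parameter are hypothetical).

```python
def fuse_saliency(bottom_up, top_down, alpha=0.5):
    # Hypothetical linear fusion of two saliency maps, given as
    # equal-sized 2D lists of values normalized to [0, 1].
    # alpha weights the bottom-up cue; (1 - alpha) the top-down cue.
    return [[alpha * b + (1 - alpha) * t
             for b, t in zip(row_b, row_t)]
            for row_b, row_t in zip(bottom_up, top_down)]
```

In practice such fusion is applied per pixel (or per region) after normalizing both maps to a common range, with `alpha` tuned or learned.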
A Dynamic Bottom-Up Saliency Detection Method for Still Images
An unsupervised algorithm is proposed to predict the dynamic evolution of bottom-up saliency in images; it can predict an image’s salient regions better than static methods, as saliency detection is inherently a dynamic process.
RGB-D salient object detection: A survey
This paper provides a comprehensive survey of RGB-D based salient object detection models from various perspectives, reviews related benchmark datasets in detail, and investigates the ability of existing models to detect salient objects.
Spatio-temporal saliency detection using objectness measure
A new approach based on the spatial and temporal information of the input video frames is proposed, which aims to find similar salient objects across frames; an objectness measure is applied to highlight regions that may contain the object of interest.
Attention and behaviour on fashion retail websites: an eye-tracking study
Purpose: The purpose of this paper is to identify attention, cognitive and affective responses towards a fashion retailer’s website and the behavioural outcomes when shopping.
Exploring to learn visual saliency: The RL-IAC approach


Boosting bottom-up and top-down visual features for saliency estimation
  • A. Borji
  • Computer Science
    2012 IEEE Conference on Computer Vision and Pattern Recognition
  • 2012
The boosting model outperforms 27 state-of-the-art models and is so far the closest to human accuracy for fixation prediction; it also successfully detects the most salient object in a scene without sophisticated image processing such as region segmentation.
An Integrated Model of Top-Down and Bottom-Up Attention for Optimizing Detection Speed
  • Vidhya Navalpakkam, L. Itti
  • Psychology
    2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)
  • 2006
Testing on 750 artificial and natural scenes shows that the model’s predictions are consistent with a large body of available literature on the human psychophysics of visual search, suggesting that it may provide a good approximation of how humans combine bottom-up and top-down cues.
Spatial Visual Attention for Novelty Detection: A Space-based Saliency Model in 3D Using Spatial Memory
Experimental results demonstrate that high accuracy for novelty detection can be obtained, and that computational time can be reduced for existing state-of-the-art detection and tracking models with the proposed algorithm.
Saliency Detection via Graph-Based Manifold Ranking
This work considers both foreground and background cues in a different way, ranking the similarity of image elements with foreground or background cues via graph-based manifold ranking, defined by their relevance to the given seeds or queries.
RGBD Salient Object Detection: A Benchmark and Algorithms
A simple fusion framework that combines existing RGB-produced saliency with new depth-induced saliency is proposed, along with a specialized multi-stage RGBD model that takes into account both depth and appearance cues derived from low-level feature contrast, mid-level region grouping and high-level prior enhancement.
Visual saliency based on multiscale deep features
  • Guanbin Li, Yizhou Yu
  • Computer Science
    2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2015
This paper discovers that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks.
Saliency detection by multi-context deep learning
This paper proposes a multi-context deep learning framework for salient object detection that employs deep Convolutional Neural Networks to model saliency of objects in images and investigates different pre-training strategies to provide a better initialization for training the deep neural networks.
Models of bottom-up and top-down visual attention
A detailed computational model of basic pattern vision in humans and its modulation by top-down attention is presented, able to quantitatively account for all observations by assuming that attention strengthens the non-linear cortical interactions among visual neurons.
Distinct Class-Specific Saliency Maps for Weakly Supervised Semantic Segmentation
A weakly supervised semantic segmentation method is proposed, based on CNN-based class-specific saliency maps and a fully-connected CRF, which outperforms state-of-the-art results on the PASCAL VOC 2012 dataset under the weakly-supervised setting.
Saliency Optimization from Robust Background Detection
This work proposes a background measure, called boundary connectivity, which characterizes the spatial layout of image regions with respect to image boundaries; the resulting measure is much more robust and presents unique benefits that are absent in previous saliency measures.
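A toy pixel-level sketch of the boundary-connectivity idea, assuming the measure is the length of a region's contact with the image border normalized by the square root of its area (the paper's actual measure is defined on superpixels; the function below and its set-of-pixels interface are illustrative only):

```python
import math

def boundary_connectivity(region, height, width):
    # Illustrative sketch: region is a set of (row, col) pixel
    # coordinates inside an image of the given height and width.
    # Count how many of the region's pixels lie on the image border,
    # then normalize by the square root of the region's area.
    on_border = sum(1 for (r, c) in region
                    if r in (0, height - 1) or c in (0, width - 1))
    return on_border / math.sqrt(len(region))
```

Regions hugging the image border score high (likely background), while compact interior regions score near zero (likely salient foreground), which is the intuition the abstract describes.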