Saliency Map Estimation for Omni-Directional Image Considering Prior Distributions

@article{Suzuki2018SaliencyME,
  title={Saliency Map Estimation for Omni-Directional Image Considering Prior Distributions},
  author={Tatsuya Suzuki and Takao Yamanaka},
  journal={2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC)},
  year={2018},
  pages={2079-2084}
}
  • Tatsuya Suzuki, Takao Yamanaka
  • Published 17 July 2018 in the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
In recent years, deep learning techniques have been applied to the estimation of saliency maps, which represent probability density functions of fixations when people look at images. Although methods of saliency-map estimation have been actively studied for 2-dimensional planar images, methods for omni-directional images used in virtual environments had not been studied until a competition on saliency-map estimation for omni-directional images was held at ICME2017… 
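The truncated abstract does not spell out the method, but the title indicates that a predicted saliency map is combined with prior distributions of fixations. For omni-directional images a common prior of this kind is the equator bias, since viewers rarely fixate the poles. Below is a minimal sketch of such a fusion, assuming a Gaussian prior over latitude; the function names and sigma_deg value are illustrative, not taken from the paper.

```python
import numpy as np

def equator_bias_prior(height, width, sigma_deg=20.0):
    """Latitude-dependent prior for an equirectangular map:
    fixation probability peaks at the equator (latitude 0)."""
    lat = np.linspace(90.0, -90.0, height)            # degrees, row 0 = north pole
    row_prior = np.exp(-0.5 * (lat / sigma_deg) ** 2)
    return np.tile(row_prior[:, None], (1, width))    # constant along longitude

def apply_prior(saliency, prior):
    """Fuse a predicted saliency map with a prior and renormalize,
    so the result is again a discrete probability density."""
    fused = saliency * prior
    return fused / fused.sum()

# Toy usage: a uniform prediction becomes equator-weighted after fusion.
pred = np.full((256, 512), 1.0 / (256 * 512))
fused = apply_prior(pred, equator_bias_prior(256, 512))
```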

Citations

Dilated Convolutional Neural Networks for Panoramic Image Saliency Prediction
TLDR
An encoder-decoder network is proposed for panoramic image saliency prediction that takes the cube map format as input and processes the six faces of the cube map simultaneously to deal with image distortions in 360° images.
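For context, a generic conversion from an equirectangular image to one cube-map face (not the paper's own preprocessing) can be sketched as follows; the other five faces differ only in how the (u, v) grid maps to a 3D ray. Nearest-neighbour sampling keeps the sketch short, whereas real pipelines interpolate.

```python
import numpy as np

def erp_to_front_face(erp, face_size=256):
    """Sample the front (+z) cube-map face from an equirectangular image.
    Row 0 of the ERP image is taken as latitude -90 deg here; flip `row`
    if your data puts +90 deg at the top."""
    h, w = erp.shape[:2]
    u, v = np.meshgrid(np.linspace(-1, 1, face_size),
                       np.linspace(-1, 1, face_size))
    x, y, z = u, v, np.ones_like(u)                     # rays through the +z face
    lon = np.arctan2(x, z)                              # longitude in [-pi, pi]
    lat = np.arcsin(y / np.sqrt(x**2 + y**2 + z**2))    # latitude in [-pi/2, pi/2]
    col = np.clip((lon / np.pi + 1) / 2 * (w - 1), 0, w - 1).astype(int)
    row = np.clip((lat / (np.pi / 2) + 1) / 2 * (h - 1), 0, h - 1).astype(int)
    return erp[row, col]
```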
ATSal: An Attention Based Architecture for Saliency Prediction in 360 Videos
TLDR
A novel attention-based (head-eye) saliency model for 360° videos, which explicitly encodes global static visual attention, allowing expert models to focus on learning the saliency on local patches throughout consecutive frames.
Extending 2D Saliency Models for Head Movement Prediction in 360-Degree Images using CNN-Based Fusion
TLDR
A new framework for effectively applying any 2D saliency prediction method to 360-degree images is proposed that includes a novel convolutional neural network based fusion approach, providing more accurate saliency prediction while avoiding the introduction of distortions.
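The fusion idea can be illustrated abstractly: K saliency maps obtained by running a 2D model on K projected views are stacked as channels and combined by a small learned head. A minimal sketch in PyTorch; the class name and layer sizes are assumptions, not the cited paper's architecture.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Learned fusion of K per-projection saliency maps into one map."""
    def __init__(self, k):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(k, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, maps):          # maps: (batch, K, H, W)
        return self.fuse(maps)        # (batch, 1, H, W)

# Toy usage with K = 6 projected views.
head = FusionHead(k=6)
out = head(torch.rand(1, 6, 128, 256))
```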
Viewport-Dependent Saliency Prediction in 360° Video
TLDR
A novel visual saliency model is proposed, dubbed viewport saliency, to predict fixations over 360° videos and it is found that where people look is affected by the content and location of the viewport in 360° video.
Deep Learning for Omnidirectional Vision: A Survey and New Perspectives
TLDR
This paper presents a systematic and comprehensive review and analysis of the recent progress in DL methods for omnidirectional vision, including a structural and hierarchical taxonomy of the DL methods and a summarization of the latest novel learning strategies and applications.
Rethinking 360° Image Visual Attention Modelling with Unsupervised Learning
TLDR
This paper extends recent advances in contrastive learning to learn latent representations that are sufficiently invariant to be highly effective for spherical saliency prediction as a downstream task and argues that omni-directional images are particularly suited to such an approach due to the geometry of the data domain.
SalGCN: Saliency Prediction for 360-Degree Images Based on Spherical Graph Convolutional Networks
TLDR
This paper proposes a saliency prediction framework for 360-degree images based on graph convolutional networks (SalGCN), which directly applies to the spherical graph signals, and adopts the GICOPix to construct a spherical graph signal from a spherical image in equirectangular projection (ERP) format.
Panoramic convolutions for 360° single-image saliency prediction
TLDR
This model is able to successfully predict saliency in 360° scenes from a single image, outperforming other state-of-the-art approaches for panoramic content, and yielding more precise results that may help in the understanding of users’ behavior when viewing 360° VR content.
State-of-the-Art in 360° Video/Image Processing: Perception, Assessment and Compression
TLDR
This article reviews both datasets and visual attention modelling approaches for 360° video/image, and overviews the compression approaches, which utilize either the spherical characteristics or visual attention models.
Adapting Computer Vision Algorithms for Omnidirectional Video
TLDR
This work gives a high-level overview of these challenges and outlines strategies for adapting computer vision algorithms to the specifics of omnidirectional video.
…

References

Showing 1-10 of 15 references
SalNet360: Saliency Maps for omni-directional images with CNN
Which saliency weighting for omni directional image quality assessment?
TLDR
An eye-tracking experiment is performed using an HMD, followed by gaze analysis to characterize visual attention behavior within a viewport; the results suggest that most eye-gaze fixations are rather far away from the center of the viewport.
Fully Convolutional DenseNet for Saliency-Map Prediction
TLDR
While most state-of-the-art models for predicting saliency maps use shallow networks such as VGG-16, this model uses densely connected convolutional networks (DenseNet) with over 150 layers.
SaltiNet: Scan-Path Prediction on 360 Degree Images Using Saliency Volumes
TLDR
SaltiNet, a deep neural network for scan-path prediction on 360-degree images, is introduced; the model is trained to generate saliency volumes, a novel temporal-aware representation of saliency information.
Quantitative Analysis of Human-Model Agreement in Visual Saliency Modeling: A Comparative Study
TLDR
This study allows one to assess the state of the art in visual saliency modeling, helps to organize this rapidly growing field, and sets a unified comparison framework for gauging future efforts, similar to the PASCAL VOC challenge in the object recognition and detection domains.
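Comparative studies of this kind score human-model agreement with metrics such as NSS (mean z-scored saliency at fixated pixels) and CC (linear correlation between predicted and empirical maps). A minimal sketch of these two standard metrics; the study itself uses a broader metric suite.

```python
import numpy as np

def nss(saliency, fixation_map):
    """Normalized Scanpath Saliency: z-score the prediction, then
    average it over fixated pixels (fixation_map is binary)."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-8)
    return float(s[fixation_map.astype(bool)].mean())

def cc(saliency, gt_density):
    """Pearson linear correlation between two saliency maps."""
    a = saliency - saliency.mean()
    b = gt_density - gt_density.mean()
    return float((a * b).sum() / (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-8))
```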
A Dataset of Head and Eye Movements for 360 Degree Images
TLDR
A dataset of sixty different 360-degree images, each watched by at least 40 observers, is presented, and guidelines and tools for evaluating and comparing saliency in omni-directional images are provided.
A Model of Saliency-Based Visual Attention for Rapid Scene Analysis
TLDR
A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented, which breaks down the complex problem of scene understanding by rapidly selecting conspicuous locations to be analyzed in detail.
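The center-surround principle at the core of this model can be illustrated on a single intensity channel: conspicuity is the difference between a fine ("center") and a coarse ("surround") Gaussian-smoothed view, summed over scale pairs. A minimal single-channel sketch; the full model adds color and orientation channels, an image pyramid, and across-scale normalization.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_saliency(gray):
    """Intensity-only center-surround saliency: sum |center - surround|
    over a few (center sigma, surround ratio) pairs, then normalize."""
    gray = np.asarray(gray, dtype=float)
    sal = np.zeros_like(gray)
    for center_sigma in (1.0, 2.0, 4.0):
        for ratio in (3.0, 4.0):
            center = gaussian_filter(gray, center_sigma)
            surround = gaussian_filter(gray, center_sigma * ratio)
            sal += np.abs(center - surround)
    return sal / (sal.max() + 1e-8)
```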
SALICON: Reducing the Semantic Gap in Saliency Prediction by Adapting Deep Neural Networks
TLDR
This paper presents a focused study to narrow the semantic gap with an architecture based on Deep Neural Network (DNN), which leverages the representational power of high-level semantics encoded in DNNs pretrained for object recognition.
DeepGaze II: Reading fixations from deep features trained on object recognition
TLDR
The model uses the features from the VGG-19 deep neural network trained to identify objects in images for saliency prediction with no additional fine-tuning and achieves top performance in area under the curve metrics on the MIT300 hold-out benchmark.
Very Deep Convolutional Networks for Large-Scale Image Recognition
TLDR
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
…