Real-time visual saliency by Division of Gaussians

@inproceedings{Katramados2011RealtimeVS,
  title={Real-time visual saliency by Division of Gaussians},
  author={Ioannis Katramados and T. Breckon},
  booktitle={2011 18th IEEE International Conference on Image Processing},
  year={2011},
  pages={1701-1704}
}
This paper introduces a novel method for deriving visual saliency maps in real-time without compromising the quality of the output. This is achieved by replacing the computationally expensive centre-surround filters with a simpler mathematical model named Division of Gaussians (DIVoG). The results are compared to five other approaches, demonstrating at least six times faster execution than the current state-of-the-art whilst maintaining high detection accuracy. Given the multitude of computer…
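The DIVoG idea lends itself to a compact illustration. The sketch below is not the authors' reference implementation, only a minimal approximation using OpenCV: a Gaussian pyramid is collapsed by repeated pyrDown, re-expanded by repeated pyrUp, and the element-wise minimum ratio between the input and its re-expanded (heavily smoothed) counterpart is inverted to give the saliency map. The function name, number of levels and use of a single intensity channel are illustrative assumptions.

```python
import cv2
import numpy as np

def divog_saliency(bgr, levels=5, eps=1e-6):
    # Single intensity channel for brevity (an assumption of this sketch).
    base = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) + eps
    # Bottom-up Gaussian pyramid: repeated blur + downsample.
    level = base.copy()
    for _ in range(levels):
        level = cv2.pyrDown(level)
    # Top-down pyramid: upsample the coarsest level back to full resolution.
    for _ in range(levels):
        level = cv2.pyrUp(level)
    recon = cv2.resize(level, (base.shape[1], base.shape[0])) + eps
    # Element-wise minimum ratio of the two pyramid bases, inverted so that
    # pixels deviating strongly from their smoothed context score highly.
    ratio = np.minimum(recon / base, base / recon)
    return cv2.normalize(1.0 - ratio, None, 0, 1, cv2.NORM_MINMAX)
```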
Real-Time Visual Saliency Detection Using Gaussian Distribution
TLDR
A novel global contrast method to generate full-resolution saliency maps using a Gaussian distribution model, which can be implemented in real time with higher accuracy and faster execution than existing methods.
Psychovisual saliency in color images
TLDR
A bottom-up computational model is described that simulates the psycho-visual model of saliency based on intensity and color features, giving sequential priority to objects that other computational models cannot account for.
Fast Salient Object Detection in Non-stationary Video Sequences Based on Spatial Saliency Maps
TLDR
Various fast techniques suitable for extracting intensity, color, contrast, edge, angle, and symmetry features from the keyframes of non-stationary video sequences, with two main purposes: the removal of salient objects and more accurate estimation of background motion.
Dense gradient-based features (DEGRAF) for computationally efficient and invariant feature extraction in real-time applications
TLDR
Two variants of Dense Gradient-based Features (DeGraF) are presented, of which the signal-to-noise based approach is shown to perform admirably against the state of the art in terms of feature density, computational efficiency and feature stability.
Rapid learning-based video stereolization using graphic processing unit acceleration
TLDR
Experimental results demonstrate that the proposed rapid learning-based video stereolization using graphics processing unit (GPU) acceleration is nearly 180 times faster than CPU-based processing and achieves performance comparable to state-of-the-art methods.
Posture estimation for improved photogrammetric localization of pedestrians in monocular infrared imagery
Target tracking complexity within conventional video imagery can be fundamentally attributed to the ambiguity associated with the actual 3D scene position of a given tracked object in relation to its…
A photogrammetric approach for real-time 3D localization and tracking of pedestrians in monocular infrared imagery
Target tracking within conventional video imagery poses a significant challenge that is increasingly being addressed via complex algorithmic solutions. The complexity of this problem can be…
Human pose classification within the context of near-IR imagery tracking
We address the challenge of human behaviour analysis within automated image understanding. Whilst prior work concentrates on this task within visible-band (EO) imagery, by contrast we target basic…
Using compressed audio-visual words for multi-modal scene classification
  • Jan J. Kurcius, T. Breckon
  • Computer Science
  • 2014 International Workshop on Computational Intelligence for Multimedia Understanding (IWCIM)
  • 2014
TLDR
This work extends the classical bag-of-words approach over both audio and video feature spaces, whereby the concept of compressive sensing is introduced as a novel methodology for multi-modal fusion via audio-visual feature dimensionality reduction.
On Cross-Spectral Stereo Matching using Dense Gradient Features
TLDR
This work deals with the recovery of dense depth information from thermal (far infrared spectrum) and optical (visible spectrum) image pairs, where large differences in the characteristics of the image pairs make this task significantly more challenging than the common stereo case.

References

SHOWING 1-10 OF 21 REFERENCES
Saliency Detection: A Spectral Residual Approach
TLDR
A simple method for visual saliency detection is presented, independent of features, categories, or other forms of prior knowledge of the objects, and a fast method to construct the corresponding saliency map in the spatial domain is proposed.
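For context, the spectral residual approach is simple enough to sketch. The snippet below follows the commonly cited recipe (FFT of a downscaled grayscale image, log amplitude minus its local average, inverse FFT, squared magnitude, Gaussian smoothing); the 64x64 working size and filter sizes are the usual illustrative defaults, not taken from this page.

```python
import cv2
import numpy as np

def spectral_residual_saliency(bgr, size=64):
    # Work on a small fixed-size grayscale copy; 64x64 is a common choice.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size, size)).astype(np.float64)
    spectrum = np.fft.fft2(small)
    log_amplitude = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    # Spectral residual = log amplitude minus its local (box-filtered) average.
    residual = log_amplitude - cv2.blur(log_amplitude, (3, 3))
    # Back to the spatial domain; squared magnitude plus smoothing gives the map.
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = cv2.GaussianBlur(saliency, (11, 11), 2.5)
    saliency = cv2.resize(saliency, (gray.shape[1], gray.shape[0]))
    return cv2.normalize(saliency, None, 0, 1, cv2.NORM_MINMAX)
```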
A Real-time Visual Attention System Using Integral Images
TLDR
A method for achieving fast, real-time-capable system performance with high accuracy in attention systems is presented, built on smart feature computation techniques that use integral images.
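The integral-image trick behind such real-time attention systems is standard and worth a minimal sketch: after a single pass over the image, any rectangular sum (and hence any box-filter or centre-surround response) costs four lookups. The image and window coordinates below are placeholders.

```python
import cv2
import numpy as np

# Stand-in image; in a real attention system this would be a feature map.
gray = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
ii = cv2.integral(gray)  # (H+1, W+1) summed-area table, built in one pass

def box_sum(ii, x, y, w, h):
    # Sum over the rectangle with top-left corner (x, y) and size w x h, in O(1).
    return int(ii[y + h, x + w]) - int(ii[y, x + w]) - int(ii[y + h, x]) + int(ii[y, x])

# A centre-surround response as the difference of two box means.
centre = box_sum(ii, 300, 220, 40, 40) / (40 * 40)
surround = box_sum(ii, 280, 200, 80, 80) / (80 * 80)
print(abs(centre - surround))
```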
Saliency detection using maximum symmetric surround
TLDR
This paper introduces a method for salient region detection that retains the advantages of such saliency maps while overcoming their shortcomings, and compares it to six state-of-the-art salient region detection methods using publicly available ground truth.
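A naive sketch of the maximum-symmetric-surround idea as it is usually described: each pixel is compared against the mean colour of the largest window centred on it that still fits symmetrically inside the image. The plain loops are for clarity only (an efficient version would use integral images), and the Lab colour space and blur size here are assumptions of this sketch.

```python
import cv2
import numpy as np

def msss_saliency(bgr):
    # Slightly blurred Lab image; colour space and blur size are assumed defaults.
    lab = cv2.cvtColor(cv2.GaussianBlur(bgr, (3, 3), 0),
                       cv2.COLOR_BGR2LAB).astype(np.float64)
    h, w = lab.shape[:2]
    sal = np.zeros((h, w))
    for y in range(h):
        oy = min(y, h - 1 - y)                 # largest symmetric vertical extent
        for x in range(w):
            ox = min(x, w - 1 - x)             # largest symmetric horizontal extent
            window = lab[y - oy:y + oy + 1, x - ox:x + ox + 1]
            mean = window.reshape(-1, 3).mean(axis=0)
            # Squared colour distance between the pixel and its symmetric surround.
            sal[y, x] = np.sum((lab[y, x] - mean) ** 2)
    return cv2.normalize(sal, None, 0, 1, cv2.NORM_MINMAX)
```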
Graph-Based Visual Saliency
TLDR
A new bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is proposed, which powerfully predicts human fixations on 749 variations of 108 natural images, achieving 98% of the ROC area of a human-based control, whereas the classical algorithms of Itti & Koch achieve only 84%.
Frequency-tuned salient region detection
TLDR
This paper introduces a method for salient region detection that outputs full-resolution saliency maps with well-defined boundaries of salient objects, and which outperforms five other algorithms on both the ground-truth evaluation and the segmentation task by achieving higher precision and better recall.
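The frequency-tuned recipe is commonly summarised as the per-pixel distance between a slightly blurred Lab image and the global mean Lab colour; a minimal sketch under that assumption follows (kernel size and normalisation are illustrative choices).

```python
import cv2
import numpy as np

def frequency_tuned_saliency(bgr):
    # Small Gaussian blur in Lab space; the kernel size is an assumed default.
    lab = cv2.cvtColor(cv2.GaussianBlur(bgr, (5, 5), 0),
                       cv2.COLOR_BGR2LAB).astype(np.float64)
    mean = lab.mean(axis=(0, 1))               # global mean Lab colour vector
    # Per-pixel Euclidean distance from the global mean gives the saliency value.
    sal = np.linalg.norm(lab - mean, axis=2)
    return cv2.normalize(sal, None, 0, 1, cv2.NORM_MINMAX)
```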
A Model of Saliency-Based Visual Attention for Rapid Scene Analysis
TLDR
A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented, which breaks down the complex problem of scene understanding by rapidly selecting conspicuous locations to be analyzed in detail.
Visual saliency model for robot cameras
TLDR
A fast approximation to a Bayesian model of visual saliency recently proposed in the literature is presented, which can run in real time on current computers at very little computational cost, leaving plenty of CPU cycles for other tasks.
Bottom-up visual saliency map using wavelet transform domain
TLDR
A method to compute the saliency map in the wavelet transform domain is explored, which provides more accurate salient regions compared to the other two methods while retaining a resolution at which the salient regions are visually identifiable.
MAPS: Multiscale Attention-Based PreSegmentation of Color Images
TLDR
This paper reports a novel Multiscale Attention-based Pre-Segmentation method (MAPS), which is built around the multi-feature, multiscale, saliency-based model of visual attention provided by the attention algorithm.
Automatic interesting object extraction from images using complementary saliency maps
TLDR
This paper proposes a novel object extraction approach that integrates two kinds of "complementary" saliency maps, i.e., sketch-like and envelope-like maps, which outperforms six state-of-the-art saliency-based methods remarkably in automatic object extraction and is even comparable to some interactive approaches.