Bottom-up spatiotemporal visual attention model for video analysis

@inproceedings{Rapantzikos2007BottomupSV,
  title={Bottom-up spatiotemporal visual attention model for video analysis},
  author={Konstantinos Rapantzikos and Nicolas Tsapatsoulis and Yannis Avrithis and Stefanos D. Kollias},
  year={2007}
}
The human visual system (HVS) has the ability to fixate quickly on the most informative (salient) regions of a scene, thereby reducing the inherent visual uncertainty. Computational visual attention (VA) schemes have been proposed to account for this important characteristic of the HVS. A video analysis framework based on a spatiotemporal VA model is presented. A novel scheme is proposed for generating saliency in video sequences by taking into account both the spatial extent and…
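The abstract describes combining spatial and temporal cues into a per-frame saliency map. A minimal sketch of that idea, not the authors' actual model: here spatial conspicuity is approximated by box-blur center-surround contrast and temporal conspicuity by frame differencing, both stand-ins chosen for simplicity (the function name, kernel sizes, and additive fusion are assumptions for illustration).

```python
import numpy as np

def box_blur(img, k):
    """Box blur of odd width k (edge-padded), as a cheap stand-in
    for the Gaussian smoothing typical of center-surround models."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            out += p[pad + dy : pad + dy + img.shape[0],
                     pad + dx : pad + dx + img.shape[1]]
    return out / (k * k)

def spatiotemporal_saliency(frames, k_center=3, k_surround=9):
    """Illustrative bottom-up saliency: spatial center-surround
    contrast plus temporal change (frame differencing), fused
    additively and normalized to [0, 1] per frame."""
    sal, prev = [], None
    for f in frames:
        f = np.asarray(f, dtype=float)
        # spatial conspicuity: |fine blur - coarse blur|
        spatial = np.abs(box_blur(f, k_center) - box_blur(f, k_surround))
        # temporal conspicuity: change w.r.t. previous frame
        temporal = np.abs(f - prev) if prev is not None else np.zeros_like(f)
        s = spatial + temporal
        if s.max() > 0:
            s /= s.max()
        sal.append(s)
        prev = f
    return sal
```

A moving bright patch will score high in both channels (contrast at its borders, change at its old and new locations), which is the qualitative behavior a spatiotemporal VA model targets.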

Citations

Publications citing this paper.
Showing 2 of 24 citations.

Particle Filtering Based Visual Attention Model for Moving Target Detection. In 2018 14th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), 2018.

A Phase Field Variational Model with Arctangent Regularization for Saliency Detection. In 2017 IEEE Winter Applications of Computer Vision Workshops (WACVW), 2017.
