Saliency-aware geodesic video object segmentation

Abstract

We introduce an unsupervised, geodesic-distance-based salient video object segmentation method. Unlike traditional methods, ours incorporates saliency as a prior for the object via robust geodesic measurements. We consider two discriminative visual features, spatial edges and temporal motion boundaries, as indicators of foreground object locations. We first generate frame-wise spatiotemporal saliency maps using geodesic distance from these indicators. Building on the observation that foreground areas are surrounded by regions with high spatiotemporal edge values, geodesic distance provides an initial estimate of foreground and background. High-quality saliency results are then produced via the geodesic distances to background regions in the subsequent frames. From the resulting saliency maps, we build global appearance models for the foreground and background. By imposing motion continuity, we establish a dynamic location model for each frame. Finally, the spatiotemporal saliency maps, appearance models, and dynamic location models are combined in an energy minimization framework to attain spatially and temporally coherent object segmentation. Extensive quantitative and qualitative experiments on a benchmark video dataset demonstrate the superiority of the proposed method over state-of-the-art algorithms.
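
As a concrete illustration of the first stage, the following Python sketch scores each pixel by its geodesic distance from the frame border, computed over a spatiotemporal edge map. This is a minimal sketch, not the authors' implementation: scikit-image's MCP_Geometric is assumed as the geodesic solver, the frame border is assumed to act as background seeds, and Sobel gradients of the optical flow stand in for the paper's motion boundaries.

import numpy as np
from skimage import color, filters, graph

def spatiotemporal_edges(frame_rgb, flow):
    """Combine spatial edge strength with motion-boundary strength."""
    gray = color.rgb2gray(frame_rgb)
    spatial = filters.sobel(gray)                   # spatial edge magnitude
    motion = np.hypot(filters.sobel(flow[..., 0]),  # gradients of the optical
                      filters.sobel(flow[..., 1]))  # flow ~ motion boundaries
    return spatial + motion

def geodesic_saliency(edge_map):
    """Saliency as geodesic distance from the frame border.

    The border pixels are treated as background seeds; pixels enclosed
    by strong spatiotemporal edges accumulate large distances from the
    border and therefore receive high saliency.
    """
    h, w = edge_map.shape
    seeds = [(r, c) for r in range(h) for c in (0, w - 1)]
    seeds += [(r, c) for r in (0, h - 1) for c in range(w)]
    mcp = graph.MCP_Geometric(edge_map)   # Dijkstra-like geodesic front
    costs, _ = mcp.find_costs(seeds)
    return costs / (costs.max() + 1e-8)   # normalize to [0, 1]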

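The final labeling step can be sketched in the same hedged spirit. The snippet below is an illustration of fusing the three cues into per-pixel unary costs with a pairwise smoothness term solved by graph cuts; PyMaxflow is an assumed stand-in solver, and the weights w_s, w_a, w_l are illustrative, not the paper's values.

import numpy as np
import maxflow  # PyMaxflow, an assumed graph-cut solver

def segment_frame(saliency, fg_prob, bg_prob, location_prior,
                  w_s=1.0, w_a=1.0, w_l=1.0, smoothness=2.0):
    """Fuse saliency, appearance, and location cues; solve with graph cuts."""
    eps = 1e-8
    # Unary costs: a pixel is cheap to label foreground where the cues agree.
    fg_cost = -(w_s * np.log(saliency + eps)
                + w_a * np.log(fg_prob + eps)
                + w_l * np.log(location_prior + eps))
    bg_cost = -(w_s * np.log(1.0 - saliency + eps)
                + w_a * np.log(bg_prob + eps)
                + w_l * np.log(1.0 - location_prior + eps))

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(saliency.shape)
    g.add_grid_edges(nodes, smoothness)          # 4-connected pairwise term
    g.add_grid_tedges(nodes, fg_cost, bg_cost)   # per-pixel unary terms
    g.maxflow()
    return g.get_grid_segments(nodes)            # True = foreground pixels
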
DOI: 10.1109/CVPR.2015.7298961


Cite this paper

@article{Wang2015SaliencyawareGV,
  title={Saliency-aware geodesic video object segmentation},
  author={Wenguan Wang and Jianbing Shen and Fatih Murat Porikli},
  journal={2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2015},
  pages={3395-3402}
}