Super-Trajectory for Video Segmentation

Abstract

We propose a semi-supervised video segmentation approach built on an efficient video representation, called "super-trajectory". Each super-trajectory corresponds to a group of compact trajectories that exhibit consistent motion patterns, similar appearance, and close spatiotemporal relationships. To handle occlusions and drift, we develop a trajectory generation method based on a probabilistic model, which is more reasonable and interpretable than traditional trajectory methods that rely on hard thresholding. We then modify a density-peaks-based clustering algorithm to reliably group trajectories, thus capturing a rich set of spatial and temporal relations among them. With this discriminative video representation, manual annotation on the first frame can be efficiently propagated to the remaining frames. Experimental results on a challenging benchmark demonstrate that the proposed approach is capable of distinguishing the object from complex backgrounds and even re-identifying the object after long-term occlusions.
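For concreteness, the sketch below illustrates the kind of density-peaks clustering (Rodriguez and Laio, 2014) that the abstract refers to for grouping trajectories into super-trajectories. It is not the authors' implementation: the trajectory descriptors, the Gaussian density kernel, the cutoff distance d_c, and the fixed cluster count are illustrative assumptions made only for this example.

```python
# Minimal sketch (assumed setup, not the paper's code) of density-peaks
# clustering applied to trajectory descriptors. Each trajectory is assumed
# to be summarized by a feature vector, e.g. mean position, motion, color.
import numpy as np

def density_peaks_cluster(features, d_c, n_clusters):
    """Group trajectory descriptors with density-peaks clustering.

    features   : (N, D) array, one descriptor per trajectory
    d_c        : cutoff distance for the Gaussian density kernel (assumed)
    n_clusters : number of super-trajectories to form (assumed fixed)
    Returns an array of N cluster labels.
    """
    # Pairwise Euclidean distances between trajectory descriptors.
    dists = np.linalg.norm(features[:, None] - features[None, :], axis=-1)

    # Local density rho_i: soft (Gaussian) count of neighbours within d_c,
    # excluding the point's own contribution.
    rho = np.exp(-(dists / d_c) ** 2).sum(axis=1) - 1.0

    # delta_i: distance to the nearest point of strictly higher density.
    order = np.argsort(-rho)                 # indices by decreasing density
    delta = np.full(len(rho), dists.max())   # densest point keeps the maximum
    nearest_higher = np.full(len(rho), -1)
    for rank, i in enumerate(order[1:], start=1):
        higher = order[:rank]                # all points denser than i
        j = higher[np.argmin(dists[i, higher])]
        delta[i], nearest_higher[i] = dists[i, j], j

    # Cluster centres: points scoring high on both density and separation.
    centers = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(len(rho), -1)
    labels[centers] = np.arange(n_clusters)

    # Assign remaining points, in decreasing density, to the label of
    # their nearest denser neighbour (already labelled by construction).
    for i in order:
        if labels[i] == -1:
            labels[i] = labels[nearest_higher[i]]
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy descriptors: two groups of trajectories in a 4-D feature space.
    feats = np.vstack([rng.normal(0, 0.3, (40, 4)),
                       rng.normal(2, 0.3, (40, 4))])
    print(density_peaks_cluster(feats, d_c=0.5, n_clusters=2))
```

The design choice that matters here is that cluster centres are selected jointly by local density and distance to denser points, so compact trajectory groups with consistent motion emerge without a hard distance threshold.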


Cite this paper

@article{Wang2017SuperTrajectoryFV,
  title   = {Super-Trajectory for Video Segmentation},
  author  = {Wenguan Wang and Jianbing Shen},
  journal = {CoRR},
  year    = {2017},
  volume  = {abs/1702.08634}
}