Collaborative strategy for visual object tracking

Abstract

By adaptively learning the difference between object and background, discriminative trackers are able to overcome the complex-background problem in visual object tracking. However, they are not robust enough to handle out-of-plane rotation of the object, which reduces recall performance. Meanwhile, by allowing individual parts a certain degree of freedom, part-based trackers can better handle the out-of-plane rotation problem. However, they are susceptible to complex backgrounds, leading to low precision performance. To address both issues simultaneously, we propose a collaborative strategy that enables mutual enhancement between a discriminative tracker and a part-based tracker, yielding better overall performance. On one hand, we use validated results from the part-based tracker to update the discriminative tracker, improving recall. On the other hand, based on confident results from the discriminative tracker, we adaptively update the part-based tracker, simultaneously improving precision. Experiments on various challenging sequences show that our approach achieves state-of-the-art performance, demonstrating the effectiveness of mutual collaboration between the two trackers.
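The mutual-update protocol described above can be sketched in a few lines. The following is a minimal, hypothetical illustration of the control flow only: the tracker internals, the confidence-based validation rule, the blending update, and the thresholds `tau_d` and `tau_p` are placeholder assumptions, not the authors' actual models.

```python
# Hypothetical sketch of the collaborative strategy's mutual-update loop.
# All internals (confidence model, update rule, thresholds) are illustrative
# stand-ins, not the paper's discriminative or part-based trackers.

class SimpleTracker:
    """Toy stand-in for either tracker: a 1-D state estimate plus confidence."""

    def __init__(self, estimate):
        self.estimate = estimate

    def predict(self, observation):
        # Confidence decays as the estimate drifts from the observation.
        confidence = 1.0 / (1.0 + abs(self.estimate - observation))
        return self.estimate, confidence

    def update(self, peer_estimate):
        # Blend in the peer tracker's validated result.
        self.estimate = 0.5 * (self.estimate + peer_estimate)


def collaborate(discriminative, part_based, frames, tau_d=0.5, tau_p=0.5):
    """One pass of the collaborative strategy over a frame sequence."""
    outputs = []
    for obs in frames:
        d_est, d_conf = discriminative.predict(obs)
        p_est, p_conf = part_based.predict(obs)
        # Validated part-based result refreshes the discriminative tracker
        # (the recall-improvement direction in the abstract).
        if p_conf >= tau_p:
            discriminative.update(p_est)
        # Confident discriminative result refreshes the part-based tracker
        # (the precision-improvement direction).
        if d_conf >= tau_d:
            part_based.update(d_est)
        # Report the more confident estimate for this frame.
        outputs.append(d_est if d_conf >= p_conf else p_est)
    return outputs
```

With low thresholds, the two toy trackers pull each other toward the observed value after a single frame, which is only meant to show how each tracker's validated output feeds the other's update step.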

DOI: 10.1007/s11042-017-4633-x


Cite this paper

@article{Yang2017CollaborativeSF,
  title={Collaborative strategy for visual object tracking},
  author={Yongquan Yang and Ning Chen and Shenlu Jiang},
  journal={Multimedia Tools and Applications},
  year={2017},
  pages={1-21}
}