Learning Object Intrinsic Structure for Robust Visual Tracking

Abstract

In this paper, a novel method for learning the intrinsic object structure for robust visual tracking is proposed. The basic assumption is that the parameterized object state lies on a low-dimensional manifold and can be learned from training data. Based on this assumption, we first derive a dimensionality-reduction and density-estimation algorithm for unsupervised learning of the object's intrinsic representation; the non-rigid part of the object state is reduced to as few as two dimensions. Second, a dynamical model is derived and trained on this intrinsic representation. Third, the learned intrinsic object structure is integrated into a particle-filter-style tracker. We show that this intrinsic representation has several interesting properties, and that the dynamical model derived from it makes the particle-filter-style tracker more robust and reliable. Experiments show that the learned tracker performs much better than existing trackers on complex non-rigid motions such as fish twisting with self-occlusion and large inter-frame lip motion. The proposed method also has the potential to solve other types of tracking problems.
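To make the tracking pipeline concrete, the following is a minimal sketch (not the authors' implementation) of one particle-filter step in a learned 2-D intrinsic state space. The linear dynamics matrix `A`, the noise levels, and the Gaussian observation likelihood are hypothetical stand-ins for the learned dynamical and observation models described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500                      # number of particles
A = np.array([[0.95, 0.10],  # hypothetical learned linear dynamics
              [-0.10, 0.95]])
proc_std = 0.05              # process-noise std in the intrinsic space
obs_std = 0.2                # observation-noise std (illustrative)

def predict(particles):
    """Propagate particles through the learned dynamics plus process noise."""
    return particles @ A.T + rng.normal(0.0, proc_std, particles.shape)

def update(particles, weights, observation):
    """Reweight particles by a Gaussian likelihood of the observation."""
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / obs_std ** 2)
    return weights / weights.sum()

def resample(particles, weights):
    """Multinomial resampling to counter particle degeneracy."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One filtering step on a synthetic 2-D intrinsic observation.
particles = rng.normal(0.0, 1.0, (N, 2))
weights = np.full(N, 1.0 / N)

particles = predict(particles)
weights = update(particles, weights, observation=np.array([0.5, -0.2]))
particles, weights = resample(particles, weights)

estimate = particles.mean(axis=0)  # posterior-mean intrinsic state estimate
```

In the paper's setting, the estimated intrinsic state would then be mapped back through the learned manifold embedding to recover the full parameterized object state.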

DOI: 10.1109/CVPR.2003.1211474



Cite this paper

@inproceedings{Wang2003LearningOI,
  title     = {Learning Object Intrinsic Structure for Robust Visual Tracking},
  author    = {Qiang Wang and Guangyou Xu and Haizhou Ai},
  booktitle = {CVPR},
  year      = {2003}
}