Interactive Markerless Articulated Hand Motion Tracking Using RGB and Depth Data

Abstract

Tracking the articulated 3D motion of the hand has important applications, for example, in human-computer interaction and teleoperation. We present a novel method that can capture a broad range of articulated hand motions at interactive rates. Our hybrid approach combines, in a voting scheme, a discriminative, part-based pose retrieval method with a generative pose estimation method based on local optimization. Color information from a multi-view RGB camera setup, along with a person-specific hand model, is used by the generative method to find the pose that best explains the observed images. In parallel, our discriminative pose estimation method uses fingertips detected on depth data to estimate a complete or partial pose of the hand by adopting a part-based pose retrieval strategy. This part-based strategy drastically reduces the search space in comparison to a global pose retrieval strategy. Quantitative results show that our method achieves state-of-the-art accuracy on challenging sequences and near-real-time performance of 10 fps on a desktop computer.
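The voting scheme the abstract describes can be sketched as follows: candidate poses from the discriminative part-based retrieval and the generative local optimization are evaluated under a common error function, and the best-scoring candidate wins. This is a minimal illustration, not the paper's implementation; `pose_error`, the array-based pose representation, and the placeholder objective are all assumptions made for the sketch.

```python
import numpy as np

def pose_error(pose, observation):
    # Hypothetical error function: sum of squared differences between a
    # pose's predicted joint positions and the observed positions. The
    # paper instead scores poses against multi-view RGB and depth data.
    return float(np.sum((pose - observation) ** 2))

def select_pose(generative_pose, retrieved_poses, observation):
    # Voting sketch: pool the generative estimate with the discriminative,
    # part-based retrieval candidates and keep the lowest-error pose.
    candidates = [generative_pose] + list(retrieved_poses)
    errors = [pose_error(p, observation) for p in candidates]
    return candidates[int(np.argmin(errors))]
```

The key design point is that both estimators feed the same scoring function, so whichever hypothesis better explains the current observation is selected frame by frame.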

DOI: 10.1109/ICCV.2013.305


Cite this paper

@article{Sridhar2013InteractiveMA,
  title={Interactive Markerless Articulated Hand Motion Tracking Using RGB and Depth Data},
  author={Srinath Sridhar and Antti Oulasvirta and Christian Theobalt},
  journal={2013 IEEE International Conference on Computer Vision},
  year={2013},
  pages={2456-2463}
}