A semantic occlusion model for human pose estimation from a single depth image

Abstract

Human pose estimation from depth data has made significant progress in recent years, and commercial sensors estimate human poses in real time. However, state-of-the-art methods fail in many situations in which humans are partially occluded by objects. In this work, we introduce a semantic occlusion model that is incorporated into a regression forest approach for human pose estimation from depth data. The approach exploits the context information of occluding objects, such as a table, to predict the locations of occluded joints. In our experiments on synthetic and real data, we show that our occlusion model increases joint estimation accuracy and outperforms the commercial Kinect 2 SDK for occluded joints.
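The core idea can be illustrated with a toy sketch. This is not the authors' implementation; it is a hypothetical, simplified illustration of how a regression forest that casts per-pixel offset votes for joint positions could also let pixels classified as an occluding object (e.g. a table) cast learned votes for joints hidden behind it. All function and variable names here are invented for illustration.

```python
# Hypothetical sketch of context voting for occluded joints.
# In a regression forest, each depth pixel casts offset votes toward a
# joint's 2D location. Here, pixels labeled as an occluding object class
# (e.g. "table") contribute votes learned for that object, so an occluded
# joint can still be localized from the occluder's context.

def predict_joint(pixels, body_votes, object_votes):
    """Average all offset votes cast for one joint.

    pixels:       list of (x, y, semantic_label) tuples
    body_votes:   learned offsets cast by body pixels
    object_votes: dict mapping occluder class -> learned offsets
    Returns the mean voted (x, y) location, or None if no votes.
    """
    votes = []
    for x, y, label in pixels:
        offsets = body_votes if label == "body" else object_votes.get(label, [])
        for dx, dy in offsets:
            votes.append((x + dx, y + dy))
    if not votes:
        return None
    n = len(votes)
    return (sum(v[0] for v in votes) / n, sum(v[1] for v in votes) / n)

# A visible body pixel and a "table" pixel both vote for the hip joint.
pixels = [(10, 10, "body"), (12, 11, "table")]
body_votes = [(2, 3)]                 # offsets learned for body pixels
object_votes = {"table": [(0, 2)]}    # offsets learned for table pixels
print(predict_joint(pixels, body_votes, object_votes))  # -> (12.0, 13.0)
```

In the real method the votes come from a trained forest over depth features rather than hand-set offsets, but the sketch shows why occluder context helps: the table pixel still carries information about where the hidden joint lies.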

DOI: 10.1109/CVPRW.2015.7301338

Cite this paper

@article{Rafi2015ASO,
  title={A semantic occlusion model for human pose estimation from a single depth image},
  author={Umer Rafi and Juergen Gall and Bastian Leibe},
  journal={2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year={2015},
  pages={67-74}
}