Holistic Planimetric prediction to Local Volumetric prediction for 3D Human Pose Estimation

Abstract

We propose a novel approach to 3D human pose estimation from a single depth map. Recently, convolutional neural networks (CNNs) have become a powerful paradigm in computer vision, and many computer vision tasks have benefited from them; however, the conventional approach of directly regressing 3D body joint locations from an image does not yield noticeably improved performance. In contrast, we formulate the problem as estimating the per-voxel likelihood of key body joints from a 3D occupancy grid. We argue that learning a mapping from volumetric input to volumetric output with 3D convolutions consistently improves accuracy compared to learning a regression from a depth map to 3D joint coordinates. To reduce the computational overhead caused by the volumetric representation and 3D convolution, we propose a two-stage approach: holistic 2D prediction followed by local 3D prediction. In the first stage, a Planimetric Network (P-Net) estimates the per-pixel likelihood of each body joint in the holistic 2D space. In the second stage, a Volumetric Network (V-Net) estimates the per-voxel likelihood of each body joint in the local 3D space around the 2D estimates of the first stage, effectively reducing the computational cost. Our model outperforms existing methods by a large margin on publicly available datasets.
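The two-stage idea can be illustrated with a minimal numpy sketch. This is not the authors' network: the P-Net and V-Net outputs are simulated as plain likelihood arrays, and the function only shows why the second stage is cheap, since the 3D argmax is taken over a small window around the 2D peak rather than the full grid.

```python
import numpy as np

def estimate_joint(heatmap_2d, voxel_likelihood, crop=4):
    """Two-stage lookup: holistic 2D peak, then local 3D refinement.

    heatmap_2d:       (H, W) per-pixel likelihood (stand-in for a P-Net output).
    voxel_likelihood: (D, H, W) per-voxel likelihood (stand-in for a V-Net
                      output; assumed to cover the full grid for simplicity).
    crop:             half-size of the local window searched in stage two.
    Returns the (z, v, u) voxel index of the estimated joint.
    """
    # Stage 1: holistic 2D peak over the full image plane.
    v, u = np.unravel_index(np.argmax(heatmap_2d), heatmap_2d.shape)

    # Stage 2: search the per-voxel likelihood only near the 2D estimate,
    # so the expensive 3D search touches (2*crop+1)^2 * D voxels, not H*W*D.
    v0, v1 = max(v - crop, 0), v + crop + 1
    u0, u1 = max(u - crop, 0), u + crop + 1
    local = voxel_likelihood[:, v0:v1, u0:u1]
    z, dv, du = np.unravel_index(np.argmax(local), local.shape)
    return z, v0 + dv, u0 + du
```

In the paper's setting the local volume is where V-Net actually runs its 3D convolutions; here the likelihoods are given up front, which keeps the sketch self-contained.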

15 Figures and Tables

Cite this paper

@article{Moon2017HolisticPP,
  title   = {Holistic Planimetric prediction to Local Volumetric prediction for 3D Human Pose Estimation},
  author  = {Gyeongsik Moon and Ju Yong Chang and Yumin Suh and Kyoung Mu Lee},
  journal = {CoRR},
  volume  = {abs/1706.04758},
  year    = {2017}
}