Robust head pose estimation via supervised manifold learning


Head pose can be estimated automatically with manifold learning algorithms, under the assumption that, when pose is the only variable, face images lie on a smooth, low-dimensional manifold. In practice, however, estimation is complicated by other appearance variations due to identity, head location in the image, background clutter, facial expression, and illumination. To address this problem, we propose to incorporate supervised information (the pose angles of the training samples) into the manifold learning process. The process has three stages: neighborhood construction, graph weight computation, and projection learning. For the first two stages, we redefine the inter-point distance used for neighborhood construction, as well as the graph weights, by constraining them with the pose angle information. For the third stage, we present a supervised neighborhood-based linear feature transformation algorithm that keeps data points with similar pose angles close together while pushing data points with dissimilar pose angles far apart. Experimental results show that our method achieves higher estimation accuracy than other state-of-the-art algorithms and is robust to identity and illumination variations.
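The three-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's method: the pose-penalized distance, the pose-modulated weight formula, and the graph-embedding-style eigenproblem are all assumptions chosen to show the general idea of constraining manifold learning with pose labels.

```python
import numpy as np
from scipy.linalg import eigh

def supervised_graph(X, poses, k=5, alpha=1.0, sigma=1.0):
    """Stages 1-2: neighborhoods and graph weights constrained by pose angles.

    X: (n, d) vectorized face images; poses: (n,) pose angles of the
    training samples. The distance and weight formulas are illustrative
    assumptions, not the paper's definitions.
    """
    app = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # appearance
    dpose = np.abs(poses[:, None] - poses[None, :])              # pose gaps
    dist = app + alpha * dpose           # penalize dissimilar-pose pairs
    W = np.zeros_like(dist)
    for i in range(len(X)):
        order = np.argsort(dist[i])
        for j in order[order != i][:k]:  # k nearest, excluding self
            # Gaussian appearance weight, shrunk for large pose differences.
            W[i, j] = np.exp(-app[i, j]**2 / sigma**2) * np.exp(-dpose[i, j] / sigma)
    return np.maximum(W, W.T)            # symmetrize the graph

def supervised_projection(X, W_sim, W_dis, dim=2):
    """Stage 3: linear projection that keeps similar-pose samples close and
    pushes dissimilar-pose samples apart (graph-embedding-style assumption).
    W_sim / W_dis are symmetric weights over similar- and dissimilar-pose pairs.
    """
    L_sim = np.diag(W_sim.sum(1)) - W_sim   # Laplacian of similarity graph
    L_dis = np.diag(W_dis.sum(1)) - W_dis   # Laplacian of penalty graph
    A = X.T @ L_sim @ X                     # scatter of similar-pose pairs
    B = X.T @ L_dis @ X + 1e-6 * np.eye(X.shape[1])  # regularized penalty scatter
    # Smallest generalized eigenvectors of A v = lambda B v minimize
    # similar-pose scatter relative to dissimilar-pose scatter.
    _, vecs = eigh(A, B)
    return vecs[:, :dim]                    # (d, dim) projection matrix
```

In this sketch, new test images would be mapped through the learned projection and their pose regressed from low-dimensional neighbors; the paper's actual formulation of the distances, weights, and transformation objective is given in the full text.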

DOI: 10.1016/j.neunet.2014.01.009

Cite this paper

@article{Wang2014RobustHP,
  title={Robust head pose estimation via supervised manifold learning},
  author={Chao Wang and Xubo Song},
  journal={Neural networks : the official journal of the International Neural Network Society},
  year={2014},
  volume={53},
  pages={15--25}
}