Static gesture recognition using features extracted from skeletal data

  • Ra’eesah Mangera
  • Published 2013

Abstract

Gesture recognition has become a popular area of research, with applications in medical systems, assistive technologies, entertainment, crisis management, disaster relief and human-machine interaction. This paper presents a static gesture recognition system that uses an Asus Xtion Pro Live sensor to obtain a skeletal model of the user. Typically, joint angles and joint positions have been used as features. However, these features do not adequately divide the gesture space, resulting in non-optimal classification accuracy. Therefore, to improve the classification accuracy, a new feature vector is proposed, combining joint angles with the positions of the arm joints relative to the head. A k-means algorithm is used to cluster the samples of each gesture, and new gestures are classified using a Euclidean distance metric. The new feature vector is evaluated on a dataset of 10 static gestures performed by 7 participants. The vector containing only joint angles achieves a classification accuracy of 91.98%. In contrast, the new feature vector, containing both the joint angles and the relative positions of the arm joints with respect to the head, achieves a classification accuracy of over 99%.

Keywords: Gesture recognition, depth sensor, Asus Xtion Pro, skeleton model
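The pipeline the abstract describes (angle-plus-relative-position features, per-gesture k-means clustering, nearest-centroid classification by Euclidean distance) is compact enough to sketch. Below is a minimal illustration in Python using NumPy and scikit-learn. The joint names ("head", "neck", "shoulder", "elbow", "hand"), the choice of elbow and shoulder angles, and the number of clusters per gesture are illustrative assumptions, not the paper's exact design.

    # Minimal sketch of the paper's approach, under assumed joint names and
    # parameters. Skeleton joints are 3-D positions from the depth sensor.
    import numpy as np
    from sklearn.cluster import KMeans

    ARM_JOINTS = ["shoulder", "elbow", "hand"]  # hypothetical joint labels

    def joint_angle(a, b, c):
        """Angle at joint b formed by the segments b->a and b->c, in radians."""
        u, v = a - b, c - b
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def feature_vector(skel):
        """skel: dict mapping joint name -> np.array([x, y, z])."""
        angles = [
            joint_angle(skel["shoulder"], skel["elbow"], skel["hand"]),  # elbow
            joint_angle(skel["neck"], skel["shoulder"], skel["elbow"]),  # shoulder
        ]
        # Relative positions of the arm joints with respect to the head.
        rel = [skel[j] - skel["head"] for j in ARM_JOINTS]
        return np.concatenate([angles, np.concatenate(rel)])

    def train(gesture_samples, k=3):
        """gesture_samples: dict of gesture label -> list of feature vectors.
        Clusters each gesture's samples separately and keeps the centroids
        (each list must contain at least k samples)."""
        centroids = {}
        for label, feats in gesture_samples.items():
            km = KMeans(n_clusters=k, n_init=10).fit(np.vstack(feats))
            centroids[label] = km.cluster_centers_
        return centroids

    def classify(feat, centroids):
        """Return the gesture whose nearest centroid is closest in
        Euclidean distance to the feature vector."""
        return min(
            centroids,
            key=lambda label: np.linalg.norm(centroids[label] - feat, axis=1).min(),
        )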

Cite this paper

@inproceedings{Mangera2013StaticGR,
  title={Static gesture recognition using features extracted from skeletal data},
  author={Ra’eesah Mangera},
  year={2013}
}