Vision-Based Classification of Social Gestures in Videochat Sessions

Abstract

This paper describes the design and evaluation of vision-based classification of social gestures such as the handshake, hug, and high-five. The classifier is a component of mediated social touch systems and can be incorporated into the ShareTable and SqueezeBands systems to achieve automated gesture recognition and real-time transmission of touch between users. Results from our pilot study report the recognition accuracy for each gesture and indicate that significant future work is necessary before the approach is practical for mediated social touch applications.
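The paper does not specify the classifier used, but the pipeline it describes (map each video frame to a gesture label) can be sketched with a toy nearest-centroid classifier. Everything below is illustrative: the feature vectors stand in for some hypothetical per-frame descriptor (e.g. pose-keypoint statistics) and are not taken from the paper.

```python
import math

# Hypothetical training data: each gesture maps to a few example frame
# descriptors. The 3-dimensional values are made up for illustration.
TRAIN = {
    "handshake": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "hug":       [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]],
    "high-five": [[0.5, 0.1, 0.9], [0.4, 0.2, 0.8]],
}

def _centroid(vectors):
    # Component-wise mean of a list of equal-length vectors.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def _dist(a, b):
    # Euclidean distance between two vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class NearestCentroidGestureClassifier:
    """Assigns a frame descriptor to the gesture with the closest centroid."""

    def fit(self, labeled):
        self.centroids = {g: _centroid(vs) for g, vs in labeled.items()}
        return self

    def predict(self, x):
        return min(self.centroids, key=lambda g: _dist(self.centroids[g], x))

clf = NearestCentroidGestureClassifier().fit(TRAIN)
print(clf.predict([0.85, 0.15, 0.15]))  # → handshake
```

In a real system this decision would run per frame of the videochat stream, with the predicted label forwarded to the haptic device for transmission.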

3 Figures and Tables

Cite this paper

@article{Yao2017VisionBasedCO,
  title={Vision-Based Classification of Social Gestures in Videochat Sessions},
  author={Yuan Yao and Svetlana Yarosh},
  journal={CoRR},
  year={2017},
  volume={abs/1707.02654}
}