Unsupervised Extraction of Human-Interpretable Nonverbal Behavioral Cues in a Public Speaking Scenario

Abstract

We present a framework for the unsupervised detection of nonverbal behavioral cues, such as hand gestures, pose, and body movements, from a collection of motion capture (MoCap) sequences in a public speaking setting. We extract the cues by solving a sparse and shift-invariant dictionary learning problem known as shift-invariant sparse coding. We find that the extracted behavioral cues are human-interpretable in the context of public speaking. Our technique can automatically identify common patterns of body movement and the time instances of their occurrence, minimizing the time and effort needed for manual detection and coding of nonverbal human behaviors.
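The core technique, shift-invariant sparse coding, models a signal as a sparse sum of shifted, scaled copies of short dictionary atoms; the learned atoms correspond to behavioral cues and the shifts to when they occur. The sketch below illustrates the coding step only and is not the authors' implementation: it is a minimal greedy convolutional matching pursuit in plain NumPy over a 1-D signal with a fixed, unit-norm dictionary, whereas the paper works on multivariate MoCap channels and learns the dictionary as well. The function name convolutional_matching_pursuit is our own for this sketch.

import numpy as np

def convolutional_matching_pursuit(x, atoms, n_iter=10):
    """Greedily code 1-D signal x as shifted, scaled copies of the atoms.

    x      : 1-D array of length T
    atoms  : list of 1-D arrays, each unit L2 norm and shorter than x
    n_iter : number of (atom, shift) pairs to select

    Returns a list of (atom_index, shift, coefficient) triples.
    """
    residual = np.asarray(x, dtype=float).copy()
    code = []
    for _ in range(n_iter):
        best = None  # (|coeff|, atom index, shift, coeff)
        for k, d in enumerate(atoms):
            # corr[s] = inner product of atom d with residual[s : s + len(d)]
            corr = np.correlate(residual, d, mode="valid")
            s = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[s]) > best[0]:
                best = (abs(corr[s]), k, s, corr[s])
        _, k, s, c = best
        # subtract the selected shifted, scaled atom from the residual
        residual[s:s + len(atoms[k])] -= c * atoms[k]
        code.append((k, s, c))
    return code

# Toy usage: a signal containing two shifted, scaled copies of one atom.
rng = np.random.default_rng(0)
atom = np.hanning(16)
atom /= np.linalg.norm(atom)
x = np.zeros(128)
x[20:36] += 2.0 * atom
x[70:86] -= 1.5 * atom
x += 0.01 * rng.standard_normal(128)
print(convolutional_matching_pursuit(x, [atom], n_iter=2))

On this toy signal the pursuit should recover both occurrences, printing approximately [(0, 20, 2.0), (0, 70, -1.5)]: the shifts 20 and 70 mark when the pattern occurs, which mirrors how common movement patterns and their time instances are read off the sparse codes.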

DOI: 10.1145/2733373.2806350

Cite this paper

@inproceedings{Tanveer2015UnsupervisedEO,
  title     = {Unsupervised Extraction of Human-Interpretable Nonverbal Behavioral Cues in a Public Speaking Scenario},
  author    = {Md. Iftekhar Tanveer and Ji Liu and Mohammed E. Hoque},
  booktitle = {ACM Multimedia},
  year      = {2015}
}