Corpus ID: 24599327

Automatic Identification of Non-Meaningful Body-Movements and What It Reveals About Humans

Md. Iftekhar Tanveer, Ru Zhao, and Ehsan Hoque
We present a framework to identify whether a public speaker's body movements are meaningful or non-meaningful ("mannerisms") in the context of their speeches. In a dataset of 84 public speaking videos from 28 individuals, we extract 314 unique body movement patterns (e.g., pacing, gesturing, shifting body weight). Online workers and the speakers themselves annotated the meaningfulness of the patterns. We extracted five types of features from the audio-video recordings: disfluency, prosody…


AutoManner: An Automated Interface for Making Public Speakers Aware of Their Mannerisms
An intelligent interface is presented that automatically extracts human gestures from Microsoft Kinect data to make speakers aware of their mannerisms, using a sparsity-based algorithm, Shift Invariant Sparse Coding, to automatically extract recurring patterns of body movement.
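Shift Invariant Sparse Coding, named above, models a time series as a sum of short basis patterns convolved with sparse activation signals. The sketch below shows one ISTA-style update of the activations under that objective; the function name, shapes, and single-signal setting are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sisc_ista_step(y, D, A, lam, lr):
    """One ISTA update of the activations in shift-invariant sparse coding:
    minimise ||y - sum_k conv(D[k], A[k])||^2 + lam * ||A||_1.
    y: (T,) signal; D: (K, P) basis patterns; A: (K, T-P+1) activations."""
    K, P = D.shape
    # Residual between the signal and its current reconstruction.
    recon = sum(np.convolve(A[k], D[k])[: len(y)] for k in range(K))
    r = y - recon
    A_new = np.empty_like(A)
    for k in range(K):
        # Gradient of the squared error w.r.t. A[k] is (minus) the
        # correlation of the residual with the basis pattern.
        grad = -np.correlate(r, D[k], mode="valid")
        z = A[k] - lr * grad
        # Soft-thresholding enforces sparsity of the activations.
        A_new[k] = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)
    return A_new
```

With a small enough step size, iterating this update from zero activations drives the reconstruction error down while keeping the activations sparse, so repeated movement patterns show up as isolated spikes in `A`.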
Automated Analysis and Prediction of Job Interview Performance
A computational framework for automatically quantifying verbal and nonverbal behaviors in the context of job interviews is presented; it recommends speaking more fluently, using fewer filler words, speaking as "we" (versus "I"), using more unique words, and smiling more.
Unsupervised Extraction of Human-Interpretable Nonverbal Behavioral Cues in a Public Speaking Scenario
A framework for unsupervised detection of nonverbal behavioral cues from a collection of motion capture sequences in a public speaking setting is presented, and it is found that the extracted behavioral cues are human-interpretable in the context of public speaking.
Online feedback system for public speakers
The development of Affective Computing has witnessed a tremendous number of studies of facial and vocal expression, while bodily expression comprises only a minority. However, with the emergence of…
ROC speak: semi-automated personalized feedback on nonverbal behavior from recorded videos
A framework is presented that couples computer algorithms with human intelligence to automatically sense and interpret nonverbal behavior, synthesizing Mechanical Turk workers' interpretations, ratings, and comment rankings with the machine-sensed data.
Beat gestures modulate auditory integration in speech perception
In everyday life, people interact with each other through verbal communication but also by spontaneous beat gestures, which are a very important part of the paralinguistic context during face-to-face…
Co‐speech gestures influence neural activity in brain regions associated with processing semantic information
It is shown that perceiving hand movements during speech modulates the distributed pattern of neural activation involved in both biological motion perception and discourse comprehension, suggesting listeners attempt to find meaning, not only in the words speakers produce, but also in the hand movements that accompany speech.
Speaker identification on the SCOTUS corpus
The main findings are that a combination of Gaussian mixture models and monophone HMM models attains near-100% text-independent identification accuracy on utterances longer than one second, and that a sampling rate of 11025 Hz achieves the best performance.
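The Gaussian-model identification summarized above amounts to fitting one density per speaker and assigning a test utterance to the speaker whose model scores it highest. A minimal sketch using a single diagonal-covariance Gaussian per speaker in place of a full GMM (a simplifying assumption; function names and feature shapes are illustrative):

```python
import numpy as np

def fit_speaker_model(frames):
    """Fit a diagonal-covariance Gaussian to one speaker's feature frames
    (a simplified stand-in for a full GMM). frames: (N, d) array."""
    mu = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-6  # variance floor keeps the density proper
    return mu, var

def avg_log_likelihood(frames, model):
    """Average per-frame log-likelihood of an utterance under a model."""
    mu, var = model
    ll = -0.5 * (np.log(2 * np.pi * var) + (frames - mu) ** 2 / var)
    return ll.sum(axis=1).mean()

def identify(utterance, models):
    """Return the speaker whose model best explains the utterance."""
    return max(models, key=lambda s: avg_log_likelihood(utterance, models[s]))
```

Averaging the per-frame log-likelihood rather than summing it makes scores comparable across utterances of different lengths, which matters when the test utterances are as short as one second.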
Automated prediction and analysis of job interview performance: The role of what you say and how you say it
A computational framework to quantify human behavior in the context of job interviews is provided; it recommends speaking more fluently, using fewer filler words, speaking as "we", using more unique words, and smiling more.
Augmenting Social Interactions: Realtime Behavioural Feedback using Social Signal Processing Techniques
Logue, a system that provides realtime feedback on a presenter's openness, body energy, and speech rate during public speaking, is presented; it analyses the user's nonverbal behaviour using social signal processing techniques and gives visual feedback on a head-mounted display.