E-ffective: A Visual Analytic System for Exploring the Emotion and Effectiveness of Inspirational Speeches

@article{Maher2022EffectiveAV,
  title={E-ffective: A Visual Analytic System for Exploring the Emotion and Effectiveness of Inspirational Speeches},
  author={Kevin M. Maher and Zeyuan Huang and Jiancheng Song and Xiaoming Deng and Yu-Kun Lai and Cuixia Ma and Hao Wang and Yong-Jin Liu and Hongan Wang},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2022},
  volume={28},
  pages={508-517}
}
What makes speeches effective has long been a subject of debate, and to this day there is broad controversy among public speaking experts about which factors make a speech effective and what roles those factors play. Moreover, there is a lack of quantitative analysis methods to help understand effective speaking strategies. In this paper, we propose E-ffective, a visual analytic system allowing speaking experts and novices to analyze both the role of speech factors and their…

In Defence of Visual Analytics Systems: Replies to Critics

The last decade has witnessed many visual analytics (VA) systems successfully applied to wide-ranging domains such as urban analytics and explainable AI. However, their research rigor and…

FaceType

FaceType is an interactive installation that creates an experience of spoken communication through generated text. Inspired by Chinese calligraphy, the project transforms our spoken expression into…

References

Showing 1-10 of 37 references

EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos

TLDR
This paper introduces EmoCo, an interactive visual analytics system that facilitates efficient analysis of emotion coherence across facial, text, and audio modalities in presentation videos, and demonstrates the system's effectiveness in gaining insights into emotion coherence in presentations.

EmotionMap: Visual Analysis of Video Emotional Content on a Map

TLDR
A novel way of presenting emotion to everyday users in 2D geographic form, fusing spatio-temporal information with emotional data, is proposed, along with EmotionDisc, an effective tool for collecting audiences' emotions based on emotion representation models.

EmotionCues: Emotion-Oriented Visual Summarization of Classroom Videos

TLDR
EmotionCues, a visual analytics system that integrates emotion recognition algorithms with visualizations for easily analyzing classroom videos from the perspectives of emotion summary and detailed analysis, is proposed.

Emodiversity and the emotional ecosystem.

TLDR
Two cross-sectional studies across more than 37,000 respondents demonstrate that emodiversity is an independent predictor of mental and physical health (such as decreased depression and fewer doctor's visits), over and above mean levels of positive and negative emotion.
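
The cited work quantifies emodiversity with diversity indices borrowed from ecology; below is a minimal Python sketch assuming a Shannon-entropy formulation (the function name and toy data are illustrative, not the study's code):

import math
from collections import Counter

def emodiversity(emotion_reports):
    # Shannon-style diversity over reported emotions: higher values mean
    # many distinct emotions experienced in relatively even proportions.
    counts = Counter(emotion_reports)
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total) for n in counts.values())

# Toy check: an even mix of five emotions scores higher than a skewed mix.
print(emodiversity(["joy"] * 9 + ["anger"]))                        # ~0.33
print(emodiversity(["joy", "anger", "awe", "pride", "calm"] * 2))   # ~1.61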

A multi-sensory code for emotional arousal

TLDR
It is shown that variation in the central tendency of the frequency spectrum of a stimulus—its spectral centroid—is used by signal senders to express emotional arousal, and by signal receivers to make emotional arousal judgements.
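
The spectral centroid referenced here is a standard audio feature: the magnitude-weighted mean of a signal's frequency spectrum. A minimal NumPy sketch follows (illustrative only; not code from the cited study):

import numpy as np

def spectral_centroid(signal, sample_rate):
    # Magnitude-weighted mean frequency: a higher centroid means energy
    # sits at higher frequencies, which the study links to higher arousal.
    magnitudes = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return np.sum(freqs * magnitudes) / np.sum(magnitudes)

# Toy check: a pure 440 Hz tone yields a centroid near 440 Hz.
sr = 16000
t = np.arange(sr) / sr
print(spectral_centroid(np.sin(2 * np.pi * 440 * t), sr))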

Semantic Structure, Speech Units and Facial Movements: Multimodal Corpus Analysis of English Public Speaking

TLDR
This study examines connections between semantic structure, speech units, and characteristics of facial movements in EFL learners' public speeches, in order to define a facial movement model that effectively describes good eye contact in public speaking.

MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis

TLDR
The MixedEmotions Toolbox addresses the need for recognizing emotions from user-generated media content in automated systems by providing tools for text, audio, video, and linked data processing within an easily integrable plug-and-play platform.

Multimodal Analysis of Video Collections: Visual Exploration of Presentation Techniques in TED Talks

Aoyu Wu and Huamin Qu. IEEE Transactions on Visualization and Computer Graphics, 2020.
TLDR
A visual analytic system to analyze multimodal content in video collections is presented; it features three views at different levels, including the Projection View, with novel glyphs to facilitate cluster analysis of presentation styles, and the Comparison View, which presents the temporal distribution and concurrences of presentation techniques and supports intra-cluster analysis.

Evaluating Speech, Face, Emotion and Body Movement Time-series Features for Automated Multimodal Presentation Scoring

TLDR
It is found that different modalities are useful in predicting different aspects, even outperforming a naive human inter-rater agreement baseline for a subset of the aspects analyzed.

Communication is 93% Nonverbal: An Urban Legend Proliferates

Perhaps the best-known numbers within the communication field are those that claim the total meaning of a message is “7 percent verbal, 38 percent vocal, and 55 percent facial.” Despite the fact that…