This paper describes an automatically annotated multimodal corpus of multi-party meetings. For each subject involved in the experimental sessions, the corpus provides information on his/her social behavior and personality traits, as well as audiovisual cues (speech rate, pitch and energy, head orientation, and head, hand and body fidgeting). The corpus is based…
This paper targets the automatic detection of personality traits in a meeting environment by means of audio and visual features; information about the relational context is captured by means of acoustic features designed for that purpose. Two personality traits are considered: Extraversion (from the Big Five) and the Locus of Control. The classification task…
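The classification setup described above can be pictured as a binary decision over per-speaker acoustic features. A minimal sketch follows; the feature names, toy data, and the nearest-centroid rule are assumptions for illustration, not the paper's actual method:

```python
# Hypothetical sketch: label a speaker as high/low Extraversion from acoustic
# features using a nearest-centroid rule. Feature names and data are illustrative.

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest_centroid_predict(x, centroids):
    """Return the label whose centroid is closest (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Toy training data: [speech_rate, mean_pitch, mean_energy] per speaker.
train = {
    "high_extraversion": [[5.1, 210.0, 0.80], [4.8, 205.0, 0.75]],
    "low_extraversion":  [[2.9, 180.0, 0.40], [3.1, 175.0, 0.45]],
}
centroids = {label: centroid(rows) for label, rows in train.items()}

print(nearest_centroid_predict([5.0, 208.0, 0.78], centroids))  # high_extraversion
```

In practice such systems train a discriminative classifier on many more features per speaker; the sketch only illustrates the shape of the task.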
Another, more practical line of activity includes an error analysis to identify the classes of errors made by the two algorithms, so that strategies to cope with them can be designed. For Collins' parsers this would imply the introduction of…
The user's personality plays an important role in the overall success of Human-Computer Interaction (HCI). The present study focuses on automatically recognizing the Big Five personality traits from 2-5 minute long videos, in which the computer interacts using different levels of collaboration in order to elicit the manifestation of these…
Personality plays an important role in the way people manage the images they convey in self-presentations and employment interviews, trying to affect the other's first impressions and increase effectiveness. This paper addresses the automatic detection of the Big Five personality traits from short (30-120 seconds) self-presentations, by investigating…
We propose and investigate a paradigm for activity recognition, distinguishing the "on-going activity" recognition task (OGA) from that addressing "complete activities" (CA). The former starts from a time interval and aims to discover which activities are going on inside it. The latter, in turn, focuses on terminated activities and amounts to taking an…
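The OGA/CA distinction can be illustrated on toy data. In this hedged sketch, activities are represented as labeled time spans (the tuple representation and the overlap/containment criteria are assumptions, not the paper's formalization):

```python
# Illustrative sketch of the OGA vs. CA distinction on labeled time spans.
# Representing an activity as (label, start, end) is an assumption.

def ongoing_activities(query_start, query_end, activities):
    """OGA: activities whose span overlaps the query interval at all."""
    return [label for label, s, e in activities
            if s < query_end and e > query_start]

def complete_activities(query_start, query_end, activities):
    """CA: activities that start and terminate within the query interval."""
    return [label for label, s, e in activities
            if s >= query_start and e <= query_end]

acts = [("coffee_break", 0, 10), ("discussion", 5, 30), ("presentation", 12, 25)]
print(ongoing_activities(8, 15, acts))   # ['coffee_break', 'discussion', 'presentation']
print(complete_activities(0, 20, acts))  # ['coffee_break']
```

The same interval thus yields different answers under the two tasks: OGA admits any activity still in progress, while CA only counts those already terminated inside the interval.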
In this paper we compare two interlingua representations for speech translation. The basis of this paper is a distributional analysis of the C-star II and Nespole databases tagged with interlingua representations. The C-star II database has been partially re-tagged with the Nespole interlingua, which enables us to make comparisons on the same data with…
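A distributional analysis of two tagged databases can be sketched as comparing tag frequency distributions. In this illustration the corpora, tag names, and the use of total variation distance are all assumptions, not details from the paper:

```python
# Hypothetical sketch: compare tag distributions of two corpora annotated
# with an interlingua. Corpus contents and tag inventory are illustrative.
from collections import Counter

def tag_distribution(tags):
    """Relative frequency of each tag in a corpus."""
    counts = Counter(tags)
    total = sum(counts.values())
    return {tag: n / total for tag, n in counts.items()}

cstar = ["give-information", "request-action", "give-information", "greeting"]
nespole = ["give-information", "greeting", "greeting", "request-action"]

d1, d2 = tag_distribution(cstar), tag_distribution(nespole)

# Total variation distance: half the summed absolute frequency differences.
tvd = 0.5 * sum(abs(d1.get(t, 0) - d2.get(t, 0)) for t in set(d1) | set(d2))
print(round(tvd, 2))  # 0.25
```

Re-tagging one database with the other's interlingua, as the paper does, is what makes such same-data distributional comparisons meaningful.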