Paul Chippendale

This paper describes an automatically annotated multimodal corpus of multi-party meetings. The corpus provides, for each subject involved in the experimental sessions, information on his/her social behavior and personality traits, as well as audiovisual cues (speech rate, pitch and energy, head orientation, and head, hand and body fidgeting). The corpus is based …
This paper presents a visual particle filter for tracking a variable number of humans interacting in indoor environments, using multiple cameras. It is built upon a three-dimensional, descriptive appearance model featuring (i) a 3D shape model assembled from simple body-part elements and (ii) a fast yet reliable rendering procedure developed on a …
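The abstract above centers on particle filtering. As a hedged illustration of the generic predict–weight–resample loop that such trackers build on (not the paper's 3D appearance model or multi-camera likelihood, which are not detailed here), a minimal one-dimensional sketch:

```python
import math
import random

def particle_filter_step(particles, weights, observation,
                         motion_std=0.5, obs_std=1.0):
    """One predict-weight-resample cycle of a basic bootstrap particle filter.
    Particles are scalar position hypotheses; the paper's rendered 3D
    appearance likelihood is replaced by a simple Gaussian likelihood."""
    # Predict: diffuse each particle with Gaussian motion noise.
    predicted = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: likelihood of the observation under each hypothesis.
    new_w = [w * math.exp(-0.5 * ((observation - p) / obs_std) ** 2)
             for p, w in zip(predicted, weights)]
    total = sum(new_w) or 1e-12
    new_w = [w / total for w in new_w]
    # Resample: draw particles proportionally to their weights,
    # then reset to uniform weights.
    resampled = random.choices(predicted, weights=new_w, k=len(predicted))
    uniform = [1.0 / len(resampled)] * len(resampled)
    return resampled, uniform

def estimate(particles, weights):
    """Weighted mean of the particle set, used as the state estimate."""
    return sum(p * w for p, w in zip(particles, weights))
```

In a real multi-person tracker each particle would carry a full body pose per person, and the weight would come from rendering the appearance model into each camera view.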
The audio-based speaker localization and tracking task addressed in CHIL is rather challenging. Since the evaluation data were collected during real seminars and meetings, they present some critical aspects for the localization process. First of all, seminar and meeting rooms are typically characterized by a high reverberation time (for example, in the …
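Audio localization of the kind described above usually starts from the time difference of arrival (TDOA) between microphone pairs. As a hedged sketch of that idea only (a brute-force cross-correlation toy, not the reverberation-robust GCC-PHAT variants typically used in CHIL-style systems), assuming discrete sample sequences:

```python
def tdoa_by_cross_correlation(sig_a, sig_b, max_lag):
    """Estimate the delay (in samples) of sig_b relative to sig_a by
    maximizing the cross-correlation over lags in [-max_lag, max_lag].
    A positive result means sig_b lags behind sig_a."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # Correlate sig_a against sig_b shifted by `lag` samples.
        score = sum(sig_a[i] * sig_b[i + lag]
                    for i in range(len(sig_a))
                    if 0 <= i + lag < len(sig_b))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Given the estimated delay and the known microphone geometry, the speaker direction follows from simple trigonometry; reverberation is what makes the correlation peak unreliable in real rooms.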
This paper presents a system that uses novel image-processing and pattern-matching algorithms to create a spatiotemporal attractiveness GIS layer for mountainous areas. We utilize the freely available Digital Terrain Model of the planet provided by NASA [1] to generate a three-dimensional synthetic model around a viewer's …
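A basic building block of synthesizing a view from a Digital Terrain Model is computing the horizon elevation angle along a transect away from the viewer. The following is a hedged toy sketch of that single step (the function name, flat-earth geometry, and uniform sampling are illustrative assumptions, not the paper's method):

```python
import math

def horizon_angle(heights, spacing, viewer_height):
    """Maximum elevation angle (radians) seen along one terrain transect.
    `heights` are terrain elevations sampled every `spacing` metres away
    from the viewer; `viewer_height` is the eye elevation. Earth curvature
    and refraction are ignored for simplicity."""
    best = -math.pi / 2  # looking straight down, if no terrain is higher
    for i, h in enumerate(heights, start=1):
        angle = math.atan2(h - viewer_height, i * spacing)
        best = max(best, angle)
    return best
```

Sweeping this over all azimuths yields a synthetic skyline profile that can then be matched against skylines extracted from photographs.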
This paper describes a real-time system developed to derive low-level visual cues targeted at the recognition of simple hand, head and body gestures. A novel adaptive background-subtraction technique is presented, together with a tool for monitoring repetitive movements, e.g. fidgeting. To monitor subtle body movements in an unconstrained …
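As a hedged sketch of what adaptive background subtraction means in general (a plain exponential running average over a grayscale grid, not the paper's specific technique):

```python
def update_background(background, frame, alpha=0.05):
    """Exponential running-average background model: each pixel drifts
    toward the current frame at rate `alpha`, so slow lighting changes
    are absorbed into the background while fast motion is not."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

def foreground_mask(background, frame, threshold=30.0):
    """Pixels differing from the background by more than `threshold`
    are flagged as foreground (e.g. moving hand/head/body regions)."""
    return [[abs(f - b) > threshold for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]
```

Counting how often foreground pixels toggle inside a body region over a sliding window gives one crude measure of repetitive movement such as fidgeting.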
Smart homes for the aging population have recently started attracting the attention of the research community. One of the problems of interest is that of monitoring the activities of daily living (ADLs) of the elderly, aiming at their protection and well-being. In this work, we present our initial efforts to automatically recognize ADLs using multimodal …
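One common way to combine modalities for activity recognition is weighted late fusion of per-modality classifier scores. A hedged sketch of that generic scheme (the activity labels, modality weights, and function name are illustrative assumptions, not taken from the paper):

```python
def late_fusion(score_dicts, weights):
    """Weighted late fusion: sum each modality's class-score dictionary,
    scaled by that modality's weight, and return the top-scoring label."""
    combined = {}
    for scores, w in zip(score_dicts, weights):
        for label, s in scores.items():
            combined[label] = combined.get(label, 0.0) + w * s
    return max(combined, key=combined.get)
```

For example, an audio classifier unsure between activities can be overruled by a confident video classifier when the video modality carries more weight.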