Recognition of expressed emotion from speech and facial gestures was investigated in experiments on an audiovisual emotional database. A total of 106 audio and 240 visual features were extracted, and features were then selected with the Plus l-Take Away r algorithm using a Bhattacharyya distance criterion. In the second step, linear transformation methods …
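As a rough illustration of the selection criterion named above, the snippet below ranks individual features by a one-dimensional Bhattacharyya distance under a Gaussian class-conditional assumption. The data, dimensionality, and function name are invented for the example; this is a minimal sketch, not the paper's implementation.

```python
import numpy as np

def bhattacharyya_1d(x_a, x_b):
    """Bhattacharyya distance between two classes for one feature,
    assuming each class is Gaussian (illustrative simplification)."""
    mu_a, mu_b = x_a.mean(), x_b.mean()
    var_a, var_b = x_a.var(), x_b.var()
    term1 = 0.25 * (mu_a - mu_b) ** 2 / (var_a + var_b)
    term2 = 0.5 * np.log((var_a + var_b) / (2.0 * np.sqrt(var_a * var_b)))
    return term1 + term2

# Synthetic two-class data: only feature 2 is discriminative.
rng = np.random.default_rng(0)
class_a = rng.normal(0.0, 1.0, size=(200, 5))
class_b = rng.normal(0.0, 1.0, size=(200, 5))
class_b[:, 2] += 3.0

# Score every feature and keep the most separable one.
scores = [bhattacharyya_1d(class_a[:, j], class_b[:, j]) for j in range(5)]
best = int(np.argmax(scores))  # → 2
```

A sequential scheme such as Plus l-Take Away r would repeatedly add the l best-scoring features and discard the r least useful ones, evaluating candidate subsets with a criterion like the one above.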
The main purpose of the present study was to examine the effects of acute whole-body vibration (WBV) on recovery following a 3 km time trial (3 km TT) and high-intensity interval training (HIIT; 8 × 400 m). Post-HIIT measures included 3 km time-trial performance, exercise metabolism, and markers of muscle damage (creatine kinase, CK) and inflammation …
BACKGROUND Because of practical problems and ethical concerns, few studies of the pharmacokinetics (PK) of acetaminophen (ACET) in infants have been published. OBJECTIVE The goal of this study was to compare the PK of an ACET rectal suppository with that of a commercially available ACET elixir, to complete a regulatory obligation to market the suppository. This …
We describe a method for the synthesis of visual speech movements using a hybrid unit selection/model-based approach. Speech lip movements are captured using a 3D stereo face capture system and split into phonetic units. A dynamic parameterisation of this data is constructed which maintains the relationship between lip shapes and velocities; within this …
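To make the unit-selection idea concrete, the toy function below picks, for each phonetic target, the candidate unit minimising a target cost plus a join cost to the previously chosen unit. This greedy one-pass scheme, and all the names and costs in it, are invented for illustration; real systems typically search over whole sequences rather than greedily.

```python
def select_units(targets, candidates, target_cost, join_cost):
    """Greedily choose one unit per target, trading off how well a
    candidate matches the target against how smoothly it joins the
    previous selection (illustrative sketch, not the paper's method)."""
    path, prev = [], None
    for t in targets:
        best = min(
            candidates[t],
            key=lambda u: target_cost(t, u)
            + (join_cost(prev, u) if prev is not None else 0.0),
        )
        path.append(best)
        prev = best
    return path

# Units here are single lip-shape values; two candidates per phoneme.
candidates = {"a": [0.0, 1.0], "b": [0.9, 2.0]}
desired = {"a": 1.0, "b": 1.0}
target_cost = lambda t, u: abs(u - desired[t])
join_cost = lambda p, u: 0.5 * abs(p - u)

path = select_units(["a", "b"], candidates, target_cost, join_cost)
# → [1.0, 0.9]: the 0.9 unit wins for "b" because it both matches the
# target well and joins smoothly to the previously selected 1.0.
```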
Motion capture (mocap) data is commonly used to recreate complex human motions in computer graphics. Markers are placed on an actor, and the captured movement of these markers allows us to animate computer-generated characters. Technologies have been introduced which allow this technique to be used not only to retrieve rigid-body transformations, but also …
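The basic retargeting step described above amounts to applying a captured rigid-body transform (a rotation and a translation) to a character's vertices. The sketch below shows that step in isolation; the function name and data are hypothetical.

```python
import numpy as np

def apply_rigid_transform(points, R, t):
    """Apply a rigid-body transform (rotation matrix R, translation t)
    to an (N, 3) array of vertex positions, as when driving a
    computer-generated character from captured marker motion."""
    return points @ R.T + t

# A 90-degree rotation about the z-axis followed by a unit x-translation.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])

verts = np.array([[1.0, 0.0, 0.0]])
moved = apply_rigid_transform(verts, R, t)  # → [[1., 1., 0.]]
```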
In this paper a technique is presented for learning audiovisual correlations in non-speech articulations such as laughs, cries, sneezes and yawns, so that accurate new visual motions can be created from audio alone. Our underlying model is data-driven and performs reliably both on voices the system is familiar with and on new voices.
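A minimal stand-in for such an audio-to-visual mapping is a linear least-squares regression from per-frame audio features to visual motion parameters, fitted on paired data and then applied to unseen audio. The dimensions and variable names below are invented; the paper's actual model is richer than this sketch.

```python
import numpy as np

# Paired training data: audio features X drive visual parameters Y.
# Here Y is generated from a known linear map so the fit can be checked.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))        # 100 frames, 8 audio features each
W_true = rng.normal(size=(8, 4))     # hidden audio-to-visual mapping
Y = X @ W_true                       # 4 visual parameters per frame

# Learn the mapping by least squares, then predict motion for new audio.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
new_audio = rng.normal(size=(1, 8))
predicted_visual = new_audio @ W
```

With noiseless data and more frames than features, the recovered `W` matches `W_true` to numerical precision; real captured data would of course be noisy and need a more expressive model.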
The animation of facial expression has become a popular area of research in the past ten years, in particular with its application to avatar technology and naturalistic user interfaces. In this paper we describe a method to animate speech from small fragments of motion-captured sentences. A dataset of domain-specific sentences is captured and phonetically …