James D. Edge

Recognition of expressed emotion from speech and facial gestures was investigated in experiments on an audiovisual emotional database. A total of 106 audio and 240 visual features were extracted, and features were then selected with the Plus l-Take Away r algorithm based on the Bhattacharyya distance criterion. In the second step, linear transformation methods, …
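The Plus l-Take Away r search named above can be sketched as follows. This is a minimal illustration, not the paper's configuration: it assumes a two-class problem with Gaussian class models (so the Bhattacharyya distance has a closed form) and a toy l=2, r=1 schedule.

```python
import numpy as np

def bhattacharyya(X1, X2, idx):
    """Bhattacharyya distance between two classes under a Gaussian
    assumption, restricted to the feature subset idx."""
    A, B = X1[:, idx], X2[:, idx]
    m = A.mean(0) - B.mean(0)
    S1 = np.atleast_2d(np.cov(A, rowvar=False))
    S2 = np.atleast_2d(np.cov(B, rowvar=False))
    S = (S1 + S2) / 2
    term1 = m @ np.linalg.solve(S, m) / 8
    term2 = 0.5 * np.log(np.linalg.det(S) /
                         np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term1 + term2

def plus_l_take_r(X1, X2, k, l=2, r=1):
    """Plus l-Take Away r selection: repeatedly add the l best single
    features (forward steps), then drop the r least useful members
    (backward steps), until k features remain."""
    n = X1.shape[1]
    sel = []
    while len(sel) < k:
        for _ in range(l):                      # plus-l forward steps
            cand = [j for j in range(n) if j not in sel]
            best = max(cand, key=lambda j: bhattacharyya(X1, X2, sel + [j]))
            sel.append(best)
        for _ in range(r):                      # take-away-r backward steps
            if len(sel) <= 1:
                break
            worst = max(sel, key=lambda j: bhattacharyya(
                X1, X2, [i for i in sel if i != j]))
            sel.remove(worst)
    return sorted(sel[:k])
```

With l > r the subset grows by l − r features per round, so the search can revisit and discard earlier choices that a plain forward selection would be stuck with.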
Motion capture (mocap) data is commonly used to recreate complex human motions in computer graphics. Markers are placed on an actor, and the captured movement of these markers allows us to animate computer-generated characters. Technologies have been introduced which allow this technique to be used not only to retrieve rigid body transformations, but also …
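Retrieving a rigid body transformation from corresponding marker positions is commonly done with the Kabsch algorithm; the abstract does not name a specific method, so the following is an assumed, standard least-squares sketch.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping marker set P onto Q
    (Kabsch algorithm). P and Q are (n, 3) arrays of corresponding markers."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Centring both marker sets first decouples rotation from translation, and the sign correction on the last singular direction keeps the result a proper rotation rather than a reflection.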
We describe a method for the synthesis of visual speech movements using a hybrid unit selection/model-based approach. Speech lip movements are captured using a 3D stereo face capture system and split up into phonetic units. A dynamic parameterisation of this data is constructed which maintains the relationship between lip shapes and velocities; within this …
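The unit-selection half of such a hybrid system is typically a Viterbi search over candidate units, trading a target cost against a join cost. The sketch below assumes simple Euclidean costs over hypothetical lip-parameter vectors; the paper's actual costs and parameterisation are not given in this snippet.

```python
import numpy as np

def select_units(targets, inventory, wc=1.0):
    """Viterbi unit selection: minimise target cost (distance of a candidate's
    parameters to the requested target) plus a weighted join cost (parameter
    discontinuity between consecutive chosen units). `targets` is a list of
    parameter vectors; `inventory[i]` is the candidate list for position i."""
    n = len(targets)
    cost = [np.array([np.linalg.norm(c - targets[0]) for c in inventory[0]])]
    back = []
    for i in range(1, n):
        tgt = np.array([np.linalg.norm(c - targets[i]) for c in inventory[i]])
        prev = cost[-1]
        step = np.empty(len(inventory[i]))
        bp = np.empty(len(inventory[i]), dtype=int)
        for j, c in enumerate(inventory[i]):
            join = np.array([wc * np.linalg.norm(c - p)
                             for p in inventory[i - 1]])
            tot = prev + join
            bp[j] = int(np.argmin(tot))
            step[j] = tot[bp[j]] + tgt[j]
        back.append(bp)
        cost.append(step)
    path = [int(np.argmin(cost[-1]))]    # trace back the cheapest path
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]
```

The weight `wc` sets the smoothness/accuracy trade-off: a large join weight favours sequences of units that flow into each other even when each is a slightly worse match to its target.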
This paper presents an investigation of the visual variation on the bilabial plosive consonant /p/ in three coarticulation contexts. The aim is to provide detailed ensemble analysis to assist coarticulation modelling in visual speech synthesis. The underlying dynamics of labeled visual speech units, represented as lip shape, from symmetric VCV utterances, …
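A basic form of the ensemble analysis described above is to time-normalise repeated utterances of the same unit and compute pointwise statistics across repetitions. The linear resampling used here is an assumption for illustration; the paper's actual alignment procedure is not given in this snippet.

```python
import numpy as np

def ensemble_stats(trajectories, n=50):
    """Ensemble analysis of repeated utterances: linearly time-normalise each
    lip-parameter trajectory to n samples, then return the pointwise mean and
    standard deviation across repetitions."""
    grid = np.linspace(0, 1, n)
    resampled = np.stack([
        np.interp(grid, np.linspace(0, 1, len(tr)), tr) for tr in trajectories
    ])
    return resampled.mean(0), resampled.std(0)
```

The pointwise standard deviation is what exposes context-dependent variation: stretches where the ensemble spread stays small behave as stable targets, while high-variance stretches are the regions a coarticulation model must account for.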
The relationship between anticonvulsant tolerance to clonazepam and benzodiazepine receptor changes was studied in amygdala-kindled rats. Fully kindled rats were given 1 mg/kg clonazepam (clonazepam treated) or vehicle (kindled control) orally three times per day for 4 weeks. During chronic treatment, amygdala stimulation was given twice per week, 30 min …