Learn More
The article describes a database of emotional speech. Ten actors (5 female and 5 male) simulated the emotions, producing 10 German utterances (5 short and 5 longer sentences) that could be used in everyday communication and are interpretable in all of the applied emotions. The recordings were made in an anechoic chamber with high-quality recording equipment. …
Recent data on prosodic features of emotional speech in German are reported. Ten sentences, spoken by actors in a happy, fearful, sad, bored, angry, and neutral way, served as the basis of the analyses. The features under investigation are (1) different range parameters (differences between the sentence accent peak, word accent peaks, minima between accent peaks …
Emotions influence a person's way of speaking, and it is possible to identify the emotional state of a speaker by merely listening to spoken utterances. The purpose of this study is to distinguish between basic emotions by prosodic features, in particular by characteristics of the fundamental frequency (F0). In addition to the measurement of global …
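As a rough illustration of the kind of global F0 characteristics such studies work with (mean, extrema, range, and variability of the contour), the sketch below extracts a pitch contour with librosa's pYIN tracker and summarizes it. The file name and the exact feature set are assumptions for the example, not the procedure used in the study.

```python
# Minimal sketch: global F0 statistics for one utterance.
# Assumes librosa is installed; "utterance.wav" is a hypothetical input file.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=None)

# pYIN returns one F0 estimate per frame; unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

features = {
    "f0_mean_hz": float(np.nanmean(f0)),
    "f0_min_hz": float(np.nanmin(f0)),
    "f0_max_hz": float(np.nanmax(f0)),
    "f0_range_hz": float(np.nanmax(f0) - np.nanmin(f0)),
    "f0_std_hz": float(np.nanstd(f0)),
}
print(features)
```

Range-type features like these are the same family of parameters referred to in the two preceding abstracts (differences between accent peaks and the minima between them).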
The present study aims at examining the vocal expression of emotion. Emotionally loaded speech material produced by actors was analyzed with reference to the accuracy of articulation as well as to the duration of syllables and segments. It was investigated whether the vocal expressions of several emotions (anger, happiness, fear, and sadness) differ from one …
The place of articulation feature for stop consonants is subject to many errors in speech processing by hearing-impaired listeners. Attempts to improve the recognition of initial and final stop consonants by lowering the level of the first formant or, with a different approach, by narrowing the formant bandwidth of the first five formants only very partially …
This study examines the differences between young and old adult voices. Acoustic cues in voices that enable listeners to recognize a speaker's vocal age are specified, as well as acoustic cues that directly indicate the speaker's chronological age. Electroglottographic data were used to directly examine glottal behaviour in aging voices. We found a strong …
The authors propose a framework for audiovisual speech synthesis systems [1] and present a first implementation of the framework [2], called MASSY, the Modular Audiovisual Speech SYnthesizer. This paper describes how the audiovisual speech synthesis system, the 'talking head', works, how it can be integrated into web applications, and why it is …
This paper presents the results of open quotient (OQ) measurements in electroglottographic (EGG) signals of young (18-30 years) and elderly (60-82 years) male and female speakers. The paper further presents quantitative results on the relation between the EGG OQ and the perception of a speaker's age. Higgins and Saxman found a decreased EGG OQ with …
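For readers unfamiliar with the measure, the sketch below shows one common way an EGG open quotient can be computed once glottal closing and opening instants have been detected, for example from peaks in the differentiated EGG (DEGG) signal. The instant arrays and the detection step are assumptions for illustration, not the paper's measurement procedure.

```python
# Minimal sketch: open quotient (OQ) per glottal cycle from detected
# closing and opening instants (hypothetical inputs, in seconds).
import numpy as np

def open_quotient(closing_times, opening_times):
    """OQ per cycle: open-phase duration divided by the full cycle period.

    closing_times: glottal closing instants, one per cycle, sorted.
    opening_times: glottal opening instants, same length; opening_times[i]
                   lies between closing_times[i] and the next closing.
    """
    closing = np.asarray(closing_times, dtype=float)
    opening = np.asarray(opening_times, dtype=float)
    periods = np.diff(closing)               # cycle i runs closing[i] .. closing[i+1]
    open_phase = closing[1:] - opening[:-1]   # from opening of cycle i to next closing
    return open_phase / periods

# Made-up instants for a roughly 100 Hz voice with OQ near 0.5:
oq = open_quotient(
    closing_times=[0.000, 0.010, 0.020, 0.030],
    opening_times=[0.005, 0.015, 0.025, 0.035],
)
print(oq.mean())
```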