Otoacoustic emissions (OAEs) are useful for studying medial olivocochlear (MOC) efferents, but several unresolved methodological issues cloud the interpretation of the data they produce. Most efferent assays use a "probe stimulus" to produce an OAE and an "elicitor stimulus" to evoke efferent activity and thereby change the OAE. However, little attention …
In this article, we describe and interpret a set of acoustic and linguistic features that characterise emotional/emotion-related user states, confined to the single database processed: four classes in a German corpus of children interacting with a pet robot. To this end, we collected a very large feature vector consisting of more than 4000 features extracted …
Classification performance of emotional user states found in realistic, spontaneous speech is not very high compared to the performance reported for acted speech in the literature. This might be partly due to the difficulty of providing reliable annotations, partly due to suboptimal feature vectors used for classification, and partly due to the difficulty …
Subjects with brainstem lesions due to either an infarct or multiple sclerosis (MS) underwent two types of binaural testing (lateralization testing and interaural discrimination) for three types of sounds (clicks and high- and low-frequency narrow-band noise) with two kinds of interaural differences (level and time). Two major types of abnormalities were …
BACKGROUND: Many studies have suggested that cognitive training can result in cognitive gains in healthy older adults. We investigated whether personalized computerized cognitive training provides greater benefits than those obtained by playing conventional computer games. METHODS: This was a randomized double-blind interventional study. Self-referred …
In this paper, we report on classification results for emotional user states (4 classes, German database of children interacting with a pet robot). Starting with 5 emotion labels per word, we obtained chunks with different degrees of prototypicality. Six sites computed acoustic and linguistic features independently of each other. A total of 4232 features …
Traditionally, it has been assumed that pitch is the most important prosodic feature for the marking of prominence, and of other phenomena such as the marking of boundaries or emotions. This role has been put into question by recent studies. As larger and larger databases are nowadays processed automatically, it is not clear to what extent the possibly …
In this paper, we report on classification results for emotional user states (4 classes, German database of children interacting with a pet robot). Six sites computed acoustic and linguistic features independently of each other, following in part different strategies. A total of 4244 features were pooled together and grouped into 12 low-level descriptor …
OBJECTIVES AND METHODS: Four sets of measurements were obtained from 11 patients (44-80 years old) with small, localized pontine lesions due to vascular disease: (1) monaural auditory brain-stem evoked potentials (ABEPs; peaks I to VI); (2) binaural ABEPs processed for their binaural interaction components (BICs) in the latency range of peaks IV to VI; (3) …
Lateralization and just-noticeable difference (jnd) measurements relative to the center were tested in a large group of patients with pontine lesions caused either by stroke or multiple sclerosis. Stimuli included binaural clicks, and low- and high-frequency narrow-band noise bursts. Two major types of abnormalities were revealed in the lateralization …