Vered Aharonson

Classification performance for emotional user states in realistic, spontaneous speech is not very high compared with the performance reported for acted speech in the literature. This might be partly due to the difficulty of providing reliable annotations, partly due to suboptimal feature vectors used for classification, and partly due to the difficulty …
In this paper, we report classification results for emotional user states (4 classes, German database of children interacting with a pet robot). Six sites computed acoustic and linguistic features independently of each other, partly following different strategies. A total of 4244 features were pooled together and grouped into 12 low-level descriptor …
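The grouping of the pooled features is only hinted at here, but the standard recipe in this line of work is to apply statistical functionals to frame-level low-level descriptor (LLD) contours. A minimal Python sketch, with hypothetical LLD names and an illustrative functional set (not the actual 12 descriptor classes or functionals used by the six sites):

    import numpy as np

    def functionals(contour):
        # Apply a fixed set of statistical functionals to one
        # low-level descriptor (LLD) contour; returns a fixed-length vector.
        return np.array([contour.mean(), contour.std(),
                         contour.min(), contour.max(),
                         contour.max() - contour.min()])

    def chunk_features(llds):
        # llds: dict mapping LLD name -> per-frame contour (1-D array).
        # Concatenating functionals over every LLD yields one fixed-length
        # feature vector per chunk; pooling many LLDs and functionals is
        # how feature sets of several thousand dimensions are built.
        return np.concatenate([functionals(c) for c in llds.values()])

    # Hypothetical example: two LLD contours for one speech chunk.
    feats = chunk_features({
        "energy": np.random.rand(200),  # placeholder frame energies
        "f0": np.random.rand(200),      # placeholder F0 contour
    })
    print(feats.shape)  # 2 LLDs x 5 functionals = (10,)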
In this article, we describe and interpret a set of acoustic and linguistic features that characterise emotional/emotion-related user states, confined to the one database processed: four classes in a German corpus of children interacting with a pet robot. To this end, we collected a very large feature vector consisting of more than 4000 features extracted …
Otoacoustic emissions (OAEs) are useful for studying medial olivocochlear (MOC) efferents, but several unresolved methodological issues cloud the interpretation of the data they produce. Most efferent assays use a “probe stimulus” to produce an OAE and an “elicitor stimulus” to evoke efferent activity and thereby change the OAE. However, little attention …
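For quantification, such efferent assays typically express the elicitor-induced change as a level difference between the probe-alone and probe-plus-elicitor OAEs. A minimal sketch with placeholder amplitudes (the study's actual stimuli and analysis are not given in this snippet):

    import numpy as np

    def moc_effect_db(oae_probe_alone, oae_with_elicitor):
        # Elicitor-induced change in OAE magnitude, in dB; negative
        # values indicate the suppression usually attributed to medial
        # olivocochlear (MOC) efferent activity.
        return 20.0 * np.log10(np.abs(oae_with_elicitor) /
                               np.abs(oae_probe_alone))

    # Placeholder amplitudes in linear units:
    print(moc_effect_db(1.0e-3, 0.8e-3))  # approx. -1.9 dB suppression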
Subjects with brainstem lesions due to either an infarct or multiple sclerosis (MS) underwent two types of binaural testing (lateralization testing and interaural discrimination) for three types of sounds (clicks and high and low frequency narrow-band noise) with two kinds of interaural differences (level and time). Two major types of abnormalities were …
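As background for the two cue types tested, here is a short sketch of how a stereo stimulus carrying an interaural time difference (ITD) and an interaural level difference (ILD) can be synthesized; the parameter values are illustrative, not those used in the study:

    import numpy as np

    def make_binaural(mono, fs, itd_s=0.0, ild_db=0.0):
        # Delay the left channel by itd_s seconds and attenuate it by
        # ild_db decibels, so both cues favour the right ear; these are
        # the time and level differences probed in lateralization tests.
        delay = int(round(itd_s * fs))
        left = np.concatenate([np.zeros(delay), mono])[:len(mono)]
        left *= 10.0 ** (-ild_db / 20.0)
        return np.stack([left, mono], axis=1)  # columns: left, right

    fs = 44100
    click = np.zeros(fs // 100)   # 10 ms frame
    click[0] = 1.0                # unit click
    stim = make_binaural(click, fs, itd_s=500e-6, ild_db=6.0)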
In this paper, we report classification results for emotional user states (4 classes, German database of children interacting with a pet robot). Starting with 5 emotion labels per word, we obtained chunks with different degrees of prototypicality. Six sites computed acoustic and linguistic features independently of each other. A total of 4232 features …
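How prototypicality was derived from the five labels per word is not stated in this snippet; one plausible reading, sketched purely as an illustration, scores each word by inter-labeller agreement with the majority vote:

    from collections import Counter

    def prototypicality(labels):
        # labels: emotion labels assigned to one word by several
        # annotators (here, five). The majority emotion becomes the
        # word label; the fraction of annotators agreeing with it is
        # a simple prototypicality score in (0, 1].
        emotion, votes = Counter(labels).most_common(1)[0]
        return emotion, votes / len(labels)

    print(prototypicality(["Angry", "Angry", "Angry", "Neutral", "Angry"]))
    # ('Angry', 0.8) -> a fairly prototypical angry word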
Traditionally, it has been assumed that pitch is the most important prosodic feature for marking prominence, as well as other phenomena such as boundaries or emotions. This role has been called into question by recent studies. As ever larger databases are nowadays processed automatically, it is not clear to what extent the possibly …
BACKGROUND We previously described software that we developed for use in the evaluation of mild cognitive impairment (MCI). Our previous study included an aged, nondemented population with memory complaints (n = 41) that was relatively homogeneous in terms of education, clinical history, neurological examination, and Mini-Mental Status Examination (MMSE) …
We designed a novel computer-controlled environment that could elicit emotions in subjects while they uttered short, identical phrases. The paradigm was based on Damasio's experiment for eliciting apprehension and was implemented as a voice-activated computer game. Dozens of recordings of identical sentences were collected per subject, which were …
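The voice-activation mechanism is not described here; a minimal energy-threshold trigger, assumed only for illustration, shows how an utterance could be detected to drive such a game:

    import numpy as np

    def voice_triggered(frame, threshold=0.01):
        # Advance the game when the RMS energy of an incoming audio
        # frame exceeds a threshold, i.e. the subject has spoken.
        # Both the frame format and the threshold are placeholders.
        return np.sqrt(np.mean(frame ** 2)) > threshold

    print(voice_triggered(0.05 * np.random.randn(1024)))  # True: speech-like
    print(voice_triggered(np.zeros(1024)))                # False: silence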
OBJECTIVES AND METHODS Four sets of measurements were obtained from 11 patients (44-80 years old) with small, localized pontine lesions due to vascular disease: (1) monaural auditory brainstem evoked potentials (ABEPs; peaks I to VI); (2) binaural ABEPs processed for their binaural interaction components (BICs) in the latency range of peaks IV to VI; (3) …
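For context, the binaural interaction component mentioned in (2) is conventionally obtained by subtracting the sum of the two monaural responses from the binaural response; a minimal sketch with placeholder waveforms:

    import numpy as np

    def binaural_interaction_component(binaural, left, right):
        # BIC = binaural ABEP - (left monaural + right monaural).
        # A nonzero BIC in the peak IV-VI latency range is taken as
        # evidence of binaural processing in the brainstem.
        return binaural - (left + right)

    # Placeholder averaged waveforms on a common time base:
    t = np.linspace(0.0, 10e-3, 500)
    left = right = 1e-6 * np.sin(2 * np.pi * 500 * t)
    binaural = 1.8 * left  # less than left + right -> negative BIC
    bic = binaural_interaction_component(binaural, left, right)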