Mitsuko Aramaki

The present work investigates the relationship between semantic and prosodic (metric) processing in spoken language under two attentional conditions (semantic and metric tasks) by analyzing both behavioral and event-related potential (ERP) data. Participants listened to short sentences ending in semantically and/or metrically congruous or incongruous…
This paper presents a sound synthesis model that reproduces impact sounds by taking into account both the perceptual and the physical aspects of the sound. For that, we used a subtractive method based on dynamic filtering of noisy input signals that simulates the damping of spectral components. The resulting sound contains the perceptual characteristics of…
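The subtractive scheme described above can be sketched as white noise fed through a time-varying low-pass filter whose cutoff falls over time, so high-frequency components are damped faster than low ones, under a global amplitude decay. This is a minimal illustrative sketch; all parameter values (cutoff sweep, decay time) are assumptions, not values from the paper:

```python
import math
import random

def damped_noise(duration=0.5, sr=44100, f_start=8000.0, f_end=200.0,
                 tau=0.15, seed=0):
    """Toy subtractive impact synthesis: white noise through a one-pole
    low-pass whose cutoff sweeps downward (high frequencies die first),
    multiplied by an overall exponential decay."""
    rng = random.Random(seed)
    n = int(duration * sr)
    y = 0.0
    out = []
    for i in range(n):
        t = i / sr
        # Cutoff frequency falls exponentially from f_start toward f_end.
        fc = f_end + (f_start - f_end) * math.exp(-t / tau)
        alpha = 1.0 - math.exp(-2.0 * math.pi * fc / sr)  # one-pole coefficient
        x = rng.uniform(-1.0, 1.0)                        # white noise sample
        y += alpha * (x - y)                              # low-pass filtering
        out.append(y * math.exp(-t / (2 * tau)))          # global damping
    return out
```

Because the output at each step is a convex combination of past output and bounded noise, the signal stays in [-1, 1] and its energy is concentrated at the onset, as expected of an impact.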
The aim of these experiments was to compare conceptual priming for linguistic sounds and for a homogeneous class of nonlinguistic sounds (impact sounds), using both behavioral (error rates and RTs) and electrophysiological measures (ERPs). Experiment 1 aimed at studying the neural basis of impact sound categorization by creating typical and ambiguous…
The present study investigated the effect of handwriting sonification on graphomotor learning. Thirty-two adults, divided into two groups, learned four new characters with their non-dominant hand. The experimental design included a pre-test, a training session, and two post-tests, one just after the training session and another 24 h later. Two characters…
[Computer Music Journal] Synthesis of impact sounds is far from a trivial task owing to the high density of modes generally contained in such signals. Several authors have addressed this problem and proposed different approaches to modeling such sounds. The majority of these models are based on the physics of vibrating structures, as with, for instance, modal…
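The modal approach mentioned above represents an impact sound as a sum of exponentially damped sinusoids, one per vibrational mode. A minimal sketch in plain Python follows; the mode frequencies, amplitudes, and decay times are illustrative placeholders, not values from the paper:

```python
import math

def modal_impact(modes, duration=0.5, sr=44100):
    """Toy modal synthesis: sum of damped sinusoids.
    Each mode is (frequency_hz, amplitude, decay_time_s)."""
    n = int(duration * sr)
    out = [0.0] * n
    for f, a, tau in modes:
        for i in range(n):
            t = i / sr
            out[i] += a * math.exp(-t / tau) * math.sin(2 * math.pi * f * t)
    peak = max(abs(x) for x in out) or 1.0
    return [x / peak for x in out]          # normalize to [-1, 1]

# Hypothetical mode set: lower modes ring longer than higher ones.
modes = [(440.0, 1.0, 0.30), (1210.0, 0.5, 0.12), (2650.0, 0.25, 0.05)]
sig = modal_impact(modes)
```

Shortening the decay times while keeping the frequencies fixed is a classic way to move the perceived material from metal toward wood, which is the kind of control these models aim for.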
In this paper, we focused on identifying the perceptual properties of impacted materials in order to provide intuitive control of an impact sound synthesizer. To investigate such properties, impact sounds from everyday objects made of different materials (wood, metal, and glass) were recorded and analyzed. These sounds were synthesized using an…
Nowadays, interactive 3-D environments tend to include both synthesis and spatialization processes to increase the realism of virtual scenes. In typical systems, audio is generated in two stages: first, a monophonic sound is synthesized (generation of the intrinsic timbre properties), and then it is spatialized (positioned in its environment). In…
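The second stage of the typical pipeline described above, spatializing an already synthesized monophonic signal, can be sketched with a simple constant-power stereo pan. The systems discussed use far richer spatialization, so this is only an illustrative stand-in for that stage:

```python
import math

def pan_stereo(mono, pan):
    """Constant-power panning of a mono signal.
    pan = 0.0 is full left, 1.0 is full right; the left/right gains
    satisfy gl**2 + gr**2 == 1, so perceived loudness stays constant
    as the source moves."""
    gl = math.cos(pan * math.pi / 2)
    gr = math.sin(pan * math.pi / 2)
    return [(gl * x, gr * x) for x in mono]

# Place a (hypothetical) synthesized mono signal at center stage.
stereo = pan_stereo([1.0, -0.5, 0.25], 0.5)
```

At `pan = 0.5` both gains equal √2/2, so the two channels carry identical, power-preserving copies of the source.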
Xylophone sounds produced by striking wooden bars with a mallet are strongly influenced by the mechanical properties of the wood species chosen by the xylophone maker. In this paper, we address the relationship between the sound quality, based on the timbre attributes of impacted wooden bars, and the physical parameters characterizing wood species. For this, a…