Chunking by coarticulation in Music-Related Gestures

Abstract

A central issue in the study of music-related gestures, both those of performers and those of listeners, is how we segment the stream of human movement and of sounds into somehow perceptually meaningful chunks. In this paper, we shall, against the background of our ongoing work in the Sensing Music-Related Actions project, present a model for chunking music-related gestures based on coarticulation. We can define coarticulation as the fusion of otherwise distinct events, meaning both action events and sound events, into larger and holistically perceived chunks, e.g. as in the fusion of syllables into continuous articulatory movement and sound in language, or in the fusion of singular tone events into continuous sound-producing gestures and melodic or textural patterns in music. It is our hypothesis that coarticulation plays a crucial role in both the production and the perception of music-related gesture chunks, and the aim of this paper is to present evidence for this based on recent research, including our own findings. Coarticulation is a much-discussed topic in linguistics (Hardcastle and Hewlett 1999), but can also be encountered in other contexts such as human movement science (Rosenbaum 1991). In music research, there are but a few studies of coarticulation, such as in piano playing (Engel, Flanders, and Soechting 1997) and violin playing (Wiesendanger, Baader, and Kazennikov 2006), but we believe coarticulation could be a very fruitful concept in music-related gesture research because it can account for the emergence of meaningful chunks on the basis of combined biomechanical and motor-control constraints, constraints that in turn also affect the perception of music-related gestures. In sound-producing gestures, there are biomechanical constraints, such as the need to move effectors, e.g. fingers on a keyboard, to optimal positions before producing tones, hence the inclusion of singular key-pressing finger movements in more superordinate trajectories of hand, arm, shoulder, and even torso movements for optimal execution, fluency, and energy conservation, as well as for avoiding strain injury. There are also motor-control constraints at work in the need to plan fast movements in advance, i.e. to envisage the entire sequence of individual tone productions as one superordinate gesture. On the perceptual side, this coarticulatory inclusion of singular tone events affects both the perceived kinematics, i.e. the visual image of sound-producing gestures, and the perceived sound, i.e. the sequence of tones fused into a more superordinate gestalt. Such coarticulation is often reflected in sound-accompanying gestures that listeners make in dance or everyday listening situations. Various findings in cognitive science and human movement science seem to converge on a roughly 0.5 to 5 second duration range as optimal for what is perceived as meaningful chunks.
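
To make the chunking idea concrete, the following is a minimal illustrative sketch in Python, not the authors' published model: it segments a hypothetical one-dimensional gesture-velocity signal at low-activity troughs and keeps only segments whose durations fall within the roughly 0.5 to 5 second range mentioned above. The signal, sample rate, and activity threshold are all assumptions made for illustration.

import numpy as np

def segment_chunks(velocity, sr, threshold=0.1, min_dur=0.5, max_dur=5.0):
    # Boolean mask of "active" movement above the (assumed) threshold.
    active = velocity > threshold
    # Transitions in the mask: +1 marks a chunk onset, -1 a chunk offset.
    edges = np.diff(active.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    # Handle signals that begin or end mid-chunk.
    if active[0]:
        starts = np.insert(starts, 0, 0)
    if active[-1]:
        ends = np.append(ends, len(velocity))
    # Keep only segments in the perceptually plausible duration range.
    return [(s, e) for s, e in zip(starts, ends)
            if min_dur <= (e - s) / sr <= max_dur]

# Hypothetical usage on a synthetic 10-second signal sampled at 100 Hz.
sr = 100
t = np.arange(0, 10, 1 / sr)
velocity = np.abs(np.sin(2 * np.pi * 0.3 * t))  # slowly oscillating "motion"
print(segment_chunks(velocity, sr))

The trough-based segmentation here is simply one plausible way to operationalize chunk boundaries; the duration filter is the part that reflects the 0.5 to 5 second range discussed in the abstract.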

Cite this paper

@inproceedings{Gody2008ChunkingBC,
  title  = {Chunking by coarticulation in Music-Related Gestures},
  author = {Rolf Inge God{\o}y and Alexander Refsum Jensenius and Kristian Nymoen},
  year   = {2008}
}