Corpus ID: 8018732

Towards Mapping Timbre to Emotional Affect

Niklas Klügel and Georg Groh
Controlling the timbre generated by an audio synthesizer in a goal-oriented way requires a profound understanding of the synthesizer's manifold structural parameters. Shaping timbre expressively to communicate emotional affect demands particular expertise, so novices especially may be unable to control timbre well enough to articulate the full range of affects musically. In this context, the focus of this paper is the development of a model that can represent a relationship… 


Collaborative Music-Making with Interactive Tabletops
Few IT systems focus on collaboration in electronic music-making, despite evidence that musical engagement is deeply embedded in socio-cultural and collaborative constructs. Hence, certain…
Designing Sound Collaboratively: Perceptually Motivated Audio Synthesis
A machine learning method is used to generate a mapping from perceptual audio features to synthesis parameters, which is then used for visualization and interaction in a prototype that allows a group of users to design sound collaboratively in real time using a multi-touch tabletop.
Developing methods for predicting affect in algorithmic composition


An exploration of musical communication through expressive use of timbre: The performer’s perspective
This study explores the sound world of the performer, building on increasing evidence that timbre is the most salient variable performance parameter and can also be the primary source of inspiration…
Emotional Responses to the Perceptual Dimensions of Timbre: A Pilot Study Using Physically Informed Sound Synthesis
Music is well known for affecting human emotional states, and most people enjoy music because of the emotions it evokes. Yet, the relationship between specific musical parameters and emotional…
Timbre and Affect Dimensions: Evidence from Affect and Similarity Ratings and Acoustic Correlates of Isolated Instrument Sounds
Considerable effort has been made towards understanding how acoustic and structural features contribute to emotional expression in music, but relatively little attention has been paid to the role of…
Predicting Time-Varying Musical Emotion Distributions from Multi-Track Audio
Following from the time-varying nature of music, 30-second clips on one-second intervals are analyzed, investigating several regression techniques for the automatic parameterization of emotion-space distributions from acoustic data.
A musical system for emotional expression
Feature Learning in Dynamic Environments: Modeling the Acoustic Structure of Musical Emotion
This work seeks to employ regression-based deep belief networks to learn features directly from magnitude spectra as a basis for feature learning, taking into account the dynamic nature of music.
This paper surveys the state of the art in automatic emotion recognition in music. Music is oftentimes referred to as a "language of emotion" [1], and it is natural for us to categorize music in…
Emotional responses to music: the need to consider underlying mechanisms.
It is concluded that music evokes emotions through mechanisms that are not unique to music, and that the study of musical emotions could benefit the emotion field as a whole by providing novel paradigms for emotion induction.
Evaluation of Musical Features for Emotion Classification
It is found that spectral features outperform those based on rhythm, dynamics, and, to a lesser extent, harmony, and that the fusion of different feature sets does not always lead to improved classification.
Changes in emotional tone and instrumental timbre are reflected by the mismatch negativity.