Corpus ID: 17213221

Computationally Created Soundscapes with Audio Metaphor

Miles Thorogood and Philippe Pasquier
Soundscape composition is the creative practice of processing and combining sound recordings to evoke auditory associations and memories within a listener. […] Key Method: We used simple natural language processing to create audio file search queries, and we segmented and classified audio files based on general soundscape composition categories. We used our prototype implementation of Audio Metaphor in two performances, seeding the system with …
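The abstract's query-generation step can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the stopword list, the `seed_to_queries` function, and the adjacent-pair strategy are all assumptions about how content words of a seed sentence might be turned into audio search queries.

```python
# Hypothetical sketch: build audio search queries from a seed sentence by
# keeping content words and pairing adjacent ones. Not Audio Metaphor's
# actual method; the stopword list and pairing rule are illustrative.
import re

STOPWORDS = {"a", "an", "the", "of", "in", "on", "and", "with", "to", "is"}

def seed_to_queries(seed: str) -> list[str]:
    """Extract content words, then emit single-word and adjacent-pair queries."""
    words = [w for w in re.findall(r"[a-z]+", seed.lower()) if w not in STOPWORDS]
    return words + [f"{a} {b}" for a, b in zip(words, words[1:])]

print(seed_to_queries("rain on a tin roof in the city"))
# ['rain', 'tin', 'roof', 'city', 'rain tin', 'tin roof', 'roof city']
```

Each resulting string could then be submitted as a text query to an online sound repository such as Freesound.org.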

Figures and Tables from this paper

Voice-based interface for accessible soundscape composition: composing soundscapes by vocally querying online sounds repositories
An Internet of Audio Things ecosystem devised to support soundscape composition via vocal interactions that involves a commercial voice-based interface and the cloud-based repository of audio content Freesound.org is presented.
A framework for computer-assisted sound design systems supported by modelling affective and perceptual properties of soundscape
A system called Audio Metaphor is outlined that is built upon the notion that sound design for soundscape compositions is emotionally informed and is revealed to be human-competitive regarding semantic and emotion-based indicators.
Impress: A Machine Learning Approach to Soundscape Affect Classification for a Music Performance Environment
A new system called Impress is presented that uses supervised machine learning for the acquisition and realtime feedback of soundscape affect; a feature vector of audio descriptors was used to represent an audio signal for fitting multiple regression models to predict soundscape affect.
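The regression step described above can be sketched with ordinary least squares: fit one linear model per affect dimension (valence, arousal) from an audio-descriptor feature vector. The features and ratings below are synthetic placeholders, not data or descriptors from the Impress system.

```python
# Minimal sketch, assuming linear regression from audio descriptors to
# (valence, arousal). Synthetic data stands in for real feature vectors.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))            # 50 clips x 8 audio descriptors
w_true = rng.normal(size=(8, 2))        # hidden mapping to (valence, arousal)
Y = X @ w_true + 0.01 * rng.normal(size=(50, 2))  # noisy affect ratings

# Ordinary least squares: one regression model per affect dimension
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def predict_affect(features: np.ndarray) -> np.ndarray:
    """Return predicted (valence, arousal) for one feature vector."""
    return features @ W
```

In a realtime setting, the same `predict_affect` mapping would be applied to descriptor vectors extracted from the live audio stream.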
Algorithmic Audio Mashups and Synthetic Soundscapes Employing Evolvable Media Repositories
The paper describes a metacreative system for real time algorithmic composition of audio mashups and synthetic soundscapes that pivots on evolvable media repositories, i.e., local pools of related
Emo-soundscapes: A dataset for soundscape emotion recognition
A dataset of audio samples called Emo-Soundscapes and two evaluation protocols for machine learning models to benchmark SER are proposed and how the mixing of various soundscape recordings influences their perceived emotion is studied.
Narrative-inspired Generation of Ambient Music
This paper explores one example of how a computational system might rely on what it has learned from analyzing another, distinct form of expression to produce creative work.
Automatic Soundscape Affect Recognition Using A Dimensional Approach
A method for the automatic soundscape affect recognition using ground truth data collected from an online survey and a gold standard is presented, which shows that participants have a high level of agreement on the valence and arousal of soundscapes.
Automatic Recognition of Eventfulness and Pleasantness of Soundscape
A gold standard for soundscape affect recognition is generated by averaging responses from participants, provided they agreed with each other sufficiently, and the correlation between the level of pleasantness and the level of eventfulness is tested against this gold standard.
Soundscape emotions categorization and readjustment based on music acoustical parameters
This study presents an approach to analysing the inherent emotional ingredients of polyphonic music signals, applies it to soundscape emotion analysis, and evaluates the effectiveness of emotion-locus variation in selected urban soundscape sets blended with music signals.
Emotional quantification of soundscapes by learning between samples
This work presents the design of two convolutional neural networks, ArNet and ValNet, responsible for quantifying the arousal and valence evoked by soundscapes respectively, within a suitable deep learning framework.


Audio Metaphor: Audio Information Retrieval for Soundscape Composition
Audio Metaphor facilitated audience interaction by listening for Tweets that the audience addressed to the performance; in this case, it processed the Twitter feed in realtime to recommend audio files to the soundscape composer.
Design of a Generative Model for Soundscape Creation
This paper describes the design and preliminary implementation, of a generative model for dynamic, real time soundscape creation and outlines extensions to the model that include interaction paradigms, context modeling, sound acquisition, and sound synthesis.
Soundscape Generation for Virtual Environments using Community-Provided Audio Databases
The design methodology incorporates the use of concatenative synthesis to construct a sound environment using online community-provided sonic material, and an application is described in which sound environments are generated for Google Street View using the online sound database Freesound.
Negotiated Content: Generative Soundscape Composition by Autonomous Musical Agents in Coming Together: Freesound
This work presents a system – Coming Together: Freesound – in which four autonomous artificial agents choose sounds from a large pre-analyzed database of soundscape recordings (from freesound.org), based upon their spectral content and metadata tags.
Authoring augmented soundscapes with user-contributed content
A complete augmented soundscapes system is presented that, in an autonomous and continuous manner, spatializes virtual acoustic sources in a geographic location and combines traditional text queries with content-based audio classification.
Sonic Experience: A Guide to Everyday Sounds
The repertoire of sound effects, in English. Never before has the everyday soundtrack of urban space been so cacophonous. Since the 1970s, sound researchers have attempted to classify noise, music,
Understanding urban and natural soundscapes
The concept of soundscape has garnered increasing research attention over the last decade for studying and designing the sonic environment of public spaces. It is therefore critical to advance
In search for soundscape indicators : Physical descriptions of semantic categories
We present converging evidence that people categorize urban soundscapes into semantic categories related to social activities. Examples of such spontaneously described categories are "markets",
Freesound 2: An Improved Platform for Sharing Audio Clips
Freesound.org is an online collaborative sound database where people from different disciplines share recorded sound clips under Creative Commons licenses. It was started in 2005 and it is being
Automatic audio segmentation using a measure of audio novelty
J. Foote, 2000 IEEE International Conference on Multimedia and Expo (ICME 2000)
This method can find individual note boundaries or even natural segment boundaries such as verse/chorus or speech/music transitions, even in the absence of cues such as silence, by analyzing local self-similarity.
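The self-similarity analysis in that entry can be sketched directly: correlate a checkerboard kernel along the diagonal of a frame-wise self-similarity matrix, and peaks in the resulting novelty score mark segment boundaries. This is a minimal sketch of the technique, with synthetic feature frames rather than real audio descriptors, and a plain square checkerboard kernel rather than any particular tapered variant.

```python
# Minimal sketch of Foote-style novelty detection: slide a checkerboard
# kernel along the diagonal of a cosine self-similarity matrix. Peaks in
# the novelty curve indicate transitions between homogeneous sections.
import numpy as np

def novelty_curve(frames: np.ndarray, kernel_size: int = 8) -> np.ndarray:
    """frames: (n_frames, n_features). Returns one novelty score per frame."""
    # Cosine self-similarity matrix
    norms = np.linalg.norm(frames, axis=1, keepdims=True)
    unit = frames / np.maximum(norms, 1e-9)
    S = unit @ unit.T

    # Checkerboard kernel: +1 within each half-block, -1 across halves
    half = kernel_size // 2
    sign = np.ones(kernel_size)
    sign[half:] = -1
    kernel = np.outer(sign, sign)

    n = len(frames)
    novelty = np.zeros(n)
    for i in range(half, n - half):
        novelty[i] = np.sum(kernel * S[i - half:i + half, i - half:i + half])
    return novelty

# Two homogeneous sections: the novelty peak sits at the transition frame
frames = np.vstack([np.tile([1.0, 0.0], (20, 1)), np.tile([0.0, 1.0], (20, 1))])
print(int(np.argmax(novelty_curve(frames))))  # 20
```

With real audio, the frames would be short-time feature vectors (e.g. spectra or MFCCs), and boundaries would be read off as local maxima of the novelty curve above a threshold.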