Corpus ID: 49253389

Live Repurposing of Sounds: MIR Explorations with Personal and Crowdsourced Databases

Anna Xambó, Gerard Roma, Alexander Lerch, Mathieu Barthet, György Fazekas
The recent increase in the accessibility and size of personal and crowdsourced digital sound collections has made them a valuable resource for music creation. The novelty of our approach lies in exploiting high-level MIR methods (e.g., query by pitch or rhythmic cues) using live coding techniques applied to sounds. We demonstrate its potential through the reflection of an illustrative case study and the feedback from four expert users. The users tried the system with either a personal database…
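The query-by-pitch idea from the abstract can be illustrated with a minimal sketch: filter a pre-analyzed sound collection by how close each sound's estimated pitch is to a target note. The database entries, descriptor names, and function are assumptions for illustration, not the authors' actual system (which uses live coding techniques).

```python
# Hypothetical sketch of querying a pre-analyzed sound database by pitch.
# The descriptor name "pitch_midi" and the toy database are assumptions,
# not the paper's actual API.

def query_by_pitch(db, target_midi, tolerance=1.0):
    """Return sounds whose estimated pitch lies within `tolerance`
    semitones of `target_midi` (a MIDI note number)."""
    return [s for s in db
            if abs(s["pitch_midi"] - target_midi) <= tolerance]

# Toy database with precomputed pitch descriptors (MIDI note numbers).
db = [
    {"name": "bell.wav",  "pitch_midi": 69.2},  # near A4
    {"name": "flute.wav", "pitch_midi": 72.1},  # near C5
    {"name": "drone.wav", "pitch_midi": 45.0},  # near A2
]

print([s["name"] for s in query_by_pitch(db, target_midi=69)])  # → ['bell.wav']
```

In a live setting, such a filter would run over descriptors precomputed offline, so the query itself stays cheap enough for real-time use.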


Leveraging Online Audio Commons Content for Media Production
With the advent of online audio resources and web technologies, digital tools for sound designers and music producers are changing. The Internet provides access to hundreds of thousands of digital…
Live Coding with the Cloud and a Virtual Agent
A machine learning (ML) model is introduced that, based on a set of examples provided by the live coder, filters the crowdsourced sounds retrieved from the Freesound online database at performance time.
Music Information Retrieval in Live Coding: A Theoretical Framework
It is found that using high-level features in real time remains a technical challenge; however, using rhythmic and tonal properties (mid-level features) in combination with text-based information (e.g., tags) helps achieve a closer perceptual level centered on pitch and rhythm when using MIR in live coding.
Playsound.space: Improvising in the browser with semantic sound objects
This paper describes the development and evaluation of the online music making tool Playsound.space and provides directions for future artistic and pedagogical applications that can benefit the design of other ubiquitous music systems.
Voice-based interface for accessible soundscape composition: composing soundscapes by vocally querying online sounds repositories
An Internet of Audio Things ecosystem is presented that supports soundscape composition via vocal interactions, involving a commercial voice-based interface and the cloud-based audio content repository Freesound.org.
Jamming with a Smart Mandolin and Freesound-based Accompaniment
Two use cases investigating how audio content retrieved from Freesound can be leveraged by performers or audiences to produce accompanying soundtracks for music performance with a smart mandolin are presented.
Interdisciplinary Research as Musical Experimentation: A case study in musicianly approaches to sound corpora
This paper frames thinking about interdisciplinarity in Electroacoustic Music Studies before applying it to a specific project in terms of practice-led design.
The Internet of Audio Things: State of the Art, Vision, and Challenges
The state of the art of this field is reviewed, then a vision for the IoAuT is presented, which enables the connection of digital and physical domains by means of appropriate information and communication technologies, fostering novel applications and services based on auditory information.
Deliverable D6.12: Report on the evaluation of the ACE from a holistic and technological perspective
The QMUL-funded AudioCommons project aims to create an ecosystem for the creative reuse of audio content and to provide a platform for reusing audio content for educational and commercial purposes.


Freesound 2: An Improved Platform for Sharing Audio Clips
Freesound.org is an online collaborative sound database where people from different disciplines share recorded sound clips under Creative Commons licenses. It was started in 2005 and it is being…
Collaborative Textual Improvisation in a Laptop Ensemble
The authors believe that the dynamics of ensemble performance can lead laptop musicians in new creative directions, pushing them towards more real-time creativity and combining the diverse skills, ideas, and schemas of the ensemble's members to create unexpected, novel music in performance.
Sound recycling from public databases: Another BigData approach to sound collections
Among several different distributed systems useful for music experimentation, a new workflow is proposed based on analysis techniques from Music Information Retrieval combined with massive online databases, dynamic user interfaces, physical controllers and real-time synthesis, keeping in mind compositional concepts and focusing on artistic performances.
Audio Commons: bringing Creative Commons audio content to the creative industries
The Audio Commons Initiative is presented, which is aimed at promoting the use of open audio content and at developing technologies with which to support the ecosystem composed by content repositories, production tools and users.
Sound Sharing and Retrieval
This chapter describes how to build an audio database by outlining different aspects to be taken into account and discusses metadata-based descriptions of audio content and different searching and browsing techniques that can be used to navigate the database.
Live coding YouTube: organizing streaming media for an audiovisual performance
The challenges of using streaming videos from the platform as musical materials in live music performance are discussed and a live coding environment that is developed for real-time improvisation is introduced.
Automatic sound annotation
P. Cano and M. Koppenberger, Proceedings of the 2004 14th IEEE Signal Processing Society Workshop on Machine Learning for Signal Processing, 2004.
A nearest-neighbor classifier with a database of isolated sounds unambiguously linked to WordNet concepts, a semantic network that organizes real-world knowledge, is used to overcome the need for a huge number of classifiers to distinguish many different sound classes.
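The nearest-neighbor idea summarized above can be sketched in a few lines: each reference sound carries a feature vector and a concept label (plain strings below stand in for WordNet synsets), and a new sound is annotated with the label of its closest reference. The feature vectors and labels are invented for illustration.

```python
# Illustrative 1-NN sound annotation. Features and labels are toy values,
# not the descriptors used in the cited paper.
import math

def annotate(query_vec, references):
    """Return the concept label of the reference sound nearest
    (in Euclidean distance) to the query feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min((dist(query_vec, vec), label)
                   for vec, label in references)
    return label

references = [
    ((0.9, 0.1), "dog.n.01"),   # e.g. bark-like spectral shape
    ((0.2, 0.8), "bell.n.01"),  # e.g. tonal, long decay
]

print(annotate((0.85, 0.2), references))  # → dog.n.01
```

A single distance function over a labeled database thus replaces one trained classifier per sound class, which is the scaling advantage the summary refers to.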
Corpus-Based Concatenative Synthesis
An overview of the components needed for corpus-based concatenative synthesis of musical sound is given, along with details of some realizations.
Music performance by discovering community loops
A system is presented for exploring loops from Freesound that can be used as a musical instrument: since sounds always play in sync, the user can freely explore the variety of sounds uploaded by the Freesound community while continuously producing a rhythmic music stream.
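Keeping loops "always in sync", as described above, typically reduces to scaling each loop's playback rate by the ratio of the session tempo to the loop's native tempo. The following is a minimal sketch of that arithmetic; the function name is illustrative, not from the cited system.

```python
# Sketch of tempo-locking a loop to a global clock by resampling:
# the playback-rate multiplier is target tempo over source tempo.

def sync_rate(source_bpm, target_bpm):
    """Playback-rate multiplier that stretches a loop from its
    native tempo to the session tempo."""
    return target_bpm / source_bpm

# A 90 BPM Freesound loop in a 120 BPM session plays ~1.33x faster:
print(sync_rate(90, 120))
```

Rates above 1.0 also raise pitch when implemented by plain resampling; time-stretching algorithms avoid that at extra cost.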
This work proposes a sequence generation mechanism called musical mosaicing, which automatically generates sequences of sound samples given only high-level properties of the target sequence, and which scales to databases containing more than 100,000 samples.
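A toy version of the mosaicing idea summarized above: given a target sequence of high-level property values (here, a single loudness descriptor in 0..1), pick for each step the database sample whose descriptor is closest. The descriptor, database, and greedy per-step matching are simplifying assumptions; the cited work uses richer constraints.

```python
# Greedy high-level-property matching, a simplified stand-in for
# musical mosaicing. Database and "loudness" descriptor are invented.

def mosaic(target_seq, db):
    """For each target value, return the name of the sample
    whose descriptor is nearest to it."""
    return [min(db, key=lambda s: abs(s["loudness"] - t))["name"]
            for t in target_seq]

db = [
    {"name": "soft.wav", "loudness": 0.2},
    {"name": "mid.wav",  "loudness": 0.5},
    {"name": "loud.wav", "loudness": 0.9},
]

print(mosaic([0.1, 0.6, 1.0], db))  # → ['soft.wav', 'mid.wav', 'loud.wav']
```

Scaling this to 100,000+ samples would require an index over the descriptor space (e.g. a k-d tree) rather than the linear scan shown here.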