Corpus ID: 221319709

The Freesound Loop Dataset and Annotation Tool

@article{Ramires2020TheFL,
  title={The Freesound Loop Dataset and Annotation Tool},
  author={Ant{\'o}nio Ramires and Frederic Font and Dmitry Bogdanov and Jordan B. L. Smith and Yi-Hsuan Yang and Joann Ching and Bo-Yu Chen and Yueh-Kao Wu and Hsu Wei-Han and Xavier Serra},
  journal={ArXiv},
  year={2020},
  volume={abs/2008.11507}
}
Music loops are essential ingredients in electronic music production, and there is a high demand for pre-recorded loops in a variety of styles. Several commercial and community databases have been created to meet this demand, but most are not suitable for research due to their strict licensing. We present the Freesound Loop Dataset (FSLD), a new large-scale dataset of music loops annotated by experts. The loops originate from Freesound, a community database of audio recordings released under… 
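As a rough illustration of how a dataset like FSLD might be consumed, the sketch below (in Python) loads a hypothetical JSON annotation file and tallies instrumentation labels. The file name and the "instrumentation" field are assumptions for illustration, not the dataset's confirmed schema.

# Minimal sketch, assuming FSLD-style annotations are available as a JSON file
# that maps each Freesound loop ID to its expert annotations.
import json
from collections import Counter

with open("fsld_annotations.json") as f:        # hypothetical file name
    annotations = json.load(f)

# Count how often each instrumentation label was annotated across all loops.
instrument_counts = Counter()
for loop_id, meta in annotations.items():
    for instrument in meta.get("instrumentation", []):   # assumed field name
        instrument_counts[instrument] += 1

print(instrument_counts.most_common(10))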

Citations

A Benchmarking Initiative for Audio-Domain Music Generation Using the Freesound Loop Dataset

This paper proposes a new benchmark task for generating musical passages in the audio domain using the drum loops from the Freesound Loop Dataset, which are publicly re-distributable, and benchmarks the performance of three recent deep generative adversarial network models that the authors customize to generate loops, namely StyleGAN, StyleGAN2, and UNAGAN.

Extreme Audio Time Stretching Using Neural Synthesis

A deep neural network solution for time-scale modification (TSM) focused on large stretching factors is proposed, targeting environmental sounds and addressing traditional TSM artifacts such as transient smearing.

Comparison of Adversarial and Non-Adversarial LSTM Music Generative Models

This work implements and compares adversarial and non-adversarial training of recurrent neural network music composers on MIDI data, and evaluation indicates that adversarial training produces more aesthetically pleasing music.

Exploiting Pre-trained Feature Networks for Generative Adversarial Networks in Audio-domain Loop Generation

Evaluating the performance of a StyleGAN2-based audio-domain loop generation model with and without using a pre-trained feature space in the discriminator shows that a general audio classifier works better, and that with Projected GAN the authors' loop generation models can converge around 5 times faster without performance degradation.

A Novel Dataset for Time-Dependent Harmonic Similarity Between Chord Sequences

The results show that a convolutional neural network (CNN), which considers the temporal context of a chord progression, outperforms a simpler approach based on temporal averaging of input features.
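The contrast reported here can be pictured with a short Python sketch: one model scores a chord-feature sequence with a 1D convolution that sees temporal context, the other sees only the temporal average of the same features. The sequence length, feature size, and layer widths are illustrative assumptions, not the paper's configuration.

# Sketch of the two compared approaches (illustrative sizes only).
import torch
import torch.nn as nn

SEQ_LEN, FEAT = 64, 12                          # e.g. 64 frames of 12-d chroma-like features

temporal_model = nn.Sequential(                 # keeps the temporal ordering of the sequence
    nn.Conv1d(FEAT, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 16),
)
averaging_model = nn.Linear(FEAT, 16)           # baseline: temporal averaging only

x = torch.randn(8, FEAT, SEQ_LEN)               # a batch of chord-feature sequences
emb_with_context = temporal_model(x)            # (8, 16) embedding, uses local temporal context
emb_averaged = averaging_model(x.mean(dim=2))   # (8, 16) embedding, context discarded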

Instrument Role Classification: Auto-tagging for Loop Based Music

A new type of auto-tagging task, called “instrument role classification,” is introduced, and the performance of both neural network and non-neural network based multi-label classification models for six instrument roles is benchmarked.
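A multi-label tagger of this kind is commonly trained with one independent sigmoid output per role. The Python sketch below is a minimal illustration under that assumption; the six-role label set, the log-mel input shape, and the network sizes are hypothetical, not the paper's setup.

# Minimal multi-label instrument-role tagger (illustrative architecture).
import torch
import torch.nn as nn

NUM_ROLES = 6

class RoleTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, NUM_ROLES)

    def forward(self, x):                        # x: (batch, 1, mel_bins, frames)
        return self.classifier(self.features(x).flatten(1))   # one logit per role

model = RoleTagger()
x = torch.randn(8, 1, 96, 128)                   # dummy batch of log-mel spectrograms
y = torch.randint(0, 2, (8, NUM_ROLES)).float()  # multi-hot role labels
loss = nn.BCEWithLogitsLoss()(model(x), y)       # independent sigmoid per role
loss.backward()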

References

Showing 1-10 of 29 references

Tonal Description of Polyphonic Audio for Music Content Processing

E. Gómez, INFORMS Journal on Computing, 2006

A method to extract a description of the tonal aspects of music from polyphonic audio signals using different levels of abstraction, differentiating between low-level signal descriptors and high-level textual labels.

Tempo Estimation for Music Loops and a Simple Confidence Measure

Paper presented at the 17th International Society for Music Information Retrieval Conference (ISMIR 2016), held 7-11 August 2016 in New York, USA.
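One plausible reading of a "simple confidence measure" for loops is to check how close the loop duration comes to a whole number of beats at the estimated tempo, since a well-cut loop should span an integer beat count. The Python sketch below follows that reading; it is an approximation for illustration, not the authors' exact formulation.

# Tempo estimate plus a duration-based confidence value (illustrative).
import librosa

y, sr = librosa.load("loop.wav", sr=None)        # hypothetical input loop
bpm = float(librosa.beat.tempo(y=y, sr=sr)[0])   # global tempo estimate

duration = len(y) / sr
beats_in_loop = duration * bpm / 60.0
# Distance to the nearest integer beat count, mapped to a [0, 1] confidence.
confidence = 1.0 - 2.0 * abs(beats_in_loop - round(beats_in_loop))

print(f"{bpm:.1f} BPM, confidence {confidence:.2f}")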

Freesound technical demo

This demo introduces Freesound to the multimedia community and shows its potential as a research resource.

Key Estimation in Electronic Dance Music

This paper defines notions of tonality and key before outlining the basic architecture of a template-based key estimation method, and reports on the tonal characteristics of electronic dance music, in order to infer possible modifications of the method described.
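Template-based key estimation of this kind typically correlates an averaged chroma vector with major and minor key profiles rotated to all twelve roots. The Python sketch below uses the classic Krumhansl-Kessler profiles as a stand-in; the EDM-specific template modifications discussed in the paper are not reproduced here.

# Template-based key estimation sketch (Krumhansl-Kessler profiles as placeholders).
import librosa
import numpy as np

MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

y, sr = librosa.load("track.wav", sr=None)               # hypothetical input track
chroma = librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1)

best = None
for mode, profile in (("major", MAJOR), ("minor", MINOR)):
    for root in range(12):
        score = np.corrcoef(chroma, np.roll(profile, root))[0, 1]
        if best is None or score > best[0]:
            best = (score, f"{NOTES[root]} {mode}")

print("Estimated key:", best[1])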

Neural Loop Combiner: Neural Network Models for Assessing the Compatibility of Loops

This work extracts loops from existing music to obtain positive examples of compatible loops, proposes and compares various strategies for choosing negative examples, and investigates two types of model architectures for estimating the compatibility of loops, based on a Siamese network and a pure convolutional neural network.
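A Siamese-style compatibility model can be pictured as one shared encoder applied to both loops, followed by a small head that scores the pair. The Python sketch below is a minimal illustration; the layer sizes, input shape, and scoring head are assumptions rather than the paper's configuration.

# Siamese loop-compatibility sketch (shared encoder, illustrative sizes).
import torch
import torch.nn as nn

class LoopEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, dim),
        )

    def forward(self, x):
        return self.net(x)

class LoopCompatibility(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = LoopEncoder()                       # shared weights for both loops
        self.head = nn.Linear(128, 1)                      # scores the concatenated embeddings

    def forward(self, loop_a, loop_b):
        za, zb = self.encoder(loop_a), self.encoder(loop_b)
        return self.head(torch.cat([za, zb], dim=1))       # compatibility logit

model = LoopCompatibility()
a, b = torch.randn(4, 1, 96, 128), torch.randn(4, 1, 96, 128)  # dummy spectrogram pairs
labels = torch.randint(0, 2, (4, 1)).float()               # 1 = pair drawn from the same song
loss = nn.BCEWithLogitsLoss()(model(a, b), labels)
loss.backward()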

Loops as Genre Resources

The audio loop is both shaped by, and shaping of, the medium resources, mode conventions, and genre practices associated with sound design and music production. Digital technologies have…

Cognitive Foundations of Musical Pitch

Chapter outline: 1. Objectives and Methods; 2. Quantifying Tonal Hierarchies and Key Distances; 3. Musical Correlates of Perceived Tonal Hierarchies; 4. A Key-Finding Algorithm Based on Tonal Hierarchies; 5. Perceived…

Signal Processing Parameters for Tonality Estimation

Algorithms and representations for supporting online music creation with large-scale audio databases

The rapid adoption of the Internet and web technologies has created an opportunity for collaborative music making through the online exchange of information. However, the applications…