Sonic Visualiser: an open source application for viewing, analysing, and annotating music audio files

@inproceedings{cannam2010sonic,
  title={Sonic Visualiser: an open source application for viewing, analysing, and annotating music audio files},
  author={Chris Cannam and Christian Landone and Mark B. Sandler},
  booktitle={Proceedings of the 18th ACM International Conference on Multimedia},
  year={2010}
}
Sonic Visualiser is a friendly and flexible end-user desktop application for analysis, visualisation, and annotation of music audio files. Its stated goal is to be "the first program you reach for when you want to study a musical recording rather than simply listen to it". To this end, it has a user interface that resembles familiar audio editing applications, a set of useful standard visualisation facilities, and support for a plugin format for additional automated analysis methods.


The Interpersonal Entrainment in Music Performance Data Collection
The main features of the data collection and the methods used in its preparation are introduced, and the collection is contextualized in relation to developments in Open Science and Open Data, with a discussion of important distinctions between the two related concepts.
Towards Automatically Correcting Tapped Beat Annotations for Music Recordings
This work formalizes the automated correction procedure mathematically, introduces a novel visualization method that serves as a tool to analyze the results of the correction procedure for potential errors, and presents a new dataset consisting of beat annotations for 101 music recordings.
Automated Representations of Temporal Aspects of Electroacoustic Music: Recent Experiments Using Perceptual Models
The multi-granular approach outlined by Lartillot et al., combined with the use of MFCCs, is a very efficient and salient segmentation strategy for music structured predominantly according to timbre.
The presented sample library of violin sounds is designed as a tool for the research, development, and testing of sound analysis/synthesis algorithms.
MetaFlute: Developing Tools to Observe Micro-Timing and Physical Movements in Flute Playing
New methods to acquire information about observable aspects of flute playing are developed, resulting in the ability to assess note onsets more accurately and to track various physical motions and movements of a flute player.
In Search of Automatic Rhythm Analysis Methods for Turkish and Indian Art Music
This work defines and describes three relevant rhythm annotation tasks for these cultures (beat tracking, meter estimation, and downbeat detection) and evaluates several state-of-the-art Music Information Retrieval methodologies for these tasks, using manually annotated datasets of Turkish and Indian music.
Facilitating Music Information Research with Shared Open Vocabularies
A software framework is demonstrated that combines a shared audio feature ontology and related Semantic Web technologies with data extraction and analysis software, in order to enhance audio feature extraction workflows.
Automated identification of chicken distress vocalisations using deep learning models
A novel light-VGG11 was developed to automatically identify chicken distress calls using recordings collected on intensive chicken farms; the impacts of different data augmentation techniques were investigated and found to improve distress call detection by up to 1.52%.
Dagstuhl ChoirSet: A Multitrack Dataset for MIR Research on Choral Singing
Detailed insights are given into all stages of creating Dagstuhl ChoirSet (DCS), a multitrack dataset of a cappella choral music designed to support MIR research on choral singing.
Erkomaishvili Dataset: A Curated Corpus of Traditional Georgian Vocal Music for Computational Musicology
A curated dataset of traditional Georgian vocal music for computational musicology based on historic tape recordings of three-voice Georgian songs performed by the former master chanter Artem Erkomaishvili is presented.