Corpus ID: 53873624

Concert Stitch: Organization and Synchronization of Crowd Sourced Recordings

Vinod Subramanian and Alexander Lerch
The number of audience recordings of concerts on the internet has exploded with the advent of smartphones. This paper proposes a method to organize and align these recordings in order to create one or more complete renderings of the concert. The process comprises two steps: first, the recordings are represented by audio fingerprints, overlapping segments are identified, and an approximate alignment is computed with a modified Dynamic Time Warping (DTW) algorithm; second, a cross-correlation… 
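The refinement step described above, sharpening a coarse alignment estimate into a sample-accurate offset, can be sketched with plain cross-correlation. This is a minimal illustration of the general idea, not the paper's implementation; the function name and toy signals are invented for the example:

```python
import numpy as np

def estimate_offset(ref: np.ndarray, query: np.ndarray) -> int:
    """Estimate the sample offset of `query` within `ref` via
    cross-correlation: the lag at which the correlation peaks is
    where the two signals line up best."""
    corr = np.correlate(ref, query, mode="full")
    # Index 0 of the full correlation corresponds to lag -(len(query)-1).
    return int(np.argmax(corr)) - (len(query) - 1)

# Toy check: the query is an exact slice of the reference signal.
rng = np.random.default_rng(0)
ref = rng.standard_normal(1000)
query = ref[300:500]
offset = estimate_offset(ref, query)  # 300
```

In practice the cross-correlation would be applied only within the short windows that the coarse DTW pass already identified as overlapping, which keeps the computation cheap and avoids spurious peaks from repeated musical material.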



Alignment and Timeline Construction for Incomplete Analogue Audience Recordings of Historical Live Music Concerts
This paper proposes a method to align multiple digitised analogue recordings of the same concerts, which vary in quality and song segmentation; the alignment methods are evaluated on a synthetic dataset and the algorithm is applied to real-world data.


Synchronizing multimodal recordings using audio-to-audio alignment
A low-cost approach is proposed to synchronize streams by embedding ambient audio into each data stream, which effectively reduces the synchronization problem to audio-to-audio alignment.
Evaluation of Features for Audio-to-Audio Alignment
A new method for the objective evaluation of audio-to-audio alignment systems is proposed that enables arbitrary kinds of music to be used as ground-truth data; experiments showed that the feature-weighting algorithm improved alignment accuracy compared to the individual features.
Less talk, more rock: automated organization of community-contributed collections of concert videos
The timing and link structure generated by the synchronization algorithm are used to improve the findability and representation of the event content, including identifying key moments of interest and descriptive text for important captured segments of the show.
Automatic mashup generation from multiple-camera concert recordings
A novel virtual-director system automatically combines the most desirable segments from different recordings into a single video stream, called a mashup, which is compared with two other mashups: one created by a professional video editor and one machine-generated by random segment selection.
Clustering and synchronizing multi-camera video via landmark cross-correlation
This work offers improvements to event identification and a new synchronization-refinement method that resolves inconsistent estimates and allows non-overlapping content to be synchronized within larger groups of recordings.
Audio fingerprinting to identify multiple videos of an event
A robust fingerprinting strategy is explored to obtain a sparse set of the most prominent elements in a video soundtrack, and reliable matching of identical events in different recordings is demonstrated, even under difficult conditions.
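The sparse-landmark idea behind such fingerprints can be sketched as picking the most prominent time-frequency bins of a magnitude spectrogram. This is a toy illustration of the general technique only; a production fingerprinter would also enforce local maxima and pair peaks into translation-invariant hashes, and all names here are invented:

```python
import numpy as np

def spectral_peaks(spec: np.ndarray, num_peaks: int = 20):
    """Return the (time, frequency) indices of the `num_peaks` largest
    magnitude bins, i.e. a sparse landmark set for the spectrogram."""
    flat = np.argsort(spec, axis=None)[::-1][:num_peaks]
    times, freqs = np.unravel_index(flat, spec.shape)
    return list(zip(times.tolist(), freqs.tolist()))

# Toy spectrogram: two strong bins stand out from a silent background.
spec = np.zeros((4, 5))
spec[1, 2] = 3.0
spec[3, 0] = 2.0
peaks = spectral_peaks(spec, num_peaks=2)  # [(1, 2), (3, 0)]
```

Because only a handful of peaks per second survive, matching two recordings of the same event reduces to counting landmarks that agree under a consistent time offset, which is what makes the scheme robust to crowd noise and poor microphones.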
Automatic Sample Detection in Polyphonic Music
A method based on Non-negative Matrix Factorization (NMF) and Dynamic Time Warping (DTW) automatically detects a sample in a pool of songs and identifies samples that are pitch-shifted and/or time-stretched, with approximately 63% F-measure.
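The DTW component shared by this work and the main paper can be sketched as the classic dynamic program over two feature sequences. This is a minimal 1-D version for illustration; real systems align fingerprint or chroma frames, not raw values, and the names here are invented:

```python
import numpy as np

def dtw_cost(x: np.ndarray, y: np.ndarray) -> float:
    """Classic DTW: minimal cumulative cost of warping x onto y,
    where each cell extends the cheapest of the three predecessor
    moves (match, insertion, deletion)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# A time-stretched copy (one repeated element) still aligns with zero cost.
a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
b = np.array([0.0, 1.0, 1.0, 2.0, 1.0, 0.0])
```

The tolerance to repeated or skipped frames is exactly what lets DTW absorb the time-stretching mentioned above, while a pitch-invariant feature has to handle the pitch-shifting.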
Grateful Live: Mixing Multiple Recordings of a Dead Performance into an Immersive Experience
A system that automatically aligns and clusters live music recordings based on various audio characteristics and editorial metadata creates an immersive virtual space that can be imported into a multichannel web or mobile application, allowing listeners to navigate the space using interface controls or mobile-device sensors.
The Audio Degradation Toolbox and Its Application to Robustness Evaluation
It is demonstrated that specific degradations can reduce or even reverse the performance difference between two competing methods, and it is shown that performance strongly depends on the combination of method and degradation applied.
Audio thumbnailing of popular music using chroma-based representations
This work presents a system for producing short, representative samples (or "audio thumbnails") of popular music, built on the chromagram, a variation on traditional time-frequency distributions that represents the cyclic attribute of pitch perception known as chroma.
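A chroma vector for one frame can be sketched by folding the magnitude spectrum into 12 pitch classes. This is a bare-bones illustration under the equal-temperament assumption (no tuning estimation, windowing, or smoothing); library implementations are far more refined, and the function name is invented:

```python
import numpy as np

def chroma_frame(frame: np.ndarray, sr: int) -> np.ndarray:
    """Fold the magnitude spectrum of one audio frame into 12 pitch
    classes (C=0 ... B=11) and normalize to a distribution."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    chroma = np.zeros(12)
    for f, mag in zip(freqs[1:], spectrum[1:]):  # skip the DC bin
        midi = 69 + 12 * np.log2(f / 440.0)      # frequency -> MIDI pitch
        chroma[int(np.round(midi)) % 12] += mag
    return chroma / (chroma.sum() + 1e-12)

# A pure 440 Hz tone concentrates energy in pitch class A (index 9).
sr = 22050
t = np.arange(2048) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
```

Because octave information is discarded, chroma sequences from different recordings of the same piece remain comparable despite changes in timbre and instrumentation, which is why they make good features for audio-to-audio alignment.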