Corpus ID: 7908391

Towards Automated Extraction of Tempo Parameters from Expressive Music Recordings

@inproceedings{Mueller2009TowardsAE,
  title={Towards Automated Extraction of Tempo Parameters from Expressive Music Recordings},
  author={Meinard M{\"u}ller and Verena Konz and A. Scharfstein and S. Ewert and M. Clausen},
  booktitle={ISMIR},
  year={2009}
}
A performance of a piece of music heavily depends on the musician’s or conductor’s individual vision and personal interpretation of the given musical score. As a basis for the analysis of artistic idiosyncrasies, one requires accurate annotations that reveal the exact timing and intensity of the various note events occurring in the performances. In the case of audio recordings, this annotation is often done manually, which is prohibitive in view of large music collections. In this paper, we…

Citations

Automated analysis of performance variations in folk song recordings
The concept of chroma templates is introduced, by which consistent and inconsistent aspects across the various stanzas of a recorded song are captured in the form of an explicit and semantically interpretable matrix representation.
Automated methods for audio-based music analysis with applications to musicology
This thesis presents several automated methods for music analysis, motivated by concrete application scenarios of central importance in musicology, and introduces novel interdisciplinary concepts that facilitate the collaboration between computer scientists and musicologists.
Signal processing methods for beat tracking, music segmentation, and audio retrieval
Novel signal processing methods that allow for extracting musically meaningful information from audio signals are introduced, and a cross-version approach to content-based music retrieval based on the query-by-example paradigm is explored.
Compensating for asynchronies between musical voices in score-performance alignment
  • Siying Wang, S. Ewert, S. Dixon
  • Computer Science
  • 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2015
This paper presents a novel method that handles asynchronies between the melody and the accompaniment by treating the voices as separate time lines in a multi-dimensional variant of dynamic time warping (DTW).
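The pairwise DTW recurrence that this multi-voice variant generalizes can be sketched in a few lines. This is a minimal illustration only, not the paper's multi-dimensional method, and the absolute-difference local cost is an arbitrary choice:

```python
import numpy as np

def dtw_cost(x, y):
    """Accumulated-cost matrix for plain pairwise DTW on 1-D sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = abs(x[i - 1] - y[j - 1])          # local cost (illustrative choice)
            D[i, j] = c + min(D[i - 1, j],        # step from above
                              D[i, j - 1],        # step from the left
                              D[i - 1, j - 1])    # diagonal step
    return D[1:, 1:]
```

Because insertions and deletions are free of any extra penalty here, a locally stretched version of the same sequence still aligns with zero total cost, which is exactly why DTW tolerates tempo differences between performances.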
Decoding Tempo and Timing Variations in Music Recordings from Beat Annotations
A more formal method for calculating the optimal tempo path is proposed, using an appropriate cost function that incorporates tempo change, phase shift, and expressive timing.
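The kind of dynamic program behind such an optimal tempo path can be illustrated with a small Viterbi-style sketch. The state set, the linear change penalty, and the function name are all hypothetical simplifications; the paper's actual cost function additionally models phase shift:

```python
import numpy as np

def best_tempo_path(obs_cost, penalty):
    """Viterbi-style DP: choose one tempo state per beat, trading the
    observation cost against a penalty for tempo changes between beats."""
    T, S = obs_cost.shape
    D = obs_cost[0].copy()                 # best cost ending in each state
    back = np.zeros((T, S), dtype=int)     # backpointers for path recovery
    for t in range(1, T):
        new_D = np.empty(S)
        for s in range(S):
            # cost of arriving in state s from every previous state
            trans = D + penalty * np.abs(np.arange(S) - s)
            back[t, s] = int(np.argmin(trans))
            new_D[s] = obs_cost[t, s] + trans[back[t, s]]
        D = new_D
    path = [int(np.argmin(D))]
    for t in range(T - 1, 0, -1):          # backtrack from the cheapest end state
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

When one tempo state is cheapest at every beat, the recovered path simply stays in that state; the penalty term matters only when the observations pull toward different tempi at different beats.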
MTD: A Multimodal Dataset of Musical Themes for MIR Research
The Musical Theme Dataset (MTD), a multimodal dataset inspired by “A Dictionary of Musical Themes” by Barlow and Morgenstern (1948), is presented; it is relevant for various subfields and tasks in MIR, such as cross-modal music retrieval, music alignment, optical music recognition, music transcription, and computational musicology.
Robust and Efficient Joint Alignment of Multiple Musical Performances
This work exploits the availability of multiple versions of the piece to be aligned to improve alignment accuracy and robustness over comparable pairwise methods, and presents two such joint alignment methods: progressive alignment and probabilistic profile.
Simple Tempo Models for Real-Time Music Tracking
The paper describes a simple but effective method for incorporating automatically learned tempo models into real-time music tracking systems. In particular, instead of training our system with…
Signal processing methods for music synchronization, audio matching, and source separation
This thesis presents novel, content-based methods for music synchronization, audio matching, and source separation, and describes a novel procedure for making chroma features even more robust to changes in timbre while keeping their discriminative power.
Computational methods for the alignment and score-informed transcription of piano music
This thesis presents a score-to-performance alignment method that can improve robustness in cases where some musical voices, such as the melody, are played asynchronously to others, a stylistic device used in musical expression.

References

Showing 1–10 of 16 references
Automatic Extraction of Tempo and Beat From Expressive Performances
It is shown that estimating the perceptual salience of rhythmic events significantly improves the results of a computer program that estimates the tempo and the times of musical beats in expressively performed music.
From Time to Time: The Representation of Timing and Tempo
  • H. Honing
  • Computer Science
  • Computer Music Journal
  • 2001
Timing plays an important role in the performance and appreciation of almost all types of music. It has been studied extensively in music perception and music performance research (see Palmer 1997)…
Comparative Analysis of Multiple Musical Performances
A technique for comparing numerous performances of an identical selection of music, selecting the best-correlated performances for a summary display, is described; it serves as a useful navigational aid for coping with large numbers of performances of the same piece of music.
High resolution audio synchronization using chroma onset features
Novel audio features that combine the high temporal accuracy of onset features with the robustness of chroma features are introduced, and it is shown how previous synchronization methods can be extended to make use of these new features.
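As a rough illustration of what a chroma feature is, one can fold FFT magnitude bins onto the twelve pitch classes. This is a simplified sketch only; real chroma implementations use tuned filterbanks, and the features above additionally incorporate onset information:

```python
import numpy as np

def chroma_from_spectrum(mag, sr, n_fft):
    """Fold FFT magnitude bins onto the 12 pitch classes C, C#, ..., B."""
    chroma = np.zeros(12)
    freqs = np.arange(1, len(mag)) * sr / n_fft      # bin frequencies, skip DC
    midi = 69 + 12 * np.log2(freqs / 440.0)          # frequency -> MIDI pitch
    for pitch, m in zip(midi, mag[1:]):
        chroma[int(round(pitch)) % 12] += m          # accumulate energy per class
    return chroma
```

A pure 440 Hz tone then concentrates its energy in pitch class 9 (A), regardless of which octave the tone sounds in; that octave invariance is what makes chroma robust to timbre and instrumentation.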
Tempo and beat analysis of acoustic musical signals.
  • E. D. Scheirer
  • Computer Science, Medicine
  • The Journal of the Acoustical Society of America
  • 1998
A method is presented that uses a small number of bandpass filters and banks of parallel comb filters to analyze the tempo of, and extract the beat from, musical signals of arbitrary polyphonic complexity containing arbitrary timbres; the output can be used predictively to guess when beats will occur in the future.
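The resonator idea can be approximated with a much cruder sketch: autocorrelate an onset-strength envelope and pick the lag with the strongest self-similarity. This stands in for, but does not reproduce, Scheirer's comb-filter bank:

```python
import numpy as np

def estimate_period(envelope, min_lag, max_lag):
    """Pick the dominant beat period (in frames) via autocorrelation."""
    env = envelope - envelope.mean()                      # remove DC offset
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]  # non-negative lags
    return min_lag + int(np.argmax(ac[min_lag:max_lag + 1]))
```

On a regular impulse train the autocorrelation peaks at the train's period, which is the frame-domain analogue of a comb filter resonating at the signal's tempo.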
Visualizing Expressive Performance in Tempo-Loudness Space
An integrated analysis technique is developed in which tempo and loudness are processed and displayed at the same time, making it possible to study interactions between these two parameters by themselves or with respect to properties of the musical score.
A tutorial on onset detection in music signals
Methods based on explicitly predefined signal features (the signal's amplitude envelope, spectral magnitudes and phases, and time-frequency representations), as well as methods based on probabilistic signal models, are discussed.
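The spectral-magnitude family of onset detectors reduces, in its simplest form, to spectral flux: frame-to-frame increases in magnitude, half-wave rectified and summed. A minimal sketch:

```python
import numpy as np

def spectral_flux(frames):
    """Onset strength: half-wave rectified frame-to-frame magnitude increase."""
    mags = np.abs(np.fft.rfft(frames, axis=1))   # magnitude spectrum per frame
    diff = np.diff(mags, axis=0)                 # change between adjacent frames
    return np.maximum(diff, 0.0).sum(axis=1)     # keep only energy increases
```

The half-wave rectification is what makes this an *onset* detector rather than a generic change detector: energy appearing produces a spike, energy decaying does not.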
Polyphonic audio matching and alignment for music retrieval
We describe a method that aligns polyphonic audio recordings of music to symbolic score information in standard MIDI files without the difficult process of polyphonic transcription. By using this…
Information retrieval for music and motion
Analysis and Retrieval Techniques for Music Data; SyncPlayer: An Advanced Audio Player; Relational Features and Adaptive Segmentation.
In Search of the Horowitz Factor
A broad view of the discovery process is given, from data acquisition through data visualization to inductive model building and pattern discovery; AI turns out to play an important role in all stages of such an ambitious enterprise.