Corpus ID: 237385495

Controllable deep melody generation via hierarchical music structure representation

@inproceedings{Dai2021ControllableDM,
  title={Controllable deep melody generation via hierarchical music structure representation},
  author={Shuqi Dai and Zeyu Jin and Celso Gomes and Roger B. Dannenberg},
  booktitle={ISMIR},
  year={2021}
}
Recent advances in deep learning have expanded possibilities to generate music, but generating a customizable full piece of music with consistent long-term structure remains a challenge. This paper introduces MusicFrameworks, a hierarchical music structure representation and a multi-step generative process to create a full-length melody guided by long-term repetitive structure, chord, melodic contour, and rhythm constraints. We first organize the full melody with section and phrase-level… 
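
The multi-step process sketched in the abstract maps naturally onto a small hierarchical data model. Below is a minimal illustrative sketch, assuming nothing beyond the abstract; the class and field names (SongFramework, Section, Phrase, basic_melody, and so on) are stand-ins of mine, not the paper's actual code.

# Illustrative sketch only: a hierarchical representation in the spirit of
# MusicFrameworks (sections -> phrases -> per-phrase features). All names
# here are assumptions, not the paper's API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Phrase:
    label: str                 # e.g. "a" / "a'" marks phrase-level repetition
    n_bars: int
    chords: List[str] = field(default_factory=list)        # one chord symbol per bar
    rhythm: List[float] = field(default_factory=list)      # note onsets in beats
    basic_melody: List[int] = field(default_factory=list)  # one pitch per beat (contour)

@dataclass
class Section:
    label: str                 # e.g. "A" / "B" marks section-level repetition
    phrases: List[Phrase]

@dataclass
class SongFramework:
    key: str
    sections: List[Section]

    def full_phrase_sequence(self) -> List[Phrase]:
        """Flatten the hierarchy into the phrase order used for generation."""
        return [p for s in self.sections for p in s.phrases]

# A 32-bar AABA form, each section holding two 4-bar phrases:
song = SongFramework(
    key="C",
    sections=[
        Section("A", [Phrase("a", 4), Phrase("a'", 4)]),
        Section("A", [Phrase("a", 4), Phrase("a'", 4)]),
        Section("B", [Phrase("b", 4), Phrase("b'", 4)]),
        Section("A", [Phrase("a", 4), Phrase("a'", 4)]),
    ],
)
print(len(song.full_phrase_sequence()))  # 8 phrases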

Citations

MELONS: generating melody with long-term structure using transformers and structure graph
TLDR
MELONS, a melody generation framework based on a graph representation of music structure consisting of eight types of bar-level relations, is proposed; it can produce structured melodies with high quality and rich content.
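As a rough illustration of what a bar-level structure graph might look like, here is a minimal sketch; the relation names below are hypothetical stand-ins, since the summary only says the paper defines eight bar-level relation types.

# Illustrative sketch only: a bar-level structure graph in the spirit of
# MELONS, as a typed edge list. The relation names are hypothetical; the
# paper defines its own eight bar-level relation types.
from typing import List, Tuple

Edge = Tuple[int, int, str]  # (source_bar, target_bar, relation_type)

structure_graph: List[Edge] = [
    (0, 4, "repetition"),       # bar 4 repeats bar 0
    (1, 5, "transposition"),    # bar 5 is bar 1 shifted in pitch
    (2, 6, "rhythm_variation"), # bar 6 keeps bar 2's rhythm, new pitches
]

def relations_into(bar: int, graph: List[Edge]) -> List[Edge]:
    """All structural constraints a generator must satisfy for one bar."""
    return [e for e in graph if e[1] == bar]

print(relations_into(4, structure_graph))  # [(0, 4, 'repetition')]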
Theme Transformer: Symbolic Music Generation with Theme-Conditioned Transformer
TLDR
An alternative conditioning approach is proposed that explicitly trains the Transformer to treat the conditioning sequence as thematic material that must manifest itself multiple times in the generation result; the model can generate polyphonic pop piano music with repetition and plausible variations of a given condition.
The Power of Reuse: A Multi-Scale Transformer Model for Structural Dynamic Segmentation in Symbolic Music Generation
TLDR
This paper proposes a multi-scale Transformer that uses a coarse-decoder and fine-decoders to model contexts at the global and section levels, respectively, and demonstrates that the model outperforms the best contemporary symbolic music generative models.

References

SHOWING 1-10 OF 56 REFERENCES
Personalized Popular Music Generation Using Imitation and Structure
TLDR
A statistical machine learning model is proposed that can capture and imitate the structure, melody, chord, and bass style of a given example seed song.
Pop Music Transformer: Generating Music with Rhythm and Harmony
TLDR
This paper builds a Pop Music Transformer that composes pop piano music with a more plausible rhythmic structure than prior art, and introduces a new event set, dubbed "REMI" (REvamped MIDI-derived events), which provides sequence models with a metric context for modeling the rhythmic patterns of music.
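To make the REMI idea concrete, here is a hedged sketch of what an event stream for one bar could look like. The event families (Bar, Position, Chord, Tempo, Note-*) follow the description above, but the exact token spellings and quantization grid are assumptions.

# Illustrative sketch only: a REMI-style event stream for one bar. Token
# names follow REMI's event families, but vocabularies and the 1/16-note
# position grid here are assumptions.
events = [
    "Bar",
    "Position_1/16", "Chord_C:maj", "Tempo_120",
    "Position_1/16", "Note-Velocity_20", "Note-On_60", "Note-Duration_4",
    "Position_5/16", "Note-Velocity_20", "Note-On_64", "Note-Duration_4",
    "Position_9/16", "Note-Velocity_20", "Note-On_67", "Note-Duration_8",
]
# The explicit Bar/Position tokens give a sequence model the metric grid
# that plain MIDI-message streams (note-on/note-off deltas) lack.
token_ids = {tok: i for i, tok in enumerate(sorted(set(events)))}
print([token_ids[t] for t in events])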
POP909: A Pop-song Dataset for Music Arrangement Generation
TLDR
POP909, a dataset containing multiple versions of piano arrangements of 909 popular songs created by professional musicians, is presented along with annotations of tempo, beat, key, and chords; the tempo curves are hand-labeled and the other annotations are produced by MIR algorithms.
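A minimal sketch of inspecting one such arrangement with the pretty_midi library; the file path and the per-track layout are assumptions about the dataset, not guarantees.

# Illustrative sketch only: inspecting a POP909-style arrangement with
# pretty_midi. The path below is hypothetical.
import pretty_midi

pm = pretty_midi.PrettyMIDI("POP909/001/001.mid")  # hypothetical path
for inst in pm.instruments:
    # The arrangements separate the melody from accompaniment tracks.
    print(inst.name, len(inst.notes))
change_times, tempi = pm.get_tempo_changes()
print("tempo changes:", list(zip(change_times, tempi)))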
Music SketchNet: Controllable Music Generation via Factorized Representations of Pitch and Rhythm
TLDR
Music SketchNet, a neural network framework that allows users to specify partial musical ideas guiding automatic music generation, is proposed, and it is demonstrated that the model can successfully incorporate user-specified snippets during the generation process.
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment
TLDR
Three models for symbolic multi-track music generation under the framework of generative adversarial networks (GANs) are proposed, referred to as the jamming model, the composer model, and the hybrid model; they can generate coherent four-bar music from scratch.
Music Transformer
TLDR
It is demonstrated that a Transformer with the modified relative attention mechanism can generate minute-long compositions with compelling structure, generate continuations that coherently elaborate on a given motif, and in a seq2seq setup generate accompaniments conditioned on melodies.
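The "modified relative attention mechanism" is the memory-efficient relative self-attention introduced in the Music Transformer paper, which adds a learned relative-position term to the query-key logits:

\mathrm{RelativeAttention} = \mathrm{Softmax}\!\left(\frac{Q K^{\top} + S^{\mathrm{rel}}}{\sqrt{D_h}}\right) V

Here Q, K, and V are the query, key, and value matrices, D_h is the per-head dimension, and S^{rel} is assembled from learned embeddings of pairwise position distances; the paper's "skewing" trick avoids materializing the O(L^2 D) intermediate tensor a naive implementation would need.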
Learning Interpretable Representation for Controllable Polyphonic Music Generation
TLDR
This work designs a novel architecture that effectively learns two interpretable latent factors of polyphonic music: chord and texture, and shows that such chord-texture disentanglement provides a controllable generation pathway leading to a wide spectrum of applications, including compositional style transfer, texture variation, and accompaniment arrangement.
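The controllable pathway described above reduces to: encode, swap one latent factor, decode. Here is a minimal interface sketch with stand-in (non-functional) encoder and decoder bodies; every name in it is hypothetical, not the paper's code.

# Illustrative sketch only: the interface implied by chord-texture
# disentanglement. The real model is a VAE whose latent code splits into
# a chord part and a texture part; these bodies are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def encode(segment: np.ndarray):
    """Stand-in encoder: returns (z_chord, z_texture) for a music segment."""
    z = rng.standard_normal(256)
    return z[:128], z[128:]

def decode(z_chord: np.ndarray, z_texture: np.ndarray) -> np.ndarray:
    """Stand-in decoder: reconstructs a segment from the two factors."""
    return np.concatenate([z_chord, z_texture])

# Compositional style transfer: keep piece A's harmony, borrow B's texture.
z_chord_a, _ = encode(np.zeros(1))
_, z_texture_b = encode(np.ones(1))
hybrid = decode(z_chord_a, z_texture_b)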
Attributes-Aware Deep Music Transformation
TLDR
This work proposes a novel method that enables attributes-aware music transformation from any set of musical annotations without requiring complicated derivative implementations, and can provide explicit control over any continuous or discrete annotation.
Deep Music Analogy Via Latent Representation Disentanglement
TLDR
An explicitly-constrained variational autoencoder (EC²-VAE) is presented as a unified solution to all three sub-problems of disentangling music representations; it is validated with objective measurements and evaluated in a subjective study.