Corpus ID: 52220918

Part-invariant Model for Music Generation and Harmonization

@inproceedings{Yan2018PartinvariantMF,
  title={Part-invariant Model for Music Generation and Harmonization},
  author={Yujia Yan and Ethan Lustig and Joseph VanderStel and Zhiyao Duan},
  booktitle={ISMIR},
  year={2018}
}
Automatic music generation has been gaining more attention in recent years. Existing approaches, however, are mostly tailored to specific rhythmic structures or instrumentation layouts, and their evaluations lack music-theoretic rigor. In this paper, we present a neural language (music) model for symbolic multi-part music. Our model is part-invariant, i.e., it can process or generate any part (voice) of a music score consisting of an arbitrary number of parts, using a single…
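
The abstract is cut off above, so the paper's exact architecture is not available here. As a minimal sketch only, under the assumption that part invariance means a single set of weights shared across all voices, the PyTorch snippet below applies one GRU to every part and mean-pools context from the other parts, so the same parameters handle any number of voices; all names (PartInvariantModel, etc.) are illustrative, not the authors' code.

# Minimal sketch (not the paper's exact architecture): one GRU with shared
# weights processes every part; information from the other parts is
# mean-pooled, so the parameter count is independent of the number of voices.
import torch
import torch.nn as nn

class PartInvariantModel(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.gru = nn.GRU(2 * hidden, hidden, batch_first=True)  # own token + pooled context
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, parts: torch.Tensor) -> torch.Tensor:
        # parts: (batch, num_parts, time) integer token ids; num_parts is arbitrary
        b, p, t = parts.shape
        x = self.embed(parts)                       # (b, p, t, h)
        # Cross-part context: mean over the *other* parts at each time step.
        total = x.sum(dim=1, keepdim=True)          # (b, 1, t, h)
        context = (total - x) / max(p - 1, 1)       # exclude the part itself
        inp = torch.cat([x, context], dim=-1)       # (b, p, t, 2h)
        out, _ = self.gru(inp.reshape(b * p, t, -1))
        return self.head(out).reshape(b, p, t, -1)  # per-part next-token logits

# The same weights score a two-part invention or a four-part chorale:
model = PartInvariantModel(vocab_size=130)
logits = model(torch.randint(0, 130, (1, 4, 16)))   # 4 parts
logits = model(torch.randint(0, 130, (1, 2, 16)))   # 2 parts, same parameters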

MuseBERT: Pre-training Music Representation for Music Understanding and Controllable Generation

TLDR
MuseBERT is proposed, and the pre-trained model is shown to outperform the baselines in terms of reconstruction likelihood and generation quality, enabling a variety of downstream music generation and analysis tasks of practical value.

CollageNet: Fusing arbitrary melody and accompaniment into a coherent song

TLDR
A new task in the symbolic music domain, similar to the practice of music sampling, is introduced, and a neural network model named CollageNet is proposed to fulfill it; CollageNet achieves a significantly higher level of harmony than rule-based and data-driven baseline methods.

A Comprehensive Survey on Deep Music Generation: Multi-level Representations, Algorithms, Evaluations, and Future Directions

TLDR
This paper attempts to provide an overview of various composition tasks under different music generation levels, covering most of the currently popular music generation tasks using deep learning.

A-Muze-Net: Music Generation by Composing the Harmony based on the Generated Melody

TLDR
A method for generating MIDI files of piano music that models the right and left hands with two networks, where the left hand is conditioned on the right hand and the melody is generated before the harmony.

LakhNES: Improving Multi-instrumental Music Generation with Cross-domain Pre-training

TLDR
To improve the performance of the Transformer architecture, this work proposes a pre-training technique to leverage the information in a large collection of heterogeneous music, namely the Lakh MIDI dataset, and finds that this transfer learning procedure improves both quantitative and qualitative performance for the primary task.

RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning

TLDR
A deep reinforcement learning algorithm for online accompaniment generation, with potential for real-time interactive human-machine duet improvisation; it relies on a well-functioning reward model that considers the compatibility of the machine-generated note with both the machine-generated context and the human-generated context.
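
RL-Duet's reward model is learned; the toy function below only illustrates the stated idea that the reward combines the candidate note's fit with the machine's own recent notes (intra-part) and with the concurrent human note (inter-part). The interval set and weights here are invented for illustration.

# Toy sketch of the idea only (not RL-Duet's trained reward model): score a
# candidate machine note by melodic smoothness within the machine part and
# by consonance against the human's current note.
CONSONANT_INTERVALS = {0, 3, 4, 7, 8, 9, 12}  # semitone intervals treated as consonant

def reward(machine_note: int, machine_context: list, human_note: int,
           w_intra: float = 0.5, w_inter: float = 0.5) -> float:
    # Intra-part term: penalize large melodic leaps from the previous note.
    leap = abs(machine_note - machine_context[-1]) if machine_context else 0
    intra = 1.0 - min(leap, 12) / 12.0
    # Inter-part term: reward consonance against the human's current note.
    inter = 1.0 if abs(machine_note - human_note) % 12 in CONSONANT_INTERVALS else 0.0
    return w_intra * intra + w_inter * inter

print(reward(64, [62, 60], 60))  # E over C, reached by a small step: high reward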

BachDuet: A Deep Learning System for Human-Machine Counterpoint Improvisation

TLDR
BachDuet is developed, a system that enables real-time counterpoint improvisation between a human and a machine; it uses a recurrent neural network to process the human musician's monophonic performance on a MIDI keyboard and generates the machine's monophonic performance in real time.
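
The following is a hypothetical event loop illustrating the interaction pattern described, not BachDuet's implementation: at each tick the machine reads the human's current MIDI pitch, updates a recurrent state, and emits a counterpoint note. rnn_step is a placeholder standing in for the trained network.

# Illustrative real-time loop (not BachDuet's code). `rnn_step` is a
# placeholder: a real system would run the trained RNN here.
import random

def rnn_step(state, human_pitch, machine_pitch):
    # Placeholder: return (new_state, distribution over candidate pitches).
    candidates = [human_pitch - i for i in (3, 4, 7, 8, 9, 12)]  # consonances below
    return state, {p: 1.0 / len(candidates) for p in candidates}

def duet_tick(state, human_pitch, last_machine_pitch):
    state, dist = rnn_step(state, human_pitch, last_machine_pitch)
    pitches, weights = zip(*dist.items())
    machine_pitch = random.choices(pitches, weights)[0]  # sample the machine's note
    return state, machine_pitch

state, machine = None, 48
for human in [60, 62, 64, 65]:          # human melody arriving tick by tick
    state, machine = duet_tick(state, human, machine)
    print(f"human {human} -> machine {machine}")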

MG-VAE: Deep Chinese Folk Songs Generation with Specific Regional Style

TLDR
MG-VAE, a music generative model based on the Variational Auto-Encoder (VAE), is proposed; it captures specific music styles and generates novel tunes for Chinese folk songs in a manipulable way, allowing the regional style of the created songs to be controlled.
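
As a rough illustration of the style-control idea only (the summary does not specify MG-VAE's actual encoder and decoder), the sketch below splits a VAE latent into content and style halves, so that swapping the style code while keeping the content code re-renders a tune in another regional style; all names and dimensions are invented.

# Minimal VAE sketch with a separate "style" latent, illustrating the
# disentanglement idea rather than MG-VAE's architecture.
import torch
import torch.nn as nn

class StyleVAE(nn.Module):
    def __init__(self, input_dim=128, z_content=16, z_style=4):
        super().__init__()
        self.enc = nn.Linear(input_dim, 2 * (z_content + z_style))  # mu and logvar
        self.dec = nn.Linear(z_content + z_style, input_dim)
        self.z_content = z_content

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

    def swap_style(self, x_a, x_b):
        # Content code of tune A, style code of tune B (using the means).
        z_a = self.enc(x_a).chunk(2, dim=-1)[0]
        z_b = self.enc(x_b).chunk(2, dim=-1)[0]
        mixed = torch.cat([z_a[..., :self.z_content], z_b[..., self.z_content:]], dim=-1)
        return self.dec(mixed)

vae = StyleVAE()
a, b = torch.rand(1, 128), torch.rand(1, 128)
hybrid = vae.swap_style(a, b)  # tune A rendered with tune B's regional style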

MIDI2vec: Learning MIDI embeddings for reliable prediction of symbolic music metadata

TLDR
This work proposes MIDI2vec, a new approach for representing MIDI files as vectors based on graph embedding techniques that has real-world applications in automated metadata tagging for symbolic music, for example in digital libraries for musicology, datasets for machine learning, and knowledge graph completion.
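
A toy rendering of the pipeline the summary describes, under the assumption that each MIDI file becomes a small graph of its attributes; the node names are invented and real MIDI parsing is omitted. Random walks over the graph are fed to skip-gram (gensim's Word2Vec) to obtain one vector per node.

# Sketch of the graph-embedding recipe (toy graph in place of real MIDI).
import random
import networkx as nx
from gensim.models import Word2Vec

# Each MIDI file is a node linked to its programs, tempo bucket, etc.
G = nx.Graph()
G.add_edges_from([
    ("file:song_a", "program:piano"), ("file:song_a", "tempo:120"),
    ("file:song_b", "program:piano"), ("file:song_b", "tempo:90"),
    ("file:song_c", "program:guitar"), ("file:song_c", "tempo:120"),
])

def random_walks(graph, num_walks=50, length=6):
    walks = []
    for _ in range(num_walks):
        for start in graph.nodes:
            walk, node = [start], start
            for _ in range(length - 1):
                node = random.choice(list(graph.neighbors(node)))
                walk.append(node)
            walks.append(walk)
    return walks

# Skip-gram over the walks yields a vector per node; the file vectors can
# then feed a classifier for metadata prediction (genre, composer, ...).
model = Word2Vec(sentences=random_walks(G), vector_size=32, window=3,
                 min_count=0, sg=1, epochs=5)
print(model.wv.most_similar("file:song_a", topn=2))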

BachDuet: A Human-Machine Duet Improvisation System

TLDR
BachDuet is a system that enables real-time counterpoint improvisation between a human and a machine and hopes that it will serve as both an entertainment and practice tool for classical musicians to develop their improvisation skills.

References

Showing 1-10 of 25 references

A Neural Greedy Model for Voice Separation in Symbolic Music

TLDR
A corpus of popular music was manually annotated and used to train a neural network with one hidden layer connected to a diverse set of perceptually informed input features; the model obtains over 91% F-measure, surpassing a strong baseline based on iterative application of an envelope-extraction function.
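
The trained one-hidden-layer network and its perceptual features are not reproduced here; in the sketch below, a hand-set pitch-proximity score stands in for the learned scorer, so only the greedy assignment loop itself is shown.

# Toy version of the greedy voice-assignment idea: each incoming note is
# attached to the active voice that scores best under a stand-in scorer.
def score(note_pitch: int, voice: list) -> float:
    return -abs(note_pitch - voice[-1])  # prefer the voice with the closest last pitch

def assign(notes: list, num_voices: int = 2) -> list:
    voices = []
    for pitch in notes:
        if len(voices) < num_voices:
            voices.append([pitch])          # open a new voice while slots remain
            continue
        best = max(voices, key=lambda v: score(pitch, v))
        best.append(pitch)                  # greedy: attach to the best-scoring voice
    return voices

print(assign([60, 72, 62, 71, 64, 69]))  # -> [[60, 62, 64], [72, 71, 69]]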

MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment

TLDR
Three models for symbolic multi-track music generation under the framework of generative adversarial networks (GANs) are proposed; they differ in their underlying assumptions and, accordingly, their network architectures, and are referred to as the jamming model, the composer model, and the hybrid model.
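
In MuseGAN, the three variants differ chiefly in how noise codes are shared across tracks; the sketch below shows only that wiring (generators and discriminators are omitted, and the shapes are illustrative).

# How the three MuseGAN variants wire their noise inputs.
import torch

num_tracks, z_dim = 4, 64

# Jamming: each track has its own generator driven by its own private code.
z_jamming = [torch.randn(z_dim) for _ in range(num_tracks)]

# Composer: one shared code drives a single generator that emits all tracks.
z_composer = torch.randn(z_dim)

# Hybrid: a shared inter-track code plus a private intra-track code per track;
# each track's generator sees the concatenation of the two.
z_shared = torch.randn(z_dim)
z_hybrid = [torch.cat([z_shared, torch.randn(z_dim)]) for _ in range(num_tracks)]
print(z_hybrid[0].shape)  # torch.Size([128])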

DeepBach: a Steerable Model for Bach Chorales Generation

TLDR
DeepBach, a graphical model aimed at modeling polyphonic music and specifically hymn-like pieces, is introduced, which is capable of generating highly convincing chorales in the style of Bach.

Sampling Variations of Sequences for Structured Music Generation

TLDR
This work presents an approach to generating structured sequences, based on a mechanism for efficiently sampling variations of musical sequences, which can be used to implement composition strategies that enforce arbitrary structure on a lead sheet generation problem.

Automatic Stylistic Composition of Bach Chorales with Deep LSTM

TLDR
An AI system based on LSTMs was built that composes music in the style of Johann Sebastian Bach; analysis of the trained model provided evidence of neurons specializing, without prior knowledge or explicit supervision, to detect common music-theoretic concepts such as tonics, chords, and cadences.

MidiNet: A Convolutional Generative Adversarial Network for Symbolic-Domain Music Generation

TLDR
This work proposes a novel conditional mechanism to exploit available prior knowledge in a generative adversarial network (GAN), so that the model can generate melodies from scratch, by following a chord sequence, or by conditioning on the melody of previous bars.

MySong: automatic accompaniment generation for vocal melodies

TLDR
A user with no musical experience can create a song with instrumental accompaniment just by singing into a microphone, and can experiment with different styles and chord patterns using interactions designed to be intuitive to non-musicians.
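
MySong is built on a trained Hidden Markov Model over chords; the toy dynamic program below uses hand-set chord-fit and transition scores only to illustrate the shape of that computation (one chord chosen per measure to balance melody fit against smooth transitions).

# Toy Viterbi-style chord assignment (scores are hand-set, not MySong's).
CHORDS = {"C": {0, 4, 7}, "F": {5, 9, 0}, "G": {7, 11, 2}}

def fit(melody_pcs: list, chord: str) -> float:
    tones = CHORDS[chord]
    return sum(pc in tones for pc in melody_pcs) / len(melody_pcs)

def viterbi(measures: list, stay_bonus: float = 0.1) -> list:
    names = list(CHORDS)
    best = {c: (fit(measures[0], c), [c]) for c in names}
    for bar in measures[1:]:
        new = {}
        for c in names:
            prev = max(names, key=lambda p: best[p][0] + (stay_bonus if p == c else 0))
            score = best[prev][0] + (stay_bonus if prev == c else 0) + fit(bar, c)
            new[c] = (score, best[prev][1] + [c])
        best = new
    return max(best.values())[1]

# Pitch classes per measure of a sung melody (0=C, 2=D, ...):
print(viterbi([[0, 4, 7], [5, 9, 5], [7, 11, 2], [0, 0, 4]]))  # -> C F G C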

Counterpoint by Convolution

TLDR
The model is an instance of orderless NADE, which allows direct ancestral sampling; however, Gibbs sampling is found to greatly improve sample quality, which is shown to be due to some conditional distributions being poorly modeled.
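
Below is a generic sketch of blocked Gibbs sampling for infilling, the recipe the summary refers to; model_conditional is a placeholder standing in for a trained orderless-NADE-style network, here returning a uniform distribution.

# Blocked Gibbs over a parts-by-time grid: repeatedly re-sample a random
# block of cells from the model's conditional given all other cells.
import random

PITCHES = list(range(60, 73))

def model_conditional(grid, part, step):
    # Placeholder: a real model returns p(note | all other cells).
    return {p: 1.0 / len(PITCHES) for p in PITCHES}

def gibbs_sample(grid, steps=100, block_frac=0.25):
    cells = [(p, t) for p in range(len(grid)) for t in range(len(grid[0]))]
    for _ in range(steps):
        block = random.sample(cells, max(1, int(block_frac * len(cells))))
        for part, t in block:                 # re-sample a random block of cells
            dist = model_conditional(grid, part, t)
            pitches, weights = zip(*dist.items())
            grid[part][t] = random.choices(pitches, weights)[0]
    return grid

# 4 parts x 8 time steps, initialized arbitrarily and then refined by Gibbs.
grid = [[60] * 8 for _ in range(4)]
print(gibbs_sample(grid)[0])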

Comparing Voice and Stream Segmentation Algorithms

TLDR
This work proposes an independent evaluation of four voice and stream segmentation algorithms (Temperley, Chew and Wu, Ishigaki et al., and Rafailidis et al.) using several evaluation metrics, and discusses their strengths and weaknesses.

Music21: A Toolkit for Computer-Aided Musicology and Symbolic Music Data

TLDR
This paper introduces the music21 system, demonstrating how to use it and the types of problems it is well-suited to advancing, and includes numerous examples of its power and flexibility.
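
To make the kind of query music21 enables concrete, the snippet below loads a Bach chorale from the corpus bundled with the library, estimates its key, and chordifies the texture into vertical sonorities.

# Load a bundled Bach chorale, analyze its key, and inspect its chords.
from music21 import corpus

chorale = corpus.parse('bach/bwv66.6')       # ships with music21
print(chorale.analyze('key'))                # e.g. f# minor
chords = chorale.chordify()                  # collapse the parts into chords
for c in list(chords.recurse().getElementsByClass('Chord'))[:5]:
    print(c.measureNumber, c.pitchedCommonName)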