Corpus ID: 52220918

Part-invariant Model for Music Generation and Harmonization

@inproceedings{Yan2018PartinvariantMF,
  title={Part-invariant Model for Music Generation and Harmonization},
  author={Yujia Yan and Ethan Lustig and Joseph VanderStel and Zhiyao Duan},
  booktitle={ISMIR},
  year={2018}
}
Automatic music generation has been gaining attention in recent years. Existing approaches, however, are mostly ad hoc to specific rhythmic structures or instrumentation layouts, and lack music-theoretic rigor in their evaluations. In this paper, we present a neural language (music) model of symbolic multi-part music. Our model is part-invariant, i.e., it can process or generate any part (voice) of a music score consisting of an arbitrary number of parts, using a single…
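
The abstract is cut off above. As a purely illustrative sketch, and not the authors' architecture, one way to obtain part invariance is to share a single network across all parts and to pool the context of the other parts with a permutation-invariant operation, so the same weights apply to a score with any number of parts:

```python
# Illustrative sketch only (assumed architecture, not the paper's): one shared
# GRU encodes every part, and the other parts' states are mean-pooled, which is
# permutation-invariant, so the model handles any number of parts.
import torch
import torch.nn as nn


class PartInvariantSketch(nn.Module):
    def __init__(self, vocab_size=130, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)  # shared by all parts
        self.head = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, parts):
        # parts: (batch, num_parts, time) integer note tokens
        b, p, t = parts.shape
        h, _ = self.encoder(self.embed(parts.view(b * p, t)))
        h = h.view(b, p, t, -1)
        # Mean-pool the states of the *other* parts at every time step.
        others = (h.sum(dim=1, keepdim=True) - h) / max(p - 1, 1)
        return self.head(torch.cat([h, others], dim=-1))  # (b, p, t, vocab)


# The same weights score a duet and a four-part chorale alike.
model = PartInvariantSketch()
print(model(torch.randint(0, 130, (1, 2, 16))).shape)
print(model(torch.randint(0, 130, (1, 4, 16))).shape)
```

The mean pooling is what keeps the parameter count independent of the number of parts.
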
Citations

A Comprehensive Survey on Deep Music Generation: Multi-level Representations, Algorithms, Evaluations, and Future Directions
This paper provides an overview of composition tasks at different levels of music generation, covering most of the currently popular deep-learning-based music generation tasks.
LakhNES: Improving Multi-instrumental Music Generation with Cross-domain Pre-training
To improve the performance of the Transformer architecture, this work proposes a pre-training technique to leverage the information in a large collection of heterogeneous music, namely the Lakh MIDI dataset, and finds that this transfer learning procedure improves both quantitative and qualitative performance for the primary task.
RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning
This work proposes a deep reinforcement learning algorithm for online accompaniment generation, with potential for real-time interactive human-machine duet improvisation; it is based on a reward model that considers the compatibility of each machine-generated note with both the machine's own context and the human-generated context.
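
As a rough, hypothetical illustration of that reward idea (RL-Duet's actual reward model is learned; the two stand-in scoring functions below are invented for the sketch), a candidate note's reward can blend its compatibility with the machine's own context and with the human's context:

```python
# Hypothetical sketch: blend intra-part and inter-part compatibility scores.
import random


def intra_part_score(note, machine_context):
    # Stand-in: prefer small melodic steps from the machine's previous note.
    return -abs(note - machine_context[-1]) / 12.0


def inter_part_score(note, human_context):
    # Stand-in: prefer consonant intervals against the human's latest note.
    return 0.0 if abs(note - human_context[-1]) % 12 in (0, 3, 4, 7, 8, 9) else -1.0


def reward(note, machine_context, human_context, alpha=0.5):
    """Weighted blend of compatibility with both contexts."""
    return (alpha * intra_part_score(note, machine_context)
            + (1 - alpha) * inter_part_score(note, human_context))


machine, human = [60, 62, 64], [48, 55, 52]     # recent MIDI pitches
candidate = random.choice(range(55, 72))
print(candidate, round(reward(candidate, machine, human), 3))
```
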
BachDuet: A Deep Learning System for Human-Machine Counterpoint Improvisation
During the Baroque period, improvisation was a key element of music performance and education. Great musicians, such as J.S. Bach, were better known as improvisers than as composers. Today, however, …
MG-VAE: Deep Chinese Folk Songs Generation with Specific Regional Style
This paper proposes MG-VAE, a generative music model based on the variational auto-encoder (VAE) that captures specific musical styles and generates novel Chinese folk tunes with controllable regional styles.
BACHDUET: A HUMAN-MACHINE DUET IMPROVISATION SYSTEM
In what we today call the Baroque period, improvisation was a key element of music performance and education. Great musicians such as J. S. Bach were better known as improvisers …
When Counterpoint Meets Chinese Folk Melodies
This paper proposes a reinforcement learning-based system, FolkDuet, for online countermelody generation for Chinese folk melodies; since no data of Chinese folk duets exists, it employs two reward models based on out-of-domain data.

References

Showing 1–10 of 25 references
A Neural Greedy Model for Voice Separation in Symbolic Music
A corpus of popular music was manually annotated and used to train a one-hidden-layer neural network over a diverse set of perceptually informed input features; the model obtains over 91% F-measure, surpassing a strong baseline based on iterative application of an envelope-extraction function.
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment
Three models for symbolic multi-track music generation under the framework of generative adversarial networks (GANs) are proposed, referred to as the jamming model, the composer model, and the hybrid model; all can generate coherent four-bar music from scratch.
DeepBach: a Steerable Model for Bach Chorales Generation
DeepBach, a graphical model aimed at modeling polyphonic music, specifically hymn-like pieces, is introduced; it is capable of generating highly convincing chorales in the style of Bach.
Sampling Variations of Sequences for Structured Music Generation
This work presents an approach to generating structured sequences, based on a mechanism for efficiently sampling variations of musical sequences, which can be used to implement composition strategies that enforce arbitrary structure on a lead sheet generation problem.
Automatic Stylistic Composition of Bach Chorales with Deep LSTM
An LSTM-based AI system able to compose music in the style of Johann Sebastian Bach was built; analysis of the trained model provided evidence of neurons specializing, without prior knowledge or explicit supervision, to detect common music-theoretic concepts such as tonics, chords, and cadences.
MidiNet: A Convolutional Generative Adversarial Network for Symbolic-Domain Music Generation
This work proposes a novel conditional mechanism to exploit available prior knowledge, so that the generative adversarial network (GAN) can generate melodies either from scratch, by following a chord sequence, or by conditioning on the melody of previous bars.
MySong: automatic accompaniment generation for vocal melodies
A user with no musical experience can create a song with instrumental accompaniment just by singing into a microphone, and can experiment with different styles and chord patterns using interactions designed to be intuitive to non-musicians.
Counterpoint by Convolution
This model is an instance of orderless NADE, which allows more direct ancestral sampling; the authors find that Gibbs sampling greatly improves sample quality and demonstrate that this is due to some conditional distributions being poorly modeled.
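
An illustrative sketch of that Gibbs-style resampling loop, using a hand-written stand-in for the learned conditional distribution (the paper's model is a convolutional network, not the heuristic below):

```python
# Sketch: repeatedly blank random cells of a multi-part score and refill them
# from a conditional distribution given everything that remains.
import random

PITCHES = list(range(60, 72))           # one octave of MIDI pitches


def conditional_sample(score, part, step):
    # Stand-in for p(note | rest of the score): stay near surviving neighbours.
    neighbours = [score[part][t] for t in (step - 1, step + 1)
                  if 0 <= t < len(score[part]) and score[part][t] is not None]
    center = neighbours[0] if neighbours else random.choice(PITCHES)
    return min(max(center + random.choice([-2, -1, 0, 1, 2]), 60), 71)


def gibbs(score, sweeps=50, block=4):
    parts, steps = len(score), len(score[0])
    for _ in range(sweeps):
        cells = random.sample([(p, t) for p in range(parts) for t in range(steps)], block)
        for p, t in cells:              # erase a random block of cells
            score[p][t] = None
        for p, t in cells:              # refill them one at a time
            score[p][t] = conditional_sample(score, p, t)
    return score


score = [[random.choice(PITCHES) for _ in range(16)] for _ in range(4)]
print(gibbs(score)[0])                  # resampled top part
```
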
Comparing Voice and Stream Segmentation Algorithms
This work proposes an independent evaluation of four voice and stream segmentation algorithms (Temperley, Chew and Wu, Ishigaki et al., and Rafailidis et al.) using several evaluation metrics, and discusses their strengths and weaknesses.
Music21: A Toolkit for Computer-Aided Musicology and Symbolic Music Data
This paper introduces the music21 system, demonstrating how to use it and the types of problems it is well suited to advancing, and includes numerous examples of its power and flexibility.
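
For readers unfamiliar with the toolkit, a minimal usage example (after installing the music21 package); bwv66.6 is one of the Bach chorales bundled with the library's corpus:

```python
from music21 import corpus

chorale = corpus.parse('bach/bwv66.6')          # load a bundled Bach chorale
print(len(chorale.parts), 'parts')              # SATB -> 4
print('estimated key:', chorale.analyze('key'))
for n in list(chorale.parts[0].recurse().notes)[:5]:
    print(n, n.quarterLength)                   # first few soprano notes
```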