Corpus ID: 4509737

Counterpoint by Convolution

@inproceedings{Huang2017CounterpointBC,
  title={Counterpoint by Convolution},
  author={Cheng-Zhi Anna Huang and Tim Cooijmans and Adam Roberts and Aaron C. Courville and Douglas Eck},
  booktitle={ISMIR},
  year={2017}
}
Machine learning models of music typically break up the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end. On the contrary, human composers write music in a nonlinear fashion, scribbling motifs here and there, often revisiting choices previously made. In order to better approximate this process, we train a convolutional neural network to complete partial musical scores, and explore the use of blocked Gibbs sampling as an analogue… 
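As a concrete illustration of the procedure the abstract describes, here is a minimal Python sketch of blocked Gibbs sampling over a piano roll. It is an assumption-laden sketch, not the authors' implementation: predict_probs is a hypothetical stand-in for the paper's trained convolutional network, and the shapes, pitch vocabulary, and annealed masking schedule are placeholders for illustration only.

# A minimal sketch (an assumption, not the authors' code) of blocked Gibbs
# sampling for completing a partial musical score, in the spirit of the
# approach the abstract describes. `predict_probs` is a hypothetical stand-in
# for the trained convolutional network; shapes and schedule are placeholders.
import numpy as np

T, P, V = 64, 4, 53   # time steps, parts (voices), pitch vocabulary size

def roll_onehot(roll):
    """One-hot encode a (T, P) piano roll of pitch indices into (T, P, V)."""
    return np.eye(V)[roll]

def predict_probs(masked_roll, mask):
    """Hypothetical model call: given a piano roll with masked positions zeroed
    out, return (T, P, V) pitch probabilities conditioned on the visible notes."""
    raise NotImplementedError  # stands in for the trained CNN

def blocked_gibbs(roll, num_steps=100, block_frac=0.5, rng=None):
    """Repeatedly erase a random block of notes and resample them from the
    model, so earlier choices can be revisited rather than being fixed by a
    single left-to-right pass."""
    rng = np.random.default_rng() if rng is None else rng
    roll = roll.copy()
    for step in range(num_steps):
        # Anneal the block size: rewrite broadly early on, refine locally later.
        frac = max(0.05, block_frac * (1 - step / num_steps))
        mask = rng.random((T, P)) < frac  # True at positions to resample
        visible = np.where(mask[..., None], 0.0, roll_onehot(roll))
        probs = predict_probs(visible, mask)
        for t, p in zip(*np.nonzero(mask)):
            roll[t, p] = rng.choice(V, p=probs[t, p])
    return roll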
The Effect of Explicit Structure Encoding of Deep Neural Networks for Symbolic Music Generation
  • Ke Chen, Weilin Zhang, S. Dubnov, Gus Xia, Wei Li
  • 2019 International Workshop on Multilayer Music Representation and Processing (MMRP), 2019
TLDR
This study attempts to solve the melody generation problem constrained by the given chord progression, and explores the effect of explicit architectural encoding of musical structure via comparing two sequential generative models: LSTM and WaveNet.
Encoding Musical Style
We consider the problem of learning high-level controls over the global structure of sequence generation, particularly in the context of symbolic music generation with complex language models…
Transferring the Style of Homophonic Music Using Recurrent Neural Networks and Autoregressive Model
TLDR
This paper discusses the style transfer of homophonic music, composed of a predominant melody part and an accompaniment part, where the latter is modified through Gibbs sampling on a generative model combining recurrent neural networks and autoregressive models.
Learning to Groove with Inverse Sequence Transformations
TLDR
Though Seq2Seq models usually require painstakingly aligned corpora, this work shows that an approach from the Generative Adversarial Network (GAN) literature can be adapted to sequences, creating large volumes of paired data by performing simple transformations and training generative models to plausibly invert them.
Approachable Music Composition with Machine Learning at Scale
TLDR
This paper presents the Bach Doodle, the first AI-powered Google Doodle, in which users can create their own melody and have it harmonized in the style of Bach by the machine learning model Coconet, through a simplified sheet-music-based interface.
MuML: Musical Meta-Learning
In the following work, we investigate the performance of meta-learning approaches on predicting sequences of music. Our work continues prior research in both artificial music generation and…
Melody Generation for Pop Music via Word Representation of Musical Properties
TLDR
This paper proposes to represent each note and its properties as a unique ‘word,’ thus lessening the prospect of misalignments between the properties, as well as reducing the complexity of learning.
Music Transformer
TLDR
It is demonstrated that a Transformer with the modified relative attention mechanism can generate minute-long compositions with compelling structure, generate continuations that coherently elaborate on a given motif, and in a seq2seq setup generate accompaniments conditioned on melodies.
2019 Formatting Instructions for Authors Using LaTeX
With recent breakthroughs in artificial neural networks, deep generative models have become one of the leading techniques for computational creativity. Despite very promising progress on image and…

References

Showing 1-10 of 42 references
DeepBach: a Steerable Model for Bach Chorales Generation
TLDR
DeepBach, a graphical model aimed at modeling polyphonic music and specifically hymn-like pieces, is introduced, which is capable of generating highly convincing chorales in the style of Bach.
Style Imitation and Chord Invention in Polyphonic Music with Exponential Families
TLDR
A statistical model of polyphonic music, based on the maximum entropy principle, is proposed; it is able to learn and reproduce pairwise statistics between neighboring note events in a given corpus, and to invent new chords and harmonize unknown melodies.
Imposing higher-level Structure in Polyphonic Music Generation using Convolutional Restricted Boltzmann Machines and Constraints
TLDR
A Convolutional Restricted Boltzmann Machine as a generative model is combined with gradient descent constraint optimisation to provide further control over the generation process, and it is possible to control the higher-level self-similarity structure, the meter, and the tonal properties of the resulting musical piece, while preserving its local musical coherence.
WaveNet: A Generative Model for Raw Audio
TLDR
WaveNet, a deep neural network for generating raw audio waveforms, is introduced; it is shown that it can be efficiently trained on data with tens of thousands of samples per second of audio, and can be employed as a discriminative model, returning promising results for phoneme recognition.
Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription
TLDR
A probabilistic model based on distribution estimators conditioned on a recurrent neural network is introduced; it discovers temporal dependencies in high-dimensional sequences and outperforms many traditional models of polyphonic music on a variety of realistic datasets.
Polyphonic Music Generation by Modeling Temporal Dependencies Using a RNN-DBN
TLDR
The technique, RNN-DBN, combines the memory state of an RNN, which provides temporal information, with a multi-layer DBN, which provides a high-level representation of the data, making it well suited for sequence generation.
Professor Forcing: A New Algorithm for Training Recurrent Networks
TLDR
The Professor Forcing algorithm, which uses adversarial domain adaptation to encourage the dynamics of the recurrent network to be the same when training the network and when sampling from the network over multiple time steps, is introduced.
Music Generation from Statistical Models
TLDR
This paper presents several methods for sampling from an analytic statistical model, and proposes a new approach that maintains the intra-opus pattern repetition within an extant piece.
A note on the evaluation of generative models
TLDR
This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models, with a focus on image models, and shows that three of the currently most commonly used criteria (average log-likelihood, Parzen window estimates, and visual fidelity of samples) are largely independent of each other when the data is high-dimensional.
How (not) to Train your Generative Model: Scheduled Sampling, Likelihood, Adversary?
TLDR
This paper presents a critique of scheduled sampling, a state-of-the-art training method that contributed to the winning entry to the MSCOCO image captioning benchmark in 2015, and presents the first theoretical analysis that explains why adversarial training tends to produce samples with higher perceived quality.