• Corpus ID: 6977341

Automatic Stylistic Composition of Bach Chorales with Deep LSTM

@inproceedings{Liang2017AutomaticSC,
  title={Automatic Stylistic Composition of Bach Chorales with Deep LSTM},
  author={Feynman T. Liang and Mark Gotham and Matthew Johnson and Jamie Shotton},
  booktitle={International Society for Music Information Retrieval Conference},
  year={2017}
}
This paper presents “BachBot”: an end-to-end automatic composition system for composing and completing music in the style of Bach’s chorales using a deep long short-term memory (LSTM) generative model. Key result: among participants’ responses, the proportion correctly differentiating BachBot from Bach was only 1% better than random guessing.
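As a rough illustration of the recurrence at the heart of such a model (a minimal sketch in NumPy, not the authors’ code; all names and shapes here are assumptions), a single LSTM cell step over one-hot “note” tokens looks like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Single-step LSTM forward pass (no training), for illustration only."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the input, forget, cell and output gates.
        self.W = rng.normal(0.0, 0.1, (4 * hidden_size, input_size + hidden_size))
        self.b = np.zeros(4 * hidden_size)
        self.hidden_size = hidden_size

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden_size
        i = sigmoid(z[0:H])        # input gate
        f = sigmoid(z[H:2 * H])    # forget gate
        g = np.tanh(z[2 * H:3 * H])  # candidate cell state
        o = sigmoid(z[3 * H:4 * H])  # output gate
        c_new = f * c + i * g      # carry long-term memory forward
        h_new = o * np.tanh(c_new)
        return h_new, c_new

# Usage: feed a toy sequence of one-hot note tokens through the cell.
vocab, hidden = 8, 16
cell = LSTMCell(vocab, hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
for note in [0, 4, 7, 4]:          # a toy arpeggio as token ids
    x = np.eye(vocab)[note]
    h, c = cell.step(x, h, c)
print(h.shape)  # (16,)
```

In a full system of this kind, `h` would feed a softmax over the next musical token; BachBot’s actual architecture, vocabulary, and training details are described in the paper itself.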


Differential Music: Automated Music Generation Using LSTM Networks with Representation Based on Melodic and Harmonic Intervals

The result of this preparation step which can be considered as an encoding of the original data is commonly referred to as “representation” in the machine-learning terminology.
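As a concrete (assumed, not taken from the paper) example of such an encoding step, a melody can be represented by its melodic intervals rather than absolute pitches, which makes the representation transposition-invariant:

```python
def to_intervals(midi_pitches):
    """Encode a pitch sequence as successive semitone differences."""
    return [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]

def from_intervals(start_pitch, intervals):
    """Decode back to absolute pitches given the starting note."""
    pitches = [start_pitch]
    for step in intervals:
        pitches.append(pitches[-1] + step)
    return pitches

melody = [60, 62, 64, 62, 60]   # C D E D C as MIDI note numbers
codes = to_intervals(melody)    # [2, 2, -2, -2]
assert from_intervals(60, codes) == melody
```

Any transposition of the same melody produces the same interval code, which is one motivation for interval-based representations in melody modeling.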

Next Bar Predictor: An Architecture in Automated Music Generation

The Next Bar Predictor is a generative model that creates melody one bar at a time using the previous bar as basis to generate aesthetically pleasing melodies, and based on the evaluation by human listeners, the melodies generated by these models are more realistic and pleasing than those of the Midinet.

Bach or Mock? A Grading Function for Chorales in the Style of J.S. Bach

This paper introduces a grading function that evaluates four-part chorales in the style of J.S. Bach along important musical features, and shows that the function is both interpretable and outperforms human experts at discriminating Bach chorale from model-generated ones.

Continuous Melody Generation via Disentangled Short-Term Representations and Structural Conditions

  • K. Chen, Gus Xia, S. Dubnov
  • Computer Science
    2020 IEEE 14th International Conference on Semantic Computing (ICSC)
  • 2020
A model for composing melodies given a user-specified symbolic scenario combined with a preceding musical context; it is capable of generating long melodies by treating 8-beat note sequences as basic units, and shares a consistent rhythm-pattern structure with another specific song.

Part-invariant Model for Music Generation and Harmonization

A neural language (music) model of symbolic multi-part music that can process or generate any part (voice) of a music score consisting of an arbitrary number of parts, using a single trained model.

Style-Conditioned Music Generation

This work proposes a new formulation of the VAE that allows users to condition on the style of the generated music, and shows that the proposed model generates better music samples of each style than a baseline model.

Learning to Generate Music with BachProp

BachProp, an algorithmic composer that can generate music scores in many styles given sufficient training data, is presented; a novel representation of music is proposed, and a deep network is trained to predict the note-transition probabilities of a given music corpus.

BacHMMachine: An Interpretable and Scalable Model for Algorithmic Harmonization for Four-part Baroque Chorales

BacHMMachine is proposed, which employs a “theory-driven” framework guided by music-composition principles along with a “data-driven” model for learning compositional features within this framework, providing a probabilistic framework for learning key modulations and chordal progressions from a given melodic line.

Music Generation Using Deep Learning Techniques

A Restricted Boltzmann Machine (RBM) and a Recurrent Neural Network–Restricted Boltzmann Machine are used for music generation by training them on a collection of Musical Instrument Digital Interface (MIDI) files.

BassNet: A Variational Gated Autoencoder for Conditional Generation of Bass Guitar Tracks with Learned Interactive Control

BassNet, a deep learning model for generating bass guitar tracks based on musical source material is presented, which is trained to learn a temporally stable two-dimensional latent space variable that offers interactive user control.
...

References

Showing 1–10 of 43 references

Finding temporal structure in music: blues improvisation with LSTM recurrent networks

  • D. Eck, J. Schmidhuber
  • Computer Science
    Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing
  • 2002
Long short-term memory (LSTM) has succeeded in domains where other RNNs have failed, such as timing, counting, and learning context-sensitive languages; it is shown that LSTM is also a good mechanism for learning to compose music.

DeepBach: a Steerable Model for Bach Chorales Generation

DeepBach, a graphical model aimed at modeling polyphonic music and specifically hymn-like pieces, is introduced, which is capable of generating highly convincing chorales in the style of Bach.

Neural Network Music Composition by Prediction: Exploring the Benefits of Psychoacoustic Constraints and Multi-scale Processing

  • M. Mozer
  • Computer Science
    Connect. Sci.
  • 1994
An extension of this transition-table approach is described, using a recurrent autopredictive connectionist network called CONCERT, which is trained on a set of pieces with the aim of extracting stylistic regularities, and which incorporates psychologically grounded representations of pitch, duration, and harmonic structure.

Jazz Melody Generation from Recurrent Network Learning of Several Human Melodies

This work experiments with two LSTM modules that cooperatively learn several human melodies, based on the songs’ harmonic structures and the feedback inherent in the network, and shows that these networks can learn to reproduce four human melodies.

Developing and evaluating computational models of musical style

Two computational models of stylistic composition are described and evaluated: Racchman-Oct2010 (random constrained chain of Markovian nodes, October 2010) and Racchmaninof-Oct2010 (Racchman with inheritance of form), which embeds the former model in an analogy-based design system.

MELONET I: Neural Nets for Inventing Baroque-Style Chorale Variations

MELONET I is a multi-scale neural network system producing baroque-style melodic variations, able to learn and reproduce high-order structure like harmonic, motif and phrase structure in melodic sequences.

Polyphonic Music Generation by Modeling Temporal Dependencies Using a RNN-DBN

The technique, RNN-DBN, combines the memory state of an RNN, which provides temporal information, with a multi-layer DBN, which provides a high-level representation of the data, making it well suited for sequence generation.

Recurrent Neural Networks for Music Computation

Findings are presented showing that a long short-term memory recurrent network, with new representations that include music knowledge, can learn musical tasks and can learn to reproduce long songs.

An Expert System for Harmonizing Four-Part Chorales

Quite a few trends in algorithmic composition today are based on a streamlined formalism, for example, in the form of random generation of note attributes using elegant statistical distributions, terse and powerful formal grammars, or generalizations of serial composition procedures.

Harmonizing Music the Boltzmann Way

The authors' experiments demonstrate that using an EBM, ‘good’ harmonies can be non-deterministically synthesized along with a relative measure of their quality.