Attentional networks for music generation

  • Gullapalli Keerti, A N Vaishnavi, Prerana Mukherjee, Aparna S Vidya, Gattineni Sai Sreenithya, Deeksha Nayab
Realistic music generation has always remained a challenging problem, as generated music may lack structure or rationality. In this work, we propose a deep-learning-based music generation method to produce old-style music, particularly jazz, with rehashed melodic structures, utilizing a Bi-directional Long Short-Term Memory (Bi-LSTM) neural network with attention. Owing to their success in modelling long-term temporal dependencies in sequential data, including video, Bi-LSTMs with…
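The core mechanism described above, attention pooling over the hidden states of a Bi-LSTM, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the Bi-LSTM outputs `H` (one state per timestep) are already computed, and uses a fixed random scoring vector `w` in place of learned parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w):
    """Additive-style attention over Bi-LSTM outputs.
    H: (T, d) hidden states; w: (d,) scoring vector (fixed here,
    learned in practice). Returns a context vector and the weights."""
    scores = H @ w            # (T,) one relevance score per timestep
    alpha = softmax(scores)   # attention weights sum to 1
    return alpha @ H, alpha   # weighted sum of states, plus weights

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4))   # 8 timesteps, 4-dim states
w = rng.standard_normal(4)
context, alpha = attention_pool(H, w)
```

The context vector then conditions the next-note prediction, letting the model weight distant melodic events instead of relying on the final hidden state alone.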

The Psychological Education Strategy of Music Generation and Creation by Generative Confrontation Network under Deep Learning

Both subjective and objective evaluations show that the generated music is more favored by the audience, indicating that the combination of deep learning and GAN has a great effect on music generation.

Folk melody generation based on CNN-BiGRU and Self-Attention

This paper proposes a melody generation network based on CNN-BiGRU and Self-Attention and shows that the prediction accuracy of the proposed model is improved and achieves improvement in other evaluation measures.
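The self-attention component used by such melody-generation networks is standard scaled dot-product attention. A minimal single-head sketch, assuming toy dimensions and random projection matrices rather than anything from the paper:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a note sequence.
    X: (T, d_model) note embeddings. Returns (output, attention matrix)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (T, T) pairwise affinities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)            # row-wise softmax
    return A @ V, A                               # each note attends to all notes

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 8))                   # 6 notes, 8-dim embeddings
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
```

Each row of `A` is a probability distribution over the sequence, which is what lets the melody model relate non-adjacent notes directly, complementing the local patterns captured by the CNN-BiGRU front end.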

A Review of Intelligent Music Generation Systems

A comprehensive survey and analysis of recent intelligent music generation techniques is conducted, providing a critical discussion, explicitly identifying their respective characteristics, and presenting them in a general table.

Bach in 2014: Music Composition with Recurrent Neural Network

It is shown that an LSTM network properly learns the structure and characteristics of music pieces, demonstrated by its ability to recreate music by predicting existing pieces, and that RProp outperforms backpropagation through time (BPTT).

Finding temporal structure in music: blues improvisation with LSTM recurrent networks

  • D. Eck, J. Schmidhuber
  • Computer Science
    Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing
  • 2002
Long short-term memory (LSTM) has succeeded in similar domains where other RNNs have failed, such as timing and counting and the learning of context sensitive languages, and it is shown that LSTM is also a good mechanism for learning to compose music.

A First Look at Music Composition using LSTM Recurrent Neural Networks

Long Short-Term Memory is shown to be able to play the blues with good timing and proper structure as long as one is willing to listen, and once the network has found the relevant structure it does not drift from it.

Interactive Music Generation with Positional Constraints using Anticipation-RNNs

This paper introduces a novel architecture called Anticipation-RNN which possesses the assets of the RNN-based generative models while allowing to enforce user-defined positional constraints and demonstrates its efficiency on the task of generating melodies satisfying positional constraints in the style of the soprano parts of the J.S. Bach chorale harmonizations.

Generating Polyphonic Music Using Tied Parallel Networks

A neural network architecture is presented that enables prediction and composition of polyphonic music in a manner that preserves the translation-invariance of the dataset; it attains high performance at a musical prediction task and successfully creates note sequences with measure-level musical structure.

MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment

Three models for symbolic multi-track music generation under the framework of generative adversarial networks (GANs) are proposed; they differ in their underlying assumptions and, accordingly, their network architectures, and are referred to as the jamming model, the composer model, and the hybrid model.

MidiNet: A Convolutional Generative Adversarial Network for Symbolic-Domain Music Generation

This work proposes a novel conditional mechanism for a generative adversarial network (GAN) that exploits available prior knowledge, so that the model can generate melodies either from scratch, by following a chord sequence, or by conditioning on the melody of previous bars.

A Study on LSTM Networks for Polyphonic Music Sequence Modelling

This paper investigates the predictive power of simple LSTM networks for polyphonic MIDI sequences, using an empirical approach, and suggests that for AMT, a musically-relevant sample rate is crucial in order to model note transitions, beyond a simple smoothing effect.

Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription

A probabilistic model is introduced, based on distribution estimators conditioned on a recurrent neural network, that is able to discover temporal dependencies in high-dimensional sequences and outperforms many traditional models of polyphonic music on a variety of realistic datasets.
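The idea of conditioning per-timestep distribution estimators on a recurrent state can be illustrated with a deliberately simplified stand-in: an RNN hidden state that parameterizes independent Bernoulli outputs over pitches for each frame of a binary piano roll. The real model uses a richer estimator (e.g. NADE/RBM-style) per frame; everything below, including weights and dimensions, is illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def piano_roll_loglik(X, Wh, Wx, Wo, b):
    """Log-likelihood of a binary piano roll X (T, n_pitch) under a toy
    RNN whose hidden state conditions per-pitch Bernoulli outputs; a
    simplified stand-in for RNN-conditioned distribution estimators."""
    h = np.zeros(Wh.shape[0])
    ll = 0.0
    for x_t in X:
        p = np.clip(sigmoid(Wo @ h + b), 1e-9, 1 - 1e-9)   # predict frame t
        ll += np.sum(x_t * np.log(p) + (1 - x_t) * np.log1p(-p))
        h = np.tanh(Wh @ h + Wx @ x_t)   # condition on the observed frame
    return ll

rng = np.random.default_rng(2)
T, n_pitch, n_hid = 16, 12, 8
X = (rng.random((T, n_pitch)) < 0.2).astype(float)   # sparse binary roll
Wh = rng.standard_normal((n_hid, n_hid)) * 0.1
Wx = rng.standard_normal((n_hid, n_pitch)) * 0.1
Wo = rng.standard_normal((n_pitch, n_hid)) * 0.1
b = np.zeros(n_pitch)
ll = piano_roll_loglik(X, Wh, Wx, Wo, b)
```

Training maximizes this log-likelihood; generation samples each frame from the conditioned distribution and feeds the sample back into the recurrence.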

Creating melodies with evolving recurrent neural networks

  • C.-C. J. Chen, R. Miikkulainen
  • Computer Science
    IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)
  • 2001
This work observes that the model learns to generate melodies according to composition rules on tonality and rhythm, with interesting variations, and evolves a neural network that maximizes the chance of generating good melodies.