Corpus ID: 2002865

MidiNet: A Convolutional Generative Adversarial Network for Symbolic-Domain Music Generation

@article{Yang2017MidiNetAC,
  title={MidiNet: A Convolutional Generative Adversarial Network for Symbolic-Domain Music Generation},
  author={Li-Chia Yang and Szu-Yu Chou and Yi-Hsuan Yang},
  journal={ArXiv},
  year={2017},
  volume={abs/1703.10847}
}
Most existing neural network models for music generation use recurrent neural networks. MidiNet instead uses a convolutional network to generate melodies in the symbolic domain, and, in addition to the generator, a discriminator is used to learn the distributions of melodies, making it a generative adversarial network (GAN).
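As a rough illustration of this design, the following is a minimal PyTorch sketch of a bar-wise CNN generator paired with a CNN discriminator; the 16-step-by-128-pitch piano-roll shape and all layer sizes are illustrative assumptions, not the paper's exact architecture.

# Minimal sketch of a bar-wise GAN over piano-rolls (shapes assumed:
# one bar = 1 x 16 time steps x 128 pitches; layer sizes illustrative).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256 * 4 * 32),
            nn.ReLU(),
            nn.Unflatten(1, (256, 4, 32)),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # -> 8 x 64
            nn.ReLU(),
            nn.ConvTranspose2d(128, 1, 4, stride=2, padding=1),    # -> 16 x 128
            nn.Sigmoid(),  # note-on probability per (time step, pitch)
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1),    # -> 8 x 64
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # -> 4 x 32
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 4 * 32, 1),  # real/fake logit
        )
    def forward(self, bar):
        return self.net(bar)

The generator maps a noise vector to one bar of music while the discriminator scores whole bars, which is the adversarial pairing the abstract describes.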

Citations

Music Generation Using Generative Adversarial Networks

By means of a user study, it is concluded that the music segments generated by the implemented system are not noise and are actually musically pleasing.

Polyphonic Music Generation with Sequence Generative Adversarial Networks

The proposed method condenses duration, octaves, and keys of both melodies and chords into a single word vector representation, and recurrent neural networks learn to predict distributions of sequences from the embedded musical word space.
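A compact sketch of this "musical word" idea, assuming a hypothetical token vocabulary in which each (pitch, octave, duration) event has already been mapped to a single integer ID (all sizes are illustrative, not the paper's):

# Embed composite musical "words" and predict the next-event distribution.
import torch
import torch.nn as nn

VOCAB = 500  # assumed number of distinct (pitch, octave, duration) words

class WordLSTM(nn.Module):
    def __init__(self, vocab=VOCAB, emb=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)
    def forward(self, tokens):             # tokens: N x T integer IDs
        h, _ = self.lstm(self.embed(tokens))
        return self.head(h)                # N x T x vocab logits

model = WordLSTM()
seq = torch.randint(0, VOCAB, (8, 32))     # a toy batch of token sequences
next_dist = torch.softmax(model(seq)[:, -1], dim=-1)  # next-word distribution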

MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment

Three models for symbolic multi-track music generation under the framework of generative adversarial networks (GANs) are proposed; they differ in their underlying assumptions and, accordingly, their network architectures, and are referred to as the jamming model, the composer model, and the hybrid model.
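The difference between the three variants can be pictured purely in how the latent input is wired; the sketch below shows only that wiring, with shapes assumed for illustration (it is not MuseGAN's actual networks):

# Shared vs. private latent codes for multi-track generation (schematic).
import torch

n_tracks, z_dim, batch = 4, 64, 2

# Jamming: each track gets its own independent latent code.
z_jam = [torch.randn(batch, z_dim) for _ in range(n_tracks)]

# Composer: a single shared latent code drives all tracks at once.
z_comp = torch.randn(batch, z_dim)

# Hybrid: each track concatenates a shared (inter-track) code with a private one.
z_shared = torch.randn(batch, z_dim)
z_hyb = [torch.cat([z_shared, torch.randn(batch, z_dim)], dim=1)
         for _ in range(n_tracks)]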

MuseGAN: Symbolic-domain Music Generation and Accompaniment with Multi-track Sequential Generative Adversarial Networks

This paper proposes and studies three generative adversarial networks for symbolic-domain multi-track music generation, using a dataset of 127,731 MIDI bars of pop/rock music, and shows that the models can learn from the noisy MIDI files and generate coherent four-bar music from scratch.

INCO-GAN: Variable-Length Music Generation Method Based on Inception Model-Based Conditional GAN

A conditional generative adversarial network approach using an inception model (INCO-GAN) is proposed; it enables the automatic generation of complete variable-length music and obtains richer features, which improves the quality of the generated music.

Multi-category MIDI music generation based on LSTM Generative adversarial network

This paper proposes a music score generation model that employs multi-layer RNNs and a GAN scheme, and shows that it is a feasible network structure that can generate multi-category music with a good listening experience.

Music Generation with Deep Neural Networks Using Flattened Multi-Channel Skip-3 Softmax and Cross-Entropy

An investigation of convolutional neural networks as a means of generating human-plausible, goal-oriented music, specifically pop melodies, shows some evidence of rhythmic and harmonic patterns but a lack of melodic elements.

A Music Generation Model Based on Generative Adversarial Networks with Bayesian Optimization

A novel melody generation framework is proposed to provide creative motivation for composers; it contains a generator built from a bidirectional long short-term memory (Bi-LSTM) network and a discriminator built from a long short-term memory (LSTM) network.
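A minimal sketch of such a pairing, with hidden sizes and the per-step pitch output space as assumptions:

# Bi-LSTM generator vs. LSTM discriminator for melody sequences (sizes assumed).
import torch
import torch.nn as nn

class BiLSTMGenerator(nn.Module):
    def __init__(self, z_dim=32, hidden=128, n_pitches=128):
        super().__init__()
        self.lstm = nn.LSTM(z_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_pitches)
    def forward(self, z_seq):                # z_seq: N x T x z_dim noise
        h, _ = self.lstm(z_seq)
        return torch.softmax(self.head(h), dim=-1)  # per-step pitch distribution

class LSTMDiscriminator(nn.Module):
    def __init__(self, n_pitches=128, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_pitches, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, melody):               # melody: N x T x n_pitches
        h, _ = self.lstm(melody)
        return self.head(h[:, -1])           # real/fake logit from final step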

Generating Music Algorithm with Deep Convolutional Generative Adversarial Networks

An advanced algorithm for generating music with generative adversarial networks (GANs) is presented; it adopts a full-channel lateral deep convolutional network structure suited to the characteristics of music data, and generates music more in line with human hearing and aesthetics.
...

References

Showing 1-10 of 42 references

A Unit Selection Methodology for Music Generation Using Deep Neural Networks

This work describes a generative model that combines a deep structured semantic model (DSSM) with an LSTM to predict the next unit, where units consist of four, two, and one measures of music.
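The unit-selection step can be pictured as nearest-neighbor search in an embedding space: a sequence model predicts an embedding for the next unit, and the closest candidate from a precomputed bank is selected. In the sketch below a plain LSTM stands in for the DSSM/LSTM pipeline, and all sizes are assumptions:

# Pick the next musical unit by similarity to a predicted embedding.
import torch
import torch.nn as nn

emb_dim, n_units = 128, 1000
unit_bank = torch.randn(n_units, emb_dim)        # precomputed unit embeddings

lstm = nn.LSTM(emb_dim, emb_dim, batch_first=True)
history = unit_bank[torch.randint(0, n_units, (1, 4))]  # 4 preceding units
pred, _ = lstm(history)
query = pred[:, -1]                               # predicted next-unit embedding

scores = torch.cosine_similarity(query, unit_bank)  # rank every candidate
next_unit = scores.argmax().item()                # index of the selected unit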

Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders

A powerful new WaveNet-style autoencoder model is detailed that conditions an autoregressive decoder on temporal codes learned from the raw audio waveform. NSynth, a large-scale, high-quality dataset of musical notes an order of magnitude larger than comparable public datasets, is also introduced.

Composing Music with Grammar Argumented Neural Networks and Note-Level Encoding

  • Zheng Sun, Jiaqi Liu, Xiao Zhang
  • Computer Science
    2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)
  • 2018
A novel method for music composition that combines an LSTM with grammars motivated by music theory, so that the machine can be trained to generate music that inherits the naturalness of human-composed pieces from the original dataset while adhering to the rules of music theory.

Finding temporal structure in music: blues improvisation with LSTM recurrent networks

  • D. Eck, J. Schmidhuber
  • Computer Science
    Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing
  • 2002
Long short-term memory (LSTM) has succeeded in domains where other RNNs have failed, such as timing and counting and the learning of context-sensitive languages; this work shows that LSTM is also a good mechanism for learning to compose music.

Conditional generative adversarial nets for convolutional face generation

An extension of generative adversarial networks (GANs) to a conditional setting is applied; the likelihood of real-world faces under the generative model is evaluated, and deterministic control of face attributes is examined.
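The standard conditioning trick this line of work relies on is to concatenate the condition vector to both the generator's noise input and the discriminator's input; a minimal sketch with assumed sizes:

# Conditional GAN wiring: the condition c is fed to both G and D.
import torch
import torch.nn as nn

z_dim, cond_dim, x_dim = 100, 10, 784  # assumed sizes

G = nn.Sequential(nn.Linear(z_dim + cond_dim, 256), nn.ReLU(),
                  nn.Linear(256, x_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(x_dim + cond_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

z = torch.randn(16, z_dim)
c = nn.functional.one_hot(torch.randint(0, cond_dim, (16,)), cond_dim).float()
fake = G(torch.cat([z, c], dim=1))      # generator sees noise + condition
logit = D(torch.cat([fake, c], dim=1))  # discriminator sees sample + condition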

Generative Adversarial Text to Image Synthesis

A novel deep architecture and GAN formulation is developed to effectively bridge advances in text and image modeling, translating visual concepts from characters to pixels.

Song From PI: A Musically Plausible Network for Pop Music Generation

We present a novel framework for generating pop music. Our model is a hierarchical Recurrent Neural Network, where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed.

DeepBach: a Steerable Model for Bach Chorales Generation

DeepBach, a graphical model aimed at modeling polyphonic music and specifically hymn-like pieces, is introduced, which is capable of generating highly convincing chorales in the style of Bach.

Conditional Image Generation with PixelCNN Decoders

The gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.
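The core of these gated layers is the activation y = tanh(W_f * x) ⊙ σ(W_g * x); a minimal sketch follows (the causal masking of the convolutions that PixelCNN requires is omitted for brevity, and channel sizes are assumed):

# Gated convolutional unit in the PixelCNN style (masking omitted).
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    def __init__(self, channels=64, kernel=3):
        super().__init__()
        self.conv_f = nn.Conv2d(channels, channels, kernel, padding=kernel // 2)
        self.conv_g = nn.Conv2d(channels, channels, kernel, padding=kernel // 2)
    def forward(self, x):
        return torch.tanh(self.conv_f(x)) * torch.sigmoid(self.conv_g(x))

y = GatedConv2d()(torch.randn(1, 64, 16, 16))  # same shape, gated features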

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
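The two-player minimax objective introduced in this paper is

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

where D is trained to distinguish real samples from generated ones and G is trained to fool D.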