Corpus ID: 212497778

International Journal of Scientific Research in Computer Science, Engineering and Information Technology

Authors: Mohit Malhotra, Raghav Mittal, Madhur Jain, Bhagwan Parshuram
Advancements in deep neural networks have made it possible to compose music that mimics human composition. This paper explores the capacity of deep learning architectures to learn musical style from arbitrary musical corpora. It proposes a method in which music is generated from the estimated distribution. Musical chords are extracted for various instruments to train a sequential model that generates polyphonic music on selected instruments. We demonstrate a simple… 
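The generation step described in the abstract — estimate a distribution over chord sequences from a corpus, then sample new sequences from it — can be illustrated with a deliberately simplified sketch. The paper trains a sequential neural model; here a count-based bigram model over chord symbols stands in for it, purely to make the estimate-then-sample loop concrete (the chord corpus and function names are illustrative assumptions, not the paper's implementation).

```python
import random
from collections import defaultdict

def estimate_bigram_distribution(chord_sequences):
    """Count chord-to-chord transitions to estimate P(next chord | current chord)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in chord_sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return {chord: dict(nexts) for chord, nexts in counts.items()}

def generate(dist, start, length, seed=0):
    """Sample a new chord sequence from the estimated transition distribution."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        nexts = dist.get(seq[-1])
        if not nexts:            # no observed continuation for this chord
            break
        chords, weights = zip(*nexts.items())
        seq.append(rng.choices(chords, weights=weights)[0])
    return seq

# Toy corpus of chord progressions (illustrative data).
corpus = [["C", "Am", "F", "G", "C"], ["C", "F", "G", "C"]]
dist = estimate_bigram_distribution(corpus)
print(generate(dist, "C", 8))
```

A sequential neural model such as an LSTM plays the same role as `dist` here, but conditions each prediction on the full history of chords rather than only the previous one, which is what lets it capture longer-range musical structure.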



Recurrent Neural Networks for Music Computation
Findings are presented that show that a long short-term memory recurrent network, with new representations that include music knowledge, can learn musical tasks, and can learn to reproduce long songs.
Modeling Temporal Tonal Relations in Polyphonic Music Through Deep Networks With a Novel Image-Based Representation
Experimental results show that the tonnetz representation produces musical sequences that are more tonally stable and contain more repeated patterns than sequences generated by pianoroll-based models, a finding that is directly useful for tackling current challenges in music and AI such as smart music generation.
A novel approach for automated music composition using memetic algorithms
This research offers a novel approach to producing quality musical compositions using a memetic algorithm, and conforms to the MIDI protocol, the industry standard for electronic musical instruments.
Deep Learning Techniques for Music Generation - A Survey
This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content, based on the analysis of many existing deep-learning based systems for music generation selected from the relevant literature.
Song From PI: A Musically Plausible Network for Pop Music Generation
We present a novel framework for generating pop music. Our model is a hierarchical Recurrent Neural Network, where the layers and the structure of the hierarchy encode our prior knowledge about how…
Music composition by interaction between human and computer
A music composition system that composes music through interaction between a human and a computer is constructed, and it is found that the users’ evaluation scores increase as the generations progress.
A First Look at Music Composition using LSTM Recurrent Neural Networks
Long Short-Term Memory is shown to be able to play the blues with good timing and proper structure as long as one is willing to listen, and once the network has found the relevant structure it does not drift from it.
Bach in 2014: Music Composition with Recurrent Neural Network
It is shown that an LSTM network properly learns the structure and characteristics of music pieces, demonstrated by its ability to recreate music by predicting existing pieces, and that training with RProp outperforms backpropagation through time (BPTT).
Text-based LSTM networks for Automatic Music Composition
The proposed network is designed to learn relationships within text documents that represent chord progressions and drum tracks in two cases, and both word-level and character-level RNNs show good results in each.
Music Composition Based on Linguistic Approach
This work describes music as a language composed of sequences of symbols that form melodies, with lexical symbols being sounds and silences with their duration in time, and determines functions to describe the probability distribution of these sequences of musical notes.