Corpus ID: 238634415

Music Sentiment Transfer

@article{Sigel2021MusicST,
  title={Music Sentiment Transfer},
  author={Miles Sigel and Michael X. Zhou and Jiebo Luo},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.05765}
}
Music sentiment transfer is a completely novel task. Sentiment transfer is a natural evolution of the heavily studied style transfer task: it takes the sentiment of a source and applies it as the new sentiment of a target piece of media. Yet, compared to style transfer, sentiment transfer has been studied only scantily, and only on images. Music sentiment transfer attempts to apply this high-level objective to the domain of music. We propose CycleGAN to bridge… 
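
The abstract names CycleGAN as the bridge between sentiment domains. As a rough illustration of the objective such a setup optimizes, the minimal PyTorch sketch below pairs two generators and two discriminators with adversarial and cycle-consistency terms over piano-roll-like tensors; the network sizes, tensor shapes, and loss weights are illustrative assumptions, not the paper's actual configuration.

# Minimal sketch of the CycleGAN objective on symbolic-music tensors (assumed setup).
import torch
import torch.nn as nn

def make_generator():
    # Toy convolutional "generator" over piano-roll-like tensors (B, 1, T, P).
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

def make_discriminator():
    # Patch-style discriminator that scores local regions of the roll.
    return nn.Sequential(nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                         nn.Conv2d(16, 1, 4, stride=2, padding=1))

G_ab, G_ba = make_generator(), make_generator()        # negative -> positive, positive -> negative
D_a, D_b = make_discriminator(), make_discriminator()  # real/fake critics, one per sentiment domain
adv, l1 = nn.MSELoss(), nn.L1Loss()
lambda_cyc = 10.0                                      # assumed cycle-consistency weight

real_a = torch.rand(4, 1, 64, 84)   # batch of "negative-sentiment" piano rolls (stand-in data)
real_b = torch.rand(4, 1, 64, 84)   # batch of "positive-sentiment" piano rolls (stand-in data)

fake_b = G_ab(real_a)               # transfer A -> B
fake_a = G_ba(real_b)               # transfer B -> A
rec_a = G_ba(fake_b)                # cycle A -> B -> A
rec_b = G_ab(fake_a)                # cycle B -> A -> B

# Generator loss: fool both discriminators and reconstruct the originals,
# which is what lets the sentiment change while the content is (mostly) preserved.
g_loss = (adv(D_b(fake_b), torch.ones_like(D_b(fake_b)))
          + adv(D_a(fake_a), torch.ones_like(D_a(fake_a)))
          + lambda_cyc * (l1(rec_a, real_a) + l1(rec_b, real_b)))
print(float(g_loss))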

References

Showing 1-10 of 19 references

Image Sentiment Transfer

An effective and flexible framework is proposed that performs image sentiment transfer at the object level; a content disentanglement loss, combined with a content alignment step, better disentangles the residual sentiment-related information of the input image.

Global Image Sentiment Transfer

Both qualitative and quantitative evaluations demonstrate that the proposed sentiment transfer framework outperforms existing artistic and photo-realistic style transfer algorithms in producing satisfactory sentiment transfer results with fine and exact details.

Learning to Generate Music With Sentiment

A generative deep learning model that can be directed to compose music with a given sentiment; the model also obtains good prediction accuracy and can be used for sentiment analysis of symbolic music.
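
As a hedged illustration of how generation can be "directed" by sentiment, the sketch below conditions an LSTM language model over note tokens on a sentiment embedding; the vocabulary size, dimensions, and class names are assumptions and do not reflect the paper's actual model.

# Sketch: sentiment-conditioned symbolic music generation (assumed architecture).
import torch
import torch.nn as nn

class SentimentMusicLM(nn.Module):
    def __init__(self, vocab=128, emb=64, hidden=128, n_sentiments=2):
        super().__init__()
        self.tok = nn.Embedding(vocab, emb)
        self.sent = nn.Embedding(n_sentiments, emb)        # e.g. negative / positive
        self.lstm = nn.LSTM(emb * 2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, tokens, sentiment):
        s = self.sent(sentiment)[:, None, :].expand(-1, tokens.size(1), -1)
        x = torch.cat([self.tok(tokens), s], dim=-1)       # condition every step on the sentiment
        h, _ = self.lstm(x)
        return self.head(h)                                # next-token logits

model = SentimentMusicLM()
tokens = torch.randint(0, 128, (1, 16))                    # a short token prefix (stand-in data)
logits = model(tokens, torch.tensor([1]))                  # ask for the "positive" class
next_token = logits[:, -1].argmax(-1)
print(int(next_token))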

Symbolic Music Genre Transfer with CycleGAN

This paper presents the first application of GANs to symbolic music domain transfer and adds additional discriminators that cause the generators to keep the structure of the original music mostly intact while still achieving strong genre transfer.
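
One way to read the "additional discriminators" is as extra critics that score generated bars against a pool of real music drawn from both domains, nudging the generators to stay structurally close to real data. The snippet below is a speculative sketch of such an extra loss term, not the paper's exact formulation; shapes and weights are placeholders.

# Sketch: an extra "mixed-domain" discriminator term (assumed formulation).
import torch
import torch.nn as nn

disc_mixed = nn.Sequential(nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                           nn.Conv2d(16, 1, 4, stride=2, padding=1))
adv = nn.MSELoss()
fake_b = torch.rand(4, 1, 64, 84)         # generator output A -> B (stand-in)
mixed_real = torch.rand(4, 1, 64, 84)     # real bars sampled from both domains together (stand-in)

# Generator side: the extra critic should also believe the transferred bars are real music.
extra_g_loss = adv(disc_mixed(fake_b), torch.ones_like(disc_mixed(fake_b)))
# Critic side: distinguish the mixed pool of real music from transferred output.
extra_d_loss = 0.5 * (adv(disc_mixed(mixed_real), torch.ones_like(disc_mixed(mixed_real)))
                      + adv(disc_mixed(fake_b.detach()), torch.zeros_like(disc_mixed(fake_b.detach()))))
print(float(extra_g_loss), float(extra_d_loss))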

MelGAN-VC: Voice Conversion and Audio Style Transfer on arbitrarily long samples using Spectrograms

MelGAN-VC, a voice conversion method that relies on non-parallel speech data and is able to convert audio signals of arbitrary length from a source voice to a target voice, is proposed and applied to perform music style transfer.
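
A simple way to handle arbitrary-length inputs is to slice the spectrogram into fixed-width chunks, translate each chunk, and concatenate the results along the time axis, as in the toy sketch below; the chunk width and the stub generator are assumptions for illustration rather than MelGAN-VC's actual architecture.

# Sketch: converting an arbitrarily long spectrogram chunk by chunk (assumed pipeline).
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1))
chunk_width = 128

spec = torch.rand(1, 1, 80, 1000)                         # (B, 1, mels, frames), arbitrary length
chunks = spec.split(chunk_width, dim=-1)                  # fixed-width time slices
converted = torch.cat([generator(c) for c in chunks], dim=-1)
print(converted.shape)                                    # same length as the input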

Real-Time Neural Style Transfer for Videos

  • Haozhi Huang, Hao Wang, W. Liu
  • Computer Science
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017
This work proposes a hybrid loss that combines the content information of the input frames, the style information of a given style image, and the temporal information of consecutive frames; the temporal loss is computed only during the training stage.
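
The sketch below illustrates one plausible form of such a hybrid loss: a content term, a Gram-matrix style term, and a temporal term that penalizes differences between the current stylized frame and the previous stylized frame warped to it by optical flow. The feature extractor is stubbed with random tensors and the warped frame and occlusion mask are assumed given, so this shows only the loss structure, not the paper's implementation.

# Sketch: hybrid content + style + temporal loss for video style transfer (assumed form).
import torch
import torch.nn.functional as F

def gram(feat):
    # Gram matrix of a (B, C, H, W) feature map, used for the style term.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# Pretend these came from a fixed feature extractor (e.g. VGG) -- assumption.
content_feat_t  = torch.rand(1, 64, 32, 32)   # features of input frame t
stylized_feat_t = torch.rand(1, 64, 32, 32)   # features of stylized frame t
style_feat      = torch.rand(1, 64, 32, 32)   # features of the style image

stylized_t      = torch.rand(1, 3, 128, 128)  # stylized frame t (pixels)
stylized_prev_w = torch.rand(1, 3, 128, 128)  # stylized frame t-1 warped to t by optical flow (assumed given)
occlusion_mask  = torch.ones(1, 1, 128, 128)  # 1 where the flow is reliable

content_loss  = F.mse_loss(stylized_feat_t, content_feat_t)
style_loss    = F.mse_loss(gram(stylized_feat_t), gram(style_feat))
temporal_loss = (occlusion_mask * (stylized_t - stylized_prev_w) ** 2).mean()

hybrid = content_loss + 10.0 * style_loss + 100.0 * temporal_loss  # weights are placeholders
print(float(hybrid))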

Generating Music using an LSTM Network

The probabilistic model presented is a bi-axial LSTM trained with a kernel reminiscent of a convolutional kernel, and it performs well in composing polyphonic music.
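
As a toy illustration of the bi-axial idea, the sketch below runs one LSTM along the time axis for every pitch and a second LSTM along the pitch axis at every timestep, so each note prediction sees both temporal and harmonic context; all dimensions are assumptions.

# Sketch: a toy bi-axial (time-axis + note-axis) LSTM over a piano roll (assumed sizes).
import torch
import torch.nn as nn

time_lstm = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
note_lstm = nn.LSTM(input_size=16, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)                                   # note-on probability

roll = torch.rand(8, 64, 88)                              # (batch, timesteps, pitches), stand-in data
b, t, p = roll.shape

# Time axis: treat every pitch as its own sequence over time.
x = roll.permute(0, 2, 1).reshape(b * p, t, 1)
h_time, _ = time_lstm(x)                                  # (b*p, t, 16)

# Note axis: at each timestep, run over the pitches from low to high.
h = h_time.reshape(b, p, t, 16).permute(0, 2, 1, 3).reshape(b * t, p, 16)
h_note, _ = note_lstm(h)                                  # (b*t, p, 16)

probs = torch.sigmoid(head(h_note)).reshape(b, t, p)      # predicted piano roll
print(probs.shape)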

Music Generation using Deep Generative Modelling

This project proposes a system that combines the random subsampling approach of GANs with a recurrent autoregressive model, helping to model coherent musical structures effectively on both global and local levels.

Many-To-Many Voice Conversion Using Conditional Cycle-Consistent Adversarial Networks

The proposed method can perform many-to-many voice conversion among multiple speakers using a single generative adversarial network (GAN) and reduces the computational and spatial cost significantly without compromising the sound quality of the converted voice.
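
The core trick is that a single generator is conditioned on a target-speaker code, and cycle consistency is enforced by converting back with the source-speaker code. The sketch below illustrates that conditioning by tiling a one-hot speaker code over time; the feature dimensionality and the network are placeholder assumptions, not the paper's model.

# Sketch: one conditional generator for many-to-many voice conversion (assumed setup).
import torch
import torch.nn as nn

n_speakers = 4
gen = nn.Sequential(nn.Conv1d(36 + n_speakers, 64, 5, padding=2), nn.ReLU(),
                    nn.Conv1d(64, 36, 5, padding=2))

def convert(feats, speaker_id):
    # feats: (B, 36, T) acoustic features; condition by tiling a one-hot speaker code over time.
    code = torch.eye(n_speakers)[speaker_id][:, :, None].expand(-1, -1, feats.size(-1))
    return gen(torch.cat([feats, code], dim=1))

src = torch.rand(2, 36, 100)                               # stand-in source-speaker features
to_target = convert(src, torch.tensor([2, 2]))             # source -> speaker 2
back_to_src = convert(to_target, torch.tensor([0, 0]))     # speaker 2 -> source (cycle)
cycle_loss = (back_to_src - src).abs().mean()
print(float(cycle_loss))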

Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization

This paper presents a simple yet effective approach that, for the first time, enables arbitrary style transfer in real time, comparable in speed to the fastest existing approaches, without restriction to a pre-defined set of styles.
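
Adaptive instance normalization (AdaIN) itself is compact enough to show directly: it normalizes the content features channel-wise and rescales them with the style features' mean and standard deviation. The sketch below follows that standard formulation; the feature shapes and toy inputs are illustrative only.

# Sketch: adaptive instance normalization over encoder feature maps.
import torch

def adain(content, style, eps=1e-5):
    # content, style: (B, C, H, W) feature maps from a shared encoder.
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std  = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std  = style.std(dim=(2, 3), keepdim=True) + eps
    # Normalize the content features, then rescale and shift with the style statistics.
    return s_std * (content - c_mean) / c_std + s_mean

content = torch.rand(1, 512, 32, 32)   # stand-in content features
style   = torch.rand(1, 512, 32, 32)   # stand-in style features
out = adain(content, style)
print(out.shape)                       # torch.Size([1, 512, 32, 32])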