Corpus ID: 234777751

Copyright in Generative Deep Learning

Giorgio Franceschelli, Mirco Musolesi
Machine-generated artworks are now part of the contemporary art scene: they attract significant investment and are presented in exhibitions alongside works created by human artists. These artworks are mainly based on generative deep learning techniques, which have seen formidable development and remarkable refinement in recent years. Given the inherent characteristics of these techniques, a series of novel legal problems arise. In this article, we consider a set of… 
WhyGen: Explaining ML-powered Code Generation by Referring to Training Examples
This work introduces a tool, named WhyGen, to explain the generated code by referring to training examples, and introduces a data structure, named inference fingerprint, to represent the decision process of the model when generating a prediction.
Deep Creations: Intellectual Property and the Automata
The present work will address the conditions of protection of creations generated by deep neural networks under the main copyright regimes.
CAN: Creative Adversarial Networks, Generating "Art" by Learning About Styles and Deviating from Style Norms
The results show that human subjects could not distinguish art generated by the proposed system from art created by contemporary artists and shown in top art fairs.
Self-Attention Generative Adversarial Networks
The proposed SAGAN achieves state-of-the-art results, boosting the best published Inception Score from 36.8 to 52.52 and reducing the Fréchet Inception Distance from 27.62 to 18.65 on the challenging ImageNet dataset.
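The core mechanism behind SAGAN is self-attention over spatial feature positions, letting each position aggregate information from all others rather than only a local convolutional neighborhood. A minimal sketch of scaled dot-product self-attention in NumPy (function and parameter names here are illustrative, not SAGAN's actual implementation):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a set of feature vectors.

    x: (n, d) array of n feature positions with d channels.
    wq, wk, wv: (d, d) query/key/value projections (illustrative).
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])           # (n, n) pairwise similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax: each row sums to 1
    return weights @ v                               # every position attends to all others

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))                         # 16 spatial positions, 8 channels
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, wq, wk, wv)                  # same shape as x
```

In the actual SAGAN generator and discriminator, the attention output is added back to the convolutional features via a learned residual gate; this sketch only shows the attention map itself.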
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment
Three models for symbolic multi-track music generation under the framework of generative adversarial networks (GANs) are proposed, referred to as the jamming model, the composer model and the hybrid model, which can generate coherent music of four bars right from scratch.
Large Scale GAN Training for High Fidelity Natural Image Synthesis
It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
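The truncation trick itself is simple: at sampling time, latent components whose magnitude exceeds a threshold are redrawn, shrinking the effective variance of the generator's input. Lower thresholds yield higher average fidelity at the cost of variety. A minimal sketch (the generator itself is omitted; only the latent sampling is shown):

```python
import numpy as np

def truncated_z(size, threshold, rng):
    """Sample z ~ N(0, I), resampling any component with |z| > threshold.

    Smaller thresholds reduce latent variance: higher average sample
    fidelity, less variety (the BigGAN-style truncation trick).
    """
    z = rng.standard_normal(size)
    while True:
        mask = np.abs(z) > threshold
        if not mask.any():
            return z
        z[mask] = rng.standard_normal(mask.sum())  # redraw only the outliers

rng = np.random.default_rng(42)
z = truncated_z(512, threshold=0.5, rng=rng)       # every |z_i| <= 0.5
```

The paper's observation is that this only works well when the generator is regularized (e.g. orthogonally) so that it behaves smoothly on the truncated region of latent space it was not trained on.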
A Style-Based Generator Architecture for Generative Adversarial Networks
An alternative generator architecture for generative adversarial networks is proposed, borrowing from style transfer literature, that improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.
Image Transformer
This work generalizes a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood, and significantly increases the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks.
Towards the High-quality Anime Characters Generation with Generative Adversarial Networks
This paper proposes a model that produces anime faces at high quality with a promising success rate, with three contributions: a clean dataset collected from Getchu, a suitable DRAGAN [10]-based, SRResNet [11]-like GAN model, and a general approach to training a conditional model from images with estimated tags as conditions.
SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing policy gradient updates.
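The trick is that discrete token sampling is not differentiable, so SeqGAN trains the generator with REINFORCE: sample tokens from the policy, score them with the discriminator, and push up the log-probability of rewarded samples. A toy one-step sketch, with a hard-coded reward standing in for the discriminator (all names here are illustrative):

```python
import numpy as np

def reinforce_step(theta, reward_fn, lr, rng):
    """One REINFORCE update for a categorical one-step token policy.

    theta: (vocab,) logits of the generator policy.
    reward_fn: token -> scalar reward (in SeqGAN, the discriminator's
    score of the generated sequence plays this role).
    """
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    token = rng.choice(len(theta), p=probs)          # sample an action
    reward = reward_fn(token)
    grad = -probs                                    # d/dtheta log pi(token)
    grad[token] += 1.0                               #   = one_hot(token) - probs
    return theta + lr * reward * grad, token

rng = np.random.default_rng(0)
theta = np.zeros(4)                                  # uniform policy over 4 tokens
# toy "discriminator": only token 2 is rewarded
for _ in range(500):
    theta, _ = reinforce_step(theta, lambda t: 1.0 if t == 2 else 0.0, 0.1, rng)
```

After training, the policy concentrates its probability mass on the rewarded token; SeqGAN applies the same estimator per generated token, using Monte Carlo rollouts to get intermediate rewards.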
Deep learning for procedural content generation
This article surveys the various deep learning methods that have been applied to generate game content directly or indirectly, discusses deep learning methods that could be used for content generation purposes but are rarely used today, and envisages some limitations and potential future directions of deep learning for procedural content generation.