ArtGAN: Artwork synthesis with conditional categorical GANs

@inproceedings{Tan2017ArtGAN,
  title={ArtGAN: Artwork synthesis with conditional categorical GANs},
  author={Wei Ren Tan and Chee Seng Chan and Hern{\'a}n E. Aguirre and Kiyoshi Tanaka},
  booktitle={2017 IEEE International Conference on Image Processing (ICIP)},
  year={2017}
}
This paper proposes an extension to Generative Adversarial Networks (GANs), named ArtGAN, to synthetically generate more challenging and complex images, such as artworks with abstract characteristics. This contrasts with most current solutions, which focus on generating natural images such as room interiors, birds, flowers and faces. The key innovation of our work is to allow back-propagation of the loss function w.r.t. the labels (randomly assigned to each generated image)…
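The categorical-label idea described in the abstract can be sketched in a few lines of NumPy. Everything here is an illustrative assumption, not the paper's architecture: we only show a softmax cross-entropy between the discriminator's class posterior and the labels randomly assigned to fake samples, whose gradient w.r.t. the logits is what would be back-propagated through the discriminator into the generator.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def categorical_generator_loss(logits, labels, num_classes):
    """Cross-entropy between D's class posterior and the labels randomly
    assigned to generated images; its gradient w.r.t. the logits is what
    gets back-propagated through D into the generator."""
    onehot = np.eye(num_classes + 1)[labels]    # K real classes + one "fake" slot
    probs = softmax(logits)
    loss = -np.mean(np.sum(onehot * np.log(probs + 1e-12), axis=-1))
    grad_logits = (probs - onehot) / len(labels)  # standard softmax-CE gradient
    return loss, grad_logits

rng = np.random.default_rng(0)
K = 10                                # e.g. 10 artwork genres (illustrative)
labels = rng.integers(0, K, size=4)   # random labels assigned to fake samples
logits = rng.normal(size=(4, K + 1))  # discriminator outputs for 4 fakes
loss, grad = categorical_generator_loss(logits, labels, K)
```

The `K + 1`-th slot stands in for the usual "fake" class; which output head ArtGAN uses and how it is wired is detailed in the paper itself.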


Learning a Generative Adversarial Network for High Resolution Artwork Synthesis

A series of new approaches to improve the Generative Adversarial Network (GAN) for conditional image synthesis is proposed. The resulting model, named ArtGAN, generates plausible-looking images on Oxford-102 and CUB-200 and draws realistic artworks conditioned on style, artist, and genre.

Improved ArtGAN for Conditional Synthesis of Natural Image and Artwork

A series of new approaches to improve the generative adversarial network (GAN) for conditional image synthesis is proposed. The resulting model, named "ArtGAN", generates plausible-looking images on Oxford-102 and CUB-200 and draws realistic artworks conditioned on style, artist, and genre.

Image Synthesis with Aesthetics-Aware Generative Adversarial Network

A novel GAN model is proposed that is aware of both visual aesthetics and content semantics. It adds two types of loss functions: one that maximizes the visual aesthetics of a generated image, and one that minimizes the similarity between generated and real images in terms of high-level visual content.

Continuation of Famous Art with AI: A Conditional Adversarial Network Inpainting Approach

The experiments, exploring landscapes, Ukiyo-e, and abstract art, showed that in many cases features within the image were continued, including the generation of new mountains and trees, as well as characters that resembled written text.

Generate Novel Image Styles using Weighted Hybrid Generative Adversarial Nets

Inspired by the creation of a new calligraphic style, a novel GAN model, called WHGAN, is proposed that supports creative generation across data domains such as context and style.

Systematic Analysis of Image Generation using GANs

This study explores and presents a taxonomy of GANs and their use in various image-to-image and text-to-image synthesis applications, as well as a variety of niche frameworks.

edge2art: Edges to Artworks Translation with Conditional Generative Adversarial Networks

This paper presents an application of the pix2pix model [3], which solves the image-to-image translation problem using cGANs. The main objective of our research consists in the…

End-to-End Chinese Landscape Painting Creation Using Generative Adversarial Networks

  • Alice Xue
  • 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), 2021
The proposed Sketch-And-Paint GAN (SAPGAN), the first model to generate Chinese landscape paintings end to end without conditional input, lays the groundwork for truly machine-original art generation.

Realistic River Image Synthesis Using Deep Generative Adversarial Networks

A generative adversarial network (GAN) model capable of generating high-resolution, realistic river images is explored; these images can support modeling and analysis in surface-water estimation, river meandering, wetland loss, and other hydrological research studies.

Shape-conditioned Image Generation by Learning Latent Appearance Representation from Unpaired Data

SCGAN is presented, an architecture that generates images with a desired shape specified by an input normal map by explicitly modeling image appearance via a latent appearance vector; the method's effectiveness is shown through both qualitative and quantitative evaluation on training-data generation tasks.

Neural Photo Editing with Introspective Adversarial Networks

The Neural Photo Editor is presented, an interface that leverages the power of generative neural networks to make large, semantically coherent changes to existing images, and the Introspective Adversarial Network is introduced, a novel hybridization of the VAE and GAN.

Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks

A generative parametric model is introduced that produces high-quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework, generating images in a coarse-to-fine fashion.
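The coarse-to-fine decomposition behind LAPGAN can be illustrated with a toy NumPy pyramid. The average-pool downsample and nearest-neighbour upsample here are simplified stand-ins for the paper's blur-and-subsample operators; in the actual model a conditional GAN generates each residual rather than storing it.

```python
import numpy as np

def down(x):
    # 2x average-pool downsample (stand-in for blurred subsampling)
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def up(x):
    # nearest-neighbour 2x upsample
    return x.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    """Residuals h_k = I_k - up(down(I_k)), plus the final coarse image."""
    residuals = []
    for _ in range(levels):
        coarse = down(img)
        residuals.append(img - up(coarse))
        img = coarse
    return residuals, img

def reconstruct(residuals, coarse):
    # Sampling runs coarse-to-fine: upsample, then add the residual
    for h in reversed(residuals):
        coarse = up(coarse) + h
    return coarse

img = np.random.default_rng(1).normal(size=(16, 16))
res, coarse = laplacian_pyramid(img, levels=3)
recon = reconstruct(res, coarse)
```

By construction the reconstruction is exact, which is what makes the residuals a lossless target for level-wise generation.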

Autoencoding beyond pixels using a learned similarity metric

An autoencoder that leverages learned representations to better measure similarities in data space is presented and it is shown that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
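The "simple arithmetic" claim can be made concrete with a toy sketch. The latent codes below are random stand-ins for what the learned encoder would produce; the attribute direction is just the difference of two group means, which is the standard recipe for this kind of latent arithmetic.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 64                                # illustrative latent dimensionality

# Hypothetical encoder outputs for two groups of images (stand-in clusters)
z_with_glasses = rng.normal(size=(100, dim)) + 1.0
z_without = rng.normal(size=(100, dim))

# Attribute vector = difference of the two group means
glasses_vec = z_with_glasses.mean(axis=0) - z_without.mean(axis=0)

# "Put glasses on" a new face by simple vector addition in latent space;
# the decoder would then map z_face_with_glasses back to an image
z_face = rng.normal(size=dim)
z_face_with_glasses = z_face + glasses_vec
```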

Conditional Generative Adversarial Nets

The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the data we wish to condition on, y, to both the generator and discriminator; it is shown that this model can generate MNIST digits conditioned on class labels.
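The conditioning described above is, in its simplest form, a concatenation. This minimal NumPy sketch shows the generator-side input construction (the discriminator input is conditioned the same way); the dimensions are illustrative.

```python
import numpy as np

def one_hot(y, num_classes):
    return np.eye(num_classes)[y]

def conditioned_input(z, y, num_classes):
    """cGAN conditioning at its simplest: concatenate the noise z with a
    one-hot encoding of the label y before feeding the generator."""
    return np.concatenate([z, one_hot(y, num_classes)], axis=-1)

rng = np.random.default_rng(3)
z = rng.normal(size=(4, 100))     # 100-dim noise, batch of 4
y = np.array([3, 1, 4, 1])        # MNIST digit labels in 0-9
g_in = conditioned_input(z, y, 10)
```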

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
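The adversarial game between G and D is the familiar two-player minimax objective, reproduced here for reference:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)]
  + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
```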

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.

Ceci n'est pas une pipe: A deep convolutional network for fine-art paintings classification

This paper trains an end-to-end deep convolutional model to investigate the capability of deep models on the fine-art painting classification problem, employing the recently released large-scale "Wikiart paintings" dataset, which consists of more than 80,000 paintings.

Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks

In this paper, we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades off mutual information between observed examples and their predicted categorical class distribution against robustness of the classifier to an adversarial generative model.

A note on the evaluation of generative models

This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models and shows that three of the currently most commonly used criteria---average log-likelihood, Parzen window estimates, and visual fidelity of samples---are largely independent of each other when the data is high-dimensional.
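One of the three criteria the note examines, the Parzen-window estimate, can be sketched directly: the model is represented only by its samples, and a Gaussian kernel density over those samples is evaluated at held-out points. The data, bandwidth, and shapes below are illustrative.

```python
import numpy as np

def parzen_log_likelihood(samples, test_points, sigma):
    """Mean log-likelihood of test points under a Gaussian Parzen window
    fitted to model samples -- the criterion the note shows can disagree
    with both true log-likelihood and visual fidelity."""
    d = test_points[:, None, :] - samples[None, :, :]   # (n_test, n_samp, dim)
    log_k = -0.5 * np.sum(d * d, axis=-1) / sigma**2
    dim = samples.shape[1]
    log_norm = np.log(samples.shape[0]) + 0.5 * dim * np.log(2 * np.pi * sigma**2)
    return np.mean(np.logaddexp.reduce(log_k, axis=1) - log_norm)

rng = np.random.default_rng(4)
model_samples = rng.normal(size=(500, 2))   # stand-in for samples drawn from a model
test = rng.normal(size=(50, 2))             # held-out evaluation points
ll = parzen_log_likelihood(model_samples, test, sigma=0.5)
```

In high dimensions this estimate is dominated by the bandwidth and sample count rather than model quality, which is part of the article's argument.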

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.