ArtGAN: Artwork synthesis with conditional categorical GANs

@article{Tan2017ArtGANAS,
  title={ArtGAN: Artwork synthesis with conditional categorical GANs},
  author={Wei Ren Tan and Chee Seng Chan and Hern{\'a}n E. Aguirre and Kiyoshi Tanaka},
  journal={2017 IEEE International Conference on Image Processing (ICIP)},
  year={2017},
  pages={3760--3764}
}
This paper proposes an extension to Generative Adversarial Networks (GANs), named ArtGAN, to synthetically generate more challenging and complex images, such as artwork with abstract characteristics. This is in contrast to most current solutions, which focus on generating natural images such as room interiors, birds, flowers and faces. The key innovation of our work is to allow back-propagation of the loss function w.r.t. the labels (randomly assigned to each generated image)…
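The key innovation described in the abstract, back-propagating a loss on the randomly assigned categorical label through the generator, can be illustrated with a minimal NumPy sketch. This is an illustrative reconstruction, not the authors' code; the function names and the plain cross-entropy formulation are assumptions for exposition.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def generator_label_loss(d_logits, assigned_labels):
    """Cross-entropy between the discriminator's class posterior for a
    generated image and the label randomly assigned to that image.
    Minimizing this w.r.t. the generator's parameters pushes generated
    images toward the assigned category."""
    p = softmax(d_logits)
    n = d_logits.shape[0]
    return -np.log(p[np.arange(n), assigned_labels] + 1e-12).mean()
```

When the discriminator's logits already favor the assigned label, this loss is small; when they favor another class, it is large, so its gradient steers the generator toward images the discriminator classifies as the requested category.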
Learning a Generative Adversarial Network for High Resolution Artwork Synthesis
TLDR
A series of new approaches to improve the Generative Adversarial Network (GAN) for conditional image synthesis are proposed, and the proposed model, named ArtGAN, is able to generate plausible-looking images on Oxford-102 and CUB-200 and to draw realistic artworks based on style, artist, and genre.
Improved ArtGAN for Conditional Synthesis of Natural Image and Artwork
TLDR
A series of new approaches to improve the generative adversarial network (GAN) for conditional image synthesis; the proposed model, named "ArtGAN", is able to generate plausible-looking images on Oxford-102 and CUB-200, as well as to draw realistic artworks based on style, artist, and genre.
Image Synthesis with Aesthetics-Aware Generative Adversarial Network
TLDR
A novel GAN model is proposed that is aware of both visual aesthetics and content semantics; it adds two types of loss functions, which try to maximize the visual aesthetics of an image and to minimize the difference between generated images and real images in terms of high-level visual content.
Continuation of Famous Art with AI: A Conditional Adversarial Network Inpainting Approach
TLDR
The experiments exploring landscapes, Ukiyo-e, and abstract art showed that, in many cases, features within the image were continued, and included the generation of new mountains and trees, as well as characters which resembled written text.
Generate Novel Image Styles using Weighted Hybrid Generative Adversarial Nets
TLDR
Inspired by the creation of a new calligraphic style, a novel GAN model called WHGAN is proposed that supports creatively generating new data domains, such as context and style.
Systematic Analysis of Image Generation using GANs
TLDR
This study explores and presents a taxonomy of GANs and their use in various image-to-image synthesis and text-to-image synthesis applications, as well as a variety of different niche frameworks.
edge2art: Edges to Artworks Translation with Conditional Generative Adversarial Networks
This paper presents an application of the pix2pix model [3], which solves the image-to-image translation problem using cGANs. The main objective of our research consists in the…
End-to-End Chinese Landscape Painting Creation Using Generative Adversarial Networks
  • Alice Xue
  • Computer Science, Art
  • 2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
  • 2021
TLDR
The proposed Sketch-And-Paint GAN (SAPGAN), the first model which generates Chinese landscape paintings end to end, without conditional input, lays the groundwork for truly machine-original art generation.
Shape-conditioned Image Generation by Learning Latent Appearance Representation from Unpaired Data
TLDR
SCGAN is presented, an architecture to generate images with a desired shape specified by an input normal map by explicitly modeling the image appearance via a latent appearance vector, and shows the effectiveness of the method through both qualitative and quantitative evaluation on training data generation tasks.
Adversarially Regularized U-Net-based GANs for Facial Attribute Modification and Generation
TLDR
A joint training technique for the ARU-GAN enables the facial attribute modification and generation tasks to learn together during training, and the results show that learning the two tasks jointly can improve performance compared with learning them individually.

References

Showing 1-10 of 22 references
Neural Photo Editing with Introspective Adversarial Networks
TLDR
The Neural Photo Editor is presented, an interface that leverages the power of generative neural networks to make large, semantically coherent changes to existing images, and the Introspective Adversarial Network is introduced, a novel hybridization of the VAE and GAN.
Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
TLDR
A generative parametric model capable of producing high quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
Autoencoding beyond pixels using a learned similarity metric
TLDR
An autoencoder that leverages learned representations to better measure similarities in data space is presented, and it is shown that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
Conditional Generative Adversarial Nets
TLDR
The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the data, y, to the generator and discriminator, and it is shown that this model can generate MNIST digits conditioned on class labels.
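The conditioning mechanism this reference describes, feeding the label y to both networks, often amounts to concatenating a one-hot class code onto the network inputs. A minimal NumPy sketch of that step (hypothetical helper names, not code from the paper):

```python
import numpy as np

def one_hot(labels, num_classes):
    # encode integer class labels as one-hot row vectors
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def condition_input(z, labels, num_classes):
    # cGAN-style conditioning: concatenate the class code y onto
    # the noise vector z before it enters the generator (the same
    # trick applies to the discriminator's input)
    return np.concatenate([z, one_hot(labels, num_classes)], axis=1)
```

The generator then sees a (noise + label) vector, so different labels steer it toward different modes of the data distribution.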
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TLDR
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
Ceci n'est pas une pipe: A deep convolutional network for fine-art paintings classification
TLDR
This paper trains an end-to-end deep convolution model to investigate the capability of the deep model in the fine-art painting classification problem, and employs the recently publicly available large-scale "Wikiart paintings" dataset that consists of more than 80,000 paintings.
Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks
In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades off mutual information…
A note on the evaluation of generative models
TLDR
This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models, with a focus on image models, and shows that three of the currently most commonly used criteria (average log-likelihood, Parzen window estimates, and visual fidelity of samples) are largely independent of each other when the data is high-dimensional.