A Neural Algorithm of Artistic Style

Leon A. Gatys, Alexander S. Ecker, Matthias Bethge
In fine art, especially painting, humans have mastered the skill of creating unique visual experiences by composing a complex interplay between the content and style of an image. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine the content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.
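The core mechanism, matching content via raw CNN feature maps and style via channel correlations (Gram matrices), can be sketched in a few lines. This is a simplified illustration, not the authors' implementation: the real system extracts feature maps from several layers of a pretrained VGG network and minimises the combined loss by gradient descent on the pixels of the generated image, whereas here random arrays stand in for those features and the weights `alpha`/`beta` are hypothetical.

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels, discarding spatial layout.

    features: array of shape (channels, height, width).
    Returns a (channels, channels) Gram matrix, normalised by map size.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # each row: one channel's responses
    return flat @ flat.T / (h * w)      # inner products between channels

def content_loss(gen_feat, content_feat):
    """Mean squared difference of raw feature maps (keeps spatial layout)."""
    return np.mean((gen_feat - content_feat) ** 2)

def style_loss(gen_feat, style_feat):
    """Mean squared difference of Gram matrices (spatial layout removed)."""
    return np.mean((gram_matrix(gen_feat) - gram_matrix(style_feat)) ** 2)

# Random arrays stand in for VGG feature maps of three images.
rng = np.random.default_rng(0)
content = rng.standard_normal((8, 16, 16))
style = rng.standard_normal((8, 16, 16))
generated = rng.standard_normal((8, 16, 16))

# The combined objective the generated image would be optimised against;
# alpha and beta trade off content fidelity against style (arbitrary values).
alpha, beta = 1.0, 1e3
total = alpha * content_loss(generated, content) + beta * style_loss(generated, style)
```

In the full method the style loss is summed over several network layers, and the ratio beta/alpha controls how strongly the style's textures dominate the content's spatial structure.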

Citations

Generating Stylistic Images by Extending Neural Style Transfer Method
The concept of using a convolutional neural network (ConvNet or CNN) to individually separate and recombine the style and content of arbitrary images to generate perceptually striking “art” is introduced.
Deep Convolutional Nets Learning Classification for Artistic Style Transfer
This paper uses the Visual Geometry Group (VGG16) neural network to replicate this task performed by humans, and carries out implementations in a feature space representing the higher layers of the content image.
Fast Patch-based Style Transfer of Arbitrary Style
A simpler optimization objective based on local matching that combines the content structure and style textures in a single layer of the pretrained network is proposed that has desirable properties such as a simpler optimization landscape, intuitive parameter tuning, and consistent frame-by-frame performance on video.
A temporally coherent neural algorithm for artistic style transfer
A new artificial system based on an existing neural style transfer method which creates artistically stylized animations that simultaneously reproduce both the motion of the original videos that they are derived from and the unique style of a given artistic work.
Fashioning with Networks: Neural Style Transfer to Design Clothes
The neural style transfer algorithm is applied to fashion to synthesize new custom clothes based on a user's preference, learning the user's fashion choices from a limited set of clothes from their closet.
Deep Learning for Anime Style Transfer
A novel anime style transfer algorithm using a deep neural network, which treats foreground and background differently, and can transfer style to video given a style image, combining optical flow to ensure frame coherence.
Recognizing Art Style Automatically in Painting with Deep Learning
The use of deep residual networks is investigated to solve the problem of detecting the artistic style of a painting, outperforming existing approaches to reach an accuracy of 62% on the Wikipaintings dataset (for 25 different styles).
From Pigments to Pixels: A Comparison of Human and AI Painting
From entertainment to medicine and engineering, artificial intelligence (AI) is now being used in a wide range of fields, yet the extent to which AI can be effectively applied to the creative arts…
Projecting emotions from artworks to maps using neural style transfer
The results confirmed that emotional descriptions remain the same before and after neural style transfer; artworks with a variety of line, point, and surface depictions were the most suitable algorithm inputs and achieved better visual results in representing the map content.
Artificial Intelligence Artistic Painting Mirror as Interactive Art Using Deep Neural Networks
This paper reports a summary of the project named “Basic research on interactive art using deep learning to create color expression,” which has been adopted as a private university research branding project.
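Several of the papers above (notably the fast patch-based transfer work) replace global Gram-matrix statistics with local matching: each patch of the content representation is swapped for its nearest patch from the style representation. The sketch below is a toy single-channel version of that idea under stated assumptions; the actual methods operate on multi-channel CNN feature maps, and all function names here are hypothetical.

```python
import numpy as np

def extract_patches(img, size=3):
    """All overlapping size x size patches of a 2-D array, flattened to rows."""
    h, w = img.shape
    patches = [img[i:i + size, j:j + size].ravel()
               for i in range(h - size + 1)
               for j in range(w - size + 1)]
    return np.array(patches)

def style_swap(content, style, size=3):
    """Replace each content patch with its best-matching style patch.

    Matching uses cosine similarity (normalised cross-correlation);
    overlapping results are averaged back into the output array.
    """
    c_patches = extract_patches(content, size)
    s_patches = extract_patches(style, size)
    s_norm = s_patches / (np.linalg.norm(s_patches, axis=1, keepdims=True) + 1e-8)

    out = np.zeros_like(content, dtype=float)
    counts = np.zeros_like(content, dtype=float)
    h, w = content.shape
    idx = 0
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            # nearest style patch under cosine similarity
            best = np.argmax(s_norm @ c_patches[idx])
            out[i:i + size, j:j + size] += s_patches[best].reshape(size, size)
            counts[i:i + size, j:j + size] += 1.0
            idx += 1
    return out / counts
```

Because the objective is a single nearest-neighbour match per patch rather than a global optimisation, this formulation has the simpler landscape and frame-to-frame consistency the patch-based paper highlights.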

References
Fast Texture Transfer
It is demonstrated how the algorithm for texture transfer between images can leverage self-similarity of complex images to increase resolution of some types of images and to create novel, artistic looking images from photographs without any prior artistic source.
Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks
A new parametric texture model based on the powerful feature spaces of convolutional neural networks optimised for object recognition is introduced and it is established that constraining a spatial summary statistic over feature maps suffices to synthesise high-quality natural textures.
Texture Synthesis Using Convolutional Neural Networks
A new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition is introduced, showing that across layers the texture representations increasingly capture the statistical properties of natural images while making object information more and more explicit.
Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition
These evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task and propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity.
Understanding deep image representations by inverting them
Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited.
Recognizing Image Style
An approach to predicting the style of images, with a thorough evaluation of different image features for these tasks, finds that features learned in a multi-layer network generally perform best -- even when trained with object class (not style) labels.
Feature Guided Texture Synthesis (FGTS) for artistic style transfer
A novel Feature Guided Texture Synthesis (FGTS) algorithm for artistic style transfer is proposed and compared with existing example-based methods, the content of a source image is better defined in FGTS with a feature field generated from the source image.
State of the ‘Art’: A Taxonomy of Artistic Stylization Techniques for Images and Video
This paper surveys the field of non-photorealistic rendering (NPR), focusing on techniques for transforming 2D input (images and video) into artistically stylized renderings. We first present a…
Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet
This work presents a novel way of reusing existing neural networks that have been pretrained on the task of object recognition in models of fixation prediction that significantly outperforms all state-of-the-art models on the MIT Saliency Benchmark.
Image analogies
This paper describes a new framework for processing images by example, called “image analogies,” based on a simple multi-scale autoregression, inspired primarily by recent results in texture synthesis.