Image Style Transfer Using Convolutional Neural Networks

Leon A. Gatys, Alexander S. Ecker, Matthias Bethge. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Rendering the semantic content of an image in different styles is a difficult image processing task. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high-level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images.
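The separation the abstract describes is achieved by matching CNN feature responses for content and feature correlations (Gram matrices) for style. A minimal NumPy sketch of the two per-layer losses, with a hypothetical feature map of shape (channels, height, width) standing in for a real CNN activation:

```python
import numpy as np

def content_loss(F, P):
    """Squared-error distance between feature maps of the generated
    image (F) and the content image (P) at one CNN layer."""
    return 0.5 * np.sum((F - P) ** 2)

def gram_matrix(F):
    """Feature correlations: for F of shape (C, H, W), return the
    C x C matrix of inner products between vectorised feature maps."""
    C, H, W = F.shape
    Fm = F.reshape(C, H * W)
    return Fm @ Fm.T

def style_loss_layer(F, A):
    """Normalised squared difference between the Gram matrices of the
    generated image (F) and the style image (A) at one layer."""
    C, H, W = F.shape
    G, S = gram_matrix(F), gram_matrix(A)
    return np.sum((G - S) ** 2) / (4.0 * C**2 * (H * W) ** 2)
```

In the full method these losses are summed over several VGG layers and minimised by gradient descent on the pixels of the generated image; the sketch above shows only the loss terms themselves.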


Image Style Transfer with Multi-target Loss for IoT Applications

  • Cui Wang, Mingxing He
  • Computer Science
    2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN)
  • 2018
This paper introduces an artificial system to separate and recombine the content and style of arbitrary images, providing a neural algorithm for the creation of artistic images.

Image Style Transfer with Feature Extraction Algorithm using Deep Learning

  • Yuan Liu, F. E. Munsayac, N. Bugtai, R. Baldovino
  • Computer Science, Art
    2021 IEEE 13th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM)
  • 2021
A new deep-learning-based algorithm for image style transfer is introduced, which merges a content image and a style image to obtain the stylized result.

Automatic semantic style transfer using deep convolutional neural networks and soft masks

This paper proposes a novel method based on automatically segmenting the objects and extracting their soft semantic masks from the style and content images, in order to preserve the structure of the content image while having the style transferred.

Laplacian-Steered Neural Style Transfer

By incorporating the Laplacian loss, a new optimization objective for neural style transfer named Lapstyle is obtained, which will produce a stylized image that better preserves the detail structures of the content image and eliminates the artifacts.
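The Laplacian loss penalises differences between edge/detail responses of the content image and the stylized result. A rough NumPy sketch using the standard 3x3 discrete Laplacian kernel (the paper's exact filtering and pooling details may differ; this only illustrates the idea):

```python
import numpy as np

# Standard 3x3 discrete Laplacian kernel (second-derivative detector).
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian(img):
    """Valid-mode 2-D convolution of a grayscale image with the
    Laplacian kernel, responding to edges and fine detail."""
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * LAPLACIAN)
    return out

def laplacian_loss(content, stylized):
    """Squared difference between the Laplacians of the content image
    and the stylized image, encouraging detail preservation."""
    return np.sum((laplacian(content) - laplacian(stylized)) ** 2)
```

Adding this term to the usual content and style losses steers optimisation toward stylized images whose detail structure matches the content photo.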

An Exploration of Style Transfer Using Deep Neural Networks

This thesis presents the implementation and analysis of several techniques for performing artistic style transfer using a Convolutional Neural Network architecture trained for large-scale image recognition tasks.

Style Transfer with Content Preservation from Multiple Images

This work proposes a framework based on neural patch matching that combines content structure and style textures in a fusion layer of the network, and is capable of extracting the style from a group of images, such as the paintings of a specific painter.

Automated Deep Photo Style Transfer

An automated segmentation process is presented, consisting of a neural-network-based segmentation method followed by a semantic grouping step; it is completely independent of any user interaction, which allows for new applications.

Improving Semantic Style Transfer Using Guided Gram Matrices

This work investigates semantic style transfer for content images with more than two semantic regions by combining guided Gram matrices with gradient capping and multi-scale representations, which simplifies parameter tuning, improves the style transfer results, and is faster than current semantic methods.
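Guided Gram matrices weight the feature maps by a per-region guidance mask before computing correlations, so each semantic region gets its own style statistic. A toy NumPy sketch (mask shapes and the absence of normalisation here are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def guided_gram(F, mask):
    """Gram matrix of features F (shape C, H, W) restricted to one
    semantic region given by mask (shape H, W, values in [0, 1])."""
    C, H, W = F.shape
    Fg = (F * mask).reshape(C, H * W)  # mask broadcasts over channels
    return Fg @ Fg.T

def guided_style_loss(F, A, masks_f, masks_a):
    """Sum of per-region Gram losses; masks_f / masks_a are lists of
    guidance masks for the generated (F) and style (A) feature maps."""
    loss = 0.0
    for mf, ma in zip(masks_f, masks_a):
        G, S = guided_gram(F, mf), guided_gram(A, ma)
        loss += np.sum((G - S) ** 2)
    return loss
```

With a single all-ones mask this reduces to the ordinary Gram-matrix style loss; separate masks per region let, say, sky and building textures be transferred independently.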

Image Neural Style Transfer With Preserving the Salient Regions

By adding a region loss computed from a localization network, the synthesized image keeps the main salient regions largely consistent with those of the original content image, which helps saliency-based tasks such as object localization and classification.

Unsupervised Image Decomposition in Vector Layers

This paper proposes a new deep image reconstruction paradigm where the outputs are composed from simple layers, defined by their color and a vector transparency mask, which presents a number of advantages compared to the commonly used convolutional network architectures.

References

Texture Synthesis Using Convolutional Neural Networks

A new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition is introduced, showing that across layers the texture representations increasingly capture the statistical properties of natural images while making object information more and more explicit.

Understanding deep image representations by inverting them

Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited.

Fast Texture Transfer

It is demonstrated how the algorithm for texture transfer between images can leverage self-similarity of complex images to increase resolution of some types of images and to create novel, artistic looking images from photographs without any prior artistic source.

Recognizing Image Style

An approach to predicting the style of images, with a thorough evaluation of different image features for this task, finding that features learned in a multi-layer network generally perform best, even when trained with object class (not style) labels.

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps

This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets), and establishes the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks.

Image quilting for texture synthesis and transfer

This work uses quilting as a fast and very simple texture synthesis algorithm which produces surprisingly good results for a wide range of textures and extends the algorithm to perform texture transfer — rendering an object with a texture taken from a different object.

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.

Deep filter banks for texture recognition and segmentation

This work proposes a new texture descriptor, FV-CNN, obtained by Fisher Vector pooling of a Convolutional Neural Network (CNN) filter bank, which substantially improves the state-of-the-art in texture, material and scene recognition.

Image analogies

This paper describes a new framework for processing images by example, called “image analogies,” based on a simple multi-scale autoregression, inspired primarily by recent results in texture synthesis.

Fully convolutional networks for semantic segmentation

The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.