Corpus ID: 208075894

Multiple Style-Transfer in Real-Time

Authors: Michael C. Maring, Kaustav Chakraborty
Style transfer aims to combine the content of one image with the artistic style of another. It was discovered that lower levels of convolutional networks capture style information, while higher levels capture content information. The original style transfer formulation used a weighted combination of VGG-16 layer activations to achieve this goal. Later, this was accomplished in real-time using a feed-forward network to learn the optimal combination of style and content features from the…
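The style representation mentioned in the abstract is commonly computed from Gram matrices of layer activations. A minimal numpy sketch of that idea (the feature shapes and layer choice here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: channel-wise correlations
    that capture style while discarding spatial arrangement (content)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feats_a, feats_b):
    """Mean squared difference of Gram matrices, summed over the
    chosen network layers."""
    return sum(np.mean((gram_matrix(a) - gram_matrix(b)) ** 2)
               for a, b in zip(feats_a, feats_b))
```

In the original formulation this loss would be evaluated on activations from several VGG layers and weighted against a content loss computed on higher-layer activations.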


Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization
This paper presents a simple yet effective approach that, for the first time, enables arbitrary style transfer in real-time, comparable in speed to the fastest existing approaches, without the restriction to a pre-defined set of styles.
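The core operation of this approach, adaptive instance normalization (AdaIN), aligns the per-channel statistics of the content features to those of the style features. A rough numpy sketch (array shapes are illustrative assumptions):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization: shift and scale each channel of
    the content features so its mean and std match the style features.
    content, style: feature maps of shape (C, H, W)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean
```

Because the style enters only through these statistics, no per-style training is needed, which is what removes the restriction to a pre-defined style set.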
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
This work considers image transformation problems and proposes the use of perceptual loss functions for training feed-forward networks for image transformation tasks. It shows results on image style transfer, where a feed-forward network is trained to solve, in real-time, the optimization problem proposed by Gatys et al.
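A perceptual (feature-reconstruction) loss compares activations of a fixed pretrained network rather than raw pixels. A minimal sketch, assuming the feature maps have already been extracted:

```python
import numpy as np

def perceptual_loss(pred_feats, target_feats):
    """Feature-reconstruction loss: mean squared error between
    activations of a fixed pretrained network, summed over the
    selected layers."""
    return sum(np.mean((p - t) ** 2)
               for p, t in zip(pred_feats, target_feats))
```

During training, this loss supervises the feed-forward transformation network; the pretrained feature extractor itself is never updated.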
Exploring the structure of a real-time, arbitrary neural artistic stylization network
A method which combines the flexibility of the neural algorithm of artistic style with the speed of fast style transfer networks, allowing real-time stylization using any content/style image pair; the network is successfully trained on a corpus of roughly 80,000 paintings.
A Neural Algorithm of Artistic Style
This work introduces an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality and offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.
A Learned Representation For Artistic Style
It is demonstrated that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space and permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings.
U-Net: Convolutional Networks for Biomedical Image Segmentation
It is shown that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Conditional Random Fields as Recurrent Neural Networks
A new form of convolutional neural network is introduced that combines the strengths of convolutional neural networks (CNNs) with CRF-based probabilistic graphical modelling, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.
Instance Normalization: The Missing Ingredient for Fast Stylization
A small change in the stylization architecture results in a significant qualitative improvement in the generated images, and can be used to train high-performance architectures for real-time image generation.
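The "small change" is replacing batch normalization with instance normalization, which normalizes each channel of each image over its own spatial dimensions. A numpy sketch (the scalar affine parameters here are a simplifying assumption; in practice they are learned per channel):

```python
import numpy as np

def instance_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Instance normalization: normalize each (sample, channel) slice
    over its spatial dimensions (H, W), unlike batch norm, which pools
    statistics across the batch. x: array of shape (N, C, H, W)."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    std = x.std(axis=(2, 3), keepdims=True)
    return gamma * (x - mean) / (std + eps) + beta
```

For stylization this matters because the contrast of each individual image is normalized away, so the stylization network need not learn to handle it.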
Deep Residual Learning for Image Recognition
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
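The residual idea is that each block learns only a residual F(x) which is added back to an identity shortcut, y = x + F(x). A toy numpy sketch with a two-layer fully-connected branch (the layer shapes and use of dense rather than convolutional layers are illustrative assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = x + W2 @ relu(W1 @ x): the branch learns the residual F(x)
    and the identity shortcut adds x back unchanged, so with zero
    weights the block is exactly the identity."""
    return x + w2 @ relu(w1 @ x)
```

Because the block defaults to the identity mapping, stacking many of them does not degrade the signal, which is what makes very deep networks easier to optimize.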
ProdSumNet: reducing model parameters in deep neural networks via product-of-sums matrix decompositions
  • C. Wu
  • Computer Science, Mathematics
  • ArXiv
  • 2018
It is shown that good accuracy on MNIST and Fashion MNIST can be obtained using a relatively small number of trainable parameters, and an approach in the transform domain that obviates the need for convolutional layers is considered.