Texture Networks: Feed-forward Synthesis of Textures and Stylized Images

@inproceedings{Ulyanov2016TextureNF,
  title={Texture Networks: Feed-forward Synthesis of Textures and Stylized Images},
  author={Dmitry Ulyanov and Vadim Lebedev and Andrea Vedaldi and Victor S. Lempitsky},
  booktitle={ICML},
  year={2016}
}
Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods require a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer… 
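
The paper's core idea is easy to state concretely: pay for optimization once, at training time, then sample with a single forward pass. Below is a minimal sketch, assuming PyTorch and a pre-trained torchvision VGG-16 as the fixed descriptor network; the layer choice, generator architecture, and hyper-parameters are illustrative stand-ins for the paper's actual multi-scale design, and input normalization is omitted for brevity.

import torch
import torch.nn as nn
import torchvision

# Fixed descriptor network: pre-trained VGG-16 features define the texture
# statistics. Parameters are frozen; gradients flow only to the generator.
vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)
TEXTURE_LAYERS = {3, 8, 15, 22}  # relu1_2, relu2_2, relu3_3, relu4_3

def gram_matrices(x):
    """Gram matrix G = F F^T / (C*H*W) at each chosen descriptor layer."""
    grams = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in TEXTURE_LAYERS:
            b, c, h, w = x.shape
            f = x.reshape(b, c, h * w)
            grams.append(f @ f.transpose(1, 2) / (c * h * w))
        if i == max(TEXTURE_LAYERS):
            break
    return grams

# A deliberately tiny fully convolutional generator: noise in, texture out.
generator = nn.Sequential(
    nn.Conv2d(8, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)

exemplar = torch.rand(1, 3, 256, 256)  # stand-in for one loaded texture image
target = [g.detach() for g in gram_matrices(exemplar)]

# Learning stage: slow, but paid once per exemplar.
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
for step in range(2000):
    z = torch.rand(4, 8, 256, 256)  # fresh noise every iteration
    loss = sum(((g - t) ** 2).sum()
               for g, t in zip(gram_matrices(generator(z)), target))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Synthesis stage: fast; any noise size yields a texture of matching size.
with torch.no_grad():
    sample = generator(torch.rand(1, 8, 512, 512))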

Texture Attribute Synthesis and Transfer Using Feed-Forward CNNs

This work learns feed-forward image generators that correspond to specifications of styles and textures in terms of high-level describable attributes such as 'striped', 'dotted', or 'veined', allowing for real-time video processing.

Diversified Texture Synthesis with Feed-Forward Networks

A deep generative feed-forward network is proposed that enables efficient synthesis of multiple textures within a single network and meaningful interpolation between them, and a suite of important techniques is introduced to achieve better convergence and diversity.

Fast Texture Synthesis via Pseudo Optimizer

  • Wu Shi, Y. Qiao
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
This work proposes a new efficient method that simulates the optimization process while retaining most of its properties, and can synthesize images with better quality and diversity than other fast synthesis methods.

TextureGAN: Controlling Deep Image Synthesis with Texture Patches

This paper is the first to examine texture control in deep image synthesis guided by sketch, color, and texture, and develops a local texture loss, in addition to adversarial and content losses, to train the generative network.

GramGAN: Deep 3D Texture Synthesis From 2D Exemplars

A novel texture synthesis framework is presented that enables the generation of infinite, high-quality 3D textures given a 2D exemplar image, along with a novel loss function that combines ideas from both style transfer and generative adversarial networks.

Multimodal Transfer: A Hierarchical Deep Convolutional Neural Network for Fast Artistic Style Transfer

A multimodal convolutional neural network is proposed that takes into consideration faithful representations of both color and luminance channels and performs stylization hierarchically with multiple losses at increasing scales; it achieves style transfer in near real time by performing the much more sophisticated training offline.

Two-Stream Convolutional Networks for Dynamic Texture Synthesis

A two-stream model for dynamic texture synthesis is presented, based on pre-trained convolutional networks that target two independent tasks, object recognition and optical flow prediction; it generates novel, high-quality samples that match both the frame-wise appearance and the temporal evolution of the input texture.

Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses

This paper first gives a mathematical explanation of the source of instabilities in many previous approaches, and then mitigates these instabilities by using histogram losses to synthesize textures that better statistically match the exemplar.
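
To make the idea concrete, here is a minimal sketch of a histogram-style loss on one layer's activations, assuming PyTorch; matching sorted values is a quantile-matching stand-in for the paper's exact histogram-remapping procedure, used here only because it is differentiable and short.

import torch

def histogram_loss(feat, target_feat):
    # feat, target_feat: (C, N) activations of one descriptor layer, with
    # the same number N of spatial positions. Sorting each channel and
    # comparing pointwise penalizes mismatched marginal distributions,
    # which Gram matrices alone do not constrain.
    f, _ = torch.sort(feat, dim=1)
    t, _ = torch.sort(target_feat, dim=1)
    return ((f - t) ** 2).mean()

loss = histogram_loss(torch.randn(64, 1024), torch.randn(64, 1024))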

Neural FFTs for Universal Texture Image Synthesis

It is found that texture synthesis can be viewed as (local) upsampling in the Fast Fourier Transform (FFT) domain; however, the FFT of natural images exhibits a high dynamic range and lacks local correlations.

Texture Synthesis with Spatial Generative Adversarial Networks

This is the first successful, completely data-driven texture synthesis method based on GANs, and the following features make it a state-of-the-art algorithm for texture synthesis: high image quality of the generated textures, very high scalability with respect to the output texture size, and fast, real-time forward generation.
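
The scalability property follows directly from the fully convolutional design: the generator consumes a spatial grid of noise, so a larger grid yields a proportionally larger texture. A minimal sketch, assuming PyTorch, with hypothetical layer sizes:

import torch
import torch.nn as nn

# Each transposed convolution (kernel 4, stride 2, padding 1) doubles the
# spatial size, so an m x m noise grid becomes an 8m x 8m texture.
generator = nn.Sequential(
    nn.ConvTranspose2d(16, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
)

small = generator(torch.randn(1, 16, 8, 8))    # -> (1, 3, 64, 64)
large = generator(torch.randn(1, 16, 64, 64))  # -> (1, 3, 512, 512)
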
...

References


A Neural Algorithm of Artistic Style

This work introduces an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality and offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.
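
For reference, the style representation introduced here, which most of the texture losses above build on, is the Gram matrix of feature activations. In the paper's notation, with $F^{\ell} \in \mathbb{R}^{N_{\ell} \times M_{\ell}}$ the vectorized activations of layer $\ell$ ($N_{\ell}$ channels, $M_{\ell}$ spatial positions) and $\hat{G}^{\ell}$ the Gram matrices of the style exemplar:

  G^{\ell}_{ij} = \sum_{k} F^{\ell}_{ik} F^{\ell}_{jk},
  \qquad
  \mathcal{L}_{\mathrm{style}} = \sum_{\ell} \frac{w_{\ell}}{4 N_{\ell}^{2} M_{\ell}^{2}} \sum_{i,j} \bigl( G^{\ell}_{ij} - \hat{G}^{\ell}_{ij} \bigr)^{2},

where the $w_{\ell}$ are per-layer weights.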

Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks

A generative parametric model capable of producing high-quality samples of natural images is introduced, using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.

Understanding deep image representations by inverting them

Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited.

Return of the Devil in the Details: Delving Deep into Convolutional Nets

It is shown that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, resulting in an analogous performance boost, and that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance.

Learning to generate chairs with convolutional neural networks

This work trains a generative convolutional neural network that is able to generate images of objects given object type, viewpoint, and color, and shows that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task.

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.

Generative Moment Matching Networks

This work formulates a method that generates an independent sample via a single feed-forward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks, using maximum mean discrepancy (MMD) to learn to generate codes that can then be decoded to produce samples.

Fully convolutional networks for semantic segmentation

The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.

A Parametric Texture Model Based on Joint Statistics of Complex Wavelet Coefficients

A universal statistical model for texture images in the context of an overcomplete complex wavelet transform is presented, demonstrating the necessity of subgroups of the parameter set by showing examples of texture synthesis that fail when those parameters are removed from the set.