Corpus ID: 16728483

Texture Networks: Feed-forward Synthesis of Textures and Stylized Images

@inproceedings{Ulyanov2016TextureNF,
  title={Texture Networks: Feed-forward Synthesis of Textures and Stylized Images},
  author={Dmitry Ulyanov and V. Lebedev and A. Vedaldi and V. Lempitsky},
  booktitle={ICML},
  year={2016}
}
Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods require a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer…
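The core idea lends itself to a short sketch. The following PyTorch fragment is a minimal illustration, not the paper's multi-scale architecture: a tiny random convolutional network stands in for the pretrained VGG descriptor the authors actually use, the exemplar is a random tensor, and the names `descriptor` and `generator` are illustrative. The generator is trained once so that the Gram statistics of its outputs match those of the single exemplar; synthesis afterwards is a single forward pass at any resolution.

```python
# Minimal sketch of feed-forward texture synthesis trained against a
# Gram-matrix loss (after Gatys et al.). A small random CNN stands in for
# the pretrained VGG descriptor so the example is self-contained.
import torch
import torch.nn as nn

def gram(feats):
    # feats: (B, C, H, W) -> (B, C, C) channel co-occurrence statistics
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# Stand-in "descriptor" network (the paper uses pretrained VGG layers).
descriptor = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
for p in descriptor.parameters():
    p.requires_grad_(False)

# Compact, fully convolutional feed-forward generator: noise in, texture out.
generator = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)

exemplar = torch.rand(1, 3, 64, 64)            # stands in for the texture image
target = gram(descriptor(exemplar)).detach()

opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
for step in range(200):                        # the learning stage, done once
    z = torch.rand(4, 8, 64, 64)               # fresh noise each step
    loss = ((gram(descriptor(generator(z))) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, synthesis is a single forward pass at arbitrary size:
sample = generator(torch.rand(1, 8, 256, 256))
```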
Citations

Improved Texture Networks: Maximizing Quality and Diversity in Feed-Forward Stylization and Texture Synthesis
TLDR: This work introduces an instance normalization module to replace batch normalization, with significant improvements to the quality of image stylization, and improves diversity through a new learning formulation that encourages generators to sample unbiasedly from the Julesz texture ensemble.
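For reference, instance normalization computes per-sample, per-channel statistics over the spatial dimensions, so the result for one image does not depend on the rest of the batch. A minimal NumPy sketch (gamma and beta are the learned affine parameters, here just placeholders):

```python
# Instance normalization: normalize each (sample, channel) pair over its
# spatial extent, unlike batch norm, which pools statistics across the batch.
import numpy as np

def instance_norm(x, gamma, beta, eps=1e-5):
    # x: (N, C, H, W); statistics over the spatial axes of each instance
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.random.randn(2, 16, 32, 32)
y = instance_norm(x, gamma=np.ones((1, 16, 1, 1)), beta=np.zeros((1, 16, 1, 1)))
```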
Texture Attribute Synthesis and Transfer Using Feed-Forward CNNs
TLDR: This work learns feed-forward image generators that correspond to specifications of styles and textures in terms of high-level describable attributes such as 'striped', 'dotted', or 'veined', allowing for real-time video processing.
Diversified Texture Synthesis with Feed-Forward Networks
TLDR: A deep generative feed-forward network is proposed that enables efficient synthesis of multiple textures within a single network, along with meaningful interpolation between them; a suite of important techniques is introduced to achieve better convergence and diversity.
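One simple way to get multiple textures out of a single feed-forward generator is to condition it on a texture identity; interpolating between two identity embeddings then interpolates between textures. The PyTorch sketch below is an assumption-laden toy, not the paper's actual selection mechanism, and the class name `MultiTextureGenerator` is made up:

```python
# Toy conditioning scheme: embed a texture index and concatenate the code
# with the noise input, so one network can synthesize several textures.
import torch
import torch.nn as nn

class MultiTextureGenerator(nn.Module):
    def __init__(self, num_textures=10, embed_dim=8):
        super().__init__()
        self.embed = nn.Embedding(num_textures, embed_dim)
        self.net = nn.Sequential(
            nn.Conv2d(8 + embed_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, noise, texture_id):
        n, _, h, w = noise.shape
        # broadcast the texture code over the spatial grid
        code = self.embed(texture_id).reshape(n, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([noise, code], dim=1))

g = MultiTextureGenerator()
img = g(torch.rand(2, 8, 64, 64), torch.tensor([0, 3]))  # two textures, one net
```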
Fast Texture Synthesis via Pseudo Optimizer
Wu Shi, Y. Qiao • Computer Science • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) • 2020
TLDR: This work proposes a new, efficient method that aims to simulate the optimization process while retaining most of its properties, and can synthesize images with better quality and diversity than other fast synthesis methods.
TextureGAN: Controlling Deep Image Synthesis with Texture Patches
TLDR: This paper is the first to examine texture control in deep image synthesis guided by sketch, color, and texture, and develops a local texture loss, in addition to adversarial and content losses, to train the generative network.
GramGAN: Deep 3D Texture Synthesis From 2D Exemplars
TLDR: A novel texture synthesis framework is presented that enables the generation of infinite, high-quality 3D textures given a 2D exemplar image, together with a novel loss function that combines ideas from both style transfer and generative adversarial networks.
Multimodal Transfer: A Hierarchical Deep Convolutional Neural Network for Fast Artistic Style Transfer
TLDR: A multimodal convolutional neural network is proposed that takes into account faithful representations of both color and luminance channels and performs stylization hierarchically, with multiple losses of increasing scale; by shifting the sophisticated training offline, it can perform style transfer in nearly real time.
Two-Stream Convolutional Networks for Dynamic Texture Synthesis
TLDR: A two-stream model for dynamic texture synthesis is built on pre-trained convolutional networks that target two independent tasks, object recognition and optical flow prediction; it generates novel, high-quality samples that match both the frame-wise appearance and the temporal evolution of an input texture.
Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses
TLDR: This paper first gives a mathematical explanation of the source of instabilities in many previous approaches, then mitigates these instabilities with histogram losses that synthesize textures which better match the statistics of the exemplar.
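A histogram loss penalizes a mismatch between the distributions of feature activations rather than just their Gram correlations. One cheap, runnable proxy is to compare sorted activations per channel (a 1D optimal-transport view); the paper's actual loss is built on explicit histogram matching, so treat this NumPy fragment as an illustration only:

```python
# Rough proxy for a histogram loss: sorting both sets of activations and
# comparing them pointwise compares the two empirical distributions
# channel by channel.
import numpy as np

def histogram_mismatch(synth_feats, exemplar_feats):
    # flatten each channel (C, H, W) -> (C, H*W) and compare sorted values
    s = np.sort(synth_feats.reshape(synth_feats.shape[0], -1), axis=1)
    e = np.sort(exemplar_feats.reshape(exemplar_feats.shape[0], -1), axis=1)
    return ((s - e) ** 2).mean()

synth = np.random.randn(64, 32, 32)      # synthesized feature maps
exemplar = np.random.rand(64, 32, 32)    # exemplar feature maps
print(histogram_mismatch(synth, exemplar))
```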
Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks
TLDR: Markovian Generative Adversarial Networks (MGANs) are proposed, a method for training generative networks for efficient texture synthesis that surpasses previous neural texture synthesizers by a significant margin and applies to texture synthesis, style transfer, and video stylization.
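The "Markovian" ingredient is that the adversarial game is played over local patches rather than whole images, so the discriminator is fully convolutional and emits one real/fake score per patch. The toy PyTorch sketch below works directly on pixels, whereas MGANs operate on VGG feature patches:

```python
# Patch-level discriminator: a fully convolutional net whose output is a
# grid of logits, one per receptive-field patch, so adversarial training
# matches the statistics of local texture patches.
import torch
import torch.nn as nn

patch_discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 4, stride=1, padding=1),  # one logit per patch
)

image = torch.rand(1, 3, 128, 128)
score_map = patch_discriminator(image)  # (1, 1, 31, 31): grid of patch scores
print(score_map.shape)
```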

References

Showing 1-10 of 20 references
A Neural Algorithm of Artistic Style
TLDR: This work introduces an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality and offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.
Texture Synthesis Using Convolutional Neural Networks
TLDR: A new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition is introduced, showing that across layers the texture representations increasingly capture the statistical properties of natural images while making object information more and more explicit.
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
TLDR: A generative parametric model capable of producing high-quality samples of natural images, using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
Understanding deep image representations by inverting them
Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of…
Return of the Devil in the Details: Delving Deep into Convolutional Nets
TLDR: It is shown that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, yielding an analogous performance boost, and that the dimensionality of the CNN output layer can be reduced significantly without adversely affecting performance.
Learning to generate chairs with convolutional neural networks
TLDR: This work trains a generative convolutional neural network that generates images of objects given object type, viewpoint, and color, and shows that the network can find correspondences between different chairs in the dataset, outperforming existing approaches on this task.
Very Deep Convolutional Networks for Large-Scale Image Recognition
TLDR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Generative Moment Matching Networks
TLDR: This work formulates a method that generates an independent sample via a single feed-forward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks, using MMD to learn to generate codes that can then be decoded to produce samples.
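Maximum mean discrepancy (MMD) with a Gaussian kernel compares all pairwise similarities within and across the generated and data samples, and vanishes in expectation exactly when the two distributions match. A minimal NumPy sketch of the (biased) estimator:

```python
# Biased MMD^2 estimator with a Gaussian kernel: small values indicate the
# generated samples and the data samples are distributed similarly.
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # a: (n, dim), b: (m, dim) -> (n, m) kernel matrix
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # x: generated samples, y: data samples, both (n, dim)
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean())

x = np.random.randn(100, 2)          # "generated"
y = np.random.randn(100, 2) + 0.5    # "data"
print(mmd2(x, y))
```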
Fully Convolutional Networks for Semantic Segmentation
TLDR: It is shown that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, improve on the previous best result in semantic segmentation.
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TLDR: This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.