Corpus ID: 235652406

Single Image Texture Translation for Data Augmentation

Boyi Li, Yin Cui, Tsung-Yi Lin, Serge J. Belongie
Recent advances in image synthesis enable one to translate images by learning the mapping between a source domain and a target domain. Existing methods tend to learn the distributions by training a model on a variety of datasets, with results evaluated largely in a subjective manner. Relatively few works in this area, however, study the potential use of semantic image translation methods for image recognition tasks. In this paper, we explore the use of Single Image Texture Translation (SITT…


Contrastive Learning for Unpaired Image-to-Image Translation
The framework enables one-sided translation in the unpaired image-to-image translation setting, while improving quality and reducing training time, and can be extended to the training setting where each "domain" is only a single image.
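The patchwise contrastive objective behind this framework can be illustrated with a minimal InfoNCE sketch in plain NumPy. This is a simplified stand-in for illustration only: the function name is assumed, and raw vectors replace the encoder patch features the actual method uses.

```python
import numpy as np

def patch_nce_loss(query, positive, negatives, tau=0.07):
    """InfoNCE over patch features: pull the output patch (query) toward
    its corresponding input patch (positive) and away from other patches
    (negatives). All vectors are L2-normalized before comparison."""
    def norm(v):
        return v / np.linalg.norm(v)
    q, p = norm(query), norm(positive)
    negs = np.stack([norm(n) for n in negatives])
    # Cosine similarities scaled by temperature; positive sits at index 0.
    logits = np.concatenate(([q @ p], negs @ q)) / tau
    logits -= logits.max()  # numerical stability before softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))  # cross-entropy against the positive
```

A matching query/positive pair yields a near-zero loss, while an unrelated positive pushes the loss toward log of the number of candidates.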
Multimodal Unsupervised Image-to-Image Translation
A Multimodal Unsupervised Image-to-image Translation (MUNIT) framework that assumes that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties.
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
This work considers image transformation problems and proposes the use of perceptual loss functions for training feed-forward networks for image transformation tasks, showing results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real time.
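As a rough illustration of the idea, the sketch below computes a loss between feature maps rather than between pixels. A fixed bank of random 3x3 filters stands in for the pretrained VGG features the paper uses; the extractor and all shapes here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained network layer: a fixed bank of random 3x3
# filters (the paper instead uses feature maps from a pretrained VGG).
FILTERS = rng.standard_normal((8, 3, 3, 3))  # (num_filters, kh, kw, channels)

def features(img):
    """Valid 3x3 convolution of an (H, W, 3) image with each filter."""
    h, w, _ = img.shape
    out = np.empty((len(FILTERS), h - 2, w - 2))
    for k, f in enumerate(FILTERS):
        for i in range(h - 2):
            for j in range(w - 2):
                out[k, i, j] = np.sum(img[i:i + 3, j:j + 3, :] * f)
    return out

def perceptual_loss(img_a, img_b):
    """Mean squared error between feature maps, not between pixels."""
    return float(np.mean((features(img_a) - features(img_b)) ** 2))
```

Two images that differ pixel-wise but produce similar feature responses incur a small loss, which is the property that makes such losses useful for style transfer and super-resolution.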
TuiGAN: Learning Versatile Image-to-Image Translation with Two Unpaired Images
This paper proposes TuiGAN, a generative model that is trained on only two unpaired images and amounts to one-shot unsupervised learning that is capable of achieving comparable performance with the state-of-the-art UI2I models trained with sufficient data.
Image Style Transfer Using Convolutional Neural Networks
A Neural Algorithm of Artistic Style is introduced that can separate and recombine the image content and style of natural images and provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.
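The style representation at the core of this algorithm is the Gram matrix of a layer's feature maps; it can be sketched in a few lines of NumPy (the feature shapes and normalization here are illustrative assumptions):

```python
import numpy as np

def gram_matrix(feat):
    """Style representation from Gatys et al.: channel-wise feature
    correlations. feat has shape (channels, height, width)."""
    c, h, w = feat.shape
    flat = feat.reshape(c, h * w)   # each row: one channel's responses
    return flat @ flat.T / (h * w)  # (c, c) correlation matrix

def style_loss(feat_a, feat_b):
    """Squared Frobenius distance between the two Gram matrices."""
    g_a, g_b = gram_matrix(feat_a), gram_matrix(feat_b)
    return float(np.sum((g_a - g_b) ** 2))
```

Because the Gram matrix sums over spatial positions, it discards the arrangement of image content and keeps only texture statistics, which is what lets content and style be recombined.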
On Feature Normalization and Data Augmentation
This paper proposes Moment Exchange, an implicit data augmentation method that encourages recognition models to also utilize the moment information of learned features: it replaces the moments of the learned features of one training image with those of another, and interpolates the target labels to extract training signal from the moments.
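Assuming the summary above describes the method correctly, the moment swap can be sketched as follows; the shapes and the interpolation weight `lam` are illustrative, and the paper applies the exchange to intermediate network features rather than raw arrays.

```python
import numpy as np

def moment_exchange(feat_a, feat_b, label_a, label_b, lam=0.9):
    """Moment Exchange sketch: normalize A's features, then re-inject
    B's per-channel mean and std; labels are interpolated with weight lam.
    feat_* shape: (channels, height, width); labels are one-hot vectors."""
    eps = 1e-5
    mu_a = feat_a.mean(axis=(1, 2), keepdims=True)
    sd_a = feat_a.std(axis=(1, 2), keepdims=True)
    mu_b = feat_b.mean(axis=(1, 2), keepdims=True)
    sd_b = feat_b.std(axis=(1, 2), keepdims=True)
    mixed = (feat_a - mu_a) / (sd_a + eps) * sd_b + mu_b
    label = lam * label_a + (1 - lam) * label_b
    return mixed, label
```

After the exchange, the mixed features carry A's spatial structure but B's moments, so the interpolated label gives the model a training signal for the moment information itself.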
STaDA: Style Transfer as Data Augmentation
This work explores state-of-the-art neural style transfer algorithms and applies them as a data augmentation method on the Caltech 101 and Caltech 256 datasets, finding around a 2% improvement (from 83% to 85%) in image classification accuracy with VGG16, compared with traditional data augmentation strategies.
SinGAN: Learning a Generative Model From a Single Natural Image
We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is…
Diverse Image-to-Image Translation via Disentangled Representations
This work presents an approach based on disentangled representation for producing diverse outputs without paired training images, and proposes to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space.
Fixing the train-test resolution discrepancy
It is experimentally validated that, for a target test resolution, using a lower train resolution offers better classification at test time, and a simple yet effective and efficient strategy to optimize the classifier performance when the train and test resolutions differ is proposed.