Single Image Texture Translation for Data Augmentation
@article{Li2021SingleIT,
  title   = {Single Image Texture Translation for Data Augmentation},
  author  = {Boyi Li and Yin Cui and Tsung-Yi Lin and Serge J. Belongie},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2106.13804}
}
Recent advances in image synthesis enable one to translate images by learning the mapping between a source domain and a target domain. Existing methods tend to learn the distributions by training a model on a variety of datasets, with results evaluated largely in a subjective manner. Relatively few works in this area, however, study the potential use of semantic image translation methods for image recognition tasks. In this paper, we explore the use of Single Image Texture Translation (SITT…
References
Showing 1-10 of 57 references
Contrastive Learning for Unpaired Image-to-Image Translation
- Computer Science, ECCV, 2020
The framework enables one-sided translation in the unpaired image-to-image translation setting, while improving quality and reducing training time, and can be extended to the training setting where each "domain" is only a single image.
Multimodal Unsupervised Image-to-Image Translation
- Computer Science, ECCV, 2018
A Multimodal Unsupervised Image-to-image Translation (MUNIT) framework that assumes that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties.
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
- Computer Science, ECCV, 2016
This work considers image transformation problems and proposes the use of perceptual loss functions for training feed-forward networks on such tasks. It shows results on image style transfer, where a feed-forward network is trained to solve, in real time, the optimization problem proposed by Gatys et al.
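The loss described above compares images in the feature space of a fixed network rather than in pixel space. A minimal sketch in NumPy follows; the paper itself uses activations of a pretrained VGG-16, whereas `toy_features` and its random projection here are purely illustrative stand-ins:

```python
import numpy as np

def perceptual_loss(feature_fn, generated, target):
    """MSE between feature representations rather than raw pixels
    (sketch; the paper uses a pretrained VGG-16 as the loss network)."""
    f_gen = feature_fn(generated)
    f_tgt = feature_fn(target)
    return np.mean((f_gen - f_tgt) ** 2)

# Stand-in feature extractor: a fixed random linear projection with a
# ReLU nonlinearity (illustrative only; not a trained network).
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 3 * 16 * 16))

def toy_features(img_batch):
    flat = img_batch.reshape(img_batch.shape[0], -1)
    return np.maximum(flat @ W.T, 0.0)
```

With a real loss network, `feature_fn` would return intermediate activations, and the loss gradient would flow only through the generated image, keeping the loss network's weights fixed.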
TuiGAN: Learning Versatile Image-to-Image Translation with Two Unpaired Images
- Computer Science, ECCV, 2020
This paper proposes TuiGAN, a generative model that is trained on only two unpaired images and amounts to one-shot unsupervised learning, achieving performance comparable to state-of-the-art UI2I models trained with sufficient data.
Image Style Transfer Using Convolutional Neural Networks
- Computer Science, Art, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
A Neural Algorithm of Artistic Style is introduced that can separate and recombine the image content and style of natural images and provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.
On Feature Normalization and Data Augmentation
- Computer Science, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
This paper proposes Moment Exchange, an implicit data augmentation method that encourages recognition models to utilize feature-moment information: it replaces the moments of the learned features of one training image with those of another, and interpolates the target labels to extract training signal from the moments.
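The described procedure amounts to normalizing each example's feature map and re-injecting the spatial mean and standard deviation of a randomly paired example; a minimal NumPy sketch (function and argument names are illustrative, not the paper's code):

```python
import numpy as np

def moment_exchange(features, labels, rng=None):
    """Swap per-channel spatial moments between pairs of examples.

    features: (B, C, H, W) intermediate activations
    labels:   (B,) integer class labels
    Returns the moment-exchanged features plus both label sets, so the
    training loss can be interpolated between them (as in mixup).
    """
    rng = rng if rng is not None else np.random.default_rng()
    perm = rng.permutation(features.shape[0])
    mu = features.mean(axis=(2, 3), keepdims=True)
    sigma = features.std(axis=(2, 3), keepdims=True) + 1e-5
    normalized = (features - mu) / sigma
    # Inject the moments of a randomly paired example.
    mixed = normalized * sigma[perm] + mu[perm]
    return mixed, labels, labels[perm]
```

After the exchange, each feature map carries the content of one image but the per-channel moment statistics of another, which is the extra training signal the method exploits.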
STaDA: Style Transfer as Data Augmentation
- Computer Science, VISIGRAPP, 2019
This work explores state-of-the-art neural style transfer algorithms and applies them as a data augmentation method on the Caltech 101 and Caltech 256 datasets, finding roughly a 2% improvement (from 83% to 85%) in image classification accuracy with VGG16, compared with traditional data augmentation strategies.
SinGAN: Learning a Generative Model From a Single Natural Image
- Computer Science, 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is…
Diverse Image-to-Image Translation via Disentangled Representations
- Computer Science, ECCV, 2018
This work presents an approach based on disentangled representation for producing diverse outputs without paired training images, and proposes to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space.
Fixing the train-test resolution discrepancy
- Computer Science, NeurIPS, 2019
It is experimentally validated that, for a target test resolution, using a lower train resolution offers better classification at test time, and a simple yet effective and efficient strategy to optimize the classifier performance when the train and test resolutions differ is proposed.