Corpus ID: 236318316

Image-to-Image Translation with Low Resolution Conditioning

@article{Abid2021ImagetoImageTW,
  title={Image-to-Image Translation with Low Resolution Conditioning},
  author={Mohamed Abid and Ihsen Hedhli and Jean-François Lalonde and Christian Gagn{\'e}},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.11262}
}
Most image-to-image translation methods focus on learning mappings across domains with the assumption that images share content (e.g., pose) but have their own domain-specific information known as style. When conditioned on a target image, such methods aim to extract the style of the target and combine it with the content of the source image. In this work, we consider the scenario where the target image has a very low resolution. More specifically, our approach aims at transferring fine details…
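
The abstract is truncated here. As a hedged illustration of the setup it describes, the following PyTorch sketch conditions a toy generator on both a high-resolution source and a low-resolution target, and penalizes the downscaled output against the LR target. The TinyGenerator module, its layer sizes, and the single L1 consistency term are illustrative assumptions, not the authors' architecture or training objective.

    # Hypothetical sketch of low-resolution-conditioned translation (not the
    # authors' released code): the generator sees a high-resolution source and
    # a low-resolution target, and is trained so its output, once downscaled,
    # matches the LR target while fine details come from the source.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyGenerator(nn.Module):
        """Toy encoder-decoder; real models are far deeper."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )

        def forward(self, hr_source, lr_target):
            # Upsample the LR target and concatenate it with the HR source,
            # so the generator is conditioned on both.
            lr_up = F.interpolate(lr_target, size=hr_source.shape[-2:],
                                  mode='bilinear', align_corners=False)
            return self.net(torch.cat([hr_source, lr_up], dim=1))

    G = TinyGenerator()
    hr_source = torch.randn(1, 3, 128, 128)   # source face, full resolution
    lr_target = torch.randn(1, 3, 16, 16)     # target face, very low resolution

    out = G(hr_source, lr_target)
    # Consistency constraint: the output, downscaled to the target's size,
    # should reproduce the LR target. In practice an adversarial or perceptual
    # term would be added so the HR details borrowed from the source stay
    # realistic.
    lr_consistency = F.l1_loss(
        F.interpolate(out, size=lr_target.shape[-2:], mode='bilinear',
                      align_corners=False),
        lr_target)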

Citations

Identity-Guided Face Generation with Multi-modal Contour Conditions
  • Q. Bai, Weihao Xia, Fei Yin, Yujiu Yang
  • Computer Science
  • 2021
TLDR: Proposes a dual-encoder architecture in which an identity encoder extracts identity-related features while a main encoder captures rough contour information; fusing the two achieves identity-guided face generation conditioned on multi-modal contour images.

References

SHOWING 1-10 OF 45 REFERENCES
Conditional Image-to-Image Translation
TLDR: Twists two conditional translation models together for input combination and reconstruction while preserving domain-independent features, with experiments on translation between men's and women's faces and from edges to shoes and bags.
Multimodal Unsupervised Image-to-Image Translation
TLDR: Proposes the Multimodal Unsupervised Image-to-image Translation (MUNIT) framework, which assumes the image representation can be decomposed into a domain-invariant content code and a style code capturing domain-specific properties (a minimal sketch of this decomposition follows the reference list).
Unsupervised Image-to-Image Translation Networks
TLDR: Makes a shared latent-space assumption and proposes an unsupervised image-to-image translation framework based on Coupled GANs that achieves state-of-the-art performance on benchmark datasets.
Diverse Image-to-Image Translation via Disentangled Representations
TLDR: Presents an approach based on disentangled representations for producing diverse outputs without paired training images, embedding images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space.
StarGAN: Unified Generative Adversarial Networks for Multi-domain Image-to-Image Translation
TLDR: StarGAN's unified architecture allows simultaneous training on multiple datasets with different domains within a single network, leading to superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image into any desired target domain.
Image Super-Resolution by Neural Texture Transfer
TLDR: Designs an end-to-end deep model that enriches HR details by adaptively transferring texture from reference (Ref) images according to textural similarity; multi-scale neural transfer lets the model benefit more from semantically related Ref patches and degrade gracefully to SISR performance on the least relevant Ref inputs.
StarGAN v2: Diverse Image Synthesis for Multiple Domains
TLDR: Proposes StarGAN v2, a single framework that addresses both the limited diversity of translated images and the need for multiple models to cover all domains, showing significantly improved results over the baselines.
Image-to-Image Translation with Conditional Adversarial Networks
TLDR: Investigates conditional adversarial networks as a general-purpose solution to image-to-image translation problems and demonstrates that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Creating High Resolution Images with a Latent Adversarial Generator
TLDR: Proposes the Latent Adversarial Generator (LAG), which produces high-resolution images from extremely small inputs and learns exclusively in the latent space of the adversary using a perceptual loss; it has no pixel loss.
Analyzing and Improving the Image Quality of StyleGAN
TLDR: Redesigns the generator normalization, revisits progressive growing, and regularizes the generator to encourage good conditioning in the mapping from latent codes to images, thereby redefining the state of the art in unconditional image modeling.
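
Several of the references above (e.g., MUNIT and the disentangled-representation work) share the same content/style decomposition. The following is a minimal sketch of that idea under toy assumptions: the modules, their sizes, and the simple feature-wise modulation standing in for AdaIN are all illustrative, not the published architectures.

    # Minimal sketch of the content/style decomposition used by MUNIT-style
    # methods (toy modules; the published encoders/decoders are much larger).
    import torch
    import torch.nn as nn

    class ContentEncoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(3, 16, 3, padding=1)  # domain-invariant map
        def forward(self, x):
            return self.conv(x)

    class StyleEncoder(nn.Module):
        def __init__(self, style_dim=8):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.fc = nn.Linear(3, style_dim)           # domain-specific vector
        def forward(self, x):
            return self.fc(self.pool(x).flatten(1))

    class Decoder(nn.Module):
        """Recombines a content map with a style vector (here by simple
        feature-wise modulation; MUNIT itself uses AdaIN)."""
        def __init__(self, style_dim=8):
            super().__init__()
            self.mod = nn.Linear(style_dim, 16)
            self.out = nn.Conv2d(16, 3, 3, padding=1)
        def forward(self, content, style):
            scale = self.mod(style)[:, :, None, None]
            return self.out(content * scale)

    enc_c, enc_s, dec = ContentEncoder(), StyleEncoder(), Decoder()
    x_a = torch.randn(1, 3, 64, 64)   # image from domain A
    x_b = torch.randn(1, 3, 64, 64)   # image from domain B

    # Translation A -> B: content of x_a combined with the style of x_b.
    x_ab = dec(enc_c(x_a), enc_s(x_b))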