Corpus ID: 226226434

Pose Randomization for Weakly Paired Image Style Translation

Authors: Zexi Chen, Jiaxin Guo, Xuecheng Xu, Yunkai Wang, Yue Wang, Rong Xiong
Utilizing a trained model under different conditions without data annotation is attractive for robot applications. Towards this goal, one class of methods translates the image style from the training environment to the current one. Conventional studies on image style translation mainly focus on two settings: paired data, where images from the two domains have exactly aligned content, and unpaired data, where the content is independent. In this paper, we propose a new setting, where the…
1 Citation

Collaborative Recognition of Feasible Region with Aerial and Ground Robots through DPCN

This work presents a collaboration of aerial and ground robots for recognizing feasible regions, and utilizes a state-of-the-art method for matching heterogeneous sensor measurements, the deep phase correlation network (DPCN), which has excellent performance on heterogeneous mapping, to refine the transformation.
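Classical phase correlation, which DPCN generalizes with learned feature maps, recovers a translation from the normalized cross-power spectrum of two images. A minimal NumPy sketch (assuming same-shaped single-channel images and a pure circular shift; the function name is illustrative):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the circular shift s such that a == np.roll(b, s),
    using the classical phase-correlation method."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-8          # keep only the phase
    corr = np.fft.ifft2(cross).real        # a delta peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts past the half-image point into negative offsets.
    h, w = a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

DPCN replaces the raw pixel inputs with learned representations so the same correlation machinery works across heterogeneous sensors.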

StarGAN: Unified Generative Adversarial Networks for Multi-domain Image-to-Image Translation

A unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network, which leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain.

Image-to-Image Translation with Conditional Adversarial Networks

Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.

DRIT++: Diverse Image-to-Image Translation via Disentangled Representations

This work presents an approach based on disentangled representations for generating diverse outputs without paired training images, and can generate diverse and realistic images on a wide range of tasks.

Unsupervised Image-to-Image Translation Networks

This work makes a shared-latent space assumption and proposes an unsupervised image-to-image translation framework based on Coupled GANs that achieves state-of-the-art performance on benchmark datasets.

Multimodal Unsupervised Image-to-Image Translation

A Multimodal Unsupervised Image-to-image Translation (MUNIT) framework that assumes that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties.
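The content/style decomposition that MUNIT assumes can be illustrated with a toy sketch (all names and dimensions here are illustrative, not from the paper; real MUNIT uses convolutional encoders and injects style via AdaIN):

```python
import numpy as np

CONTENT_DIM, STYLE_DIM = 8, 4  # illustrative sizes

def encode(feat):
    """Stand-in for MUNIT's encoders: split a flat feature vector into
    a domain-invariant content code and a domain-specific style code."""
    return feat[:CONTENT_DIM], feat[CONTENT_DIM:CONTENT_DIM + STYLE_DIM]

def decode(content, style):
    """Stand-in decoder: recombine the two codes."""
    return np.concatenate([content, style])

def translate(feat_a, style_b):
    """Cross-domain translation: keep the content of a domain-A input
    and swap in a style code sampled from domain B."""
    content_a, _ = encode(feat_a)
    return decode(content_a, style_b)
```

Sampling different `style_b` codes for the same input is what makes the translation multimodal.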

StarGAN v2: Diverse Image Synthesis for Multiple Domains

StarGAN v2, a single framework that addresses both the limited diversity of translated images and the need for multiple models to cover all domains, is proposed and shows significantly improved results over the baselines.

Self-Supervised GANs via Auxiliary Rotation Loss

This work allows the networks to collaborate on the task of representation learning, while being adversarial with respect to the classic GAN game, and takes a step towards bridging the gap between conditional and unconditional GANs.
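The auxiliary rotation task can be sketched as follows (a minimal version, assuming square single-channel images; in the paper the 4-way rotation classifier is an extra head on the discriminator, trained alongside the adversarial loss):

```python
import numpy as np

def make_rotation_batch(images, rng):
    """Rotate each image by a random multiple of 90 degrees and record
    the rotation index as a 4-way classification label."""
    rotated, labels = [], []
    for img in images:
        k = rng.integers(4)               # rotation index: k * 90 degrees
        rotated.append(np.rot90(img, k))
        labels.append(int(k))
    return np.stack(rotated), np.array(labels)

def rotation_loss(log_probs, labels):
    """Cross-entropy of the auxiliary head's rotation prediction,
    added to the usual GAN objective during training."""
    return float(-np.mean(log_probs[np.arange(len(labels)), labels]))
```

Because the labels are generated from the data itself, this provides a representation-learning signal without any annotation.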

Domain Randomization and Pyramid Consistency: Simulation-to-Real Generalization Without Accessing Target Domain Data

A new approach of domain randomization and pyramid consistency to learn a model with high generalizability for semantic segmentation of real-world self-driving scenes in a domain generalization fashion is proposed.

U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation

A novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner, and can translate both images requiring holistic changes and images requiring large shape changes.
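The learnable normalization in question, Adaptive Layer-Instance Normalization (AdaLIN), interpolates between instance and layer normalization with a learned ratio. A NumPy sketch, assuming a feature map of shape (C, H, W) (in the full model, `gamma` and `beta` are produced by the attention branch and `rho` is a trained parameter clipped to [0, 1]):

```python
import numpy as np

def adalin(x, gamma, beta, rho, eps=1e-5):
    """AdaLIN: rho interpolates between instance norm (per-channel
    statistics) and layer norm (whole-feature statistics)."""
    # Instance norm: normalize each channel independently.
    mu_in = x.mean(axis=(1, 2), keepdims=True)
    var_in = x.var(axis=(1, 2), keepdims=True)
    x_in = (x - mu_in) / np.sqrt(var_in + eps)
    # Layer norm: normalize over all channels and positions.
    mu_ln = x.mean(keepdims=True)
    var_ln = x.var(keepdims=True)
    x_ln = (x - mu_ln) / np.sqrt(var_ln + eps)
    mixed = rho * x_in + (1 - rho) * x_ln
    return gamma * mixed + beta
```

Letting the model choose `rho` per layer is what allows it to handle both appearance-level (holistic) and shape-level changes.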

Adversarial Feature Disentanglement for Place Recognition Across Changing Appearance

This paper proposes to use an adversarial network to disentangle domain-unrelated and domain-related features, named place and appearance features respectively, which can then be used as image descriptors to match images collected under different conditions.