Corpus ID: 236318307

Human Pose Transfer with Disentangled Feature Consistency

  • Kun Wu, Chengxiang Yin, Zhengping Che, Bo Jiang, Jian Tang, Zheng Guan, Gangyi Ding
  • Published 2021
  • Computer Science
  • ArXiv
Deep generative models have made great progress in synthesizing images with arbitrary human poses and in transferring the pose of one person to others. However, most existing approaches explicitly leverage the pose information extracted from the source images as a conditional input for the generative networks. Meanwhile, they usually focus on the visual fidelity of the synthesized images but neglect the inherent consistency, which further limits their pose-transfer performance. To alleviate the…
Deep Person Generation: A Survey from the Perspective of Face, Pose and Cloth Synthesis
  • Tong Sha, Wei Zhang, Tong Shen, Zhoujun Li, Tao Mei
  • Computer Science
  • ArXiv
  • 2021
This survey summarizes the scope of person generation and systematically reviews recent progress and technical trends in deep person generation, covering three major tasks: talking-head generation (face), pose-guided person generation (pose), and garment-oriented person generation (cloth).


Disentangled Person Image Generation
A novel two-stage reconstruction pipeline is proposed that learns a disentangled representation of foreground, background, and pose while generating novel person images. It can manipulate these factors in the input image and also sample new embedding features for targeted manipulations, providing more control over the generation process.
Progressive Pose Attention Transfer for Person Image Generation
A new generative adversarial network is proposed for the problem of pose transfer, i.e., transferring the pose of a given person to a target pose. It can also generate training images for person re-identification, alleviating data insufficiency.
Dense Pose Transfer
This work proposes a combination of surface-based pose estimation and deep generative models that allows accurate pose transfer, i.e., synthesizing a new image of a person based on a single image of that person and the image of a pose donor.
Multistage Adversarial Losses for Pose-Based Human Image Synthesis
  • Chenyang Si, W. Wang, Liang Wang, T. Tan
  • Computer Science
  • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
  • 2018
This paper proposes a pose-based human image synthesis method that keeps the human posture unchanged in novel viewpoints and adopts multistage adversarial losses separately for foreground and background generation, fully exploiting the multi-modal characteristics of the generative loss to produce more realistic-looking images.
Synthesizing Images of Humans in Unseen Poses
A modular generative neural network is presented that synthesizes unseen poses using training pairs of images and poses taken from human action videos. It separates a scene into body-part and background layers, moves body parts to new locations, refines their appearances, and composites the new foreground with a hole-filled background.
Dense Intrinsic Appearance Flow for Human Pose Transfer
We present a novel approach for the task of human pose transfer, which aims at synthesizing a new image of a person from an input image of that person and a target pose. We address the issues of…
Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis
A 3D body mesh recovery module is proposed to disentangle pose and shape; it models not only joint locations and rotations but also the personalized body shape, and supports more flexible warping from multiple sources.
Deformable GANs for Pose-Based Human Image Generation
This paper introduces deformable skip connections in the generator of the Generative Adversarial Network and proposes a nearest-neighbour loss instead of the common L1 and L2 losses in order to match the details of the generated image with the target image.
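The nearest-neighbour loss described above can be sketched as follows: instead of comparing each generated pixel only to the target pixel at the same position (as plain L1 does), every generated pixel is compared against all target pixels in a small neighbourhood and the smallest difference is kept, so minor spatial misalignments are not penalised. This is a minimal numpy sketch for single-channel images, not the authors' implementation; the neighbourhood radius is an illustrative parameter.

```python
import numpy as np

def nearest_neighbour_loss(generated, target, radius=1):
    """For each pixel of `generated`, take the minimum absolute difference
    to any `target` pixel within a (2*radius+1)^2 neighbourhood, then
    average over all pixels. Edge pixels see a clamped neighbourhood."""
    h, w = generated.shape
    padded = np.pad(target, radius, mode="edge")
    best = np.full((h, w), np.inf)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            # Each (dy, dx) pair shifts the target by one neighbourhood offset.
            shifted = padded[dy:dy + h, dx:dx + w]
            best = np.minimum(best, np.abs(generated - shifted))
    return best.mean()
```

A one-pixel shift between two otherwise identical images yields zero loss here, whereas plain L1 would penalise it.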
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
A new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs) is presented, which significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.
Pose Guided Person Image Generation
The novel Pose Guided Person Generation Network (PG$^2$) is proposed, which can synthesize person images in arbitrary poses based on an image of that person and a novel pose.
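Pose-guided generators of this kind commonly encode the target pose as a stack of per-joint heatmaps concatenated with the source image along the channel axis. The sketch below shows this common keypoint-to-heatmap encoding in numpy; it is an illustration of the general technique, not the specific encoding from any one paper, and the Gaussian width `sigma` is an assumed parameter.

```python
import numpy as np

def keypoints_to_heatmaps(keypoints, height, width, sigma=2.0):
    """Turn a list of 2-D joint locations (x, y) into one Gaussian heatmap
    channel per joint. The resulting (num_joints, H, W) stack is typically
    concatenated with the source image as the generator's condition."""
    ys, xs = np.mgrid[0:height, 0:width]
    maps = []
    for (x, y) in keypoints:
        # Gaussian bump centred at the joint; peak value is exactly 1.0.
        maps.append(np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2)))
    return np.stack(maps)
```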