Corpus ID: 209370829

Down to the Last Detail: Virtual Try-on with Detail Carving

@article{Wang2019DownTT,
  title={Down to the Last Detail: Virtual Try-on with Detail Carving},
  author={Jiahang Wang and Wei Zhang and Weizhong Liu and Tao Mei},
  journal={ArXiv},
  year={2019},
  volume={abs/1912.06324}
}
Virtual try-on under arbitrary poses has attracted substantial research attention due to its wide range of potential applications. However, existing methods can hardly preserve the details of clothing texture and facial identity (face, hair) while fitting novel clothes and poses onto a person. In this paper, we propose a novel multi-stage framework to synthesize person images in which rich details in salient regions are well preserved. Specifically, a multi-stage framework is proposed to decompose the…
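The stage decomposition described in the abstract can be pictured, very roughly, as a pipeline of specialized modules. The following is a minimal sketch under that reading; every module name and interface here is an illustrative assumption, not the authors' code.

```python
# Hypothetical sketch of a multi-stage try-on pipeline.  The stage
# decomposition follows the abstract's description at a high level, but
# every module name and interface here is an illustrative assumption.
import torch.nn as nn

class MultiStageTryOn(nn.Module):
    def __init__(self, parser, warper, generator, refiner):
        super().__init__()
        self.parser = parser        # predicts a semantic layout for the target pose
        self.warper = warper        # spatially aligns the new clothes to that layout
        self.generator = generator  # synthesizes a coarse try-on image
        self.refiner = refiner      # "carves" details (face, hair, texture) back in

    def forward(self, person, clothes, target_pose):
        layout = self.parser(person, target_pose)
        warped = self.warper(clothes, layout)
        coarse = self.generator(person, warped, layout)
        return self.refiner(coarse, person)
```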

Citations

Toward Accurate and Realistic Virtual Try-on Through Shape Matching and Multiple Warps
Qualitative evaluation confirms that, for any warping method, target models can be chosen automatically to improve results, and that learning multiple coordinated, specialized warpers offers further improvements.
Deep Person Generation: A Survey from the Perspective of Face, Pose and Cloth Synthesis
The scope of person generation is summarized, and recent progress and technical trends in deep person generation are systematically reviewed, covering three major tasks: talking-head generation (face), pose-guided person generation (pose), and garment-oriented person generation (cloth).
NL-VTON: a non-local virtual try-on network with feature preserving of body and clothes
A non-local virtual try-on network, NL-VTON, is proposed that introduces a non-local feature attention module and a grid regularization loss to capture the detailed features of complex clothes, together with a human body segmentation prediction network that further alleviates artifacts in occluded regions.
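Grid regularization losses of this kind typically penalize abrupt changes between neighbouring intervals of a learned sampling grid so the warp stays locally smooth. The sketch below shows one common formulation; it is an assumption for illustration, not necessarily NL-VTON's exact loss.

```python
# One common grid regularization for learned warps (an assumption for
# illustration, not necessarily NL-VTON's exact loss): penalize abrupt
# changes between adjacent sampling-grid intervals so the warp stays smooth.
import torch

def grid_regularization_loss(grid: torch.Tensor) -> torch.Tensor:
    """grid: (B, H, W, 2) sampling grid, as consumed by F.grid_sample."""
    dx = grid[:, :, 1:, :] - grid[:, :, :-1, :]   # horizontal intervals
    dy = grid[:, 1:, :, :] - grid[:, :-1, :, :]   # vertical intervals
    # second differences measure how quickly neighbouring intervals change
    ddx = (dx[:, :, 1:, :] - dx[:, :, :-1, :]).abs().mean()
    ddy = (dy[:, 1:, :, :] - dy[:, :-1, :, :]).abs().mean()
    return ddx + ddy
```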
Per Garment Capture and Synthesis for Real-time Virtual Try-on
This paper proposes an alternative per-garment capture and synthesis workflow that handles such rich interactions by training the model on many systematically captured images, and designs an actuated mannequin and an efficient capturing process that collect the detailed deformations of the target garments under diverse body sizes and poses.
Toward Accurate and Realistic Outfits Visualization with Attention to Details
This work proposes Outfit Visualization Net (OVNet), a method that matches outfits with the most suitable model and produces substantially higher-quality studio images than prior work on multi-garment outfits.
TryItOut: Machine Learning Based Virtual Fashion Assistant
A generative adversarial network (GAN) model is explored for generating the clothing image and the try-on image using the CVPR Dataset, and it performs well even for images containing obstructions.

References

Showing 1–10 of 19 references
Toward Characteristic-Preserving Image-based Virtual Try-On Network
A new fully-learnable Characteristic-Preserving Virtual Try-On Network (CP-VTON) is proposed for addressing the real-world challenges in this task, achieving state-of-the-art virtual try-on performance both qualitatively and quantitatively.
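Networks in this family typically regress the parameters of a thin-plate-spline transform and then warp the in-shop clothes with a differentiable sampler. As a hedged illustration (the parameter regression is omitted and the function name is hypothetical), the warping step itself reduces to roughly:

```python
# Sketch of the differentiable warping step only; regressing the
# thin-plate-spline parameters that produce `grid` is omitted, and the
# function name is hypothetical.
import torch
import torch.nn.functional as F

def warp_clothes(clothes: torch.Tensor, grid: torch.Tensor) -> torch.Tensor:
    """clothes: (B, 3, H, W); grid: (B, H, W, 2) with coordinates in [-1, 1]."""
    return F.grid_sample(clothes, grid, padding_mode="border", align_corners=True)
```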
VITON: An Image-Based Virtual Try-on Network
We present an image-based Virtual Try-On Network (VITON) without using 3D information in any form, which seamlessly transfers a desired clothing item onto the corresponding region of a person using a coarse-to-fine strategy.
Towards Multi-Pose Guided Virtual Try-On Network
This paper makes the first attempt towards a multi-pose guided virtual try-on system, which enables clothes to be transferred onto a person in diverse poses, and significantly outperforms all state-of-the-art methods both qualitatively and quantitatively.
Unsupervised Person Image Synthesis in Arbitrary Poses
A novel approach for synthesizing photorealistic images of people in arbitrary poses using generative adversarial learning, which employs a pose-conditioned bidirectional generator that maps the initially rendered image back to the original pose, making it directly comparable to the input image without resorting to any training image.
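In code terms, the bidirectional idea amounts to a cycle: render the person in the target pose, map the result back to the source pose, and compare against the input. The sketch below is an illustrative reading of that scheme; the generator interface is an assumption.

```python
# Illustrative cycle-consistency step for a pose-conditioned bidirectional
# generator (the generator interface is an assumption, not the paper's code):
# re-render in the target pose, map back to the source pose, and compare
# against the input image directly.
import torch.nn.functional as F

def cycle_loss(G, image, pose_src, pose_tgt):
    rendered = G(image, pose_tgt)      # person rendered in the new pose
    recovered = G(rendered, pose_src)  # mapped back to the original pose
    return F.l1_loss(recovered, image)
```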
Unsupervised Person Image Generation With Semantic Parsing Transformation
This paper proposes a new pathway that decomposes the hard mapping into two more accessible subtasks, namely semantic parsing transformation and appearance generation, and introduces a semantic generative network that transforms between semantic parsing maps in order to simplify the learning of non-rigid deformation.
Deformable GANs for Pose-Based Human Image Generation
This paper introduces deformable skip connections in the generator of a generative adversarial network and proposes a nearest-neighbour loss, instead of the common L1 and L2 losses, in order to match the details of the generated image with those of the target image.
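The nearest-neighbour idea is to charge each generated pixel the distance to its best match within a small neighbourhood of the target, rather than to the exactly aligned pixel. A minimal pixel-wise variant is sketched below (the paper compares small patches; this simplification keeps the core idea):

```python
# Minimal pixel-wise variant of a nearest-neighbour loss (the paper works
# on small patches; this simplification keeps the core idea): each
# generated pixel is charged its distance to the best match within a
# k x k neighbourhood of the target.
import torch
import torch.nn.functional as F

def nearest_neighbour_loss(generated: torch.Tensor,
                           target: torch.Tensor, k: int = 3) -> torch.Tensor:
    B, C, H, W = target.shape
    # every k x k target neighbourhood, one column per spatial location
    patches = F.unfold(target, kernel_size=k, padding=k // 2)
    patches = patches.view(B, C, k * k, H, W)
    dist = (generated.unsqueeze(2) - patches).abs().sum(dim=1)  # (B, k*k, H, W)
    return dist.min(dim=1).values.mean()
```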
Progressive Pose Attention Transfer for Person Image Generation
A new generative adversarial network is proposed for the problem of pose transfer, i.e., transferring the pose of a given person to a target one; it can also generate training images for person re-identification, alleviating data insufficiency.
Pose Guided Person Image Generation
The novel Pose Guided Person Generation Network (PG²), which allows synthesizing person images in arbitrary poses based on an image of that person and a novel pose, is proposed.
DeepWrinkles: Accurate and Realistic Clothing Modeling
An entirely data-driven approach to realistic cloth wrinkle generation is presented, which leads to unprecedentedly high-quality rendering of clothing deformation sequences in which fine wrinkles from (real) high-resolution observations can be recovered.
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
This work considers image transformation problems and proposes the use of perceptual loss functions for training feed-forward networks for image transformation tasks, showing results on image style transfer, where a feed-forward network is trained to solve, in real time, the optimization problem posed by Gatys et al.
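A perceptual loss compares images in the feature space of a pretrained network rather than in pixel space. A minimal sketch using torchvision's VGG-16 follows; the cut-off layer (relu3_3) is a common default rather than the paper's exact configuration, and inputs are assumed to be ImageNet-normalized.

```python
# Minimal VGG-based perceptual loss in the spirit of Johnson et al.
# The cut-off layer (relu3_3) is a common default rather than the paper's
# exact configuration; inputs are assumed already ImageNet-normalized.
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class PerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features[:16].eval()  # up to and including relu3_3
        for p in self.features.parameters():
            p.requires_grad_(False)  # loss network stays frozen

    def forward(self, generated, target):
        return F.mse_loss(self.features(generated), self.features(target))
```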