AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis

Zhiqin Chen, K. Yin, Sanja Fidler
In this paper, we address the problem of texture representation for 3D shapes for the challenging and under-explored tasks of texture transfer and synthesis. Previous works either apply spherical texture maps, which may lead to large distortions, or use continuous texture fields that yield smooth outputs lacking details. We argue that the traditional way of representing textures with images and linking them to a 3D mesh via UV mapping is more desirable, since synthesizing 2D images is a well…


Texture Fields: Learning Texture Representations in Function Space
Texture Fields, a novel texture representation based on regressing a continuous 3D function parameterized with a neural network, is proposed; it is able to represent high-frequency texture and naturally blends with modern deep learning techniques.
Deep Residual Learning for Image Recognition
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
ShapeNet: An Information-Rich 3D Model Repository
ShapeNet contains 3D models from a multitude of semantic categories organized under the WordNet taxonomy; it is a collection of datasets providing many semantic annotations for each 3D model, such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, and keywords, as well as other planned annotations.
Deformed Implicit Field: Modeling 3D Shapes with Learned Dense Correspondence
This work proposes a novel Deformed Implicit Field (DIF) representation for modeling 3D shapes of a category and generating dense correspondences among shapes and demonstrates several applications such as texture transfer and shape editing, where the method achieves compelling results that cannot be achieved by previous methods.
Analyzing and Improving the Image Quality of StyleGAN
This work redesigns the generator normalization, revisits progressive growing, and regularizes the generator to encourage good conditioning in the mapping from latent codes to images, thereby redefining the state of the art in unconditional image modeling.
Image quality assessment: from error visibility to structural similarity
A structural similarity index is developed and its promise is demonstrated through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis
We introduce DMTET, a deep 3D conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels. It marries the merits of implicit and explicit 3D representations.
3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations
Extensive quantitative analysis shows that 3DSTYLENET outperforms alternative data augmentation techniques for the downstream task of single-image 3D reconstruction and can serve as a valuable tool to create 3D data augmentations for computer vision tasks.
View Generalization for Single Image Textured 3D Models
A cycle-consistency loss is described that improves view generalization and encourages model textures to be aligned, so that textures can be shared across 3D geometry models.
Semi-supervised Synthesis of High-Resolution Editable Textures for 3D Humans
A novel approach generates diverse high-fidelity texture maps for 3D human meshes in a semi-supervised setup, proposing a Region-adaptive Adversarial Variational AutoEncoder (ReAVAE) that learns the probability distribution of the style of each region individually, so that the style of the generated texture can be controlled by sampling from the region-specific distributions.