Corpus ID: 222310103

TM-NET: Deep Generative Networks for Textured Meshes

@article{Gao2020TMNETDG,
  title={TM-NET: Deep Generative Networks for Textured Meshes},
  author={Lin Gao and Tong Wu and Yu-Jie Yuan and Ming Lin and Yu-Kun Lai and Hao Zhang},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.06217}
}
We introduce TM-NET, a novel deep generative model capable of generating meshes with detailed textures, as well as synthesizing plausible textures for a given shape. To cope with complex geometry and structure, inspired by the recently proposed SDM-NET, our method produces texture maps for individual parts, each as a deformed box, which further leads to a natural UV map with minimal distortion. To provide a generic framework for different application scenarios, we encode geometry and texture…
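The abstract is truncated above; to make the part-wise, geometry/texture-separated encoding it describes concrete, here is a minimal PyTorch sketch of a per-part VAE with independent geometry and texture latents. All class names, layer sizes, and latent dimensions are illustrative assumptions, not TM-NET's actual architecture.

```python
# Minimal sketch of a part-wise VAE with separate geometry and texture
# latents, loosely following the idea described in the abstract.
# All names and dimensions are illustrative, not TM-NET's actual ones.
import torch
import torch.nn as nn

class PartVAE(nn.Module):
    """Encodes one shape part: geometry features and a per-part texture map."""
    def __init__(self, geo_dim=512, z_geo=64, z_tex=128):
        super().__init__()
        # Geometry branch: flat per-part feature vector -> latent.
        self.geo_enc = nn.Sequential(nn.Linear(geo_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 2 * z_geo))
        self.geo_dec = nn.Sequential(nn.Linear(z_geo, 256), nn.ReLU(),
                                     nn.Linear(256, geo_dim))
        # Texture branch: conv encoder/decoder over the part's UV texture map.
        self.tex_enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 16 * 16, 2 * z_tex))
        self.tex_dec = nn.Sequential(
            nn.Linear(z_tex, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

    @staticmethod
    def reparam(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp(), mu, logvar

    def forward(self, geo, tex):
        z_g, mu_g, lv_g = self.reparam(self.geo_enc(geo))
        z_t, mu_t, lv_t = self.reparam(self.tex_enc(tex))
        return self.geo_dec(z_g), self.tex_dec(z_t), (mu_g, lv_g, mu_t, lv_t)

vae = PartVAE()
geo = torch.randn(4, 512)          # per-part geometry features
tex = torch.rand(4, 3, 64, 64)     # per-part UV texture map
geo_rec, tex_rec, _ = vae(geo, tex)
```

Keeping the two latents separate is what allows a single geometry to be paired with multiple plausible sampled textures, matching the abstract's stated goal of synthesizing textures for a given shape.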

Citations

3D-FRONT: 3D Furnished Rooms with layOuts and semaNTics
Introduces 3D-FRONT, a new, large-scale, and comprehensive repository of synthetic indoor scenes, highlighted by professionally designed layouts and a large number of rooms populated by high-quality textured 3D models with style compatibility.
LSD-StructureNet: Modeling Levels of Structural Detail in 3D Part Hierarchies
Generative models for 3D shapes represented by hierarchies of parts can generate realistic and diverse sets of outputs. However, existing models suffer from the key practical limitation of modelling…
Holistic 3D Human and Scene Mesh Estimation from Single View Images
Proposes an end-to-end trainable model that perceives the 3D scene from a single RGB image, estimates the camera pose and room layout, and reconstructs both human body and object meshes; it is the first model to output both object and human predictions at the mesh level and to perform joint optimization on the scene and human poses.

References

Showing 1-10 of 67 references.
GeLaTO: Generative Latent Textured Objects
Proposes Generative Latent Textured Objects (GeLaTO), a compact representation that combines a set of coarse shape proxies defining low-frequency geometry with learned neural textures, to encode medium- and fine-scale geometry as well as view-dependent appearance.
Texture Fields: Learning Texture Representations in Function Space
Proposes Texture Fields, a novel texture representation based on regressing a continuous 3D function parameterized by a neural network; it can represent high-frequency texture and blends naturally with modern deep learning techniques.
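The core idea, a continuous function from 3D position to color, is easy to sketch. Below is a minimal, hypothetical PyTorch version; the conditioning scheme and layer sizes are assumptions, not the paper's exact network.

```python
# Minimal sketch of a texture field: an MLP regressing RGB color at
# continuous 3D surface points, conditioned on a latent code.
import torch
import torch.nn as nn

class TextureField(nn.Module):
    def __init__(self, z_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())  # RGB in [0, 1]

    def forward(self, points, z):
        # points: (B, N, 3) query positions; z: (B, z_dim) shape/image code
        z = z.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.net(torch.cat([points, z], dim=-1))

field = TextureField()
pts = torch.rand(2, 1024, 3)   # points sampled on a surface
z = torch.randn(2, 128)
rgb = field(pts, z)            # (2, 1024, 3) continuous texture values
```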
Variational Autoencoders for Deforming 3D Mesh Models
Proposes a novel framework that learns a reasonable representation for a collection of deformable shapes and produces competitive results for a variety of applications, including shape generation, shape interpolation, shape space embedding, and shape exploration, outperforming state-of-the-art methods.
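As a rough sketch of such a framework, the following hypothetical PyTorch VAE operates on fixed-length per-shape deformation feature vectors; the feature representation and all dimensions are placeholders, not the paper's actual encoding.

```python
# Minimal VAE sketch over fixed-length deformation feature vectors
# (e.g., per-vertex deformation representations flattened per shape).
import torch
import torch.nn as nn

class MeshVAE(nn.Module):
    def __init__(self, feat_dim=9 * 2500, z_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, 512), nn.Tanh(),
                                 nn.Linear(512, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 512), nn.Tanh(),
                                 nn.Linear(512, feat_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar

    def loss(self, x, recon, mu, logvar, beta=1e-3):
        rec = (recon - x).pow(2).mean()                 # reconstruction
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + beta * kld                         # ELBO-style objective
```

Shape interpolation then amounts to linearly blending two encoded latent codes and decoding the result.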
Learning to Generate Textures on 3D Meshes
Proposes a framework for texturing meshes from multi-view images: 2.5D information rendered from the 3D models, combined with user inputs, serves as an intermediate view-dependent representation from which realistic textures for particular views are generated in an unpaired manner.
Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images
An end-to-end deep learning architecture that produces a 3D shape as a triangular mesh from a single color image by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image.
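One deformation stage of this kind of architecture can be sketched as a graph convolution over mesh vertices that predicts coordinate offsets from vertex positions concatenated with image features; the adjacency handling and feature dimensions below are simplified assumptions, not the paper's exact design.

```python
# Minimal sketch of one mesh-deformation step in the Pixel2Mesh spirit:
# graph convolutions over vertices predict per-vertex xyz offsets.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim, act=True):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_nbr = nn.Linear(in_dim, out_dim)
        self.act = act

    def forward(self, x, adj):
        # x: (N, in_dim) vertex features; adj: (N, N) row-normalized adjacency
        y = self.w_self(x) + self.w_nbr(adj @ x)
        return torch.relu(y) if self.act else y

class DeformStep(nn.Module):
    def __init__(self, img_feat_dim=64, hidden=128):
        super().__init__()
        self.gc1 = GraphConv(3 + img_feat_dim, hidden)
        self.gc2 = GraphConv(hidden, 3, act=False)  # per-vertex xyz offsets

    def forward(self, verts, img_feats, adj):
        # img_feats: per-vertex perceptual features (in the paper, gathered
        # by projecting each vertex into the image's feature maps)
        x = torch.cat([verts, img_feats], dim=-1)
        return verts + self.gc2(self.gc1(x, adj), adj)

step = DeformStep()
verts = torch.rand(100, 3)      # current vertices (initially an ellipsoid)
feats = torch.rand(100, 64)     # sampled image features per vertex
adj = torch.eye(100)            # placeholder adjacency matrix
new_verts = step(verts, feats, adj)
```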
Visual Object Networks: Image Generation with Disentangled 3D Representations
Proposes Visual Object Networks (VONs), a new generative model that synthesizes natural images of objects with a disentangled 3D representation, enabling many 3D operations such as changing the viewpoint of a generated image, shape and texture editing, linear interpolation in texture and shape space, and transferring appearance across different objects and viewpoints.
Non-stationary Texture Synthesis by Adversarial Expansion
Proposes a new approach to example-based non-stationary texture synthesis that uses a generative adversarial network (GAN) trained to double the spatial extent of texture blocks extracted from a specific texture exemplar, and demonstrates that it can cope with challenging textures that no other existing method can handle.
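The training setup is simple to sketch: a fully convolutional generator maps a k x k crop of the exemplar to a 2k x 2k texture, which a discriminator judges against real 2k x 2k crops. The generator below is an illustrative stand-in, not the paper's network, and the discriminator and losses are omitted.

```python
# Minimal sketch of "adversarial expansion" data flow: the generator
# doubles the spatial extent of an exemplar crop.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="nearest"),  # double spatial extent
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

def random_crop(img, size):
    # img: (3, H, W) exemplar texture
    _, h, w = img.shape
    y = torch.randint(0, h - size + 1, (1,)).item()
    x = torch.randint(0, w - size + 1, (1,)).item()
    return img[:, y:y + size, x:x + size]

exemplar = torch.rand(3, 256, 256)
small = random_crop(exemplar, 64).unsqueeze(0)   # k x k input block
fake = generator(small)                          # (1, 3, 128, 128) expansion
real = random_crop(exemplar, 128).unsqueeze(0)   # target for the discriminator
```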
Photorealistic Facial Texture Inference Using Deep Neural Networks
Presents a data-driven inference method that can synthesize a photorealistic texture map of a complete 3D face model given a partial 2D view of a person in the wild, and demonstrates successful face reconstructions from a wide range of low-resolution input images.
Matryoshka Networks: Predicting 3D Geometry via Nested Shape Layers
Novel, efficient 2D encodings for 3D geometry that enable reconstructing full 3D shapes from a single image at high resolution, clearly outperforming previous octree-based approaches despite a much simpler architecture built from standard network components.
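A "shape layer" encoding can be pictured as alternating entry/exit depth maps along one axis; decoding fills the outer interval and carves or fills each nested one in turn. The NumPy sketch below is a simplified single-axis reading of that idea, not the paper's exact decoding.

```python
# Minimal sketch of decoding nested shape layers into a voxel grid:
# each pixel stores alternating entry/exit depths along the z axis, and
# the solid alternates fill/carve for successive nested layer pairs.
import numpy as np

def decode_layers(layers):
    # layers: (2K, H, W) integer depth maps; layers[2k] is the k-th entry
    # depth and layers[2k + 1] the matching exit depth along z.
    two_k, h, w = layers.shape
    depth = layers.max() + 1 if two_k else 0
    vox = np.zeros((h, w, depth), dtype=bool)
    z = np.arange(depth)[None, None, :]
    for k in range(0, two_k, 2):
        inside = (z >= layers[k][..., None]) & (z < layers[k + 1][..., None])
        vox ^= inside  # alternate fill/carve for nested layers
    return vox

# Toy example: a 4x4x8 slab with a nested hole in the middle.
entry = np.full((4, 4), 1); exit_ = np.full((4, 4), 7)
hole_in = np.full((4, 4), 3); hole_out = np.full((4, 4), 5)
vox = decode_layers(np.stack([entry, exit_, hole_in, hole_out]))
print(vox[0, 0])  # [False  True  True False False  True  True False]
```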
Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer
A differentiable rendering framework that allows gradients to be analytically computed for all pixels in an image, viewing foreground rasterization as a weighted interpolation of local properties and background rasterization as a distance-based aggregation of global geometry.
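The two views can be sketched in a few lines: barycentric interpolation makes covered pixels differentiable in vertex attributes, while a distance-based aggregation gives background pixels a soft, differentiable coverage value. The tensor shapes and the exponential falloff below are illustrative assumptions, not the paper's exact formulation.

```python
# Toy sketch of the two rasterization views: weighted interpolation for
# foreground pixels, distance-based aggregation for background pixels.
import torch

def barycentric_interp(attrs, bary):
    # attrs: (F, 3, C) attributes at a face's three vertices
    # bary:  (P, 3) barycentric weights of each covered pixel
    # returns (P, C): analytically differentiable in both inputs
    return bary @ attrs[0]  # single-face example (F = 1)

def soft_silhouette(dists, sigma=1e-2):
    # dists: (P, F) distance from each background pixel to each face;
    # closer faces contribute more to the aggregated coverage
    influence = torch.exp(-dists / sigma)
    return 1.0 - torch.prod(1.0 - influence, dim=1)  # (P,) soft coverage

attrs = torch.rand(1, 3, 3, requires_grad=True)   # per-vertex RGB, one face
bary = torch.tensor([[0.2, 0.3, 0.5]])
color = barycentric_interp(attrs, bary)           # gradient flows to attrs

dists = torch.rand(16, 4)                         # 16 pixels, 4 faces
alpha = soft_silhouette(dists)                    # soft coverage in (0, 1)
```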