Corpus ID: 237195066

Deep Hybrid Self-Prior for Full 3D Mesh Generation

Xingkui Wei, Zhengqing Chen, Yanwei Fu, Zhaopeng Cui, Yinda Zhang
We present a deep learning pipeline that leverages a network self-prior to recover a full 3D model, consisting of both a triangular mesh and a texture map, from a colored 3D point cloud. Unlike previous methods, which exploit either a 2D self-prior for image editing or a 3D self-prior for pure surface reconstruction, we propose to exploit a novel hybrid 2D-3D self-prior in deep neural networks to significantly improve the geometry quality and produce a high-resolution texture map, which is…


Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images
An end-to-end deep learning architecture that produces a 3D shape as a triangular mesh from a single color image by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image.
Image2Mesh: A Learning Framework for Single Image 3D Reconstruction
This paper demonstrates that a mesh representation (i.e., vertices and faces forming polygonal surfaces) can capture fine-grained geometry for 3D reconstruction tasks, and proposes a learning framework that infers the parameters of a compact mesh representation rather than learning from the mesh itself.
Pixel2Mesh++: Multi-View 3D Mesh Generation via Deformation
This model learns to predict a series of deformations to iteratively improve a coarse shape, and exhibits generalization across different semantic categories, numbers of input images, and qualities of mesh initialization.
Im2Avatar: Colorful 3D Reconstruction from a Single Image
This work proposes an end-to-end trainable framework, Colorful Voxel Network (CVN), to tackle the problem of simultaneously recovering 3D shape and surface color from a single image, namely "colorful 3D reconstruction".
3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction
The 3D-R2N2 reconstruction framework outperforms state-of-the-art methods for single-view reconstruction, and enables 3D reconstruction of objects in situations where traditional SfM/SLAM methods fail (because of a lack of texture and/or a wide baseline).
A Point Set Generation Network for 3D Object Reconstruction from a Single Image
This paper addresses the problem of 3D reconstruction from a single image, generating a straightforward yet unorthodox form of output (an unordered point set), and designs a novel and effective architecture, loss function, and learning paradigm capable of predicting multiple plausible 3D point clouds from an input image.
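Losses for point-set prediction must be invariant to the ordering of points; a standard choice for comparing a predicted set against the ground truth is the symmetric Chamfer distance. A minimal NumPy sketch (the function name and the brute-force pairwise computation are illustrative, not the paper's implementation):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3)."""
    # Pairwise squared Euclidean distances via broadcasting, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Average each point's squared distance to its nearest neighbour
    # in the other set, in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

pts = np.random.rand(64, 3)
print(chamfer_distance(pts, pts))  # -> 0.0 for identical sets
```

Because only nearest-neighbour distances enter the sum, permuting the rows of either point set leaves the value unchanged, which is what makes the loss usable on unordered outputs.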
Neural 3D Mesh Renderer
This work proposes an approximate gradient for rasterization that enables the integration of rendering into neural networks and performs gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time.
Occupancy Networks: Learning 3D Reconstruction in Function Space
This paper proposes Occupancy Networks, a new representation for learning-based 3D reconstruction that encodes a description of the 3D output at infinite resolution without an excessive memory footprint, and validates that the representation can efficiently encode 3D structure and can be inferred from various kinds of input.
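The core idea of such implicit representations is to describe a shape as a function mapping any 3D query point to an occupancy probability, so the output has no fixed grid resolution. A toy sketch with an analytic sphere standing in for the learned, observation-conditioned network (all names here are illustrative assumptions):

```python
import numpy as np

def occupancy(points, center=np.zeros(3), radius=0.5):
    """Toy occupancy function: 1.0 inside a sphere, 0.0 outside.
    An Occupancy Network replaces this analytic rule with a neural
    network conditioned on the input observation."""
    return (np.linalg.norm(points - center, axis=-1) <= radius).astype(np.float32)

# The function can be queried at any resolution; a mesh is then
# typically extracted by thresholding on a grid (e.g. marching cubes).
axis = np.linspace(-1.0, 1.0, 32)
grid = np.stack(np.meshgrid(axis, axis, axis), axis=-1).reshape(-1, 3)
inside = grid[occupancy(grid) > 0.5]  # occupied query points
```

Refining `axis` to 64 or 256 samples changes only the query grid, not the representation, which is the resolution-independence the summary above refers to.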
Photorealistic Facial Texture Inference Using Deep Neural Networks
A data-driven inference method is presented that can synthesize a photorealistic texture map of a complete 3D face model given a partial 2D view of a person in the wild; successful face reconstructions from a wide range of low-resolution input images are demonstrated.
Learning Category-Specific Mesh Reconstruction from Image Collections
A learning framework for recovering the 3D shape, camera, and texture of an object from a single image, which incorporates texture inference as the prediction of an image in a canonical appearance space and shows that semantic keypoints can be easily associated with the predicted shapes.