Neural Implicit 3D Shapes from Single Images with Spatial Patterns
@article{Zhuang2021NeuralI3,
  title={Neural Implicit 3D Shapes from Single Images with Spatial Patterns},
  author={Yixin Zhuang and Yunzhe Liu and Baoquan Chen},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.03087}
}
3D shape reconstruction from a single image has been a long-standing problem in computer vision. The problem is ill-posed and highly challenging due to the information loss and occlusion that occur during image capture. In contrast to previous methods that learn holistic shape priors, we propose a method to learn spatial pattern priors for inferring the invisible regions of the underlying shape, wherein each 3D sample in the implicit shape representation is associated with a set of…
References
Showing 1–10 of 45 references
DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction
- Computer Science · NeurIPS
- 2019
DISN, a Deep Implicit Surface Network, generates a high-quality, detail-rich 3D mesh from a 2D image by predicting the underlying signed distance field from combined global and local features, and achieves state-of-the-art single-view reconstruction performance.
A Point Set Generation Network for 3D Object Reconstruction from a Single Image
- Computer Science · 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2017
This paper addresses the problem of 3D reconstruction from a single image, generating an unorthodox but straightforward form of output, a point set, and designs an architecture, loss function and learning paradigm that are novel and effective, capable of predicting multiple plausible 3D point clouds from an input image.
Learning Shape Priors for Single-View 3D Completion and Reconstruction
- Computer Science · ECCV
- 2018
The proposed ShapeHD pushes the limit of single-view shape completion and reconstruction by integrating deep generative models with adversarially learned shape priors, penalizing the model only if its output is unrealistic, not if it deviates from the ground truth.
Im2Struct: Recovering 3D Shape Structure from a Single RGB Image
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This work develops a convolutional-recursive auto-encoder comprised of structure parsing of a 2D image followed by structure recovering of a cuboid hierarchy, which achieves unprecedentedly faithful and detailed recovery of diverse 3D part structures from single-view 2D images.
Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
A novel model is designed that simultaneously performs 3D reconstruction and pose estimation; this multi-task learning approach achieves state-of-the-art performance on both tasks.
3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction
- Computer Science · ECCV
- 2016
The 3D-R2N2 reconstruction framework outperforms the state-of-the-art methods for single view reconstruction, and enables the 3D reconstruction of objects in situations when traditional SFM/SLAM methods fail (because of lack of texture and/or wide baseline).
DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation
- Computer Science · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
This work introduces DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.
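The core idea summarized above can be sketched in a few lines: a decoder maps a per-shape latent code and a 3D query point to a scalar signed distance, and the surface is the zero level set of that function. The sketch below uses a tiny random-weight MLP purely for illustration; the layer sizes and the 2-layer structure are assumptions, not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_decoder(latent_dim=16, hidden=64):
    """Untrained 2-layer MLP mapping (latent code, 3D point) -> signed distance.
    Weights are random; this only illustrates the functional form f(z, p) -> sdf."""
    w1 = rng.normal(0.0, 0.1, (latent_dim + 3, hidden))
    w2 = rng.normal(0.0, 0.1, (hidden, 1))

    def decoder(z, p):
        x = np.concatenate([z, p])       # condition the network on shape code + query point
        h = np.maximum(x @ w1, 0.0)      # ReLU hidden layer
        return float(h @ w2)             # predicted signed distance at p
    return decoder

decoder = make_decoder()
z = rng.normal(size=16)                  # per-shape latent code (learned in the real method)
sdf = decoder(z, np.array([0.1, -0.2, 0.3]))
# The shape's surface is the zero level set {p : decoder(z, p) = 0}.
```

Because the function is continuous in p, the representation has no fixed resolution; a mesh is extracted afterwards by evaluating the decoder on a grid and running marching cubes.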
Occupancy Networks: Learning 3D Reconstruction in Function Space
- Computer Science · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
This paper proposes Occupancy Networks, a new representation for learning-based 3D reconstruction methods that encodes a description of the 3D output at effectively infinite resolution without excessive memory footprint, and validates that the representation can efficiently encode 3D structure and be inferred from various kinds of input.
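The occupancy formulation differs from a signed distance field only in its output: a probability in [0, 1] of a point being inside the shape, thresholded at 0.5 to recover the surface. A minimal sketch, using an analytic unit sphere as a stand-in for the learned network (the sigmoid sharpness and grid size are arbitrary choices for illustration):

```python
import numpy as np

def occupancy(points):
    """Toy occupancy function: ~1 inside the unit sphere, ~0 outside.
    A trained Occupancy Network would replace this with a conditioned MLP."""
    d = 1.0 - np.linalg.norm(points, axis=-1)   # positive inside, negative outside
    return 1.0 / (1.0 + np.exp(-10.0 * d))      # sigmoid squashes to [0, 1]

# Evaluate on a grid of query points; because the representation is continuous,
# the grid resolution is a free choice at extraction time, not a stored volume.
axis = np.linspace(-1.5, 1.5, 16)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
occ = occupancy(grid.reshape(-1, 3)).reshape(16, 16, 16)
inside = occ > 0.5   # boolean occupancy volume; a mesh would come from marching cubes
```

This decision-boundary view is what lets the method represent the output "at infinite resolution": only the function is stored, and voxelization happens on demand.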
D2IM-Net: Learning Detail Disentangled Implicit Fields from Single Images
- Computer Science · 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
The final 3D reconstruction is a fusion between the base shape and the displacement maps, with three losses enforcing the recovery of coarse shape, overall structure, and surface details via a novel Laplacian term.
Learning Implicit Fields for Generative Shape Modeling
- Computer Science · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
By replacing conventional decoders with the implicit decoder for representation learning and shape generation, this work demonstrates superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.