• Corpus ID: 208158120

Inverse Graphics: Unsupervised Learning of 3D Shapes from Single Images

@article{Ucar2019InverseGU,
  title={Inverse Graphics: Unsupervised Learning of 3D Shapes from Single Images},
  author={Talip Ucar},
  journal={ArXiv},
  year={2019},
  volume={abs/1911.07937}
}
  • Talip Ucar
  • Published 31 October 2019
  • Computer Science, Mathematics
  • ArXiv
Using generative models for inverse graphics is an active area of research. However, most works focus on developing models for supervised and semi-supervised methods. In this paper, we study the problem of unsupervised learning of 3D geometry from single images. Our approach is to use a generative model that produces 2D images as projections of a latent 3D voxel grid, which we train either as a variational autoencoder or using adversarial methods. Our contributions are as follows: First, we…
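The paper's full architecture is only summarized above; as a rough, illustrative sketch of the core idea (a decoder that outputs a voxel occupancy grid plus a differentiable projection to 2D, trainable as a VAE decoder or a GAN generator), the following PyTorch fragment may help. All module names, layer sizes, and the soft-projection rule here are assumptions, not the paper's exact design.

```python
# Minimal sketch (PyTorch): project a latent 3D voxel grid to a 2D image.
# Names, shapes, and the projection rule are illustrative assumptions.
import torch
import torch.nn as nn

class VoxelDecoder(nn.Module):
    """Map a latent code z to a soft occupancy grid in [0, 1]."""
    def __init__(self, z_dim=128, grid=32):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(z_dim, 512), nn.ReLU(),
            nn.Linear(512, grid ** 3), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, self.grid, self.grid, self.grid)

def project(voxels):
    """Soft orthographic projection along the first grid axis.

    1 - prod(1 - occupancy) approximates "some voxel along the ray is
    filled", so gradients flow from the 2D image back into the 3D grid.
    """
    return 1.0 - torch.prod(1.0 - voxels, dim=1)

# Usage: decode a latent sample and render one 2D view of it.
decoder = VoxelDecoder()
silhouette = project(decoder(torch.randn(4, 128)))  # shape (4, 32, 32)
```

Because the projection is differentiable, reconstruction or adversarial losses on the 2D output can supervise the latent 3D grid without any 3D labels, which is what makes the unsupervised setting workable.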

References

SHOWING 1-10 OF 47 REFERENCES
3D Shape Induction from 2D Views of Multiple Objects
TLDR: The approach, called "projective generative adversarial networks" (PrGANs), trains a deep generative model of 3D shapes whose projections match the distributions of the input 2D views, allowing it to predict 3D shape and viewpoint from an input image and to generate novel views in a completely unsupervised manner.
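In rough outline, PrGAN-style training asks a discriminator to distinguish real 2D views from projections of generated voxel grids. The sketch below is a hedged illustration, not PrGAN's exact losses; `decoder` and `project_fn` stand in for a voxel decoder and a differentiable renderer such as the one sketched above, and the loss wiring is assumed.

```python
# Sketch of the projection-matching adversarial idea (illustrative only).
import torch
import torch.nn.functional as F

def gan_step(decoder, project_fn, disc, real_views, opt_d, opt_g, z_dim=128):
    """One adversarial update. `decoder` maps z -> voxels; `project_fn`
    renders voxels to 2D views the discriminator `disc` can score."""
    b = real_views.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)
    fake_views = project_fn(decoder(torch.randn(b, z_dim)))

    # Discriminator step: real dataset views vs. rendered fakes.
    d_loss = (F.binary_cross_entropy_with_logits(disc(real_views), ones)
              + F.binary_cross_entropy_with_logits(disc(fake_views.detach()), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: gradients flow back through the differentiable projection.
    g_loss = F.binary_cross_entropy_with_logits(disc(fake_views), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```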
Visual Object Networks: Image Generation with Disentangled 3D Representations
TLDR: Visual Object Networks (VONs), a new generative model, synthesize natural images of objects with a disentangled 3D representation that enables many 3D operations, such as changing the viewpoint of a generated image, editing shape and texture, interpolating linearly in texture and shape space, and transferring appearance across different objects and viewpoints.
Learning Category-Specific Mesh Reconstruction from Image Collections
TLDR: A learning framework recovers the 3D shape, camera, and texture of an object from a single image by casting texture inference as prediction of an image in a canonical appearance space, and shows that semantic keypoints can be easily associated with the predicted shapes.
Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling
TLDR: A novel framework, the 3D Generative Adversarial Network (3D-GAN), generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets, and provides a powerful 3D shape descriptor with wide applications in 3D object recognition.
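A volumetric generator of this kind typically upsamples a latent code with 3D transposed convolutions; the following is a minimal illustrative version with assumed layer sizes, not 3D-GAN's published configuration.

```python
# Illustrative 3D-transposed-convolution generator in the spirit of
# volumetric GANs; layer sizes are assumptions, not 3D-GAN's exact config.
import torch
import torch.nn as nn

class VolumetricGenerator(nn.Module):
    def __init__(self, z_dim=200):
        super().__init__()
        self.net = nn.Sequential(
            # z (as a 1x1x1 volume) -> 4^3 -> 8^3 -> 16^3 -> 32^3 occupancy grid
            nn.ConvTranspose3d(z_dim, 256, kernel_size=4), nn.BatchNorm3d(256), nn.ReLU(),
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1), nn.BatchNorm3d(128), nn.ReLU(),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.ConvTranspose3d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

voxels = VolumetricGenerator()(torch.randn(2, 200))  # (2, 1, 32, 32, 32)
```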
Synthesizing 3D Shapes via Modeling Multi-view Depth Maps and Silhouettes with Deep Generative Networks
TLDR: This work takes an alternative approach to learning generative models of 3D shapes: it learns a generative model over multi-view depth maps or their corresponding silhouettes, and uses a deterministic rendering function to produce 3D shapes from these images.
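A deterministic mapping from silhouettes to a shape can be as simple as a visual-hull-style intersection of back-projected masks; the sketch below is an assumed, simplified stand-in for the paper's rendering function, with arbitrary axis conventions.

```python
# Illustrative fusion of three axis-aligned silhouettes into a voxel grid
# (visual-hull-style carving); not the paper's actual camera model.
import torch

def carve(sil_front, sil_side, sil_top):
    """Each silhouette is a boolean (N, N) tensor; returns an (N, N, N) grid.

    A voxel survives only if it projects inside all three masks, i.e. the
    intersection of the back-projected silhouettes.
    """
    n = sil_front.size(0)
    front = sil_front[:, :, None].expand(n, n, n)  # extrude along z
    side = sil_side[:, None, :].expand(n, n, n)    # extrude along y
    top = sil_top[None, :, :].expand(n, n, n)      # extrude along x
    return front & side & top

# Usage: three 32x32 masks -> a 32^3 occupancy grid.
grid = carve(*[torch.ones(32, 32, dtype=torch.bool)] * 3)
```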
3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction
TLDR: The 3D-R2N2 reconstruction framework outperforms state-of-the-art methods for single-view reconstruction and enables 3D reconstruction of objects in situations where traditional SfM/SLAM methods fail (because of a lack of texture and/or a wide baseline).
Learning Single-View 3D Reconstruction with Limited Pose Supervision
TLDR: A unified framework is presented that can combine both types of supervision: a small amount of camera-pose annotations is used to enforce pose invariance and viewpoint consistency, and unlabeled images combined with an adversarial loss are used to enforce the realism of rendered, generated models.
Weakly Supervised 3D Reconstruction with Adversarial Constraint
Supervised 3D reconstruction has witnessed significant progress through the use of deep neural networks. However, this increase in performance requires large-scale annotations of 2D/3D data. In…
SilNet: Single- and Multi-View Reconstruction by Learning from Silhouettes
TLDR: SilNet, a new deep-learning architecture and loss function that handles multiple views in an order-agnostic manner, is introduced; it exceeds the state of the art on the ShapeNet benchmark dataset and is used to generate novel views of the sculpture dataset.
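Order-agnostic handling of multiple views is commonly achieved by pooling per-view embeddings with a symmetric operation such as max; the following hypothetical sketch illustrates that property and is not SilNet's actual architecture.

```python
# Hypothetical sketch of order-agnostic multi-view fusion: a symmetric
# (max) pool over per-view embeddings makes the output invariant to the
# order in which views arrive. Assumes 32x32 input views for simplicity.
import torch
import torch.nn as nn

class OrderAgnosticEncoder(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.per_view = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 32, feat_dim), nn.ReLU()
        )

    def forward(self, views):          # views: (batch, n_views, 32, 32)
        b, v = views.shape[:2]
        f = self.per_view(views.reshape(b * v, 32, 32)).view(b, v, -1)
        return f.max(dim=1).values     # max over views: permutation-invariant

enc = OrderAgnosticEncoder()
x = torch.rand(2, 5, 32, 32)
assert torch.allclose(enc(x), enc(x[:, torch.randperm(5)]))
```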
The shape variational autoencoder: A deep generative model of part-segmented 3D objects
TLDR: It is demonstrated qualitatively that the ShapeVAE produces plausible shape samples and captures a semantically meaningful shape embedding, and it is shown that the model facilitates mesh reconstruction by sampling consistent surface normals.