Learning to Generate 3D Shapes from a Single Example

Rundi Wu and Changxi Zheng. ACM Transactions on Graphics (TOG), pp. 1-19.
Existing generative models for 3D shapes are typically trained on large 3D datasets, often of a specific object category. In this paper, we investigate a deep generative model that learns from only a single reference 3D shape. Specifically, we present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales. To avoid the large memory and computational cost of operating on the 3D volume, we build our generator atop the tri… 
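The truncated abstract points at a 2D factorization of the 3D volume. As a generic illustration (not necessarily the paper's exact architecture), a tri-plane representation stores three axis-aligned 2D feature grids and sums their lookups for any 3D query point; the names `sample_plane` and `triplane_feature` below are hypothetical:

```python
import numpy as np

def sample_plane(plane, u, v):
    """Nearest-neighbor lookup in a 2D feature plane.

    plane: (H, W, C) feature grid; u, v in [0, 1).
    """
    H, W, _ = plane.shape
    i = min(int(u * H), H - 1)
    j = min(int(v * W), W - 1)
    return plane[i, j]

def triplane_feature(planes, p):
    """Feature of a 3D point p = (x, y, z), each coordinate in [0, 1).

    planes: dict of 2D grids for the xy, xz, and yz planes.
    Summing three plane lookups makes memory grow as O(3 N^2 C)
    instead of O(N^3 C) for a full voxel grid.
    """
    x, y, z = p
    return (sample_plane(planes["xy"], x, y)
            + sample_plane(planes["xz"], x, z)
            + sample_plane(planes["yz"], y, z))
```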

SinGRAF: Learning a 3D Generative Radiance Field for a Single Scene

This work introduces SinGRAF, a 3D-aware generative model trained with a few input images of a single scene, which outperforms the closest related works in both quality and diversity by a large margin.

Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models

This paper makes the first attempt to introduce the explicit 3D shape prior to CLIP-guided 3D optimization methods, and presents a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model.

SinDDM: A Single Image Denoising Diffusion Model

This work introduces a framework for training a DDM on a single image, which learns the internal statistics of the training image through a multi-scale diffusion process and uses a fully convolutional, lightweight denoiser to drive the reverse diffusion process.
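As background for the diffusion process mentioned above, the standard DDPM forward (noising) step can be sampled in closed form; a minimal sketch, with the hypothetical helper name `ddpm_forward` and a user-supplied noise schedule `betas`:

```python
import numpy as np

def ddpm_forward(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:

        x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps

    where alpha_bar_t is the cumulative product of (1 - beta_s).
    Returns the noised sample and the noise used (the usual
    regression target for the denoiser).
    """
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps
```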

Multi-chart generative surface modeling

A 3D shape generative model based on deep neural networks that learns the shape distribution and is able to generate novel shapes, interpolate shapes, and explore the generated shape space for human body and bone (teeth) shape generation is introduced.

DECOR-GAN: 3D Shape Detailization by Conditional Refinement

This work introduces a deep generative network for 3D shape detailization, akin to stylization with the style being geometric details, and demonstrates that this method can refine a coarse shape into a variety of detailed shapes with different styles.

DeepCAD: A Deep Generative Network for Computer-Aided Design Models

This work presents the first 3D generative model for a drastically different shape representation— describing a shape as a sequence of computer-aided design (CAD) operations, and proposes a CAD generative network based on the Transformer.

SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation

SP-GAN not only enables the generation of diverse and realistic shapes as point clouds with fine details but also embeds a dense correspondence across the generated shapes, thus facilitating part-wise interpolation between user-selected local parts in the generated shapes.

StructureNet: Hierarchical Graph Networks for 3D Shape Generation

StructureNet is introduced, a hierarchical graph network that can directly encode shapes represented as n-ary graphs, can be robustly trained on large and complex shape families, and can be used to generate a great diversity of realistic structured shape geometries.

Learning Generative Models of Textured 3D Meshes from Real-World Images

This work proposes a GAN framework for generating textured triangle meshes without relying on ground-truth keypoints, and demonstrates the generality of the method by setting new baselines on a larger set of categories from ImageNet, for which keypoints are not available, without any class-specific hyperparameter tuning.

Learning Implicit Fields for Generative Shape Modeling

  • Zhiqin Chen and Hao Zhang
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
By replacing conventional decoders with the implicit decoder for representation learning and shape generation, this work demonstrates superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.
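An implicit decoder of this kind maps a latent shape code plus a 3D query point to an inside/outside value; a minimal, untrained sketch (the function and weight names are illustrative, not taken from the paper):

```python
import numpy as np

def implicit_decoder(z, xyz, W1, b1, W2, b2):
    """Tiny MLP f(z, p) -> occupancy in (0, 1).

    z: (d,) latent shape code; xyz: (N, 3) query points.
    The code is concatenated to every query point, followed by
    one hidden layer and a sigmoid output. Evaluating f on a
    dense grid and extracting the 0.5 level set yields a mesh.
    """
    N = xyz.shape[0]
    h = np.concatenate([np.broadcast_to(z, (N, z.shape[0])), xyz], axis=1)
    h = np.maximum(h @ W1 + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid occupancy
```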

Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling

A novel framework, the 3D Generative Adversarial Network (3D-GAN), is proposed, which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets, and yields a powerful 3D shape descriptor with wide applications in 3D object recognition.

Learning Representations and Generative Models for 3D Point Clouds

A deep AutoEncoder network with state-of-the-art reconstruction quality and generalization ability is introduced with results that outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations.
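Point cloud autoencoders of this kind are commonly trained with the Chamfer distance as the reconstruction loss (a standard choice, not necessarily this paper's exact objective); a compact sketch:

```python
import numpy as np

def chamfer_distance(A, B):
    """Symmetric Chamfer distance between point sets A (n, 3) and B (m, 3).

    Mean squared distance from each point to its nearest neighbor in
    the other set, summed over both directions. Differentiable almost
    everywhere, so it works as an autoencoder reconstruction loss.
    """
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # (n, m) pairwise sq. dists
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```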

GRASS: Generative Recursive Autoencoders for Shape Structures

A novel neural network architecture for encoding and synthesis of 3D shapes, particularly their structures, is introduced and it is demonstrated that without supervision, the network learns meaningful structural hierarchies adhering to perceptual grouping principles, produces compact codes which enable applications such as shape classification and partial matching, and supports shape synthesis and interpolation with significant variations in topology and geometry.