Corpus ID: 220793178

MRGAN: Multi-Rooted 3D Shape Generation with Unsupervised Part Disentanglement

@article{Gal2020MRGANM3,
  title={MRGAN: Multi-Rooted 3D Shape Generation with Unsupervised Part Disentanglement},
  author={Rinon Gal and Amit H. Bermano and Hao Zhang and Daniel Cohen-Or},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.12944}
}
We present MRGAN, a multi-rooted adversarial network which generates part-disentangled 3D point-cloud shapes without part-based shape supervision. The network fuses multiple branches of tree-structured graph convolution layers which produce point clouds, with learnable constant inputs at the tree roots. Each branch learns to grow a different shape part, offering control over the shape generation at the part level. Our network encourages disentangled generation of semantic parts via two key… 
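To make the multi-rooted idea concrete, here is a minimal PyTorch sketch of a generator with one learnable constant root per part and one branch per root, with the per-part point clouds concatenated into the full shape. The class name, dimensions, and the plain MLP branches are illustrative assumptions; the paper's branches are tree-structured graph-convolution layers, and its disentanglement terms are not shown.

```python
import torch
import torch.nn as nn

class MultiRootGenerator(nn.Module):
    """Toy multi-rooted generator: one learnable constant root per part,
    one branch per root, parts concatenated into the full point cloud.
    (Plain MLP branches stand in for the paper's tree-GCN branches.)"""

    def __init__(self, num_roots=4, z_dim=64, points_per_root=512):
        super().__init__()
        # Learnable constant inputs at the tree roots, one per part.
        self.roots = nn.Parameter(torch.randn(num_roots, z_dim))
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Linear(2 * z_dim, 256), nn.LeakyReLU(0.2),
                nn.Linear(256, points_per_root * 3),
            )
            for _ in range(num_roots)
        )
        self.points_per_root = points_per_root

    def forward(self, z):
        # z: (B, z_dim) shared latent code; each branch grows one part.
        parts = []
        for i, branch in enumerate(self.branches):
            root = self.roots[i].expand(z.size(0), -1)   # (B, z_dim)
            part = branch(torch.cat([z, root], dim=-1))  # (B, P*3)
            parts.append(part.view(z.size(0), self.points_per_root, 3))
        return torch.cat(parts, dim=1)  # (B, num_roots * P, 3)
```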

Citations

EditVAE: Unsupervised Part-Aware Controllable 3D Point Cloud Shape Generation
TLDR
A latent representation of the point cloud is decomposed into a disentangled representation for each part of the shape, and the inductive bias introduced by this joint modeling approach yields state-of-the-art experimental results on the ShapeNet dataset.
StyleFusion: A Generative Model for Disentangling Spatial Segments
TLDR
StyleFusion, a new mapping architecture for StyleGAN, is presented; it takes as input a number of latent codes and fuses them into a single style code, providing fine-grained control over each region of the generated image.
StyleFusion: Disentangling Spatial Segments in StyleGAN-Generated Images
TLDR
StyleFusion, a new mapping architecture for StyleGAN, takes as input a number of latent codes and fuses them into a single style code, yielding a single harmonized image in which each semantic region is controlled by one of the input latent codes.
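As a loose illustration of the interface both StyleFusion entries describe (several latent codes in, one style code out), here is a hypothetical toy fuser. Everything below (names, the attention-style weighting, dimensions) is an assumption for illustration; the actual StyleFusion mapping is hierarchical and mask-driven.

```python
import torch
import torch.nn as nn

class LatentFuser(nn.Module):
    """Toy fusion mapping: k per-region latent codes are mixed into a single
    style code via learned weights. (A deliberate simplification of the
    many-codes-in, one-code-out interface.)"""

    def __init__(self, w_dim=512):
        super().__init__()
        self.score = nn.Linear(w_dim, 1)     # per-code mixing logit
        self.proj = nn.Linear(w_dim, w_dim)  # shared output projection

    def forward(self, codes):
        # codes: (B, k, w_dim) -> fused style code (B, w_dim)
        weights = torch.softmax(self.score(codes), dim=1)  # (B, k, 1)
        return self.proj((weights * codes).sum(dim=1))

# fused = LatentFuser()(torch.randn(2, 3, 512))  # three region codes -> one
```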
SPAGHETTI: Editing Implicit Shapes Through Part Aware Generation
TLDR
The architecture allows for manipulation of implicit shapes by transforming, interpolating, and combining shape segments without requiring explicit part supervision, enabling a generative framework with part-level control.
SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation
TLDR
SP-GAN not only enables the generation of diverse and realistic shapes as point clouds with fine details but also embeds a dense correspondence across the generated shapes, thus facilitating part-wise interpolation between user-selected local parts in the generated shapes.
Z2P: Instant Rendering of Point Clouds
TLDR
This work presents a technique for rendering point clouds using a neural network and demonstrates that the framework produces plausible images, effectively handles noise, non-uniform sampling, and thin surfaces or sheets, and is fast.
GeoPointGAN: Synthetic Spatial Data with Local Label Differential Privacy
TLDR
This work introduces GeoPointGAN, a novel GAN-based solution for generating synthetic spatial point datasets with high utility and strong individual-level privacy guarantees, and demonstrates that a strong level of privacy is achieved with little to no adverse utility cost.
Z2P: Instant Visualization of Point Clouds
TLDR
This work designs a neural network that translates a point depth map directly into an image in which the point cloud is visualized as though a surface had been reconstructed from it, and shows that the appearance of the visualized point cloud can optionally be conditioned on simple control variables.
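The network's input in both Z2P entries is a depth map produced from the point cloud. As a hedged illustration of that preprocessing step only (the function name and the orthographic projection are assumptions, not the papers' exact pipeline), a toy z-buffer splat might look like:

```python
import torch

def points_to_depth_map(pts, res=256):
    """Toy z-buffer splat: orthographically project points (assumed to lie in
    [-1, 1]^3) onto a res x res grid, keeping the nearest z per pixel. This
    builds the kind of point depth map a Z2P-style network translates."""
    ij = ((pts[:, :2] + 1) * 0.5 * (res - 1)).long().clamp(0, res - 1)
    flat = ij[:, 1] * res + ij[:, 0]              # linearized pixel index
    depth = torch.full((res * res,), float('inf'))
    # 'amin' keeps the closest (smallest-z) point hitting each pixel
    depth.scatter_reduce_(0, flat, pts[:, 2], reduce='amin')
    depth[depth == float('inf')] = 0.0            # empty pixels -> background
    return depth.view(res, res)

# depth = points_to_depth_map(torch.rand(2048, 3) * 2 - 1)
```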
Latent Partition Implicit with Surface Codes for 3D Representation
TLDR
The insight here is that both part learning and part blending can be conducted much more easily in latent space than in spatial space; as a result, LPI outperforms the latest methods on widely used benchmarks in terms of reconstruction accuracy and modeling interpretability.
SP-GAN
TLDR
SP-GAN is a new unsupervised, sphere-guided generative model for the direct synthesis of 3D shapes as point clouds; it incorporates a global prior to spatially guide the generative process and attaches a local prior to each sphere point to provide local details.
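A minimal sketch of the sphere-guided setup described in both SP-GAN entries, assuming a per-point MLP in place of the actual architecture: the fixed sphere acts as the global prior, each sphere point carries a local latent, and reusing the same sphere indices across shapes is what yields the dense correspondence.

```python
import torch
import torch.nn as nn

def sample_sphere(n):
    # The fixed global prior: points on the unit sphere.
    p = torch.randn(n, 3)
    return p / p.norm(dim=-1, keepdim=True)

class SphereGuidedGenerator(nn.Module):
    """Toy sphere-guided generator: each sphere point carries a local latent
    and is displaced into the output shape, so point i plays the same
    structural role across generated shapes."""

    def __init__(self, z_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + z_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 3),
        )

    def forward(self, sphere, z_local):
        # sphere: (N, 3) global prior; z_local: (N, z_dim) local prior.
        return sphere + self.mlp(torch.cat([sphere, z_local], dim=-1))

# shape = SphereGuidedGenerator()(sample_sphere(2048), torch.randn(2048, 32))
```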

References

Showing 1-10 of 35 references
Global-to-local generative model for 3D shapes
TLDR
It is demonstrated that the global-to-local generative model produces significantly better results than a plain three-dimensional GAN, in terms of both shape variety and distribution with respect to the training data.
GRASS: Generative Recursive Autoencoders for Shape Structures
TLDR
A novel neural network architecture for encoding and synthesis of 3D shapes, particularly their structures, is introduced, and it is demonstrated that, without supervision, the network learns meaningful structural hierarchies adhering to perceptual grouping principles, produces compact codes that enable applications such as shape classification and partial matching, and supports shape synthesis and interpolation with significant variations in topology and geometry.
Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling
TLDR
A novel framework, the 3D Generative Adversarial Network (3D-GAN), generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets, and also yields a powerful 3D shape descriptor with wide applications in 3D object recognition.
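As an illustration of the volumetric approach, here is a minimal generator sketch in the 3D-GAN spirit that upsamples a latent vector through 3D transposed convolutions into a voxel occupancy grid; the layer widths and the 32^3 output resolution are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class VoxelGenerator(nn.Module):
    """Toy volumetric generator: latent vector -> voxel occupancy grid
    via a stack of 3D transposed convolutions."""

    def __init__(self, z_dim=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 256, 4, 1, 0), nn.BatchNorm3d(256), nn.ReLU(),  # 1 -> 4
            nn.ConvTranspose3d(256, 128, 4, 2, 1), nn.BatchNorm3d(128), nn.ReLU(),    # 4 -> 8
            nn.ConvTranspose3d(128, 64, 4, 2, 1), nn.BatchNorm3d(64), nn.ReLU(),      # 8 -> 16
            nn.ConvTranspose3d(64, 1, 4, 2, 1), nn.Sigmoid(),                         # 16 -> 32
        )

    def forward(self, z):
        # z: (B, z_dim) -> occupancy grid (B, 1, 32, 32, 32)
        return self.net(z.view(z.size(0), -1, 1, 1, 1))
```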
StructureNet: Hierarchical Graph Networks for 3D Shape Generation
TLDR
StructureNet, a hierarchical graph network that directly encodes shapes represented as n-ary part graphs, is introduced; it can be robustly trained on large and complex shape families and used to generate a great diversity of realistic structured shape geometries.
Composite Shape Modeling via Latent Space Factorization
TLDR
This work proposes to model shape assembly using an explicit learned part deformation module, which utilizes a 3D spatial transformer network to perform an in-network volumetric grid deformation, and which allows the whole system, trained end-to-end, to perform part-level shape manipulation unattainable by existing approaches.
LOGAN: Unpaired Shape Transform in Latent Overcomplete Space
TLDR
LOGAN, a deep neural network aimed at learning general-purpose shape transforms from unpaired domains, is introduced and shown to learn which shape features to preserve during shape translation, whether local or non-local, content or style, depending solely on the input domains used for training.
3D Point Cloud Generative Adversarial Network Based on Tree Structured Graph Convolutions
TLDR
Experimental results demonstrate that the proposed tree-GAN outperforms state-of-the-art GANs in terms of both conventional metrics and Fréchet point cloud distance (FPD), and can generate point clouds for different semantic parts without prior knowledge.
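Since MRGAN builds on these tree-structured graph convolutions, a simplified sketch of one branching step may help. The layer below is an assumption-laden reduction (real tree-GCN layers also aggregate ancestor features), but it shows how stacking branching layers grows a point cloud from a single root.

```python
import torch
import torch.nn as nn

class TreeBranch(nn.Module):
    """One simplified tree-structured graph-conv step: every node spawns
    `degree` children whose features combine a learned per-node ('loop')
    transform with a learned parent-to-child branching transform."""

    def __init__(self, in_dim, out_dim, degree):
        super().__init__()
        self.degree = degree
        self.branch = nn.Linear(in_dim, degree * out_dim)  # parent -> children
        self.loop = nn.Linear(in_dim, out_dim)             # per-node transform
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        # x: (B, N, in_dim) -> (B, N * degree, out_dim)
        B, N, _ = x.shape
        children = self.branch(x).view(B, N * self.degree, -1)
        loop = self.loop(x).repeat_interleave(self.degree, dim=1)
        return self.act(children + loop)

# root = torch.randn(1, 1, 96)                             # single tree root
# pts = TreeBranch(32, 3, 4)(TreeBranch(96, 32, 8)(root))  # (1, 32, 3) cloud
```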
BAE-NET: Branched Autoencoder for Shape Co-Segmentation
TLDR
BAE-NET, a branched autoencoder network, is introduced for shape co-segmentation; using only a couple of exemplars, it can generally outperform state-of-the-art supervised methods trained on hundreds of segmented shapes.
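A minimal sketch of the branched-decoder mechanism, under the assumption of plain MLP branches: each branch emits a per-query inside/outside value, the shape occupancy is the max over branches, and the argmax branch doubles as an unsupervised part label.

```python
import torch
import torch.nn as nn

class BranchedImplicitDecoder(nn.Module):
    """Toy branched implicit decoder: branches compete via max pooling, so
    each specializes into one emergent part with no part supervision.
    (A simplification of the BAE-NET decoder.)"""

    def __init__(self, code_dim=128, num_branches=8, hidden=256):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Linear(code_dim + 3, hidden), nn.LeakyReLU(0.2),
                nn.Linear(hidden, 1), nn.Sigmoid(),
            )
            for _ in range(num_branches)
        )

    def forward(self, code, xyz):
        # code: (B, code_dim) shape code; xyz: (B, N, 3) query points.
        inp = torch.cat([code.unsqueeze(1).expand(-1, xyz.size(1), -1), xyz], -1)
        per_branch = torch.cat([b(inp) for b in self.branches], dim=-1)  # (B,N,K)
        occupancy, part_id = per_branch.max(dim=-1)  # part_id: emergent labels
        return occupancy, part_id
```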
PointFlow: 3D Point Cloud Generation With Continuous Normalizing Flows
TLDR
A principled probabilistic framework generates 3D point clouds by modeling them as a distribution of distributions; the invertibility of normalizing flows enables the computation of the likelihood during training and allows the model to be trained in the variational inference framework.
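To illustrate why invertibility yields exact likelihoods, here is a toy discrete affine-coupling layer over point coordinates together with the change-of-variables likelihood. PointFlow itself uses continuous normalizing flows; this stand-in and every name in it are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Toy invertible coupling layer over 3D points: the x coordinate
    conditions an affine transform of (y, z). The transform is invertible,
    so the exact log-likelihood follows from the change-of-variables formula."""

    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # scale and shift for the (y, z) coords
        )

    def forward(self, pts):
        # pts: (N, 3) -> latent points (N, 3) and per-point log|det J|.
        x, yz = pts[:, :1], pts[:, 1:]
        s, t = self.net(x).chunk(2, dim=-1)
        s = torch.tanh(s)                    # bound scales for stability
        out = torch.cat([x, yz * torch.exp(s) + t], dim=-1)
        return out, s.sum(dim=-1)            # log-determinant of the Jacobian

# z, logdet = AffineCoupling()(torch.randn(2048, 3))
# loglik = torch.distributions.Normal(0., 1.).log_prob(z).sum(-1) + logdet
```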
HoloGAN: Unsupervised Learning of 3D Representations From Natural Images
TLDR
HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner, and it is shown to generate images of visual quality similar to or higher than that of other generative models.