Corpus ID: 238744083

EditVAE: Unsupervised Part-Aware Controllable 3D Point Cloud Shape Generation

@article{Li2021EditVAEUP,
  title={EditVAE: Unsupervised Part-Aware Controllable 3D Point Cloud Shape Generation},
  author={Shidi Li and Miaomiao Liu and Christian J. Walder},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.06679}
}
This paper tackles the problem of parts-aware point cloud generation. Unlike existing works which require the point cloud to be segmented into parts a priori, our parts-aware editing and generation are performed in an unsupervised manner. We achieve this with a simple modification of the Variational Auto-Encoder which yields a joint model of the point cloud itself along with a schematic representation of it as a combination of shape primitives. In particular, we introduce a latent…
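A minimal sketch of the kind of part-aware decoder the abstract describes, assuming the latent vector is simply split into per-part codes, each decoded into a small point cloud plus cuboid primitive parameters; all module names and sizes are illustrative and not the authors' implementation.

    import torch
    import torch.nn as nn

    class PartAwareDecoder(nn.Module):
        """Illustrative decoder: one latent code per part, each yielding a part
        point cloud and a cuboid primitive (center + half-extents)."""
        def __init__(self, latent_dim=128, num_parts=4, points_per_part=512):
            super().__init__()
            self.num_parts = num_parts
            self.part_dim = latent_dim // num_parts
            self.point_head = nn.Sequential(
                nn.Linear(self.part_dim, 256), nn.ReLU(),
                nn.Linear(256, points_per_part * 3))
            self.primitive_head = nn.Linear(self.part_dim, 6)  # cuboid center + size

        def forward(self, z):
            # z: (B, latent_dim) -> per-part codes (B, K, part_dim)
            parts = z.view(z.size(0), self.num_parts, self.part_dim)
            points = self.point_head(parts).view(z.size(0), self.num_parts, -1, 3)
            primitives = self.primitive_head(parts)         # (B, K, 6)
            full_cloud = points.reshape(z.size(0), -1, 3)   # union of the parts
            return full_cloud, points, primitives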
SPA-VAE: Similar-Parts-Assignment for Unsupervised 3D Point Cloud Generation
TLDR
This paper addresses unsupervised parts-aware point cloud generation with learned parts-based self-similarity by training SPA-VAE, a variational Bayesian approach that uses the Gumbel-softmax trick for shared part assignments, along with several novel losses that provide appropriate inductive biases.
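The Gumbel-softmax trick mentioned above can be illustrated with a generic differentiable part-template assignment; the shapes and temperature are assumptions, not SPA-VAE's actual code.

    import torch
    import torch.nn.functional as F

    # Illustrative setup: each of K part slots picks one of M shared templates.
    B, K, M, D = 8, 4, 6, 64                 # batch, part slots, templates, feature dim
    assignment_logits = torch.randn(B, K, M, requires_grad=True)
    templates = torch.randn(M, D, requires_grad=True)   # shared part templates

    # Differentiable (straight-through) one-hot assignment via Gumbel-softmax.
    assign = F.gumbel_softmax(assignment_logits, tau=0.5, hard=True)  # (B, K, M)
    selected = assign @ templates            # (B, K, D): chosen template per slot

    # Gradients flow to both logits and templates despite the hard selection.
    selected.sum().backward()
    print(assignment_logits.grad.shape, assign.sum(dim=-1))  # each row sums to 1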

References

Showing 1-10 of 45 references
PointFlow: 3D Point Cloud Generation With Continuous Normalizing Flows
TLDR
A principled probabilistic framework that generates 3D point clouds by modeling them as a distribution of distributions; the invertibility of the normalizing flows enables likelihood computation during training and allows the model to be trained in the variational inference framework.
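The likelihood computation enabled by invertibility can be shown with a single elementwise affine flow and the change-of-variables formula; PointFlow itself uses continuous normalizing flows, so treat this as a simplified stand-in with assumed shapes.

    import torch

    # One invertible transform x = z * exp(s) + t applied per point;
    # log|det dz/dx| = -sum(s). PointFlow stacks continuous flows instead.
    torch.manual_seed(0)
    points = torch.randn(1024, 3)                  # a point cloud, N x 3
    log_scale = torch.randn(3) * 0.1               # learned in practice
    shift = torch.randn(3)

    z = (points - shift) * torch.exp(-log_scale)   # inverse map to the prior
    log_det = -log_scale.sum()                     # per-point Jacobian term

    prior = torch.distributions.Normal(0.0, 1.0)
    log_likelihood = (prior.log_prob(z).sum(dim=1) + log_det).sum()
    print(float(log_likelihood))                   # exact log-likelihood under this model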
MRGAN: Multi-Rooted 3D Shape Generation with Unsupervised Part Disentanglement
We present MRGAN, a multi-rooted adversarial network which generates part-disentangled 3D point-cloud shapes without part-based shape supervision. The network fuses multiple branches of…
Unsupervised learning for cuboid shape abstraction via joint segmentation from point clouds
TLDR
This paper proposes an unsupervised shape abstraction method to map a point cloud into a compact cuboid representation, and designs four novel losses to jointly supervise the segmentation and abstraction branches in terms of geometric similarity and cuboid compactness.
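To make the geometric-similarity and compactness terms concrete, here is a generic point-to-cuboid distance plus a volume penalty; the paper's four losses are more elaborate, and the axis-aligned center/half-extent parameterization below is an assumption for illustration.

    import torch

    def point_to_cuboid(points, center, half_extents):
        """Unsigned distance from points (N, 3) to an axis-aligned cuboid surface
        (zero for points inside), a simple geometric-similarity term."""
        offset = (points - center).abs() - half_extents
        return offset.clamp(min=0.0).norm(dim=1)

    points = torch.rand(2048, 3) * 2 - 1
    center = torch.zeros(3)
    half_extents = torch.tensor([0.5, 0.3, 0.8])

    coverage_loss = point_to_cuboid(points, center, half_extents).mean()
    compactness_loss = (2 * half_extents).prod()    # cuboid volume penalty
    loss = coverage_loss + 0.1 * compactness_loss
    print(float(coverage_loss), float(compactness_loss))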
Learning Representations and Generative Models for 3D Point Clouds
TLDR
A deep AutoEncoder network with state-of-the-art reconstruction quality and generalization ability is introduced; its learned representations outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations.
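Such point-cloud autoencoders are commonly trained with the Chamfer distance, and latent-space editing reduces to arithmetic on codes; a minimal version with assumed shapes, not the authors' code:

    import torch

    def chamfer_distance(a, b):
        """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3)."""
        d = torch.cdist(a, b)                      # (N, M) pairwise distances
        return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

    a = torch.randn(1024, 3)
    b = a + 0.01 * torch.randn_like(a)
    print(float(chamfer_distance(a, b)))           # small for near-identical clouds

    # "Simple algebraic manipulations" for editing: interpolate latent codes.
    z_chair, z_sofa = torch.randn(128), torch.randn(128)
    z_edit = 0.5 * z_chair + 0.5 * z_sofa          # decode(z_edit) blends the shapes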
The shape variational autoencoder: A deep generative model of part‐segmented 3D objects
TLDR
It is demonstrated qualitatively that the ShapeVAE produces plausible shape samples and captures a semantically meaningful shape embedding, and that the model facilitates mesh reconstruction by sampling consistent surface normals.
Learning Localized Generative Models for 3D Point Clouds via Graph Convolution
TLDR
This paper focuses on the generator of a GAN and defines graph convolution methods for the case where the graph is not known in advance, since it is the very output of the generator; the generator thereby learns to exploit a self-similarity prior on the data distribution to sample more effectively.
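Graph convolution when the graph is not known in advance can be sketched by building a k-nearest-neighbour graph in feature space on the fly and aggregating neighbours, in the spirit of dynamic edge convolution; the layer below is a simplification, not the paper's exact operator.

    import torch
    import torch.nn as nn

    class DynamicGraphConv(nn.Module):
        """Aggregate features over a kNN graph recomputed from the features themselves."""
        def __init__(self, in_dim, out_dim, k=8):
            super().__init__()
            self.k = k
            self.mlp = nn.Linear(2 * in_dim, out_dim)

        def forward(self, x):                      # x: (N, in_dim) node features
            dist = torch.cdist(x, x)               # (N, N) pairwise distances
            idx = dist.topk(self.k + 1, largest=False).indices[:, 1:]  # drop self
            neighbours = x[idx]                    # (N, k, in_dim)
            center = x.unsqueeze(1).expand_as(neighbours)
            edge_feat = torch.cat([center, neighbours - center], dim=-1)
            return self.mlp(edge_feat).max(dim=1).values   # (N, out_dim)

    layer = DynamicGraphConv(32, 64)
    print(layer(torch.randn(100, 32)).shape)       # torch.Size([100, 64])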
Composite Shape Modeling via Latent Space Factorization
TLDR
This work proposes to model shape assembly with an explicitly learned part deformation module that uses a 3D spatial transformer network to perform in-network volumetric grid deformation, allowing the whole system to be trained end to end and to perform part-level shape manipulation unattainable by existing approaches.
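The in-network volumetric grid deformation is the 3D analogue of a spatial transformer; a minimal affine version using PyTorch's grid sampling (the paper's deformation module is richer than this):

    import torch
    import torch.nn.functional as F

    # One part encoded as an occupancy grid, deformed by a predicted affine map.
    volume = torch.rand(1, 1, 32, 32, 32)                  # (B, C, D, H, W)
    theta = torch.tensor([[[1.0, 0.0, 0.0, 0.1],           # (B, 3, 4) affine params,
                           [0.0, 1.0, 0.0, 0.0],           # here a small translation;
                           [0.0, 0.0, 1.0, 0.0]]])         # normally regressed by a net

    grid = F.affine_grid(theta, volume.shape, align_corners=False)  # (B, D, H, W, 3)
    deformed = F.grid_sample(volume, grid, align_corners=False)     # warped volume
    print(deformed.shape)                                  # torch.Size([1, 1, 32, 32, 32])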
StructureNet: Hierarchical Graph Networks for 3D Shape Generation
TLDR
StructureNet, a hierarchical graph network that can directly encode shapes represented as n-ary part graphs, is introduced; it can be robustly trained on large and complex shape families and used to generate a great diversity of realistic structured shape geometries.
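Encoding an n-ary part hierarchy can be sketched as recursive aggregation of child embeddings; this toy version with assumed feature sizes omits the relationship edges and geometry losses StructureNet also models.

    import torch
    import torch.nn as nn

    class TreeEncoder(nn.Module):
        """Toy n-ary hierarchy encoder: a node's code is an MLP over the sum of its
        children's codes; leaves carry raw geometry features."""
        def __init__(self, dim=64):
            super().__init__()
            self.leaf = nn.Linear(6, dim)        # e.g. box parameters per leaf part
            self.node = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

        def encode(self, tree):
            if "children" not in tree:           # leaf: {"geometry": (6,) tensor}
                return self.leaf(tree["geometry"])
            child_codes = torch.stack([self.encode(c) for c in tree["children"]])
            return self.node(child_codes.sum(dim=0))

    enc = TreeEncoder()
    chair = {"children": [{"geometry": torch.randn(6)},
                          {"children": [{"geometry": torch.randn(6)},
                                        {"geometry": torch.randn(6)}]}]}
    print(enc.encode(chair).shape)               # torch.Size([64])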
Progressive Point Cloud Deconvolution Generation Network
TLDR
An effective point cloud generation method that produces multi-resolution point clouds of the same shape from a latent vector is proposed, together with a shape-preserving adversarial loss for training the point cloud deconvolution generation network.
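The progressive idea can be sketched as repeatedly splitting each point into several child points via learned offsets, giving coarse-to-fine resolutions from one latent vector; the upsampling operator here is a simplification of the paper's learning-based deconvolution.

    import torch
    import torch.nn as nn

    class ProgressiveGenerator(nn.Module):
        """Latent vector -> coarse cloud -> progressively upsampled clouds."""
        def __init__(self, latent_dim=128, base_points=256, up_factor=4, stages=2):
            super().__init__()
            self.base = nn.Linear(latent_dim, base_points * 3)
            # Each stage predicts `up_factor` offsets per point from its coordinates.
            self.stages = nn.ModuleList([
                nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, up_factor * 3))
                for _ in range(stages)])
            self.up_factor = up_factor

        def forward(self, z):                    # z: (B, latent_dim)
            cloud = self.base(z).view(z.size(0), -1, 3)
            resolutions = [cloud]
            for stage in self.stages:
                offsets = stage(cloud).view(z.size(0), cloud.size(1), self.up_factor, 3)
                cloud = (cloud.unsqueeze(2) + 0.05 * offsets).reshape(z.size(0), -1, 3)
                resolutions.append(cloud)
            return resolutions                   # clouds of the same shape, growing density

    gen = ProgressiveGenerator()
    print([c.shape[1] for c in gen(torch.randn(2, 128))])   # [256, 1024, 4096]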
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
TLDR
This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
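The permutation invariance noted above comes from a shared per-point MLP followed by a symmetric pooling function; a stripped-down classifier head with illustrative layer sizes:

    import torch
    import torch.nn as nn

    class TinyPointNet(nn.Module):
        """Shared per-point MLP + max-pool: the output is invariant to point order."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 256))
            self.classifier = nn.Linear(256, num_classes)

        def forward(self, points):                 # points: (B, N, 3)
            features = self.point_mlp(points)      # (B, N, 256), shared across points
            global_feature = features.max(dim=1).values   # symmetric over points
            return self.classifier(global_feature)

    net = TinyPointNet()
    cloud = torch.randn(4, 1024, 3)
    shuffled = cloud[:, torch.randperm(1024)]
    print(torch.allclose(net(cloud), net(shuffled), atol=1e-5))   # True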