Unsupervised Learning of Intrinsic Structural Representation Points

@inproceedings{Chen2020UnsupervisedLO,
  title={Unsupervised Learning of Intrinsic Structural Representation Points},
  author={Nenglun Chen and Lingjie Liu and Zhiming Cui and Runnan Chen and Duygu Ceylan and Changhe Tu and Wenping Wang},
  booktitle={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={9118--9127}
}
Learning structures of 3D shapes is a fundamental problem in computer graphics and geometry processing. We present a simple yet interpretable unsupervised method for learning a new structural representation in the form of 3D structure points. The 3D structure points produced by our method encode the shape structure intrinsically and exhibit semantic consistency across all shape instances with similar structures. This is a challenging goal that has not been fully achieved by…
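The abstract describes structure points learned without supervision via a reconstruction objective. The following is a minimal NumPy sketch of that idea, not the authors' implementation: structure points formed as softmax-weighted combinations of input points and scored against the cloud with a symmetric Chamfer distance. The random `scores` stand in for the output of a learned encoder (a PointNet-style network in the paper); all names here are illustrative.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def structure_points(points, scores):
    """Structure points as convex combinations of the input points.

    points: (N,3) input cloud; scores: (K,N) unnormalized per-point scores,
    one row per structure point (in the paper these come from a network).
    """
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)   # softmax over the N input points
    return w @ points                   # (K, 3)

# Toy usage: random cloud, random scores in place of a trained encoder.
rng = np.random.default_rng(0)
cloud = rng.standard_normal((128, 3))
scores = rng.standard_normal((16, 128))
sp = structure_points(cloud, scores)            # 16 structure points
loss = chamfer_distance(sp, cloud)              # reconstruction signal to minimize
```

In training, gradients of the Chamfer loss with respect to `scores` would drive the structure points to cover the shape consistently across instances.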
Learning Implicit Functions for Topology-Varying Dense 3D Shape Correspondence
TLDR: This paper implements dense correspondence through an inverse function mapping from the part embedding to a corresponded 3D point, which is assumed to be similar to its densely corresponded point in another 3D shape of the same object category.
Learning 3D Dense Correspondence via Canonical Point Autoencoder
TLDR: A canonical point autoencoder that predicts dense correspondences between 3D shapes of the same category, requires no annotation or self-supervised part segmentation network, and can handle unaligned input point clouds within a certain rotation range.
Unsupervised Learning of 3D Semantic Keypoints with Mutual Reconstruction
TLDR: The proposed method is the first to mine 3D semantically consistent keypoints from a mutual-reconstruction view, predicting keypoints that reconstruct not only the object itself but also other instances in the same category.
KeypointDeformer: Unsupervised 3D Keypoint Discovery for Shape Control
TLDR: This work introduces KeypointDeformer, a novel unsupervised method for shape control through automatically discovered 3D keypoints that is readily deployed to new object categories without requiring annotations for 3D keypoints or deformations.
SGPA: Structure-Guided Prior Adaptation for Category-Level 6D Object Pose Estimation
  Kai Chen, Q. Dou · 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021
TLDR: This paper introduces a structure-guided prior adaptation scheme that accurately estimates the 6D pose of individual objects by leveraging their structure similarity to dynamically adapt the prior to the observed object.
Appendix: Unsupervised 3D Keypoint Discovery for Shape Control
Figure 1 (caption excerpt): Farthest-point keypoint regularizer ablation — the influence of the number J of sampled farthest points q used for the keypoint regularizer (Section 3.2 of the main paper).
Towards 3D Scene Understanding by Referring Synthetic Models
TLDR: This paper explores how synthetic models can alleviate the real-scene annotation burden: taking labelled 3D synthetic models as reference supervision, the neural network learns to recognize specific categories of objects in a real scene scan without any scene annotation.
RPG: Learning Recursive Point Cloud Generation
TLDR: A novel point cloud generator that reconstructs and generates 3D point clouds composed of semantic parts, with comparable or even superior performance on both generation and reconstruction tasks relative to various baselines.
UKPGAN: A General Self-Supervised Keypoint Detector
TLDR: UKPGAN is a general self-supervised 3D keypoint detector in which keypoints are detected such that they can reconstruct the original object shape; two modules, GAN-based keypoint sparsity control and salient information distillation, locate the important keypoints.
