Corpus ID: 229156107

Learning Category-level Shape Saliency via Deep Implicit Surface Networks

@article{Wu2020LearningCS,
  title={Learning Category-level Shape Saliency via Deep Implicit Surface Networks},
  author={Chaozheng Wu and Lin Sun and Xun Xu and Kui Jia},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.07290}
}
This paper is motivated by a fundamental curiosity about what defines a category of object shapes. For example, we may share the common knowledge that a plane has wings and a chair has legs. Given the large shape variations among different instances of the same category, we are formally interested in developing a quantity defined for individual points on a continuous object surface; the quantity specifies how individual surface points contribute to the formation of the shape as the category. We…
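The abstract above is truncated, but the general recipe it hints at (a deep implicit surface network that, for each query point, predicts both geometry and a category-level saliency score) can be sketched. The following PyTorch snippet is a hypothetical illustration only: `ImplicitSaliencyNet`, the two-head layout, and all dimensions are assumptions, not the authors' architecture.

```python
# Hypothetical sketch: a DeepSDF-style implicit decoder with an extra head
# that scores each query point's contribution to the category ("saliency").
import torch
import torch.nn as nn

class ImplicitSaliencyNet(nn.Module):
    def __init__(self, latent_dim=256, hidden=512):
        super().__init__()
        # Shared trunk over the concatenated (shape latent code, 3D query point).
        self.trunk = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sdf_head = nn.Linear(hidden, 1)       # signed distance to the surface
        self.saliency_head = nn.Linear(hidden, 1)  # per-point category saliency

    def forward(self, latent, points):
        # latent: (B, latent_dim); points: (B, N, 3)
        z = latent.unsqueeze(1).expand(-1, points.shape[1], -1)
        h = self.trunk(torch.cat([z, points], dim=-1))
        sdf = self.sdf_head(h).squeeze(-1)                           # (B, N)
        saliency = torch.sigmoid(self.saliency_head(h)).squeeze(-1)  # (B, N), in [0, 1]
        return sdf, saliency

net = ImplicitSaliencyNet()
sdf, sal = net(torch.randn(2, 256), torch.rand(2, 1024, 3))
print(sdf.shape, sal.shape)  # torch.Size([2, 1024]) twice
```

Because both heads share one trunk conditioned on a continuous coordinate, saliency would be defined at every surface point rather than only at sampled vertices, matching the paper's stated goal of a quantity on a continuous object surface.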

References

Showing 1-10 of 38 references
Learning Implicit Fields for Generative Shape Modeling
  • Zhiqin Chen, Hao Zhang
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR: By replacing conventional decoders with the implicit decoder for representation learning and shape generation, this work demonstrates superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.
3D ShapeNets: A deep representation for volumetric shapes
TLDR: This work proposes to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network, and shows that this 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.
PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
TLDR: This work introduces a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set, with novel set learning layers that adaptively combine features from multiple scales to learn deep point set features efficiently and robustly.
Tags2Parts: Discovering Semantic Regions from Shape Tags
TLDR: A novel method is proposed for discovering shape regions that strongly correlate with user-prescribed tags; it can infer meaningful semantic regions without ever observing shape segmentations.
Multi-scale mesh saliency based on low-rank and sparse analysis in shape feature space
TLDR: By focusing on the sparse components, this paper develops a versatile, structure-sensitive saliency detection framework, which can distinguish local geometry saliency and global structure saliency in various 3D geometric models.
DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation
TLDR: This work introduces DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high-quality shape representation, interpolation, and completion from partial and noisy 3D input data.
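DeepSDF's completion from partial data hinges on its auto-decoder design: at test time the decoder weights stay fixed and only a latent code is optimized to fit observed SDF samples. Below is a minimal sketch of that step, assuming any decoder with a (latent, points) -> SDF interface; `TinySDFDecoder` is a stand-in, not the paper's network.

```python
# Sketch of DeepSDF-style auto-decoder inference: freeze the decoder and
# optimize a latent code so predicted SDF values match sparse observations.
import torch
import torch.nn as nn

class TinySDFDecoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(latent_dim + 3, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, latent, pts):
        # latent: (B, latent_dim); pts: (B, N, 3) -> per-point SDF (B, N)
        z = latent.unsqueeze(1).expand(-1, pts.shape[1], -1)
        return self.mlp(torch.cat([z, pts], dim=-1)).squeeze(-1)

def fit_latent(decoder, xyz, sdf_obs, latent_dim=64, steps=200, lr=1e-2):
    for p in decoder.parameters():      # decoder stays fixed at inference
        p.requires_grad_(False)
    latent = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = decoder(latent, xyz.unsqueeze(0)).squeeze(0)
        torch.nn.functional.l1_loss(pred, sdf_obs).backward()
        opt.step()
    return latent.detach()

latent = fit_latent(TinySDFDecoder(), torch.rand(500, 3), torch.randn(500))
```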
PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding
TLDR: This work facilitates research on 3D representation learning by selecting a suite of diverse datasets and tasks to measure the effect of unsupervised pre-training on a large source set of 3D scenes, achieving improvements over recent best results in segmentation and detection across six different benchmarks.
Learning Shape Priors for Single-View 3D Completion and Reconstruction
TLDR: The proposed ShapeHD pushes the limit of single-view shape completion and reconstruction by integrating deep generative models with adversarially learned shape priors, penalizing the model only if its output is unrealistic, not if it deviates from the ground truth.
PointCloud Saliency Maps
TLDR: A novel way of characterizing critical points and segments to build point-cloud saliency maps is proposed; each saliency score can be efficiently measured by the corresponding gradient of the loss w.r.t. the point under spherical coordinates.
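One simple variant of that gradient-based score can be sketched as follows: a point's saliency is read off the loss gradient along the radial direction toward the cloud's median center, scaled by the radius. The classifier and all names here are stand-ins, not the paper's exact formulation.

```python
# Sketch of a gradient-based point-cloud saliency score: estimate how much
# the classification loss changes as each point moves toward the center.
import torch

def saliency_scores(points, model, loss_fn, target):
    # points: (N, 3); model maps (1, N, 3) -> class logits
    points = points.clone().requires_grad_(True)
    loss = loss_fn(model(points.unsqueeze(0)), target)
    grad = torch.autograd.grad(loss, points)[0]        # (N, 3)
    center = points.median(dim=0).values               # robust sphere center
    radial = points - center
    r = radial.norm(dim=1, keepdim=True).clamp_min(1e-8)
    dl_dr = (grad * radial / r).sum(dim=1)             # dL/dr, outward direction
    return -dl_dr * r.squeeze(1)                       # higher = more salient

# Toy usage with a stand-in classifier.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(1024 * 3, 10))
scores = saliency_scores(torch.rand(1024, 3), model,
                         torch.nn.functional.cross_entropy, torch.tensor([3]))
```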
A Simple Framework for Contrastive Learning of Visual Representations
TLDR: It is shown that the composition of data augmentations plays a critical role in defining effective predictive tasks, that introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and that contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning.
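The contrastive objective behind that summary is the NT-Xent (normalized temperature-scaled cross entropy) loss. A compact sketch, assuming `z1` and `z2` are the projection-head outputs for two augmented views of the same batch:

```python
# Sketch of SimCLR's NT-Xent loss over 2B embeddings (two views per image).
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, D), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine similarity
    sim.fill_diagonal_(float('-inf'))                   # exclude self-pairs
    B = z1.shape[0]
    # The positive for row i is its counterpart from the other view.
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
```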
...