Corpus ID: 195766986

Going Deeper with Point Networks

@article{Le2019GoingDW,
  title={Going Deeper with Point Networks},
  author={Eric-Tuan Le and Iasonas Kokkinos and Niloy Jyoti Mitra},
  journal={ArXiv},
  year={2019},
  volume={abs/1907.00960}
}
In this work, we introduce three generic point cloud processing blocks that improve both the accuracy and the memory consumption of state-of-the-art networks, thus allowing the design of deeper and more accurate networks. [...] Key Result: We report a 3.4% increase in IoU on the most complex PartNet dataset while decreasing the memory footprint by 57%.
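As a loose illustration of the kind of generic, depth-friendly building block the abstract refers to, the sketch below shows a residual per-point MLP in PyTorch. It is hypothetical (the class name `ResidualPointBlock` and all dimensions are invented here) and does not reproduce the paper's actual three blocks:

```python
import torch
import torch.nn as nn

class ResidualPointBlock(nn.Module):
    """Hypothetical sketch: a shared per-point MLP wrapped in a residual
    connection, the kind of generic block that lets point networks grow
    deeper without degrading gradient flow. Not the paper's exact blocks."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(inplace=True),
            nn.Linear(dim, dim))

    def forward(self, x):              # x: (B, N, dim) per-point features
        return torch.relu(x + self.mlp(x))

# Stacking many such blocks yields a deeper point network.
trunk = nn.Sequential(*[ResidualPointBlock(128) for _ in range(16)])
out = trunk(torch.randn(8, 1024, 128))   # (8, 1024, 128)
```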
Citations

Multi-Scale Dynamic Graph Convolution Network for Point Clouds Classification
TLDR
A Multi-Scale Dynamic GCN model is proposed for point cloud classification; farthest point sampling is first applied to efficiently cover the entire point set, and the model achieves better classification accuracy and lower model complexity than other state-of-the-art models.
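Farthest point sampling itself is a standard greedy procedure; a minimal NumPy sketch (illustrative only, not the authors' implementation) looks like this:

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedily pick k indices; each new pick maximizes its distance
    to the set already selected, so the samples spread over the cloud."""
    n = points.shape[0]
    selected = np.zeros(k, dtype=np.int64)   # first pick: point 0 (arbitrary)
    dist = np.full(n, np.inf)                # distance to the selected set
    for i in range(1, k):
        # fold in distances to the most recently selected point
        d = np.linalg.norm(points - points[selected[i - 1]], axis=1)
        dist = np.minimum(dist, d)
        selected[i] = np.argmax(dist)        # farthest remaining point
    return selected

pts = np.random.rand(1024, 3)
idx = farthest_point_sampling(pts, 64)       # 64 well-spread samples
```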
Cross-Shape Graph Convolutional Networks
TLDR
The results show significantly improved performance for 3D point cloud semantic segmentation compared to conventional approaches, especially when the number of training examples is limited.
Deep Learning on Point Clouds and Its Application: A Survey
TLDR
Recent point cloud feature learning methods are classified as point-based or tree-based; the latter first employ a k-dimensional tree structure to give the point cloud a regular representation before feeding it into deep learning models.
Learning Part Boundaries from 3D Point Clouds
TLDR
A method is presented that detects part boundaries in 3D shapes represented as point clouds, based on a graph convolutional network that outputs, for each point, the probability that it lies in an area separating two or more parts.

References

Showing 1-10 of 45 references
SPLATNet: Sparse Lattice Networks for Point Cloud Processing
  • Hang Su, V. Jampani, +4 authors J. Kautz
  • Computer Science
  • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
  • 2018
TLDR
A network architecture for processing point clouds that operates directly on a collection of points represented as a sparse set of samples in a high-dimensional lattice; it outperforms existing state-of-the-art techniques on 3D segmentation tasks.
EC-Net: an Edge-aware Point set Consolidation Network
TLDR
This paper presents the first deep-learning-based edge-aware technique for consolidating point clouds: the network processes points grouped into local patches and is trained to consolidate them, with deliberate attention to edges.
Dynamic Graph CNN for Learning on Point Clouds
TLDR
This work proposes EdgeConv, a new neural network module suitable for CNN-based high-level tasks on point clouds, including classification and segmentation, which acts on graphs dynamically recomputed in each layer of the network.
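The EdgeConv operation itself is simple to sketch: for each point, build edge features from the point and the offsets to its k nearest neighbours (the graph is recomputed from each layer's features, hence "dynamic"), apply a shared MLP, and max-pool over the neighbourhood. A compact PyTorch version, with helper and layer names chosen here for illustration:

```python
import torch
import torch.nn as nn

def knn(x, k):
    # x: (B, N, C); indices of the k nearest neighbours of each point
    d = torch.cdist(x, x)                                  # (B, N, N)
    return d.topk(k + 1, largest=False).indices[..., 1:]   # drop self

class EdgeConv(nn.Module):
    """EdgeConv sketch: shared MLP over [x_i, x_j - x_i] edge features,
    max-pooled over each point's k-neighbourhood."""
    def __init__(self, in_dim, out_dim, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x):                                  # x: (B, N, C)
        B, N, C = x.shape
        idx = knn(x, self.k)                               # (B, N, k)
        nbrs = torch.gather(
            x.unsqueeze(1).expand(-1, N, -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, C))       # (B, N, k, C)
        ctr = x.unsqueeze(2).expand_as(nbrs)
        edge = torch.cat([ctr, nbrs - ctr], dim=-1)        # (B, N, k, 2C)
        return self.mlp(edge).max(dim=2).values            # (B, N, out_dim)

layer = EdgeConv(3, 64, k=20)
y = layer(torch.randn(2, 1024, 3))                         # (2, 1024, 64)
```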
PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
TLDR
A hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set, with novel set learning layers that adaptively combine features from multiple scales to learn deep point set features efficiently and robustly.
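A set-abstraction level of this kind can be sketched compactly. The version below simplifies the paper's recipe (random centroids instead of farthest point sampling, kNN grouping instead of ball queries, a single-layer MLP) just to show the sample-group-pool structure:

```python
import torch
import torch.nn as nn

class SetAbstraction(nn.Module):
    """Simplified PointNet++-style level: sample M centroids, group each
    centroid's k nearest points, and summarize every group with a shared
    MLP followed by a max pool."""
    def __init__(self, in_dim, out_dim, m=128, k=32):
        super().__init__()
        self.m, self.k = m, k
        self.mlp = nn.Sequential(nn.Linear(in_dim + 3, out_dim), nn.ReLU())

    def forward(self, xyz, feats):                 # (B, N, 3), (B, N, C)
        B, N, C = feats.shape
        idx = torch.randint(N, (B, self.m))        # stand-in for FPS
        ctr = torch.gather(xyz, 1, idx[..., None].expand(-1, -1, 3))
        group = torch.cdist(ctr, xyz).topk(self.k, largest=False).indices
        g = group[..., None]                       # (B, M, k, 1)
        nbr_xyz = torch.gather(xyz[:, None].expand(-1, self.m, -1, -1), 2,
                               g.expand(-1, -1, -1, 3))
        nbr_f = torch.gather(feats[:, None].expand(-1, self.m, -1, -1), 2,
                             g.expand(-1, -1, -1, C))
        # relative coordinates + features form each group's local signature
        local = torch.cat([nbr_xyz - ctr[:, :, None], nbr_f], dim=-1)
        return ctr, self.mlp(local).max(dim=2).values  # (B,M,3), (B,M,out)

sa = SetAbstraction(in_dim=32, out_dim=128)
new_xyz, new_f = sa(torch.randn(2, 1024, 3), torch.randn(2, 1024, 32))
```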
PU-Net: Point Cloud Upsampling Network
TLDR
A data-driven point cloud upsampling technique that learns multi-level features per point and expands the point set via a multi-branch convolution unit implicitly in feature space; the upsampled points show better uniformity and lie closer to the underlying surface.
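The multi-branch expansion idea can be sketched as r independent 1x1 convolution branches whose outputs are interleaved, turning N per-point feature vectors into rN. This is a simplified reading of the design, with names and sizes invented here:

```python
import torch
import torch.nn as nn

class FeatureExpansion(nn.Module):
    """Sketch of multi-branch feature expansion: r branches each produce
    one 'copy' of the per-point features, interleaved so that N points
    become rN points in feature space."""
    def __init__(self, dim, r=4):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(dim, dim, kernel_size=1) for _ in range(r)])

    def forward(self, f):                        # f: (B, C, N)
        out = torch.stack([b(f) for b in self.branches], dim=3)  # (B,C,N,r)
        B, C, N, r = out.shape
        return out.reshape(B, C, N * r)          # (B, C, rN)

up = FeatureExpansion(64, r=4)
g = up(torch.randn(2, 64, 500))                  # (2, 64, 2000)
```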
Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation
TLDR
A new mobile architecture, MobileNetV2, is described that improves the state-of-the-art performance of mobile models on multiple tasks and benchmarks, as well as across a spectrum of model sizes.
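The block structure behind this result is the inverted residual with a linear bottleneck: expand channels with a 1x1 convolution, run a cheap depthwise 3x3 convolution in the wide space, then project back without a nonlinearity. A sketch of the stride-1, equal-channel case:

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Inverted residual sketch: 1x1 expand -> 3x3 depthwise -> 1x1 linear
    projection (no activation on the bottleneck), with a skip connection."""
    def __init__(self, c, expand=6):
        super().__init__()
        h = c * expand
        self.block = nn.Sequential(
            nn.Conv2d(c, h, 1, bias=False), nn.BatchNorm2d(h), nn.ReLU6(inplace=True),
            nn.Conv2d(h, h, 3, padding=1, groups=h, bias=False),   # depthwise
            nn.BatchNorm2d(h), nn.ReLU6(inplace=True),
            nn.Conv2d(h, c, 1, bias=False), nn.BatchNorm2d(c))     # linear

    def forward(self, x):
        return x + self.block(x)   # residual when stride=1 and shapes match

y = InvertedResidual(32)(torch.randn(1, 32, 56, 56))
```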
Wide Residual Networks
TLDR
This paper conducts a detailed experimental study of the architecture of ResNet blocks and proposes a novel architecture in which the depth of residual networks is decreased and their width increased; the resulting structures, called wide residual networks (WRNs), are far superior to their commonly used thin and very deep counterparts.
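The change is architectural rather than algorithmic: the same two-3x3-convolution residual block, with every channel count scaled by a widening factor k (WRN-28-10 uses k=10). A pre-activation sketch of the stride-1 case, with illustrative names:

```python
import torch
import torch.nn as nn

class WideBasicBlock(nn.Module):
    """WRN-style basic block sketch: standard pre-activation ResNet block,
    but every channel count is multiplied by the widening factor k."""
    def __init__(self, channels, k=10):
        super().__init__()
        w = channels * k
        self.body = nn.Sequential(
            nn.BatchNorm2d(w), nn.ReLU(inplace=True),
            nn.Conv2d(w, w, 3, padding=1, bias=False),
            nn.BatchNorm2d(w), nn.ReLU(inplace=True),
            nn.Conv2d(w, w, 3, padding=1, bias=False))

    def forward(self, x):              # x already has channels * k channels
        return x + self.body(x)

y = WideBasicBlock(16, k=10)(torch.randn(1, 160, 32, 32))
```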
Aggregated Residual Transformations for Deep Neural Networks
TLDR
On the ImageNet-1K dataset, it is empirically shown that, even under the restricted condition of maintained complexity, increasing cardinality improves classification accuracy and is more effective than going deeper or wider when capacity is increased.
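Cardinality, the number of parallel transformation paths in a block, can be implemented compactly as a grouped convolution. A sketch of a stride-1 ResNeXt-style bottleneck (the paper's common setting uses 32 groups):

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """ResNeXt bottleneck sketch: the 3x3 convolution is grouped, so each
    group is an independent transformation path; cardinality = groups."""
    def __init__(self, c, width=128, cardinality=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c, width, 1, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, c, 1, bias=False), nn.BatchNorm2d(c))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))

y = ResNeXtBlock(256)(torch.randn(1, 256, 14, 14))
```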
Training Deep Nets with Sublinear Memory Cost
TLDR
This work designs an algorithm that costs O(√n) memory to train an n-layer network, at the computational cost of only one extra forward pass per mini-batch, showing that computation can be traded for memory to obtain a more memory-efficient training algorithm with little extra computation.
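PyTorch ships this trade-off as activation checkpointing; a minimal sketch splitting an n-layer stack into about √n checkpointed segments, so only segment boundaries are stored and segment interiors are recomputed during the backward pass:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

n_layers = 64
net = nn.Sequential(*[nn.Sequential(nn.Linear(256, 256), nn.ReLU())
                      for _ in range(n_layers)])

x = torch.randn(32, 256, requires_grad=True)
segments = int(n_layers ** 0.5)      # ~sqrt(n) segments -> O(sqrt(n)) memory
y = checkpoint_sequential(net, segments, x)
y.sum().backward()                   # interiors recomputed in backward
```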
Multiresolution Tree Networks for 3D Point Cloud Processing
TLDR
This model represents a 3D shape as a set of locality-preserving 1D ordered lists of points at multiple resolutions, which allows efficient feed-forward processing through 1D convolutions and coarse-to-fine analysis through a multi-grid architecture, and leads to faster convergence and a small memory footprint during training.
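A heavily simplified sketch of the idea of treating an ordered point list as a 1D signal processed at several resolutions (this does not reproduce MRTNet's kd-tree-based ordering or its multi-grid details; all names here are invented):

```python
import torch
import torch.nn as nn

class MultiResPointEncoder(nn.Module):
    """Illustrative sketch: 1D convolutions over an ordered point list
    (B, 3, N), repeated at coarser resolutions obtained by pooling."""
    def __init__(self, dim=64):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(3, dim, 3, padding=1) for _ in range(3)])

    def forward(self, pts):                          # pts: (B, 3, N), ordered
        feats, x = [], pts
        for conv in self.convs:
            feats.append(torch.relu(conv(x)).max(dim=2).values)  # (B, dim)
            x = nn.functional.avg_pool1d(x, 2)       # halve the resolution
        return torch.cat(feats, dim=1)               # multi-scale descriptor

desc = MultiResPointEncoder()(torch.randn(2, 3, 1024))   # (2, 192)
```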