LassoNet: Deep Lasso-Selection of 3D Point Clouds

@article{Chen2020LassoNetDL,
  title={LassoNet: Deep Lasso-Selection of 3D Point Clouds},
  author={Zhutian Chen and Wei Zeng and Zhiguang Yang and Lingyun Yu and Chi-Wing Fu and Huamin Qu},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2020},
  volume={26},
  pages={195-204}
}
Selection is a fundamental task in exploratory analysis and visualization of 3D point clouds. Prior research on selection methods was developed mainly based on heuristics such as local point density, thus limiting their applicability to general data. Specific challenges stem from the great variability implied by point clouds (e.g., dense vs. sparse), viewpoint (e.g., occluded vs. non-occluded), and lasso (e.g., small vs. large). In this work, we introduce LassoNet, a new deep neural network…
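
As a rough illustration of the general idea of learning-based lasso selection, the sketch below (PyTorch) frames it as per-point binary classification conditioned on simple per-point cues; the module names, input features, and architecture are illustrative assumptions, not the authors' LassoNet design.

```python
# Hypothetical sketch: lasso selection as per-point binary classification.
# This is NOT the authors' LassoNet architecture; names and features are assumptions.
import torch
import torch.nn as nn

class NaiveLassoSelector(nn.Module):
    """Scores each 3D point as selected/unselected given per-point cues
    (e.g. 3D position, its 2D screen projection, and a coarse in-lasso flag)."""
    def __init__(self, in_dim=6, feat_dim=128):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(feat_dim * 2, 64), nn.ReLU(),
            nn.Linear(64, 1))                        # per-point selection logit

    def forward(self, x):                            # x: (B, N, in_dim)
        f = self.point_mlp(x)                        # per-point features
        g = f.max(dim=1, keepdim=True).values        # global context (order-invariant)
        g = g.expand(-1, x.shape[1], -1)
        return self.head(torch.cat([f, g], dim=-1)).squeeze(-1)  # (B, N) logits

# Per point: xyz + projected uv + inside-2D-lasso flag (assumed feature layout)
pts = torch.rand(2, 1024, 6)
logits = NaiveLassoSelector()(pts)                   # sigmoid + threshold -> selection mask
```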

Point Cloud Upsampling via Disentangled Refinement

TLDR
This work proposes to disentangle the task based on its multi-objective nature and formulates two cascaded sub-networks, a dense generator and a spatial refiner, with a pair of local and global refinement units in the spatial refiner to evolve a coarse feature map.

PU-Flow: A Point Cloud Upsampling Network with Normalizing Flows

TLDR
A novel deep learning-based model, PU-Flow, incorporates normalizing flows and weight prediction techniques to produce dense points uniformly distributed on the underlying surface, and outperforms state-of-the-art methods in terms of reconstruction quality, proximity-to-surface accuracy, and computation efficiency.

Meta-PU: An Arbitrary-Scale Upsampling Network for Point Cloud

TLDR
A novel method called “Meta-PU” is proposed; it is the first to support point cloud upsampling with arbitrary scale factors using a single model, and it even outperforms existing methods trained for a specific scale factor only.

Investigate Indistinguishable Points in Semantic Segmentation of 3D Point Cloud

TLDR
A novel Indistinguishable Area Focalization Network (IAF-Net), which selects indistinguishable points adaptively by utilizing hierarchical semantic features and enhances fine-grained features for points, especially indistinguishable ones, achieves state-of-the-art performance on several popular 3D point cloud datasets.

PointAugment: An Auto-Augmentation Framework for Point Cloud Classification

TLDR
PointAugment is sample-aware and adopts an adversarial learning strategy to jointly optimize an augmentor network and a classifier network, such that the augmentor learns to produce augmented samples that best fit the classifier.
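
A minimal sketch of this alternating augmentor/classifier optimization is shown below (PyTorch); the networks, loss terms, and the “harder than the original” criterion are placeholders, not the paper's exact formulation.

```python
# Minimal sketch of sample-aware adversarial augmentation (PointAugment-style idea).
# Networks, losses, and hyperparameters are placeholders, not the paper's exact design.
import torch

def train_step(augmentor, classifier, opt_a, opt_c, points, labels, criterion):
    # 1) Update the augmentor: produce augmented samples that are still classifiable
    #    but harder than the originals (a crude proxy for useful, non-trivial augmentation).
    aug = augmentor(points)
    loss_aug_cls = criterion(classifier(aug), labels)
    loss_orig_cls = criterion(classifier(points), labels).detach()
    augmentor_loss = torch.relu(loss_orig_cls - loss_aug_cls)
    opt_a.zero_grad(); augmentor_loss.backward(); opt_a.step()

    # 2) Update the classifier jointly on original and augmented samples.
    aug = augmentor(points).detach()
    classifier_loss = criterion(classifier(points), labels) + criterion(classifier(aug), labels)
    opt_c.zero_grad(); classifier_loss.backward(); opt_c.step()
    return augmentor_loss.item(), classifier_loss.item()
```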

Local Latent Representation based on Geometric Convolution for Particle Data Feature Exploration.

TLDR
Geometric Convolution, a neural network building block designed for 3D point clouds, is adopted to create latent representations for scientific particle data that capture both the particle positions and their physical attributes in the local neighborhood, so that features can be extracted by clustering in the latent space and tracked by applying algorithms such as mean-shift.

Point Set Self-Embedding

TLDR
This work presents an innovative method for point set self-embedding that encodes the structural information of a dense point set into its sparser version in a visual yet imperceptible form, so the embedded information can be leveraged to fully restore the original point set for detailed analysis on remote servers.

Sketch-Based Fast and Accurate Querying of Time Series Using Parameter-Sharing LSTM Networks

TLDR
This article introduces a machine-learning-based solution for fast and accurate querying of time series data via a swift sketching interaction; it builds on existing LSTM (long short-term memory) technology to encode both the sketch and the time series data in a network with shared parameters.
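
A minimal sketch of the parameter-sharing idea is given below (PyTorch), assuming a single univariate LSTM encoder shared by the sketch and the candidate time-series windows; the actual network, training objective, and matching score in the article may differ.

```python
# Minimal sketch of a parameter-sharing LSTM encoder for sketch-based time-series querying.
# The article's actual network, loss, and matching score may differ; dimensions are assumptions.
import torch
import torch.nn as nn

class SharedSeqEncoder(nn.Module):
    """One LSTM encodes both the user's sketch and candidate time-series windows,
    so both live in the same embedding space and can be compared directly."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)

    def encode(self, seq):                       # seq: (B, T) univariate sequence
        _, (h, _) = self.lstm(seq.unsqueeze(-1))
        return h[-1]                             # (B, hidden) final hidden state

sketch = torch.rand(1, 50)                       # resampled user sketch
windows = torch.rand(128, 50)                    # candidate windows from the time series
enc = SharedSeqEncoder()
score = torch.cosine_similarity(enc.encode(sketch), enc.encode(windows))  # higher = better match
```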

Time Varying Particle Data Feature Extraction and Tracking with Neural Networks

TLDR
A deep learning approach is taken to create feature representations for scientific particle data, using a model that produces latent vectors representing the relation between spatial locations and physical attributes in a local neighborhood, to assist feature extraction and tracking.

Deep Colormap Extraction from Visualizations

TLDR
This work presents a new approach based on deep learning to automatically extract colormaps from visualizations by passing the histogram of colors in an input visualization image to a pre-trained deep neural network, which learns to predict the colormap that produces the visualization.

References


PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

TLDR
This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
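
The core idea, a shared per-point MLP followed by a symmetric max pooling so that the global feature is invariant to point ordering, can be sketched as follows (PyTorch); this reduced version omits PointNet's T-Net alignment modules and its exact layer sizes.

```python
# Reduced PointNet-style encoder: shared per-point MLP + symmetric max pooling,
# which makes the global feature invariant to the ordering of input points.
# Omits PointNet's T-Net alignment modules; layer sizes are illustrative.
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 1024), nn.ReLU())
        self.cls = nn.Linear(1024, num_classes)

    def forward(self, pts):                 # pts: (B, N, 3)
        f = self.mlp(pts)                   # per-point features, shared weights
        g = f.max(dim=1).values             # order-invariant global feature
        return self.cls(g)                  # classification logits

logits = TinyPointNet()(torch.rand(4, 2048, 3))   # same output for any point ordering
```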

PointCNN: Convolution On X-Transformed Points

TLDR
This work proposes to learn an X-transformation from the input points to simultaneously promote two causes: the first is the weighting of the input features associated with the points, and the second is the permutation of the points into a latent and potentially canonical order.

PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space

TLDR
A hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set and proposes novel set learning layers to adaptively combine features from multiple scales to learn deep point set features efficiently and robustly.
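
A simplified set-abstraction step in this spirit is sketched below (PyTorch), substituting random sampling and k-nearest neighbors for the paper's farthest-point sampling and ball query.

```python
# Simplified set-abstraction step in the spirit of PointNet++: sample centroids,
# group neighbors, and encode each local neighborhood with a small shared MLP + max pool.
# Random sampling and k-NN stand in for farthest-point sampling and ball query.
import torch
import torch.nn as nn

def set_abstraction(pts, mlp, n_centroids=128, k=16):
    B, N, _ = pts.shape
    idx = torch.randperm(N)[:n_centroids]                 # stand-in for farthest-point sampling
    centroids = pts[:, idx]                               # (B, M, 3)
    d = torch.cdist(centroids, pts)                       # (B, M, N) pairwise distances
    knn = d.topk(k, largest=False).indices                # (B, M, k) neighbor indices
    groups = torch.gather(                                 # (B, M, k, 3) local neighborhoods
        pts.unsqueeze(1).expand(B, n_centroids, N, 3), 2,
        knn.unsqueeze(-1).expand(B, n_centroids, k, 3))
    local = groups - centroids.unsqueeze(2)               # translate to local frames
    feat = mlp(local).max(dim=2).values                   # (B, M, C) per-centroid features
    return centroids, feat

mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))
centroids, feats = set_abstraction(torch.rand(2, 1024, 3), mlp)
```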

Recurrent Slice Networks for 3D Segmentation of Point Clouds

TLDR
This work presents a novel 3D segmentation framework, RSNet, to efficiently model local structures in point clouds using a combination of a novel slice pooling layer, Recurrent Neural Network layers, and a slice unpooling layer.

Efficient Structure-Aware Selection Techniques for 3D Point Cloud Visualizations with 2DOF Input

TLDR
Two new techniques are presented, TeddySelection and CloudLasso, that support the selection of subsets in large particle 3D datasets in an interactive and visually intuitive manner and reduce the need for complex multi-step selection processes involving Boolean operations.

Monte Carlo convolution for learning on non-uniformly sampled point clouds

TLDR
By employing the proposed method in hierarchical network architectures, it can outperform most of the state-of-the-art networks on established point cloud segmentation, classification, and normal estimation benchmarks, and it demonstrates robustness with respect to sampling variations, even when trained with uniformly sampled data only.

3D Recurrent Neural Networks with Context Fusion for Point Cloud Semantic Segmentation

TLDR
A novel end-to-end approach for unstructured point cloud semantic segmentation, named 3P-RNN, is proposed to exploit the inherent contextual features of 3D point clouds, demonstrating robust performance superior to the state of the art.

3D ShapeNets: A deep representation for volumetric shapes

TLDR
This work proposes to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network, and shows that this 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.

Volumetric and Multi-view CNNs for Object Classification on 3D Data

TLDR
This paper introduces two distinct network architectures of volumetric CNNs and examines multi-view CNNs, providing a better understanding of the space of methods available for object classification on 3D data.

Fast and Accurate CNN‐based Brushing in Scatterplots

TLDR
This paper presents a new solution for a near‐perfect sketch‐based brushing technique, where a convolutional neural network is exploited for estimating the intended data selection from a fast and simple click‐and‐drag interaction and from the data distribution in the visualization.