Learning without Forgetting for 3D Point Cloud Objects

@inproceedings{Chowdhury2021LearningWF,
  title={Learning without Forgetting for 3D Point Cloud Objects},
  author={Townim Chowdhury and Mahira Jalisha and Ali Cheraghian and Shafin Rahman},
  booktitle={IWANN},
  year={2021}
}
When we fine-tune a well-trained deep learning model on a new set of classes, the network learns the new concepts but gradually forgets the knowledge acquired in earlier training. In some real-life applications, we may want to learn new classes without losing the capability gained from previous experience. This learning-without-forgetting problem is usually investigated on 2D image recognition tasks. In this paper, considering the growth of depth camera technology, we address the same problem for the…

Continual learning on 3D point clouds with random compressed rehearsal

This work proposes a novel neural network architecture capable of continual learning on 3D point cloud data that utilizes point cloud structure properties for preserving a heavily compressed set of past data.

Rethinking Task-Incremental Learning Baselines

This study presents a simple yet effective adjustment network (SAN) for task-incremental learning that achieves near state-of-the-art performance with a minimal architectural size and without storing memory instances, unlike previous state-of-the-art approaches.

References

Showing 1–10 of 42 references.

Zero-Shot Learning on 3D Point Cloud Objects and Beyond

A novel loss function is developed that simultaneously aligns seen semantics with point cloud features and takes advantage of unlabeled test data to address some known issues (e.g., the problems of domain adaptation, hubness, and data bias).

Learning without Forgetting

This work proposes the Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities, and performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques.
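The core mechanism is a distillation term: before training on the new task, the old network's outputs on the new-task data are recorded, and the updated network is penalized for drifting away from them while a standard cross-entropy term fits the new labels. A minimal single-sample sketch in plain Python (the function names, the single-sample formulation, and the fixed temperature are illustrative assumptions, not the paper's implementation):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over a list of logits.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def lwf_loss(new_logits_old_head, recorded_old_probs, new_logits_new_head,
             new_label, T=2.0, lam=1.0):
    """Learning-without-Forgetting objective on one sample:
    cross-entropy on the new task plus a distillation term that keeps
    the old-task head close to the pre-update network's outputs."""
    # Distillation: soft cross-entropy between the recorded (teacher)
    # and current (student) old-head probabilities at temperature T.
    student = softmax(new_logits_old_head, T)
    distill = -sum(p * math.log(q)
                   for p, q in zip(recorded_old_probs, student))
    # Standard cross-entropy on the new-task label.
    probs_new = softmax(new_logits_new_head)
    ce = -math.log(probs_new[new_label])
    return ce + lam * distill
```

A student whose old-head outputs match the recorded teacher outputs incurs a lower total loss than one that has drifted, which is exactly the pressure that counteracts forgetting.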

Deep Learning for 3D Point Clouds: A Survey

This paper presents a comprehensive review of recent progress in deep learning methods for point clouds, covering three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation.

Semantic-aware Knowledge Distillation for Few-Shot Class-Incremental Learning

A distillation algorithm is introduced to address few-shot class-incremental learning (FSCIL), together with an attention mechanism over multiple parallel embeddings of visual data that aligns visual and semantic vectors, reducing catastrophic forgetting.

PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space

A hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set and proposes novel set learning layers to adaptively combine features from multiple scales to learn deep point set features efficiently and robustly.
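Each set-abstraction level in PointNet++ first picks a subset of centroid points and then groups neighbors around each centroid before applying a small PointNet locally. A plain-Python sketch of the two standard building blocks, farthest point sampling and ball-query grouping (illustrative only; real implementations are batched on GPU):

```python
def farthest_point_sampling(points, k):
    """Greedy FPS: repeatedly pick the point farthest from the set
    already chosen, giving well-spread centroids for each
    set-abstraction level. `points` is a list of (x, y, z) tuples."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    chosen = [0]                                 # arbitrary seed point
    dist = [d2(p, points[0]) for p in points]    # dist to nearest chosen
    while len(chosen) < k:
        far = max(range(len(points)), key=lambda i: dist[i])
        chosen.append(far)
        for i, p in enumerate(points):
            dist[i] = min(dist[i], d2(p, points[far]))
    return chosen

def ball_query(points, center, radius):
    """Indices of points within `radius` of `center` (grouping step)."""
    r2 = radius * radius
    return [i for i, p in enumerate(points)
            if sum((u - v) ** 2 for u, v in zip(p, center)) <= r2]
```

Ball query (rather than k-nearest neighbors) keeps each local region at a fixed metric scale, which is what lets the learned local features generalize across regions of varying point density.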

Revisiting Point Cloud Classification: A New Benchmark Dataset and Classification Model on Real-World Data

This paper introduces ScanObjectNN, a new real-world point cloud object dataset based on scanned indoor scene data, and proposes new point cloud classification neural networks that achieve state-of-the-art performance on classifying objects with cluttered background.

Class-incremental Learning via Deep Model Consolidation

A class-incremental learning paradigm called Deep Model Consolidation (DMC), which works well even when the original training data is not available, and demonstrates significantly better performance in image classification and object detection in the single-headed IL setting.

Memory Replay GANs: learning to generate images from new categories without forgetting

This paper proposes Memory Replay GANs (MeRGANs), a conditional GAN framework that integrates a memory replay generator and studies two methods to prevent forgetting by leveraging these replays, namely joint training with replay and replay alignment.

End-to-End Incremental Learning

This work proposes an approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes, based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes and a cross-entropy loss to learn the new classes.
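A common way to build such a small exemplar set is herding, as popularized by iCaRL-style methods: greedily pick the samples whose running mean stays closest to the class-mean feature. Whether this exact variant matches the paper's procedure is an assumption; the sketch below, with hypothetical names, shows the idea in plain Python:

```python
def herding_selection(features, m):
    """Pick m exemplar indices whose running feature mean best tracks
    the class mean (herding). `features` is a list of feature vectors
    (lists of floats) for one class."""
    n, dim = len(features), len(features[0])
    mu = [sum(f[d] for f in features) / n for d in range(dim)]
    chosen, acc = [], [0.0] * dim          # acc = sum of chosen features
    for t in range(1, m + 1):
        best, best_err = None, None
        for i, f in enumerate(features):
            if i in chosen:
                continue
            # squared error of the running mean if f were added next
            err = sum((mu[d] - (acc[d] + f[d]) / t) ** 2
                      for d in range(dim))
            if best is None or err < best_err:
                best, best_err = i, err
        chosen.append(best)
        acc = [acc[d] + features[best][d] for d in range(dim)]
    return chosen
```

Because exemplars are ranked, shrinking the memory budget later only requires truncating the list rather than re-running the selection.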

PointCNN: Convolution On X-Transformed Points

This work proposes to learn an X-transformation from the input points to simultaneously promote two causes: the first is the weighting of the input features associated with the points, and the second is the permutation of the points into a latent and potentially canonical order.