Continual Learning of Object Instances

  • Kishan Parshotam, Mert Kilickaya
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
We propose continual instance learning, a method that applies the concept of continual learning to the task of distinguishing instances of the same object category. We specifically focus on cars, and incrementally learn to distinguish car instances from each other with metric learning. We begin our paper by evaluating current techniques. Establishing that catastrophic forgetting is evident in existing methods, we then propose two remedies. Firstly, we regularise metric learning via…


Incremental Object Learning From Contiguous Views
Through extensive empirical evaluation of state-of-the-art incremental learning algorithms, this work arrives at the novel empirical result that repetition can significantly ameliorate the effects of catastrophic forgetting.
CORe50: a New Dataset and Benchmark for Continuous Object Recognition
This work proposes a new dataset and benchmark CORe50, specifically designed for continuous object recognition, and introduces baseline approaches for different continuous learning scenarios.
Learning without Forgetting
  • Zhizhong Li, Derek Hoiem
  • Computer Science, Mathematics
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2018
This work proposes the Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities, and performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques.
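The core of the Learning without Forgetting objective is a distillation term that keeps the old head's outputs close to those recorded before training on new data. A minimal numpy sketch, with illustrative function names and hyperparameters (temperature `T`, weight `lam`) not taken from the paper:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lwf_loss(new_logits_old_head, recorded_old_logits, new_logits_new_head,
             labels, T=2.0, lam=1.0):
    """New-task cross-entropy plus a distillation term on the old head."""
    # Distillation: soft cross-entropy between recorded and current old-task outputs.
    p_old = softmax(recorded_old_logits, T)
    p_new = softmax(new_logits_old_head, T)
    distill = -(p_old * np.log(p_new + 1e-12)).sum(axis=-1).mean()
    # Standard cross-entropy on the new task's labels.
    p = softmax(new_logits_new_head)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return ce + lam * distill
```

Setting `lam=0` recovers plain fine-tuning on the new task; the distillation weight trades new-task accuracy against retention.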
Less-forgetting Learning in Deep Neural Networks
The proposed less-forgetting learning method is surprisingly effective at retaining information from the source domain, and helps improve the recognition rates of deep neural networks.
A Simple Framework for Contrastive Learning of Visual Representations
It is shown that the composition of data augmentations plays a critical role in defining effective predictive tasks, that introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and that contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning.
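The contrastive objective this framework trains, often written as NT-Xent, can be sketched as follows: two augmented views of each image form a positive pair, and all other views in the batch act as negatives. This is an illustrative numpy sketch (names and the temperature `tau` are placeholders), not the paper's implementation:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss over a batch of paired augmented-view embeddings."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine-similarity space
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # mask self-similarity
    n = len(z1)
    # Each view's positive is its counterpart from the other augmentation.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Aligned view pairs drive the loss down; unrelated pairs keep it high, which is what the learned augmentation-invariant representation exploits.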
Improved Deep Metric Learning with Multi-class N-pair Loss Objective
This paper proposes a new metric learning objective called multi-class N-pair loss, which generalizes triplet loss by allowing joint comparison among multiple negative examples, and reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples.
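The batch construction described above amounts to a softmax over an N×N similarity matrix in which each anchor's own pair is the positive and every other pair's positive serves as a negative. A minimal sketch under that assumption, with illustrative names:

```python
import numpy as np

def n_pair_loss(anchors, positives):
    """Multi-class N-pair loss: anchors and positives are (N, D) arrays,
    where row i of positives is the true match for row i of anchors."""
    logits = anchors @ positives.T                    # (N, N) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The correct pairing lies on the diagonal.
    return -np.diag(log_prob).mean()
```

Each anchor is compared against N-1 negatives at once, which is the joint comparison that distinguishes this objective from a plain triplet loss.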
An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks
It is found that it is always best to train using dropout: dropout is consistently best at adapting to the new task and remembering the old task, and has the best tradeoff curve between these two extremes.
In Defense of the Triplet Loss for Person Re-Identification
It is shown that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin.
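The variant in question mines, within each batch, the hardest positive and hardest negative for every anchor. A numpy sketch in that spirit (the margin value and names are illustrative, not the paper's exact setup):

```python
import numpy as np

def batch_hard_triplet(embeddings, labels, margin=0.2):
    """Batch-hard triplet loss: for each anchor, penalize the farthest
    same-label embedding relative to the closest different-label one."""
    d = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    hardest_pos = np.where(same, d, -np.inf).max(axis=1)  # farthest positive
    hardest_neg = np.where(same, np.inf, d).min(axis=1)   # closest negative
    return np.maximum(hardest_pos - hardest_neg + margin, 0.0).mean()
```

When classes form tight, well-separated clusters the loss vanishes; mixing labels across clusters makes it positive, which is what drives instances of the same identity together.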
Microsoft COCO: Common Objects in Context
We present a new dataset with the goal of advancing the state of the art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding.
The Reasonable Effectiveness of Synthetic Visual Data
The recent successes in many visual recognition tasks, such as image classification, object detection, and semantic segmentation, can be attributed in large part to three factors: (i) advances in…