Corpus ID: 236447615

Co-Transport for Class-Incremental Learning

@article{Zhou2021CoTransportFC,
  title={Co-Transport for Class-Incremental Learning},
  author={Da-Wei Zhou and Han-Jia Ye and De-Chuan Zhan},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.12654}
}
Traditional learning systems are trained in a closed world for a fixed number of classes and require pre-collected datasets in advance. However, new classes often emerge in real-world applications and should be learned incrementally. For example, in electronic commerce new types of products appear daily, and in social media communities new topics emerge frequently. Under such circumstances, incremental models should learn several new classes at a time without forgetting. We find a strong…


References

Showing 1–10 of 69 references
Semantic Drift Compensation for Class-Incremental Learning
This work proposes a method to estimate the drift of features, called semantic drift, and compensate for it without the need for any exemplars; the proposed SDC, when combined with existing methods to prevent forgetting, consistently improves results.
Learning a Unified Classifier Incrementally via Rebalancing
This work develops a new framework for incrementally learning a unified classifier, i.e. a classifier that treats both old and new classes uniformly, and incorporates three components (cosine normalization, a less-forget constraint, and inter-class separation) to mitigate the adverse effects of the class imbalance.
Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning
This work proposes a novel incremental prototype learning scheme consisting of a random episode selection strategy, which adapts the feature representation to various generated incremental episodes to enhance extensibility, and a self-promoted prototype refinement mechanism, which strengthens the expressiveness of the new classes by explicitly considering the dependencies among different classes.
Few-Shot Incremental Learning with Continually Evolved Classifiers
This paper adopts a simple but effective decoupled learning strategy for representations and classifiers, in which only the classifiers are updated in each incremental session, avoiding knowledge forgetting in the representations; it further proposes a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
Prototype Augmentation and Self-Supervision for Incremental Learning
A simple non-exemplar-based method, named PASS, addresses the catastrophic forgetting problem in incremental learning by memorizing one representative prototype for each old class and applying prototype augmentation in the deep feature space to maintain the decision boundaries of previous tasks.
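The prototype-augmentation idea can be sketched as follows: instead of storing old images, draw pseudo-features for old classes by perturbing each memorized class prototype with Gaussian noise. The function name, `radius` parameter, and noise model here are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def augment_prototypes(prototypes, radius, n_samples, seed=None):
    """Generate pseudo-features for old classes by adding isotropic
    Gaussian noise (std = `radius`) to each stored class prototype.
    `prototypes` has shape (n_classes, feat_dim); the result has shape
    (n_classes * n_samples, feat_dim)."""
    rng = np.random.default_rng(seed)
    reps = np.repeat(prototypes, n_samples, axis=0)
    return reps + rng.normal(scale=radius, size=reps.shape)

# Two old-class prototypes in a 3-d feature space, four pseudo-features each:
protos = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
pseudo = augment_prototypes(protos, radius=0.1, n_samples=4, seed=0)
```

These pseudo-features can then be mixed into each incremental training batch so the classifier keeps seeing (approximate) old-class examples.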
Few-Shot Class-Incremental Learning
This paper proposes the TOpology-Preserving knowledge InCrementer (TOPIC) framework, which mitigates forgetting of the old classes by stabilizing the topology of a neural gas (NG) network and improves representation learning for few-shot new classes by growing and adapting the NG to new training samples.
Large Scale Incremental Learning
  • Yue Wu, Yinpeng Chen, +4 authors Y. Fu
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
This work finds that the last fully connected layer has a strong bias towards the new classes, that this bias can be corrected by a linear model, and that with only two bias parameters the method performs remarkably well on two large datasets.
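The two-parameter correction described above amounts to rescaling and shifting only the new-class logits while leaving the old-class logits untouched. A minimal sketch, with hypothetical fixed values for the two parameters (in the paper they are learned on a small balanced validation set):

```python
import numpy as np

def bias_correct(logits, new_class_idx, alpha, beta):
    """Apply a two-parameter linear correction to the logits of the new
    classes: logit -> alpha * logit + beta. Old-class logits are left
    unchanged. `alpha` and `beta` are placeholders for learned values."""
    corrected = logits.copy()
    corrected[..., new_class_idx] = alpha * logits[..., new_class_idx] + beta
    return corrected

# Toy example: classes 0-1 are old, classes 2-3 are new with inflated logits.
logits = np.array([[1.0, 0.5, 3.0, 2.5]])
out = bias_correct(logits, new_class_idx=[2, 3], alpha=0.5, beta=-0.5)
print(out)  # → [[1.   0.5  1.   0.75]]
```

After correction the new-class scores no longer dominate, so predictions are rebalanced between old and new classes without retraining the backbone.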
IL2M: Class Incremental Learning With Dual Memory
This paper presents a class-incremental learning method that exploits fine-tuning and a dual memory to reduce the negative effects of catastrophic forgetting in image recognition, and shows that the proposed approach is more effective than a range of competitive state-of-the-art methods.
Open-world Learning and Application to Product Classification
A new open-world learning (OWL) method based on meta-learning maintains only a dynamic set of seen classes, allowing new classes to be added or deleted with no need for model retraining.
Multi-Instance Learning With Emerging Novel Class
This paper focuses on the Multi-Instance learning with Emerging Novel class (MIEN) problem, formulates MIEN from a metric-learning perspective, and proposes the MIEN-metric method, which is comparable with state-of-the-art MIL algorithms for binary classification in the traditional MIL setting.