Corpus ID: 239998258

International Workshop on Continual Semi-Supervised Learning: Introduction, Benchmarks and Baselines

@article{Shahbaz2021InternationalWO,
  title={International Workshop on Continual Semi-Supervised Learning: Introduction, Benchmarks and Baselines},
  author={Ajmal Shahbaz and Salman Khan and Mohammad Asiful Hossain and Vincenzo Lomonaco and Kevin J. Cannons and Zhan Xu and Fabio Cuzzolin},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.14613}
}
The aim of this paper is to formalise a new continual semi-supervised learning (CSSL) paradigm, brought to the attention of the machine learning community via the IJCAI 2021 International Workshop on Continual Semi-Supervised Learning (CSSL@IJCAI), with the goal of raising the field's awareness of this problem and mobilising its effort in this direction. After a formal definition of continual semi-supervised learning and the appropriate training and testing protocols, the paper introduces…
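To make the setting concrete, the sketch below illustrates one common instantiation of continual semi-supervised learning: a model is warmed up on an initial labelled batch and then adapts incrementally to a stream of unlabelled batches via confidence-thresholded self-training. This is a minimal, hypothetical example, not the paper's benchmark code; the function name, the `threshold` parameter, and the choice of classifier are assumptions made here for illustration.

```python
# Minimal sketch of a continual semi-supervised learning (CSSL) loop.
# Assumptions (not from the paper's code): an incrementally trainable
# classifier, an initial labelled batch, then unlabelled batches over time.
import numpy as np
from sklearn.linear_model import SGDClassifier

def cssl_self_training(labelled_X, labelled_y, unlabelled_stream, threshold=0.9):
    """Train on the labelled batch, then adapt on the stream via pseudo-labels."""
    model = SGDClassifier(loss="log_loss")  # log_loss enables predict_proba
    classes = np.unique(labelled_y)
    # Supervised warm-up on the initial labelled batch.
    model.partial_fit(labelled_X, labelled_y, classes=classes)

    for X_t in unlabelled_stream:  # unlabelled batches arrive sequentially
        proba = model.predict_proba(X_t)
        conf = proba.max(axis=1)
        pseudo_idx = proba.argmax(axis=1)
        keep = conf >= threshold  # keep only confident pseudo-labels
        if keep.any():
            # Incremental update on self-labelled samples.
            model.partial_fit(X_t[keep], model.classes_[pseudo_idx[keep]])
    return model
```

Under a protocol of this kind, the model would then be scored on how well it predicts the (held-back) labels of the stream itself; that evaluation harness is omitted from the sketch.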

