Boosting Supervised Learning Performance with Co-training
@article{Du2021BoostingSL,
  title   = {Boosting Supervised Learning Performance with Co-training},
  author  = {Xinnan Du and William Zhang and Jos{\'e} Manuel {\'A}lvarez},
  journal = {2021 IEEE Intelligent Vehicles Symposium (IV)},
  year    = {2021},
  pages   = {540-545}
}
Deep learning perception models require a massive amount of labeled training data to achieve good performance. While unlabeled data is easy to acquire, the cost of labeling is prohibitive and could create a tremendous burden on companies or individuals. Recently, self-supervision has emerged as an alternative way of leveraging unlabeled data. In this paper, we propose a new lightweight self-supervised learning framework that could boost supervised learning performance with minimum additional…
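The abstract is truncated, but the stated idea of boosting a supervised task with a self-supervised auxiliary task can be illustrated with a minimal sketch. Everything below (the tiny backbone, the rotation pretext task, and the 0.5 loss weight) is an illustrative assumption, not the paper's exact method:

```python
# Minimal sketch: joint supervised + self-supervised training on one batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
sup_head = nn.Linear(16, 10)   # supervised classes (assumed: 10)
ssl_head = nn.Linear(16, 4)    # pretext classes (assumed: 4 rotations)

opt = torch.optim.SGD(list(backbone.parameters()) +
                      list(sup_head.parameters()) +
                      list(ssl_head.parameters()), lr=0.1)

labeled = torch.randn(8, 3, 32, 32)    # stand-in labeled batch
labels = torch.randint(0, 10, (8,))
unlabeled = torch.randn(8, 3, 32, 32)  # stand-in unlabeled batch

# Pretext task: rotate each unlabeled image by k * 90 degrees; the
# rotation index k is a free label.
ks = torch.randint(0, 4, (8,))
rotated = torch.stack([torch.rot90(x, int(k), dims=(1, 2))
                       for x, k in zip(unlabeled, ks)])

sup_loss = F.cross_entropy(sup_head(backbone(labeled)), labels)
ssl_loss = F.cross_entropy(ssl_head(backbone(rotated)), ks)
loss = sup_loss + 0.5 * ssl_loss       # loss weighting is an assumption
opt.zero_grad(); loss.backward(); opt.step()
```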
References
Multi-task Self-Supervised Visual Learning
- Computer Science, 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017
The results show that deeper networks work better, and that combining tasks, even via a naïve multi-head architecture, always improves performance.
Self-supervised Co-training for Video Representation Learning
- Computer Science, NeurIPS
- 2020
This paper investigates the benefit of adding semantic-class positives to instance-based InfoNCE (Info Noise Contrastive Estimation) training and proposes a novel self-supervised co-training scheme to improve the popular InfoNCE loss.
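As a rough illustration of the instance-based InfoNCE loss that this co-training scheme builds on, here is a minimal sketch; the embedding shapes and temperature are assumptions, and the semantic-class positive mining is omitted:

```python
# Minimal InfoNCE sketch: each anchor is pulled toward its positive and
# pushed away from every other sample in the batch.
import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.07):
    """anchors, positives: (N, D) L2-normalized embeddings; positives[i]
    is the positive for anchors[i]; all other rows act as negatives."""
    logits = anchors @ positives.t() / temperature  # (N, N) similarities
    targets = torch.arange(anchors.size(0))         # diagonal = positives
    return F.cross_entropy(logits, targets)

a = F.normalize(torch.randn(16, 128), dim=1)
p = F.normalize(torch.randn(16, 128), dim=1)
print(info_nce(a, p).item())
```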
Unsupervised Representation Learning by Predicting Image Rotations
- Computer Science, ICLR
- 2018
This work proposes to learn image features by training ConvNets to recognize the 2D rotation applied to the input image, and demonstrates both qualitatively and quantitatively that this apparently simple task provides a very powerful supervisory signal for semantic feature learning.
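The rotation pretext task is easy to reproduce. A minimal sketch of the data side (image sizes are assumed; any 4-way classifier can consume the output):

```python
# Rotation pretext task: each image is rotated by k * 90 degrees and the
# network must predict k.
import torch
import torch.nn.functional as F

def make_rotation_batch(images):
    """images: (N, C, H, W). Returns all four rotations of each image
    together with the rotation index as a free label."""
    rotated = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return torch.cat(rotated), labels

x = torch.randn(8, 3, 32, 32)
inputs, labels = make_rotation_batch(x)  # (32, 3, 32, 32), (32,)
# loss = F.cross_entropy(model(inputs), labels)  # model: 4-way classifier
```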
Cross-Domain Self-Supervised Multi-task Feature Learning Using Synthetic Imagery
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
A novel multi-task deep network based on adversarial learning is proposed to learn generalizable high-level visual representations, and it is demonstrated that the network learns more transferable representations compared to single-task baselines.
Multi-Task Self-Supervised Object Detection via Recycling of Bounding Box Annotations
- Computer Science, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
A novel object detection approach is proposed that takes advantage of both multi-task learning (MTL) and self-supervised learning (SSL) to improve detection accuracy, and it is empirically validated that this approach improves detection performance on various architectures and datasets.
Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks
- Computer Science, IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2016
While features learned with this approach cannot compete with class-specific features from supervised training on a classification task, they are shown to be advantageous on geometric matching problems, where they also outperform the SIFT descriptor.
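The exemplar idea behind this paper can be sketched compactly: every seed image defines its own surrogate class, and augmented copies of it are the training instances. The flip/noise augmentations below are illustrative assumptions:

```python
# Exemplar-style surrogate classes from unlabeled images.
import torch

def surrogate_batch(seeds, copies=4):
    """seeds: (N, C, H, W). Returns augmented copies with class i
    assigned to every copy of seeds[i]."""
    xs, ys = [], []
    for i, img in enumerate(seeds):
        for _ in range(copies):
            aug = img.flip(-1) if torch.rand(()) < 0.5 else img  # random flip
            aug = aug + 0.05 * torch.randn_like(aug)             # slight noise
            xs.append(aug)
            ys.append(i)
    return torch.stack(xs), torch.tensor(ys)

x, y = surrogate_batch(torch.randn(4, 3, 32, 32))
print(x.shape, y)  # torch.Size([16, 3, 32, 32]), classes 0..3
```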
Unsupervised Visual Representation Learning by Context Prediction
- Computer Science, 2015 IEEE International Conference on Computer Vision (ICCV)
- 2015
It is demonstrated that the feature representation learned using this within-image context indeed captures visual similarity across images and allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset.
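The within-image context task amounts to predicting the relative position of two patches. A minimal sketch (patch size and layout are assumptions):

```python
# Context prediction: sample a center patch and one of its eight
# neighbors; the network must predict which neighbor position was chosen.
import torch

def relative_patch_pair(image, patch=8):
    """image: (C, H, W). Returns (center, neighbor, position_label 0..7)."""
    c, h, w = image.shape
    cy, cx = h // 2 - patch // 2, w // 2 - patch // 2
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    label = torch.randint(0, 8, ()).item()
    dy, dx = offsets[label]
    ny, nx = cy + dy * patch, cx + dx * patch
    center = image[:, cy:cy + patch, cx:cx + patch]
    neighbor = image[:, ny:ny + patch, nx:nx + patch]
    return center, neighbor, label

c, n, y = relative_patch_pair(torch.randn(3, 32, 32))
print(c.shape, n.shape, y)  # (3, 8, 8), (3, 8, 8), position id
```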
Context Encoders: Feature Learning by Inpainting
- Computer Science, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
It is found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
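The inpainting objective can be sketched as a masked reconstruction loss. The central mask and plain L2 loss below are simplifying assumptions (the paper also uses an adversarial loss):

```python
# Context-encoder sketch: mask a central region, reconstruct it, and
# penalize only the masked pixels.
import torch
import torch.nn.functional as F

def inpainting_loss(model, images, hole=8):
    """images: (N, C, H, W). Zeros out a central hole, reconstructs it."""
    n, c, h, w = images.shape
    y0, x0 = h // 2 - hole // 2, w // 2 - hole // 2
    mask = torch.zeros(1, 1, h, w)
    mask[..., y0:y0 + hole, x0:x0 + hole] = 1.0
    corrupted = images * (1 - mask)          # remove the hole region
    recon = model(corrupted)                 # model outputs (N, C, H, W)
    return F.mse_loss(recon * mask, images * mask)

model = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in encoder-decoder
print(inpainting_loss(model, torch.randn(4, 3, 32, 32)).item())
```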
Integrated perception with recurrent multi-task neural networks
- Computer Science, NIPS
- 2016
This work proposes a new architecture, called MultiNet, in which not only are deep image features shared between tasks, but tasks can also interact in a recurrent manner by encoding the results of their analysis in a common shared representation of the data.
A Simple Framework for Contrastive Learning of Visual Representations
- Computer Science, ICML
- 2020
It is shown that the composition of data augmentations plays a critical role in defining effective predictive tasks, that introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and that contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning.
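The contrastive loss at the core of this framework (NT-Xent) can be sketched over two augmented views per image; the temperature and embedding shapes below are assumptions, and the augmentation pipeline and projection head are omitted:

```python
# SimCLR-style NT-Xent loss over 2N views: each view's positive is the
# other view of the same image; all remaining views act as negatives.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, D) projections of two views of the same N images."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                 # (2N, 2N) similarities
    sim.fill_diagonal_(float('-inf'))             # exclude self-pairs
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(16, 128), torch.randn(16, 128)
print(nt_xent(z1, z2).item())
```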