Corpus ID: 238857255

Semi-supervised Multi-task Learning for Semantics and Depth

@article{Wang2021SemisupervisedML,
  title={Semi-supervised Multi-task Learning for Semantics and Depth},
  author={Yufeng Wang and Yi-Hsuan Tsai and Wei-Chih Hung and Wenrui Ding and Shuo Liu and Ming-Hsuan Yang},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.07197}
}
Multi-Task Learning (MTL) aims to enhance model generalization by sharing representations between related tasks for better performance. Typical MTL methods are jointly trained with the complete multitude of ground-truth labels for all tasks simultaneously. However, a single dataset may not contain the annotations for every task of interest. To address this issue, we propose the Semi-supervised Multi-Task Learning (SemiMTL) method to leverage the available supervisory signals from different…
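The abstract's core idea, that each task's loss should be applied only where that task's ground truth exists, can be sketched in a few lines. This is a minimal illustration under the usual partially labeled formulation; the function and variable names are hypothetical and not from the paper:

```python
def semi_mtl_loss(task_losses, labels_available):
    """Sum per-task losses, skipping any task whose ground truth
    is missing for the current batch's source dataset."""
    total = 0.0
    for loss, available in zip(task_losses, labels_available):
        if available:  # only supervise tasks with annotations
            total += loss
    return total

# e.g. a batch from a segmentation-only dataset supervises
# semantics but contributes no depth loss:
loss = semi_mtl_loss([0.7, 1.3], [True, False])
```

Fully supervised MTL is the special case where every entry of `labels_available` is true.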


References

Showing 1-10 of 75 references
Cross-Domain Self-Supervised Multi-task Feature Learning Using Synthetic Imagery
TLDR: A novel multi-task deep network that learns generalizable high-level visual representations through adversarial learning is proposed, and it is demonstrated that the network learns more transferable representations than single-task baselines.
Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics
TLDR: A principled approach to multi-task deep learning is proposed which weighs multiple loss functions by considering the homoscedastic uncertainty of each task, allowing various quantities with different units or scales to be learned simultaneously in both classification and regression settings.
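The uncertainty-weighting scheme summarized above scales each task loss by a factor derived from a learnable log-variance, which also enters the objective as a regularizer. A minimal sketch (simplified; the exact per-task factors in the paper differ between regression and classification):

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine task losses with learnable homoscedastic-uncertainty
    weights: each task contributes exp(-s) * loss + s, where
    s = log(sigma^2) is learned alongside the network weights."""
    return sum(math.exp(-s) * loss + s
               for loss, s in zip(task_losses, log_vars))
```

With every `s = 0` this reduces to a plain sum of losses; during training, tasks whose learned uncertainty grows are automatically down-weighted, while the `+ s` term discourages the trivial solution of inflating all uncertainties.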
Learning to Adapt Structured Output Space for Semantic Segmentation
TLDR: A multi-level adversarial network is constructed to effectively perform output space domain adaptation at different feature levels, and it is shown that the proposed method performs favorably against state-of-the-art methods in terms of accuracy and visual quality.
Instance-Level Task Parameters: A Robust Multi-task Weighting Framework
TLDR: Every instance in the dataset is equipped with a set of learnable parameters (instance-level task parameters) whose cardinality equals the number of tasks learned by the model; this approach outperforms recent dynamic loss-weighting approaches.
PAD-Net: Multi-tasks Guided Prediction-and-Distillation Network for Simultaneous Depth Estimation and Scene Parsing
TLDR: This paper proposes a novel multi-task guided prediction-and-distillation network (PAD-Net), which first predicts a set of intermediate auxiliary tasks ranging from low level to high level; the predictions from these intermediate auxiliary tasks are then utilized as multi-modal input, via the proposed multi-modal distillation modules, for the final tasks.
Learning Across Tasks and Domains
TLDR: A novel adaptation framework that can operate across both tasks and domains is introduced; it is complementary to existing domain adaptation techniques and extends them to cross-task scenarios, providing additional performance gains.
Realistic Evaluation of Deep Semi-Supervised Learning Algorithms
TLDR: This work creates a unified reimplementation and evaluation platform for widely used SSL techniques and finds that the performance of simple baselines that do not use unlabeled data is often underreported, that SSL methods differ in sensitivity to the amounts of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-class examples.
AdaDepth: Unsupervised Content Congruent Adaptation for Depth Estimation
TLDR: The proposed AdaDepth, an unsupervised domain adaptation strategy for the pixel-wise regression task of monocular depth estimation, performs competitively with other established approaches on depth estimation tasks and achieves state-of-the-art results in a semi-supervised setting.
Semi Supervised Semantic Segmentation Using Generative Adversarial Network
TLDR: A semi-supervised framework based on Generative Adversarial Networks (GANs) is proposed, consisting of a generator network that provides extra training examples to a multi-class classifier which, acting as the discriminator in the GAN framework, assigns each sample a label y from the K possible classes or marks it as a fake sample (an extra class).
Cross-Stitch Networks for Multi-task Learning
TLDR: This paper proposes a principled approach to learning shared representations in Convolutional Networks via multi-task learning, using a new sharing unit, the "cross-stitch" unit, which combines the activations from multiple networks and can be trained end-to-end.
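The cross-stitch unit summarized above can be sketched as a learnable 2x2 linear mix of the two task networks' activations at a given layer. This is a simplified NumPy illustration, not the authors' implementation:

```python
import numpy as np

def cross_stitch(x_a, x_b, alpha):
    """Mix same-shaped activations from a task-A and a task-B
    network using a learnable 2x2 matrix alpha; an identity
    alpha keeps the two streams fully separate."""
    out_a = alpha[0, 0] * x_a + alpha[0, 1] * x_b
    out_b = alpha[1, 0] * x_a + alpha[1, 1] * x_b
    return out_a, out_b
```

In the full network, such a unit is inserted after selected layers of two parallel single-task networks, and the alpha values are learned jointly with the network weights, letting training decide how much representation sharing helps each task.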