Towards Novel Target Discovery Through Open-Set Domain Adaptation
@inproceedings{Jing2021TowardsNT,
  title     = {Towards Novel Target Discovery Through Open-Set Domain Adaptation},
  author    = {Taotao Jing and Hong Liu and Zhengming Ding},
  booktitle = {2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2021},
  pages     = {9302-9311}
}
Open-set domain adaptation (OSDA) considers the setting in which the target domain contains samples from novel categories unobserved in the source domain. Unfortunately, existing OSDA methods typically ignore the demand for information about these unseen categories and simply recognize them as an "unknown" set without further explanation. This motivates us to understand the unknown categories more specifically by exploring their underlying structures and recovering their interpretable semantic attributes. In this…
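As a minimal illustration of the open-set recognition step the abstract describes (not this paper's actual method), a classifier can reject low-confidence target samples as "unknown" by thresholding the maximum softmax probability; the function name and threshold value below are assumptions for the sketch:

```python
import numpy as np

def predict_open_set(logits, threshold=0.5):
    """Assign each sample to a known class, or reject it as unknown.

    A sample whose maximum softmax probability falls below the
    threshold is labeled -1, marking it as "unknown".
    """
    logits = np.asarray(logits, dtype=float)
    # numerically stable softmax over the class dimension
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    labels = probs.argmax(axis=1)
    labels[probs.max(axis=1) < threshold] = -1
    return labels
```

A confident sample keeps its predicted class, while a near-uniform prediction (all logits close together) falls below the threshold and is rejected.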
4 Citations
One Ring to Bring Them All: Towards Open-Set Recognition under Domain Shift
- Computer Science, ArXiv
- 2022
This paper proposes a novel training scheme that learns a classifier to predict the n source classes plus an unknown class, using samples of only the known source categories for training, and adopts weighted entropy minimization to adapt the source-pretrained model to the unlabeled target domain without source data.
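The weighted entropy minimization mentioned above can be sketched generically as follows; this is an illustrative objective, not that paper's exact formulation, and the function name and weighting scheme are assumptions:

```python
import numpy as np

def weighted_entropy_loss(probs, weights):
    """Weighted mean of per-sample prediction entropies.

    Minimizing this on unlabeled target data sharpens the model's
    predictions; the per-sample weights let confident (likely-known)
    samples contribute more than suspected-unknown ones.
    """
    probs = np.asarray(probs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return float(np.sum(weights * ent) / np.sum(weights))
```

A uniform two-class prediction has entropy ln 2 ≈ 0.693, while a one-hot prediction contributes essentially zero, so gradient descent on this loss pushes target predictions toward one-hot.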
Unsupervised Domain Adaptation for Semantic Image Segmentation: a Comprehensive Survey
- Computer Science, ArXiv
- 2021
This survey summarizes five years of this rapidly growing field, covering both the importance of semantic segmentation itself and the critical need to adapt segmentation models to new environments.
Reiterative Domain Aware Multi-target Adaptation
- Computer Science, ArXiv
- 2021
Reiterative D-CGCT (RD-CGCT) is proposed, which obtains better adaptation performance by reiterating multiple times over each target domain while keeping the total number of iterations the same.
Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation
- Computer Science, IEEE Transactions on Image Processing
- 2021
This work proposes a Towards Fair Knowledge Transfer (TFKT) framework to handle the fairness challenge in imbalanced cross-domain learning; notably, the model improves overall accuracy by over 20% on two benchmarks.
References
Showing 1-10 of 63 references
Attract or Distract: Exploit the Margin of Open Set
- Computer Science, 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
This paper exploits the semantic structure of open-set data from two aspects: 1) Semantic Categorical Alignment, which aims to achieve good separability of target known classes by categorically aligning the target centroids with the source, and 2) Semantic Contrastive Mapping, which aims to push the unknown class away from the decision boundary.
Learning Feature-to-Feature Translator by Alternating Back-Propagation for Generative Zero-Shot Learning
- Computer Science, 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
This work investigates learning feature-to-feature translator networks by alternating back-propagation as a general-purpose solution to zero-shot learning (ZSL), using a generative model-based ZSL framework that outperforms existing state-of-the-art methods by a remarkable margin.
Open Set Domain Adaptation
- Computer Science, 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017
This work learns a mapping from the source to the target domain by jointly solving an assignment problem that labels those target instances that potentially belong to the categories of interest present in the source dataset.
Zero-Shot Learning — The Good, the Bad and the Ugly
- Computer Science, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2017
A new benchmark is defined by unifying both the evaluation protocols and data splits for zero-shot learning, and a significant number of state-of-the-art methods are compared and analyzed in depth, in both the classic zero-shot setting and the more realistic generalized zero-shot setting.
Deep Residual Learning for Image Recognition
- Computer Science, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
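The residual learning idea summarized above, y = F(x) + x with an identity shortcut, can be sketched in a few lines (a toy single-layer illustration, not the paper's full architecture; the function name is an assumption):

```python
import numpy as np

def residual_block(x, W):
    """y = F(x) + x: the layer learns only the residual F.

    Because of the identity shortcut, the block reduces exactly to the
    identity mapping when W drives F to zero, which is what makes very
    deep stacks of such blocks easier to optimize.
    """
    return np.maximum(W @ x, 0.0) + x  # F(x) = ReLU(Wx), plus shortcut
```

With W = 0 the block is exactly the identity, so stacking many blocks cannot make the network worse than a shallower one at initialization.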
Region Graph Embedding Network for Zero-Shot Learning
- Computer Science, ECCV
- 2020
This paper models the relations among local image regions as a region graph, on which parts relation reasoning is performed with graph convolutions, yielding a parts relation reasoning (PRR) branch that is incorporated into ZSL.
Leveraging Seen and Unseen Semantic Relationships for Generative Zero-Shot Learning
- Computer Science, ECCV
- 2020
The novel LsrGAN is proposed, a generative model that Leverages the Semantic Relationship between seen and unseen categories and explicitly performs knowledge transfer by incorporating a novel Semantic Regularized Loss (SR-Loss).
Embedding Propagation: Smoother Manifold for Few-Shot Classification
- Computer Science, ECCV
- 2020
This work empirically shows that embedding propagation yields a smoother embedding manifold, and that applying embedding propagation to a transductive classifier achieves new state-of-the-art results on mini-ImageNet, tiered-ImageNet, ImageNet-FS, and CUB.
Attribute Attention for Semantic Disambiguation in Zero-Shot Learning
- Computer Science, 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
By distracting semantic activation in dimensions that cause ambiguity, this method outperforms existing state-of-the-art methods on AwA2, CUB and SUN datasets in both inductive and transductive settings.
Zero-shot Learning via Simultaneous Generating and Learning
- Computer Science, NeurIPS
- 2019
A deep generative model is presented that provides the classifier with experience of both seen and unseen classes, overcoming the absence of training data for unseen classes; it outperforms models trained only on the seen classes as well as several state-of-the-art methods.