Ranking Distance Calibration for Cross-Domain Few-Shot Learning

Pan Li, Shaogang Gong, Yanwei Fu, Chengjie Wang
Published 1 December 2021 · Computer Science
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Recent progress in few-shot learning promotes a more realistic cross-domain setting, where the source and target datasets are in different domains. Because of the domain gap and the disjoint label spaces between the source and target datasets, their shared knowledge is extremely limited. This encourages us to explore more information in the target domain rather than over-elaborating training strategies on the source domain, as in many existing methods. Hence, we start from a generic representation pre…


Cross-Domain Few-Shot Classification via Inter-Source Stylization

An Inter-Source Stylization Network (ISSNet) is proposed for this new multi-source CDFSC setting (MCDFSC); it transfers the styles of unlabeled sources to the labeled source, which expands the distribution of the labeled source and further improves the model's generalization ability.

ME-D2N: Multi-Expert Domain Decompositional Network for Cross-Domain Few-Shot Learning

To solve the data imbalance problem between the source data with sufficient examples and the auxiliary target data with limited examples, a novel Multi-Expert Domain Decompositional Network (ME-D2N) is built under the umbrella of multi-expert learning.



Self-training for Few-shot Transfer Across Extreme Task Differences

This paper presents a simple and effective solution to tackle this extreme domain gap: self-training a source domain representation on unlabeled data from the target domain, and shows that this improves one-shot performance on the target domains.

Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation

The core idea is to use feature-wise transformation layers for augmenting the image features using affine transforms to simulate various feature distributions under different domains in the training stage, and applies a learning-to-learn approach to search for the hyper-parameters of the feature-wise transformation layers.
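The affine augmentation described above can be illustrated with a minimal sketch: per-channel scale and bias terms are sampled from Gaussians whose spreads are learnable hyper-parameters, then applied to intermediate features. The function name `feature_wise_transform` and the NumPy formulation are illustrative assumptions, not the paper's implementation (which operates inside a deep network during meta-training).

```python
import numpy as np

def feature_wise_transform(features, theta_gamma, theta_beta, rng):
    """Sketch of a feature-wise transformation layer (illustrative only).

    features:    (batch, channels) intermediate activations
    theta_gamma: (channels,) learnable hyper-parameter for the scale spread
    theta_beta:  (channels,) learnable hyper-parameter for the bias spread
    """
    # softplus keeps the sampled standard deviations positive
    gamma = rng.normal(1.0, np.log1p(np.exp(theta_gamma)))  # scale ~ N(1, softplus(theta_gamma))
    beta = rng.normal(0.0, np.log1p(np.exp(theta_beta)))    # bias  ~ N(0, softplus(theta_beta))
    # Per-channel affine transform, broadcast over the batch dimension
    return features * gamma + beta
```

During training, each forward pass samples fresh scales and biases, so the network sees perturbed feature distributions that mimic unseen domains.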

Large Margin Mechanism and Pseudo Query Set on Cross-Domain Few-Shot Learning

A novel large margin fine-tuning method (LMM-PQS), which generates pseudo query images from support images and fine-tunes the feature extraction modules with a large margin mechanism inspired by methods in face recognition, is proposed.
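The "large margin mechanism inspired by face recognition" mentioned above typically means subtracting a fixed margin from the true-class cosine similarity before softmax, which tightens class clusters. The sketch below shows a CosFace-style variant as an illustration of that family of losses; the function name and parameter values are assumptions, not LMM-PQS's exact formulation.

```python
import numpy as np

def cosface_logits(embeddings, weights, labels, margin=0.35, scale=30.0):
    """CosFace-style large-margin logits (illustrative sketch).

    Subtracts `margin` from each sample's true-class cosine similarity,
    forcing the model to separate classes by at least that margin.
    """
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = e @ w.T                                    # (N, C) cosine similarities
    cos[np.arange(len(labels)), labels] -= margin    # penalise the true class
    return scale * cos                               # feed into cross-entropy
```

The penalized logits are then passed to a standard cross-entropy loss during fine-tuning.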

ConFeSS: A Framework for Single Source Cross-Domain Few-Shot Learning

A framework for few-shot learning coined ConFeSS (Contrastive Learning and Feature Selection System) is proposed; it tackles the large domain shift between base and novel categories, outperforms all meta-learning approaches, and produces competitive results against recent cross-domain methods.

Generalized Meta-FDMixup: Cross-Domain Few-Shot Learning Guided by Labeled Target Data

A novel Generalized Meta-learning based Feature-Disentangled Mixup network, namely GMeta-FDMixup, is proposed to address Cross-Domain Few-Shot Learning (CD-FSL), and a novel feature disentanglement module is contributed to narrow the domain gap explicitly.

Cross-Domain Few-Shot Learning by Representation Fusion

This work proposes Cross-domain Hebbian Ensemble Few-shot learning (CHEF), which achieves representation fusion by an ensemble of Hebbian learners acting on different layers of a deep neural network trained on the original domain, and significantly outperforms all its competitors on cross-domain few-shot benchmark challenges with larger domain shifts.

A Universal Representation Transformer Layer for Few-Shot Image Classification

A Universal Representation Transformer (URT) layer is proposed, that meta-learns to leverage universal features for few-shot classification by dynamically re-weighting and composing the most appropriate domain-specific representations.

A Broader Study of Cross-Domain Few-Shot Learning

The proposed Broader Study of Cross-Domain Few-Shot Learning (BSCD-FSL) benchmark, consisting of image data from a diverse assortment of image acquisition methods, demonstrates that state-of-the-art meta-learning methods are surprisingly outperformed by earlier meta-learning approaches, and all meta-learning methods underperform in relation to simple fine-tuning.

Boosting the Generalization Capability in Cross-Domain Few-shot Learning via Noise-enhanced Supervised Autoencoder

This work addresses the cross-domain few-shot learning (CDFSL) problem by boosting the model's generalization capability: a novel noise-enhanced supervised autoencoder (NSAE) teaches the model to capture broader variations of the feature distributions.

Few-Shot Image Classification via Contrastive Self-Supervised Learning

This paper solves the few-shot tasks in two phases: meta-training a transferable feature extractor via contrastive self-supervised learning and training a classifier using graph aggregation, self-distillation and manifold augmentation.
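The contrastive self-supervised pre-training in the first phase of such pipelines is usually built on an InfoNCE / NT-Xent-style objective: embeddings of two augmented views of the same image are pulled together while other images in the batch act as negatives. The sketch below is a simplified illustration of that objective, not the paper's exact loss; the function name and temperature value are assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Simplified InfoNCE-style contrastive loss (illustrative sketch).

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Row i of z1 is pulled toward row i of z2; other rows act as negatives.
    """
    # L2-normalise so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) similarity matrix
    # Row-wise log-softmax; the diagonal entries are the positives
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))
```

Minimising this loss makes the feature extractor invariant to the augmentations, which is what makes the representation transferable to the downstream few-shot classifier.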