Ranking Distance Calibration for Cross-Domain Few-Shot Learning

@article{Li2021RankingDC,
  title={Ranking Distance Calibration for Cross-Domain Few-Shot Learning},
  author={Pan Li and Shaogang Gong and Yanwei Fu and Chengjie Wang},
  journal={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022},
  pages={9089-9098}
}
  • Pan Li, Shaogang Gong, Yanwei Fu, Chengjie Wang
  • Published 1 December 2021
  • Computer Science
  • 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Recent progress in few-shot learning promotes a more realistic cross-domain setting, where the source and target datasets are in different domains. Due to the domain gap and disjoint label spaces between source and target datasets, their shared knowledge is extremely limited. This encourages us to explore more information in the target domain rather than to overly elaborate training strategies on the source domain as in many existing methods. Hence, we start from a generic representation pre… 
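
The abstract is truncated above, but the calibration idea named in the title can still be sketched. Below is a minimal illustration, assuming the calibration follows a k-reciprocal re-ranking recipe (raw pairwise distances blended with a Jaccard distance over reciprocal-neighbour sets); rank_calibrate and its parameters are illustrative, not the authors' released code.

    # Sketch of ranking-based distance calibration; an assumption-laden
    # illustration, not the paper's implementation.
    import numpy as np

    def rank_calibrate(D, k=5, alpha=0.5):
        # D: (n, n) float matrix of raw pairwise distances
        n = D.shape[0]
        knn = np.argsort(D, axis=1)[:, 1:k + 1]        # k nearest, skip self
        nbrs = [set(knn[i]) for i in range(n)]
        # keep only reciprocal neighbours, making the evidence symmetric
        recip = [{j for j in nbrs[i] if i in nbrs[j]} for i in range(n)]
        D_rank = np.zeros_like(D)
        for i in range(n):
            for j in range(n):
                inter = recip[i] & recip[j]
                union = recip[i] | recip[j]
                D_rank[i, j] = 1.0 - len(inter) / max(len(union), 1)
        # calibrated distance: convex blend of raw and ranking distance
        return alpha * D + (1 - alpha) * D_rank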

Citations

Cross-Domain Few-Shot Classification via Inter-Source Stylization

An Inter-Source Stylization Network (ISSNet) is proposed for the new multi-source CDFSC setting (MCDFSC): it transfers the styles of unlabeled sources onto the labeled source, expanding the labeled source's distribution and further improving the model's generalization ability.
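
As a rough illustration of the stylization idea, assuming it resembles AdaIN-style transfer of channel-wise feature statistics (stylize is a hypothetical helper, not the ISSNet architecture):

    import torch

    def stylize(content, style, eps=1e-5):
        # re-normalise content features (B, C, H, W) so their channel
        # statistics match those of the style features
        c_mean = content.mean((2, 3), keepdim=True)
        c_std = content.std((2, 3), keepdim=True) + eps
        s_mean = style.mean((2, 3), keepdim=True)
        s_std = style.std((2, 3), keepdim=True) + eps
        return (content - c_mean) / c_std * s_std + s_mean

    # labeled-source features take on the style of an unlabeled source,
    # widening the labeled distribution seen during training
    augmented = stylize(torch.randn(4, 64, 8, 8), torch.randn(4, 64, 8, 8))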

k-NN embedded space conditioning for enhanced few-shot object detection

A novel and flexible few-shot object detection approach that can be adapted effortlessly to any candidate-based object detection framework; it leverages a kFEW retrieval technique over the region-of-interest space to build both a class distribution and a weighted aggregated embedding conditioned on the recovered neighbours.
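
A minimal sketch of the retrieval step, assuming the generic k-NN recipe of retrieving neighbours and forming a class distribution plus a distance-weighted embedding (knn_condition is an illustrative name; kFEW's internals may differ):

    import torch

    def knn_condition(query, bank_emb, bank_lab, num_classes, k=5):
        # query: (d,); bank_emb: (N, d); bank_lab: (N,) int64 labels
        dist = torch.cdist(query[None], bank_emb)[0]       # (N,) distances
        top_d, idx = dist.topk(k, largest=False)           # k nearest RoIs
        w = torch.softmax(-top_d, dim=0)                   # closer => heavier
        class_dist = torch.zeros(num_classes).scatter_add_(0, bank_lab[idx], w)
        agg_emb = (w[:, None] * bank_emb[idx]).sum(0)      # weighted embedding
        return class_dist, agg_emb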

ME-D2N: Multi-Expert Domain Decompositional Network for Cross-Domain Few-Shot Learning

To solve the data-imbalance problem between source data with sufficient examples and auxiliary target data with limited examples, a novel Multi-Expert Domain Decompositional Network (ME-D2N) is built under the umbrella of multi-expert learning.
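
The multi-expert umbrella can be sketched as two experts mixed by a learned gate; this shows the generic pattern only, not ME-D2N's decomposition:

    import torch
    import torch.nn as nn

    class TwoExpertHead(nn.Module):
        def __init__(self, feat_dim, num_classes):
            super().__init__()
            self.source_expert = nn.Linear(feat_dim, num_classes)
            self.target_expert = nn.Linear(feat_dim, num_classes)
            self.gate = nn.Sequential(nn.Linear(feat_dim, 2), nn.Softmax(dim=-1))

        def forward(self, x):                              # x: (B, d)
            g = self.gate(x)                               # (B, 2) mixing weights
            logits = torch.stack([self.source_expert(x),
                                  self.target_expert(x)], dim=1)
            return (g.unsqueeze(-1) * logits).sum(dim=1)   # (B, C)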

References

SHOWING 1-10 OF 56 REFERENCES

Self-training for Few-shot Transfer Across Extreme Task Differences

This paper presents a simple and effective solution to an extreme domain gap: self-training a source-domain representation on unlabeled data from the target domain, which is shown to improve one-shot performance on the target domains.
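
A minimal sketch of such a self-training step, assuming the common teacher-student pseudo-labelling loop (STARTUP's additional self-supervised term is omitted):

    import torch
    import torch.nn.functional as F

    def self_train_step(student, teacher, target_batch, optimizer):
        with torch.no_grad():
            soft = F.softmax(teacher(target_batch), dim=-1)   # teacher beliefs
        log_p = F.log_softmax(student(target_batch), dim=-1)
        loss = F.kl_div(log_p, soft, reduction="batchmean")   # match the teacher
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()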

Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation

The core idea is to use feature-wise transformation layers to augment image features with affine transforms, simulating various feature distributions across domains during training; a learning-to-learn approach then searches for the hyper-parameters of the feature-wise transformation layers.
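
A sketch of one such layer, following the published formulation in which per-channel scale and shift are sampled from Gaussians with learned standard deviations; the initial values below are illustrative:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeatureWiseTransform(nn.Module):
        def __init__(self, num_channels):
            super().__init__()
            self.gamma_std = nn.Parameter(torch.full((1, num_channels, 1, 1), 0.3))
            self.beta_std = nn.Parameter(torch.full((1, num_channels, 1, 1), 0.5))

        def forward(self, x):                              # x: (B, C, H, W)
            if not self.training:
                return x                                   # no noise at test time
            shape = (x.size(0), x.size(1), 1, 1)
            gamma = 1 + torch.randn(shape, device=x.device) * F.softplus(self.gamma_std)
            beta = torch.randn(shape, device=x.device) * F.softplus(self.beta_std)
            return gamma * x + beta                        # simulate shifted domains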

Large Margin Mechanism and Pseudo Query Set on Cross-Domain Few-Shot Learning

A novel large-margin fine-tuning method (LMM-PQS) is proposed: it generates pseudo query images from support images and fine-tunes the feature-extraction modules with a large-margin mechanism inspired by methods from face recognition.
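
The large-margin part can be sketched with a CosFace-style additive margin on cosine similarities; the exact margin form and the pseudo-query generation of LMM-PQS are not reproduced here:

    import torch
    import torch.nn.functional as F

    def large_margin_loss(feats, prototypes, labels, s=10.0, m=0.2):
        # feats: (B, d); prototypes: (C, d); labels: (B,) int64
        cos = F.normalize(feats) @ F.normalize(prototypes).t()   # (B, C)
        margin = torch.zeros_like(cos).scatter_(1, labels[:, None], m)
        return F.cross_entropy(s * (cos - margin), labels)       # enlarge the gap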

ConFeSS: A Framework for Single Source Cross-Domain Few-Shot Learning

ConFeSS (Contrastive Learning and Feature Selection System), a framework for few-shot learning that tackles large domain shift between base and novel categories, is proposed; it outperforms all meta-learning approaches and produces competitive results against recent cross-domain methods.
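
The feature-selection component can be read as a learned soft mask over embedding dimensions; this is one plausible sketch, not ConFeSS's exact module, and its contrastive pre-training stage is omitted:

    import torch
    import torch.nn as nn

    class FeatureSelector(nn.Module):
        def __init__(self, feat_dim):
            super().__init__()
            self.mask_logits = nn.Parameter(torch.zeros(feat_dim))

        def forward(self, feats):                  # feats: (B, d)
            mask = torch.sigmoid(self.mask_logits) # soft 0/1 gate per dimension
            return feats * mask                    # keep domain-relevant dims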

Cross-Domain Few-Shot Learning by Representation Fusion

This work proposes Cross-domain Hebbian Ensemble Few-shot learning (CHEF), which achieves representation fusion via an ensemble of Hebbian learners acting on different layers of a deep neural network trained on the original domain; CHEF significantly outperforms all its competitors on cross-domain few-shot benchmarks with larger domain shifts.
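
A rough sketch of the per-layer ensemble, using the classic Hebbian association dW = lr * y x^T as a stand-in for CHEF's learners:

    import torch
    import torch.nn.functional as F

    def hebbian_fit(feats, labels, num_classes, lr=0.01):
        # one-step Hebbian association: co-activation strengthens weights
        y = F.one_hot(labels, num_classes).float()
        return lr * y.t() @ feats                      # (C, d) class templates

    def ensemble_scores(support, labels, queries, num_classes):
        # support/queries: lists of per-layer activations, shapes (B, d_l)
        heads = [hebbian_fit(s, labels, num_classes) for s in support]
        return sum(q @ W.t() for q, W in zip(queries, heads)) / len(heads)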

A Universal Representation Transformer Layer for Few-Shot Image Classification

A Universal Representation Transformer (URT) layer is proposed that meta-learns to leverage universal features for few-shot classification by dynamically re-weighting and composing the most appropriate domain-specific representations.
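
The re-weighting can be sketched as softmax attention over per-domain feature heads; URT's exact parameterisation differs:

    import torch

    def urt_mix(domain_feats, task_query):
        # domain_feats: list of (B, d) features, one per pre-trained domain
        # task_query: (d,) task representation, e.g. the mean support embedding
        stacked = torch.stack(domain_feats, dim=1)     # (B, K, d)
        w = torch.softmax(stacked @ task_query, dim=1) # (B, K) head weights
        return (w.unsqueeze(-1) * stacked).sum(dim=1)  # (B, d) fused feature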

A Broader Study of Cross-Domain Few-Shot Learning

The proposed Broader Study of Cross-Domain Few-Shot Learning (BSCD-FSL) benchmark, consisting of image data from a diverse assortment of image acquisition methods, demonstrates that state-of-the-art meta-learning methods are surprisingly outperformed by earlier meta-learning approaches, and that all meta-learning methods underperform relative to simple fine-tuning.

Boosting the Generalization Capability in Cross-Domain Few-shot Learning via Noise-enhanced Supervised Autoencoder

This work addresses the cross-domain few-shot learning (CDFSL) problem by boosting the generalization capability of the model: a novel noise-enhanced supervised autoencoder (NSAE) teaches the model to capture broader variations of the feature distributions.
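
The objective can be sketched as a denoising-reconstruction term added to the supervised loss; the noise level and weighting below are illustrative:

    import torch
    import torch.nn.functional as F

    def nsae_loss(encoder, decoder, classifier, x, y, noise_std=0.1, recon_w=1.0):
        z = encoder(x + noise_std * torch.randn_like(x))  # perturbed input
        return (F.cross_entropy(classifier(z), y)         # supervised term
                + recon_w * F.mse_loss(decoder(z), x))    # denoising reconstruction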

Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?

It is shown that a simple baseline (learning a supervised or self-supervised representation on the meta-training set, then training a linear classifier on top of this representation) outperforms state-of-the-art few-shot learning methods.
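
A minimal sketch of this baseline, freezing a (placeholder) backbone and fitting a logistic-regression classifier per episode:

    import torch
    from sklearn.linear_model import LogisticRegression

    def fit_episode(backbone, support_x, support_y, query_x):
        backbone.eval()
        with torch.no_grad():
            zs = backbone(support_x).numpy()           # frozen support features
            zq = backbone(query_x).numpy()             # frozen query features
        clf = LogisticRegression(max_iter=1000).fit(zs, support_y.numpy())
        return clf.predict(zq)                         # query predictions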

A Closer Look at Few-shot Classification

The results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical with deeper backbones; a baseline method with standard fine-tuning compares favorably against other state-of-the-art few-shot learning algorithms.
...