Pareto Self-Supervised Training for Few-Shot Learning

@inproceedings{Chen2021ParetoST,
  title={Pareto Self-Supervised Training for Few-Shot Learning},
  author={Zhengyu Chen and Jixie Ge and Heshen Zhan and Siteng Huang and Donglin Wang},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={13658-13667}
}
  • Zhengyu Chen, Jixie Ge, Heshen Zhan, Siteng Huang, Donglin Wang
  • Published 16 April 2021
  • Computer Science
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
While few-shot learning (FSL) aims for rapid generalization to new concepts from little supervision, self-supervised learning (SSL) constructs supervisory signals directly from unlabeled data. Exploiting the complementarity of these two paradigms, few-shot auxiliary learning has recently drawn much attention as a way to cope with scarce labeled data. Previous works benefit from sharing inductive bias between the main task (FSL) and auxiliary tasks (SSL), where the shared parameters of tasks are…
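Although the abstract is cut off here, the setup it describes (a main FSL loss and an auxiliary SSL loss optimized over shared parameters) can be sketched concretely. Below is a minimal PyTorch-style sketch assuming a shared encoder, one main few-shot loss, and one auxiliary self-supervised loss; the two-task min-norm weighting is the standard closed form from gradient-based multi-objective optimization, not necessarily this paper's exact procedure, and the helper names (`two_task_min_norm_alpha`, `pareto_auxiliary_step`) are hypothetical.

```python
import torch

def two_task_min_norm_alpha(g1, g2, eps=1e-12):
    """Closed-form minimizer of ||a*g1 + (1-a)*g2||^2 over a in [0, 1]."""
    diff = g1 - g2
    alpha = torch.dot(g2 - g1, g2) / diff.dot(diff).clamp_min(eps)
    return alpha.clamp(0.0, 1.0)

def pareto_auxiliary_step(encoder, main_loss, aux_loss, optimizer):
    """One update combining the FSL (main) and SSL (auxiliary) losses with
    a min-norm weighting of their gradients over the shared parameters."""
    params = [p for p in encoder.parameters() if p.requires_grad]
    g1 = torch.cat([g.flatten() for g in
                    torch.autograd.grad(main_loss, params, retain_graph=True)])
    g2 = torch.cat([g.flatten() for g in
                    torch.autograd.grad(aux_loss, params, retain_graph=True)])
    alpha = two_task_min_norm_alpha(g1, g2)
    optimizer.zero_grad()
    (alpha * main_loss + (1.0 - alpha) * aux_loss).backward()
    optimizer.step()
```

Clamping alpha to [0, 1] keeps the update a convex combination of the two task gradients; when alpha is interior, the combined direction has minimal norm, which is the condition used to characterize Pareto stationarity.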

Citations

Rethinking the Metric in Few-shot Learning: From an Adaptive Multi-Distance Perspective

This paper investigates the contributions of different distance metrics, proposes an adaptive fusion scheme, and designs a few-shot classification framework, AMTNet, including the AMM module and a Global Adaptive Loss, to jointly optimize the few-shot task and an auxiliary self-supervised task, making the embedding features more robust.
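As an illustration of the adaptive multi-distance idea (a sketch under assumptions; the actual AMM module's form is not specified here), two common metrics can be fused with learned weights:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveDistanceFusion(nn.Module):
    """Illustrative fusion of two metrics with learned softmax weights;
    the real AMM module in AMTNet may differ in form and inputs."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(2))  # one weight per metric

    def forward(self, queries, prototypes):
        cos = F.normalize(queries, dim=1) @ F.normalize(prototypes, dim=1).t()
        euc = -torch.cdist(queries, prototypes) ** 2
        w = F.softmax(self.logits, dim=0)
        return w[0] * cos + w[1] * euc  # (n_query, n_way) fused scores
```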

Deep Transfer Tensor Decomposition with Orthogonal Constraint for Recommender Systems

This work proposes a deep transfer tensor decomposition (DTTD) method, the first Tucker-decomposition-based recommendation approach to use a deep structure to incorporate side information and cross-domain knowledge.

RankDNN: Learning to Rank for Few-shot Learning

A new few-shot learning pipeline that casts relevance ranking for image retrieval as binary ranking-relation classification; it can effectively improve its baselines across a variety of backbones and outperforms previous state-of-the-art algorithms on multiple few-shot benchmarks.

Rethinking Generalization in Few-Shot Classification

This work builds on recent advances in unsupervised training of networks via masked image modelling to overcome the lack of fine-grained labels and learn the more general statistical structure of the data, while avoiding the negative influence of image-level annotation, a.k.a. supervision collapse.

Multiform Ensemble Self-Supervised Learning for Few-Shot Remote Sensing Scene Classification

A multiform ensemble self-supervised learning (MES2L) framework for few-shot remote sensing scene classification (FSRSSC) is proposed, and a novel global–local contrastive learning auxiliary task is designed to address the low inter-class separability problem.

Knowledge Graph enhanced Multimodal Learning for Few-shot Visual Recognition

Experimental results demonstrate the effectiveness of multimodal information for few-shot learning, and the proposed method significantly outperforms state-of-the-art approaches.

tSF: Transformer-Based Semantic Filter for Few-Shot Learning

The proposed tSF redesigns the inputs of a transformer-based structure with a semantic filter, which not only embeds knowledge from the whole base set into the novel set but also filters semantic features for the target category.

Boosting Few-shot Learning by Self-calibration in Feature Space

A self-calibration framework is proposed that constructs improved image-level features by progressively performing local alignment with a self-supervised Transformer, calibrating the biased features of novel samples conditioned on a fixed feature extractor through an auxiliary network.

Few-Shot Classification with Contrastive Learning

A novel contrastive-learning-based framework that seamlessly integrates contrastive learning into both training stages to improve the performance of few-shot classification, achieving competitive results.

KSG: Knowledge and Skill Graph

A novel dynamic knowledge and skill graph (KSG), which can search for different agents' skills in various environments and provide transferable information for acquiring new skills, is proposed and developed based on CN-DBpedia.

References

Showing 1–10 of 51 references

Prototypical Networks for Few-shot Learning

This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
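The method itself fits in a few lines. A minimal sketch, assuming embeddings have already been produced by some encoder (the helper name `prototypical_logits` is hypothetical):

```python
import torch

def prototypical_logits(support_emb, support_labels, query_emb, n_way):
    """Score queries against class prototypes (mean support embeddings)
    with negative squared Euclidean distance, as in Prototypical Networks."""
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0)  # (d,) per class
        for c in range(n_way)
    ])                                                # (n_way, d)
    return -torch.cdist(query_emb, prototypes) ** 2   # (n_query, n_way)
```

Per episode, training minimizes `F.cross_entropy(prototypical_logits(...), query_labels)`.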

Efficient Continuous Pareto Exploration in Multi-Task Learning

This work proposes a sample-based sparse linear system to which standard Hessian-free solvers from machine learning can be applied; the method reveals the primary directions in local Pareto sets for trade-off balancing, efficiently finds more solutions with different trade-offs, and scales well to tasks with millions of parameters.
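The "Hessian-free" part means the solver only ever needs Hessian-vector products, never the Hessian itself, which is what allows scaling to millions of parameters. A minimal sketch of that primitive (standard double backprop, an assumption about the mechanics rather than this paper's exact system):

```python
import torch

def hessian_vector_product(loss, params, vec):
    """Pearlmutter-style Hv product via double backprop: the primitive
    that lets Krylov-type solvers use the Hessian implicitly."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.flatten() for g in grads])
    hv = torch.autograd.grad(flat @ vec, params)
    return torch.cat([h.flatten() for h in hv])
```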

When Does Self-supervision Improve Few-shot Learning?

This work investigates the role of self-supervised learning in the context of few-shot learning, and presents a technique that automatically selects images for SSL from a large, generic pool of unlabeled images for a given dataset, which provides further improvements.

Boosting Few-Shot Visual Learning With Self-Supervision

This work uses self-supervision as an auxiliary task in a few-shot learning pipeline, enabling feature extractors to learn richer and more transferable visual representations while still using few annotated samples.
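A common instantiation of this auxiliary task is rotation prediction. A minimal sketch of the pattern, assuming a shared `encoder`, a small classifier `rot_head` (both hypothetical names), and square images:

```python
import torch
import torch.nn.functional as F

def rotation_auxiliary_loss(encoder, rot_head, images):
    """Rotate each image by 0/90/180/270 degrees and train a small head
    to predict the rotation class from the shared embedding.
    Assumes square images so the four rotated batches can be concatenated."""
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)])
    targets = torch.arange(4, device=images.device).repeat_interleave(images.size(0))
    logits = rot_head(encoder(rotated))   # encoder is shared with the FSL task
    return F.cross_entropy(logits, targets)
```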

Steepest descent methods for multicriteria optimization

This work develops a steepest descent method for unconstrained multicriteria optimization and a “feasible descent direction” method for the constrained case, both of which converge to a point satisfying certain first-order necessary conditions for Pareto optimality.
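For reference, the steepest descent direction and the first-order condition can be written compactly; this is the standard formulation, reconstructed from the literature rather than quoted from the paper:

```latex
% Common steepest descent direction at x for objectives f_1, ..., f_m:
\[
  d(x) \;=\; \operatorname*{arg\,min}_{d \in \mathbb{R}^n}
  \;\max_{i=1,\dots,m} \nabla f_i(x)^\top d \;+\; \tfrac{1}{2}\,\lVert d \rVert^2 .
\]
% x is Pareto critical (the first-order necessary condition) iff d(x) = 0,
% i.e. some convex combination of the gradients vanishes:
\[
  \exists\, \lambda \in \Delta_m :\quad \sum_{i=1}^{m} \lambda_i \,\nabla f_i(x) = 0,
  \qquad \Delta_m = \Big\{ \lambda \ge 0 : \textstyle\sum_i \lambda_i = 1 \Big\}.
\]
```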

Matching Networks for One Shot Learning

This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
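The support-set-to-label mapping can be sketched as attention over the support embeddings. A minimal sketch assuming cosine-similarity attention and one-hot support labels (helper names are hypothetical):

```python
import torch
import torch.nn.functional as F

def matching_net_probs(support_emb, support_onehot, query_emb):
    """Predict query class probabilities as an attention-weighted sum of
    support labels, with softmax-of-cosine-similarity attention."""
    s = F.normalize(support_emb, dim=1)   # (n_support, d)
    q = F.normalize(query_emb, dim=1)     # (n_query, d)
    attn = F.softmax(q @ s.t(), dim=1)    # (n_query, n_support)
    return attn @ support_onehot          # (n_query, n_way)
```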

Pareto Multi-Task Learning

Experimental results confirm that the proposed Pareto MTL algorithm can generate well-representative solutions and outperform some state-of-the-art algorithms on many multi-task learning applications.

Multi-Initialization Meta-Learning with Domain Adaptation

  • Zhengyu Chen, Donglin Wang
  • Computer Science
  • ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2021
This work proposes multi-initialization meta-learning with domain adaptation (MIML-DA), which consists of a modulation network and a novel meta separation network (MSN), where the modulation network encodes tasks into common and private modulation vectors, and the MSN then uses these vectors separately to update the cross-domain meta-learner via a double-gradient descent process.

Deep Transfer Tensor Decomposition with Orthogonal Constraint for Recommender Systems

This work proposes a deep transfer tensor decomposition (DTTD) method, the first Tucker-decomposition-based recommendation approach to use a deep structure to incorporate side information and cross-domain knowledge.

Improving Cold-Start Recommendation via Multi-prior Meta-learning

...