Meta-Learning Adversarial Domain Adaptation Network for Few-Shot Text Classification

@inproceedings{Han2021MetaLearningAD,
  title={Meta-Learning Adversarial Domain Adaptation Network for Few-Shot Text Classification},
  author={Chengcheng Han and Zeqiu Fan and Dongxiang Zhang and Minghui Qiu and Ming Gao and Aoying Zhou},
  booktitle={Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021},
  year={2021}
}
Meta-learning has emerged as a trending technique to tackle few-shot text classification and has achieved state-of-the-art performance. However, existing solutions heavily rely on the exploitation of lexical features and their distributional signatures on training data, while neglecting to strengthen the model's ability to adapt to new tasks. In this paper, we propose a novel meta-learning framework integrated with an adversarial domain adaptation network, aiming to improve the adaptive ability of…
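The adversarial domain adaptation component described in the abstract is typically built around a gradient reversal layer: the forward pass is the identity, but the gradient flowing back into the feature encoder is negated, so the encoder learns domain-invariant features that fool a domain discriminator. The sketch below is a hypothetical, minimal NumPy illustration of that trick with manual gradients (a toy linear encoder and logistic discriminator, not the paper's actual architecture):

```python
import numpy as np

# Hypothetical sketch of a gradient reversal layer (GRL), the standard trick
# behind adversarial domain adaptation. Forward: identity. Backward: multiply
# the incoming gradient by -lambda, so the feature encoder is updated to
# *fool* the domain discriminator rather than help it.

class GradientReversal:
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # identity on the forward pass

    def backward(self, grad_out):
        return -self.lam * grad_out  # reversed, scaled gradient for the encoder

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy adversarial step: encoder h = W x, discriminator p = sigmoid(v . h).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # toy encoder weights
v = rng.normal(size=4)        # toy discriminator weights
x = rng.normal(size=3)        # one input example
y_dom = 1.0                   # its domain label (source=1, target=0)

grl = GradientReversal(lam=0.5)
h = W @ x
p = sigmoid(v @ grl.forward(h))

# Discriminator's binary cross-entropy gradient w.r.t. features: (p - y) * v
grad_h = (p - y_dom) * v
# The encoder sees the reversed gradient through the GRL:
grad_h_enc = grl.backward(grad_h)
grad_W = np.outer(grad_h_enc, x)  # encoder weight gradient for its update
```

The names (`GradientReversal`, `lam`) and the linear encoder are illustrative assumptions; in practice the GRL is implemented as a custom autograd op inside a deep network.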


References

Showing 10 of 34 references
Representation Learning for Improved Generalization of Adversarial Domain Adaptation with Text Classification
  • Alaa Khaddaj, Hazem M. Hajj
  • Computer Science
  • 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT)
  • 2020
A new approach called Domain Adversarial network with Representation Learning (DARL) is presented to improve domain adaptation by introducing an encoding layer as part of DARL model learning, which can extract descriptive features under noisy conditions while still learning task-discriminative features.
Diverse Few-Shot Text Classification with Multiple Metrics
This work proposes an adaptive metric learning approach that automatically determines the best weighted combination from a set of metrics obtained from meta-training tasks for a newly seen few-shot task.
Induction Networks for Few-Shot Text Classification
This paper proposes a novel Induction Network to learn a generalized class-wise representation of each class in the support set by leveraging the dynamic routing algorithm in meta-learning, and finds the model is able to induce and generalize better.
Few-Shot Transfer Learning for Text Classification With Lightweight Word Embedding Based Models
A modified hierarchical pooling strategy over pre-trained word embeddings is proposed for text classification in a few-shot transfer learning setting, and exhibits significant classification performance in few-shot transfer learning tasks compared with alternative methods.
Effective Few-Shot Classification with Transfer Learning
It is suggested that the classes in the ARSC few-shot task, which are defined by the intersection of domain and rating, are actually very similar to each other, and that a more suitable dataset is needed for the study of few-shot text classification.
Few-shot Text Classification with Distributional Signatures
This paper demonstrates that this model consistently outperforms prototypical networks learned on lexical knowledge in both few-shot text classification and relation classification by a significant margin across six benchmark datasets.
Hybrid Attention-Based Prototypical Networks for Noisy Few-Shot Relation Classification
This paper designs instance-level and feature-level attention schemes based on prototypical networks to highlight the crucial instances and features respectively, which significantly enhances the performance and robustness of RC models in a noisy FSL scenario.
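Several of the references above build on prototypical networks: each class prototype is the mean embedding of that class's support examples, and a query is assigned to the nearest prototype. A minimal sketch of that classification rule, using toy 2-d "embeddings" in place of a learned encoder (an assumption for illustration):

```python
import numpy as np

# Minimal sketch of the prototypical-network rule: prototype = mean support
# embedding per class; classify a query by its nearest prototype.

def prototypes(support_emb, support_labels, n_classes):
    # Mean embedding of each class's support examples, stacked row-wise.
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # Squared Euclidean distance from every query to every prototype;
    # the closest prototype's class wins.
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy 2-way, 2-shot episode with hand-picked 2-d embeddings.
support = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, 2)     # [[0, 0.5], [5, 5.5]]
queries = np.array([[0.2, 0.3], [4.8, 5.1]])
pred = classify(queries, protos)            # → [0, 1]
```

In the noisy-FSL setting of the entry above, the plain mean over support examples is replaced by attention-weighted sums so that noisy instances contribute less to the prototype.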
Task Agnostic Meta-Learning for Few-Shot Learning
An entropy-based approach that meta-learns an unbiased initial model with the largest uncertainty over the output labels by preventing it from over-performing in classification tasks; it outperforms competing meta-learning algorithms in both few-shot classification and reinforcement learning tasks.
TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning
TapNets are neural networks augmented with task-adaptive projection for improved few-shot learning: employing a meta-learning strategy with episode-based training, a network and a set of per-class reference vectors are learned across widely varying tasks.
Adversarial Discriminative Domain Adaptation
It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.