Corpus ID: 236318485

Improving the Generalization of Meta-learning on Unseen Domains via Adversarial Shift

@article{Tian2021ImprovingTG,
  title={Improving the Generalization of Meta-learning on Unseen Domains via Adversarial Shift},
  author={Pinzhuo Tian and Yao Gao},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.11056}
}
Meta-learning provides a promising way of learning to learn efficiently, and it has achieved great success in many applications. However, most of the meta-learning literature focuses on tasks drawn from a single domain, which makes the learned models brittle when generalizing to tasks from other, unseen domains. In this work, we address this problem by simulating tasks from unseen domains to improve the generalization and robustness of meta-learning methods. Specifically, we propose a model-agnostic shift layer to…
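The abstract is cut off before the shift layer is specified, so the authors' exact construction is not reproduced here. Purely as an illustration of the adversarial-shift idea, the sketch below assumes the shift layer is a per-channel affine perturbation of intermediate features, trained to maximize the task loss while the base model minimizes it; the layer form, the losses, and all names (ShiftLayer, encoder, head) are assumptions, not the paper's method.

```python
# Hypothetical sketch of an adversarially trained "shift layer".
# ASSUMPTION: the layer applies a learned per-channel affine shift to
# intermediate features; this is NOT the paper's construction, whose
# details are truncated in the abstract above.
import torch
import torch.nn as nn

class ShiftLayer(nn.Module):
    """Per-channel affine perturbation of intermediate features."""
    def __init__(self, num_features: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_features)
        return feats * self.scale + self.bias

# Adversarial game (schematic): the shift layer maximizes the task loss
# to simulate a harder, unseen domain, while the learner minimizes it.
encoder, head = nn.Linear(32, 16), nn.Linear(16, 5)
shift = ShiftLayer(16)
opt_model = torch.optim.SGD(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-2)
opt_shift = torch.optim.SGD(shift.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(8, 32), torch.randint(0, 5, (8,))
# 1) shift step: increase the loss (simulate domain shift)
loss = loss_fn(head(shift(encoder(x))), y)
opt_shift.zero_grad(); (-loss).backward(); opt_shift.step()
# 2) model step: decrease the loss on the shifted features
loss = loss_fn(head(shift(encoder(x))), y)
opt_model.zero_grad(); loss.backward(); opt_model.step()
```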


References

Showing 1-10 of 57 references
Generalizing to Unseen Domains via Adversarial Data Augmentation
This work proposes an iterative procedure that augments the dataset with examples from a fictitious target domain that is "hard" under the current model, yielding an adaptive data augmentation method in which adversarial examples are appended at each iteration.
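As a hedged sketch of that iterative procedure (not the authors' released code): fictitious "hard" examples are produced by gradient ascent on the task loss, with a penalty keeping them close to the source points. The step count, step size, and gamma penalty weight are illustrative choices.

```python
# Generate examples from a fictitious "hard" domain by gradient ascent
# on the task loss, penalizing drift from the source example.
import torch
import torch.nn as nn

def make_fictitious(model, x, y, steps=5, lr=1.0, gamma=1.0):
    x_src = x.detach()
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        # maximize task loss, stay near the source point
        obj = loss_fn(model(x_adv), y) - gamma * ((x_adv - x_src) ** 2).mean()
        grad, = torch.autograd.grad(obj, x_adv)
        x_adv = (x_adv + lr * grad).detach().requires_grad_(True)
    return x_adv.detach()

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 5))
x, y = torch.randn(8, 32), torch.randint(0, 5, (8,))
x_hard = make_fictitious(model, x, y)   # append to the training set
```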
Domain-Adversarial Training of Neural Networks
A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions; it can be implemented in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
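The gradient reversal layer is simple enough to show compactly. This PyTorch version (names are ours) is the identity in the forward pass and negates and scales the gradient in the backward pass, so the feature extractor learns to fool a domain classifier.

```python
# Gradient reversal layer (GRL): identity forward, negated gradient backward.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reverse and scale the gradient flowing into the feature extractor
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# usage: domain_logits = domain_classifier(grad_reverse(features))
```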
Generalizing Across Domains via Cross-Gradient Training
Empirical evaluation on three different applications establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains than generic instance perturbation methods, and (2) data augmentation is a more stable and accurate method than domain-adversarial training.
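A minimal sketch of the domain-guided perturbation (one direction of CrossGrad only; the symmetric step, where the domain classifier trains on inputs perturbed via the label loss, is omitted, and the function names are ours):

```python
# Perturb the input along the gradient of the *domain* classifier's loss,
# then train the *label* classifier on the perturbed input.
import torch
import torch.nn as nn

def domain_guided_perturb(domain_net, x, d, eps=0.5):
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(domain_net(x), d)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad).detach()

label_net = nn.Linear(32, 5)
domain_net = nn.Linear(32, 3)
x, y, d = torch.randn(8, 32), torch.randint(0, 5, (8,)), torch.randint(0, 3, (8,))
x_aug = domain_guided_perturb(domain_net, x, d)
label_loss = nn.CrossEntropyLoss()(label_net(x_aug), y)  # train on x and x_aug
```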
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
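The MAML update is compact enough to sketch. The version below assumes PyTorch 2.x's torch.func.functional_call and shows a single inner gradient step on one task; it is a schematic single-task step, not the authors' full multi-task training loop.

```python
# One MAML meta-training step on a single task.
import torch
import torch.nn as nn

def maml_step(model, loss_fn, support, query, inner_lr=0.01):
    """support/query = (x, y); returns the outer (meta) loss."""
    xs, ys = support
    xq, yq = query
    params = {n: p for n, p in model.named_parameters()}
    # inner loop: one gradient step on the support set; create_graph=True
    # keeps the graph so the outer loss can backprop through the update
    loss = loss_fn(torch.func.functional_call(model, params, (xs,)), ys)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}
    # outer loss: evaluate the adapted parameters on the query set
    return loss_fn(torch.func.functional_call(model, adapted, (xq,)), yq)

model = nn.Linear(32, 5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
task = ((torch.randn(5, 32), torch.randint(0, 5, (5,))),
        (torch.randn(15, 32), torch.randint(0, 5, (15,))))
outer = maml_step(model, nn.CrossEntropyLoss(), *task)
opt.zero_grad(); outer.backward(); opt.step()
```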
Deep Transfer Learning with Joint Adaptation Networks
Presents JAN, which learns a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion.
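JMMD itself aligns joint distributions across several layers; as background only, the sketch below shows the single-layer Gaussian-kernel MMD it builds on (a biased estimate with a fixed bandwidth, both simplifications).

```python
# Squared MMD between source and target features with a Gaussian kernel.
import torch

def gaussian_mmd(xs, xt, sigma=1.0):
    """Biased estimate of MMD^2 between two feature batches."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(xs, xs).mean() + k(xt, xt).mean() - 2 * k(xs, xt).mean()

src, tgt = torch.randn(16, 64), torch.randn(16, 64) + 0.5
loss = gaussian_mmd(src, tgt)   # add to the task loss to align domains
```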
Episodic Training for Domain Generalization
Using the Visual Decathlon benchmark, demonstrates that episodic domain-generalization (DG) training improves the performance of a general-purpose feature extractor by explicitly training features for robustness to novel problems, showing that DG training can benefit standard practice in computer vision.
Drop to Adapt: Learning Discriminative Features for Unsupervised Domain Adaptation
Proposes Drop to Adapt (DTA), which leverages adversarial dropout to learn strongly discriminative features by enforcing the cluster assumption, and designs objective functions to support robust domain adaptation.
Explain and Improve: Cross-Domain Few-Shot-Learning Using Explanations
Cross-domain few-shot learning (CD-FSL) has attracted much interest recently. In CD-FSL, we need to address not only the issue of limited labeled data in each class but also the domain shift between the source and target domains.
Meta-learning with differentiable closed-form solvers
The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data.
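Since the ridge-regression head has a closed form, it is easy to illustrate: solve W = (XᵀX + λI)⁻¹XᵀY on the support set and apply it to the query set, with every step differentiable so gradients reach the feature extractor. (The paper additionally applies the Woodbury identity for efficiency when the shot count is smaller than the feature dimension; this sketch skips that.)

```python
# Differentiable closed-form ridge regression head for few-shot learning.
import torch

def ridge_head(feat_s, onehot_s, feat_q, lam=1.0):
    d = feat_s.shape[1]
    eye = torch.eye(d, dtype=feat_s.dtype, device=feat_s.device)
    # W = (X^T X + lam I)^{-1} X^T Y, solved on the support set
    w = torch.linalg.solve(feat_s.T @ feat_s + lam * eye, feat_s.T @ onehot_s)
    return feat_q @ w   # query logits

feat_s = torch.randn(25, 64)                      # 5-way 5-shot support features
onehot_s = torch.eye(5).repeat_interleave(5, 0)   # one-hot support labels
feat_q = torch.randn(15, 64)
logits = ridge_head(feat_s, onehot_s, feat_q)
```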
MetaReg: Towards Domain Generalization using Meta-Regularization
Encodes the notion of domain generalization as a novel regularization function within a learning-to-learn (meta-learning) framework; experimental validation on computer vision and natural language datasets indicates that the learned regularizers achieve good cross-domain generalization.
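MetaReg's regularizer is a weighted L1 norm over the task network's parameters, with the weights themselves meta-learned. The bare-bones module below shows just that function; the episodic meta-learning loop that trains phi is omitted, and the names are ours.

```python
# Weighted L1 regularizer with meta-learned per-parameter weights.
import torch
import torch.nn as nn

class WeightedL1Reg(nn.Module):
    def __init__(self, num_params: int):
        super().__init__()
        self.phi = nn.Parameter(torch.zeros(num_params))  # meta-learned weights

    def forward(self, model: nn.Module) -> torch.Tensor:
        flat = torch.cat([p.view(-1) for p in model.parameters()])
        return (self.phi * flat.abs()).sum()

task_net = nn.Linear(32, 5)
reg = WeightedL1Reg(sum(p.numel() for p in task_net.parameters()))
penalty = reg(task_net)   # added to the task loss during inner training
```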