Corpus ID: 215754690

Multi-Source Attention for Unsupervised Domain Adaptation

@inproceedings{Cui2020MultiSourceAF,
  title={Multi-Source Attention for Unsupervised Domain Adaptation},
  author={Xia Cui and Danushka Bollegala},
  booktitle={AACL},
  year={2020}
}
We model source selection in multi-source Unsupervised Domain Adaptation (UDA) as an attention-learning problem, where we learn attention over the sources for each target instance. We first independently learn source-specific classification models and a relatedness map between the source and target domains using pseudo-labelled target-domain instances. Next, we learn domain-attention scores over the sources for aggregating the predictions of the source-specific models. Experimental results on…
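The per-instance aggregation described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the function and variable names (`aggregate_predictions`, `source_probs`, `relatedness`) are hypothetical, and a simple softmax over relatedness scores stands in for the learned domain-attention scores.

```python
import math

def aggregate_predictions(source_probs, relatedness):
    """Combine per-source class probabilities for one target instance.

    source_probs: k lists of c class probabilities, one list per
                  source-specific classifier.
    relatedness:  k unnormalised source-target relatedness scores
                  (e.g. estimated from pseudo-labelled target data).
    """
    # Softmax over sources yields attention weights that sum to 1.
    m = max(relatedness)
    exps = [math.exp(r - m) for r in relatedness]
    z = sum(exps)
    attn = [e / z for e in exps]
    # Attention-weighted average of the source predictions.
    num_classes = len(source_probs[0])
    return [sum(attn[k] * source_probs[k][c] for k in range(len(attn)))
            for c in range(num_classes)]

probs = [[0.9, 0.1],   # source 1: confident in class 0
         [0.2, 0.8],   # source 2: prefers class 1
         [0.6, 0.4]]   # source 3: weakly prefers class 0
scores = [2.0, 0.1, 0.5]  # source 1 judged most related to this target
combined = aggregate_predictions(probs, scores)
```

Because the weights are computed per target instance, a different instance with different relatedness scores would lean on a different mix of source classifiers.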

Figures and Tables from this paper

A Teacher-Student Approach to Cross-Domain Transfer Learning with Multi-level Attention

TLDR
This work proposes a novel teacher-student approach with multi-task learning that transfers information from the source domains to the target domain, with weights determined by an attention mechanism at both the instance level and the domain level.

FRuDA: Framework for Distributed Adversarial Domain Adaptation

TLDR
Evaluation of FRuDA with five image and speech datasets shows that it can boost target domain accuracy by up to 50% and improve the training efficiency of adversarial UDA by at least 11 times.

Multiple-Source Domain Adaptation via Coordinated Domain Encoders and Paired Classifiers

TLDR
A novel multiple-source unsupervised model for text classification under domain shift that employs a probabilistic heuristic to infer the error rate in the target domain in order to pair source classifiers, and is the top-performing approach in this setting.

References

SHOWING 1-10 OF 46 REFERENCES

Multi-Source Domain Adaptation with Mixture of Experts

TLDR
A mixture-of-experts approach for unsupervised domain adaptation from multiple sources that explicitly captures the relationship between a target example and the different source domains, expressed by a point-to-set metric.

Sentiment Domain Adaptation with Multiple Sources

TLDR
This paper proposes a new domain adaptation approach that exploits sentiment knowledge from multiple source domains using multi-task learning, and demonstrates its effectiveness in improving cross-domain sentiment classification performance.

Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach

TLDR
A deep learning approach is proposed which learns to extract a meaningful representation for each review in an unsupervised fashion and clearly outperforms state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products.

Analysis of Representations for Domain Adaptation

TLDR
The theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set.

TSP: Learning Task-Specific Pivots for Unsupervised Domain Adaptation

TLDR
This work proposes a method for learning Task-Specific Pivots (TSPs) in a systematic manner by considering both the labelled and unlabelled data available from both domains, and evaluates TSPs against pivots selected using alternative methods in two cross-domain sentiment classification applications.

Adversarial Multiple Source Domain Adaptation

TLDR
This paper proposes multisource domain adversarial networks (MDAN) that approach domain adaptation by optimizing task-adaptive generalization bounds and conducts extensive experiments showing superior adaptation performance on both classification and regression problems: sentiment analysis, digit classification, and vehicle counting.

Domain Attention with an Ensemble of Experts

TLDR
This work describes a solution based on attending to an ensemble of domain experts, allowing adaptation without re-estimating a global model from scratch each time a new domain with potentially new intents and slots is added.

Strong Baselines for Neural Semi-Supervised Learning under Domain Shift

TLDR
This paper re-evaluates classic general-purpose bootstrapping approaches for neural networks under domain shift against recent neural approaches, and proposes a novel multi-task tri-training method that reduces the time and space complexity of classic tri-training.

Multi-Source Domain Adaptation for Text Classification via DistanceNet-Bandits

TLDR
This paper studies various distance-based measures, in the context of NLP tasks, that characterize the dissimilarity between domains based on sample estimates, and develops a DistanceNet model that uses these distance measures as an additional loss function minimized jointly with the task loss, so as to achieve better unsupervised domain adaptation.
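The joint objective this summary describes can be sketched minimally. This is an illustrative sketch only: the names (`mean_feature_distance`, `joint_loss`, `lam`) are hypothetical, and a simple mean-feature L2 distance stands in for the richer distance measures the paper actually studies.

```python
import math

def mean_feature_distance(src_feats, tgt_feats):
    """L2 distance between the mean feature vectors of two domains
    (a crude stand-in for measures like MMD studied in the paper)."""
    dim = len(src_feats[0])
    mu_s = [sum(x[d] for x in src_feats) / len(src_feats) for d in range(dim)]
    mu_t = [sum(x[d] for x in tgt_feats) / len(tgt_feats) for d in range(dim)]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(mu_s, mu_t)))

def joint_loss(task_loss, src_feats, tgt_feats, lam=0.1):
    # Minimising this jointly pushes the feature extractor toward
    # representations on which source and target look similar.
    return task_loss + lam * mean_feature_distance(src_feats, tgt_feats)

src = [[1.0, 2.0], [3.0, 4.0]]
tgt = [[1.0, 2.0], [3.0, 4.0]]
loss = joint_loss(1.0, src, tgt)  # identical domains: distance term is 0
```

When the two domains' features diverge, the distance term grows and the combined loss penalises the divergence alongside the task error.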

End-to-End Adversarial Memory Network for Cross-domain Sentiment Classification

TLDR
An end-to-end Adversarial Memory Network (AMN) is introduced for cross-domain sentiment classification that automatically captures the pivots using an attention mechanism and significantly outperforms state-of-the-art methods.