Neural Structural Correspondence Learning for Domain Adaptation

@article{Ziser2017NeuralSC,
  title={Neural Structural Correspondence Learning for Domain Adaptation},
  author={Yftah Ziser and Roi Reichart},
  journal={ArXiv},
  year={2017},
  volume={abs/1610.01588}
}
Domain adaptation, adapting models from domains rich in labeled training data to domains poor in such data, is a fundamental NLP challenge. We introduce a neural network model that marries together ideas from two prominent strands of research on domain adaptation through representation learning: structural correspondence learning (SCL, (Blitzer et al., 2006)) and autoencoder neural networks. Particularly, our model is a three-layer neural network that learns to encode the nonpivot features of… 
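
The abstract is cut off above, but the described model (a three-layer, autoencoder-style network whose hidden layer encodes the non-pivot features and, following the SCL idea, predicts the document's pivot features from that encoding) can be illustrated with a minimal sketch. Everything below (layer sizes, sigmoid activations, the random initialization) is an illustrative assumption, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: bag-of-ngram non-pivot features in, pivot features out.
n_nonpivot, n_pivot, hidden = 5000, 500, 100
W_enc = rng.normal(scale=0.01, size=(hidden, n_nonpivot))
W_dec = rng.normal(scale=0.01, size=(n_pivot, hidden))

def forward(x_nonpivot):
    """x_nonpivot: binary vector of a document's non-pivot features."""
    h = sigmoid(W_enc @ x_nonpivot)    # low-dimensional encoding
    pivot_probs = sigmoid(W_dec @ h)   # predicted pivot-feature occurrences
    return h, pivot_probs

# Training (not shown) would minimize cross-entropy between pivot_probs and the
# document's actual pivot features on unlabeled source and target data; the
# learned encoding h can then augment the features fed to the task classifier.
x = rng.integers(0, 2, size=n_nonpivot)
h, pivot_probs = forward(x)
```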

Citations

Neural Unsupervised Domain Adaptation in NLP—A Survey

TLDR
This survey reviews neural unsupervised domain adaptation techniques that do not require labeled target-domain data, revisits the notion of domain, and uncovers a bias in the types of Natural Language Processing tasks that have received the most attention.

Projecting Embeddings for Domain Adaption: Joint Modeling of Sentiment Analysis in Diverse Domains

TLDR
This work provides a novel perspective by casting the domain adaptation problem as an embedding projection task: the model takes as input two mono-domain embedding spaces and learns to project them to a bi-domain space, which is jointly optimized to project across domains and to predict sentiment.

Pivot Based Language Modeling for Improved Neural Domain Adaptation

TLDR
The Pivot Based Language Model is presented, a representation learning model that marries pivot-based and neural network modeling in a structure-aware manner and can naturally feed structure-aware text classifiers such as LSTMs and CNNs.

Unsupervised Domain Adaptation for Natural Language Processing

TLDR
This dissertation proposes a new BERT-based approach to domain adaptation, which consists of continued pretraining through masked language modeling on data derived from the target domain, followed by a final fine-tuning step on labeled source data.
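
A hedged sketch of this two-step recipe (continued masked-language-model pretraining on target-domain text, then fine-tuning on labeled source data) using the Hugging Face transformers API. The file names, the choice of bert-base-uncased, and all hyperparameters are illustrative assumptions, not values from the dissertation.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoModelForSequenceClassification,
                          AutoTokenizer, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

# Step 1: continue MLM pretraining on unlabeled target-domain text
# ("target_domain.txt" is a hypothetical one-document-per-line file).
target = load_dataset("text", data_files={"train": "target_domain.txt"})["train"]
target = target.map(lambda b: tok(b["text"], truncation=True, max_length=128),
                    batched=True, remove_columns=["text"])
mlm_trainer = Trainer(
    model=AutoModelForMaskedLM.from_pretrained("bert-base-uncased"),
    args=TrainingArguments("mlm_out", num_train_epochs=3),
    train_dataset=target,
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
mlm_trainer.train()
mlm_trainer.save_model("bert-target-adapted")

# Step 2: fine-tune the domain-adapted encoder on labeled *source* data
# ("source_labeled.csv" is a hypothetical file with "text" and "label" columns).
source = load_dataset("csv", data_files={"train": "source_labeled.csv"})["train"]
source = source.map(lambda b: tok(b["text"], truncation=True, max_length=128), batched=True)
clf_trainer = Trainer(
    model=AutoModelForSequenceClassification.from_pretrained("bert-target-adapted", num_labels=2),
    args=TrainingArguments("clf_out", num_train_epochs=3),
    train_dataset=source,
)
clf_trainer.train()
```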

Knowledge distillation for BERT unsupervised domain adaptation

TLDR
This work proposes a simple but effective unsupervised domain adaptation method, adversarial adaptation with distillation (AAD), which combines the adversarial discriminative domain adaptation (ADDA) framework with knowledge distillation.

Transfer Learning in Natural Language Processing

TLDR
An overview of modern transfer learning methods in NLP is presented, covering how models are pre-trained, what information their learned representations capture, and examples and case studies of how these models can be integrated and adapted for downstream NLP tasks.

Deep Pivot-Based Modeling for Cross-language Cross-domain Transfer with Minimal Guidance

TLDR
This work proposes a framework that builds on pivot-based learning, structure-aware Deep Neural Networks, and bilingual word embeddings, with the goal of training a model on labeled data from one (language, domain) pair so that it can be effectively applied to another (language, domain) pair.

The ℓ2,1-Norm Stacked Robust Autoencoders via Adaptation Regularization for Domain Adaptation

TLDR
This work proposes a novel algorithm termed ℓ2,1-norm stacked robust autoencoders via adaptation regularization (SRAAR) to learn better feature representations for domain adaptation; it incorporates an effective manifold regularization into the objective function to preserve the local geometric structure of the data.

Example-based Hypernetworks for Out-of-Distribution Generalization

TLDR
This paper addresses the problem of multi-source adaptation to unknown domains given labeled data from multiple source domains and presents an algorithmic framework based on example-based hypernetwork adaptation; this is the first time hypernetworks are applied to domain adaptation, or used in an example-based manner, in NLP.

Transformer Based Multi-Source Domain Adaptation

TLDR
It is found that domain adversarial training has an effect on the learned representations of these models while having little effect on their performance, suggesting that large transformer-based models are already relatively robust across domains.
...

References

SHOWING 1-10 OF 39 REFERENCES

Domain-Adversarial Training of Neural Networks

TLDR
A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions, is proposed; it can be realized in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
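
The gradient reversal layer mentioned above is simple enough to sketch. The snippet below is an assumed, minimal PyTorch rendering of the idea (identity in the forward pass, negated and scaled gradient in the backward pass), not the authors' released code.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the incoming gradient by
    -lambd in the backward pass, so the shared feature extractor is pushed
    to *confuse* the domain classifier attached after this layer."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # no gradient w.r.t. lambd

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage: features -> task head (ordinary loss), and
#        grad_reverse(features) -> domain-classifier head (domain loss);
# both losses are summed and backpropagated as usual.
```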

Unsupervised Domain Adaptation by Backpropagation

TLDR
The method performs very well in a series of image classification experiments, achieving an adaptation effect in the presence of large domain shifts and outperforming the previous state of the art on the Office datasets.

Domain Adaptation with Structural Correspondence Learning

TLDR
This work introduces structural correspondence learning to automatically induce correspondences among features from different domains in order to adapt existing models from a resource-rich source domain to a resource-poor target domain.
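
As a rough illustration of the SCL recipe (per-pivot linear predictors trained on unlabeled data from both domains, then an SVD of the stacked predictor weights to obtain a shared projection), here is a minimal sketch. The least-squares fit stands in for the binary classifiers used in the paper, and all names and sizes are assumptions.

```python
import numpy as np

def scl_projection(X_nonpivot, pivot_presence, k=50):
    """X_nonpivot: (n_docs, n_nonpivot) feature counts from both domains (unlabeled);
    pivot_presence: (n_docs, n_pivots) 0/1 indicators of the chosen pivot features."""
    n_nonpivot = X_nonpivot.shape[1]
    n_pivots = pivot_presence.shape[1]
    W = np.zeros((n_nonpivot, n_pivots))
    for j in range(n_pivots):
        # Least-squares stand-in for a per-pivot linear classifier.
        w, *_ = np.linalg.lstsq(X_nonpivot, pivot_presence[:, j], rcond=None)
        W[:, j] = w
    # Top-k left singular vectors give the shared cross-domain projection.
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    theta = U[:, :k].T                  # (k, n_nonpivot)
    return theta                        # augmented features: theta @ x_nonpivot
```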

Fast Easy Unsupervised Domain Adaptation with Marginalized Structured Dropout

TLDR
This work proposes a new technique called marginalized structured dropout, which exploits feature structure to obtain a remarkably simple and efficient feature projection in the context of fine-grained part-of-speech tagging on a dataset of historical Portuguese.

A Domain Adaptation Regularization for Denoising Autoencoders

TLDR
This work suggests a more appropriate regularization for denoising autoencoders for domain adaptation, which remains unsupervised and can be computed in closed form, and evaluates it on standard text classification adaptation tasks.

Marginalized Denoising Autoencoders for Domain Adaptation

TLDR
The mSDA approach marginalizes out the noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters; in fact, they are computed in closed form, speeding up SDAs by two orders of magnitude.
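
The closed-form computation referred to here can be sketched for a single marginalized denoising layer. The bias feature and the stacking of multiple layers are omitted, and the small ridge term is added only for numerical stability, so treat this as an assumed simplification rather than the paper's exact algorithm.

```python
import numpy as np

def msda_layer(X, p=0.5, reg=1e-5):
    """One marginalized denoising layer (simplified: no bias feature).
    X: (d, n) data matrix; p: probability that a feature is zeroed out."""
    d = X.shape[0]
    q = np.full(d, 1.0 - p)                      # survival probability per feature
    S = X @ X.T
    Q = S * np.outer(q, q)                       # E[corrupted x corrupted^T], off-diagonal
    np.fill_diagonal(Q, q * np.diag(S))          # diagonal uses a single survival factor
    P = S * q[np.newaxis, :]                     # E[clean x corrupted^T]
    W = P @ np.linalg.inv(Q + reg * np.eye(d))   # closed-form reconstruction mapping
    return np.tanh(W @ X)                        # hidden representation for the next layer
```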

Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach

TLDR
A deep learning approach is proposed which learns to extract a meaningful representation for each review in an unsupervised fashion and clearly outperforms state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products.

Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification

TLDR
This work extends to sentiment classification the recently-proposed structural correspondence learning (SCL) algorithm, reducing the relative error due to adaptation between domains by an average of 30% over the original SCL algorithm and 46% over a supervised baseline.

Unsupervised Cross-Domain Word Representation Learning

TLDR
The proposed method significantly outperforms competitive baselines, including the state-of-the-art domain-insensitive word representations, and reports the best sentiment classification accuracies for all domain pairs in a benchmark dataset.

Learning Sentence Embeddings with Auxiliary Tasks for Cross-Domain Sentiment Classification

TLDR
This paper borrows the idea from Structural Correspondence Learning and uses two auxiliary tasks to help induce a sentence embedding that supposedly works well across domains for sentiment classification, and proposes to jointly learn this sentence embedding together with the sentiment classifier itself.