Multinomial Adversarial Networks for Multi-Domain Text Classification

@inproceedings{Chen2018MultinomialAN,
  title={Multinomial Adversarial Networks for Multi-Domain Text Classification},
  author={Xilun Chen and Claire Cardie},
  booktitle={Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
  year={2018}
}
Many text classification tasks are known to be highly domain-dependent. Unfortunately, the availability of training data can vary drastically across domains. Worse still, for some domains there may not be any annotated data at all. In this work, we propose a multinomial adversarial network (MAN) to tackle this real-world problem of multi-domain text classification (MDTC), in which labeled data may exist for multiple domains, but in insufficient amounts to train effective classifiers for one or more of the domains …
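A minimal sketch of the MAN setup described above (the NLL variant), assuming a PyTorch implementation; module names, sizes, and the trade-off weight are illustrative and follow the paper only in outline:

```python
# PyTorch sketch of a multinomial adversarial network: a shared extractor,
# per-domain private extractors, a classifier over both, and a multinomial
# discriminator trained to identify the domain from shared features alone.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_DOMAINS, FEAT_DIM, N_CLASSES = 4, 128, 2

shared = nn.Sequential(nn.Linear(300, FEAT_DIM), nn.ReLU())      # F_s
private = nn.ModuleList(                                         # one F_d per domain
    [nn.Sequential(nn.Linear(300, FEAT_DIM), nn.ReLU()) for _ in range(N_DOMAINS)])
classifier = nn.Linear(2 * FEAT_DIM, N_CLASSES)                  # C
discriminator = nn.Linear(FEAT_DIM, N_DOMAINS)                   # D (multinomial)

def d_step(x, d):
    """Train D to identify the domain of shared features (standard NLL)."""
    logits = discriminator(shared(x).detach())
    return F.cross_entropy(logits, torch.full((x.size(0),), d, dtype=torch.long))

def main_step(x, y, d):
    """Train F_s/F_d/C: classify correctly while making D's job hard."""
    fs, fd = shared(x), private[d](x)
    clf_loss = F.cross_entropy(classifier(torch.cat([fs, fd], dim=1)), y)
    # Adversarial term: maximize D's loss on the shared features
    adv_loss = -F.cross_entropy(discriminator(fs),
                                torch.full((x.size(0),), d, dtype=torch.long))
    return clf_loss + 0.1 * adv_loss  # 0.1 is an assumed trade-off weight
```

Training alternates between d_step (updating only the discriminator) and main_step (updating extractors and classifier), so the shared features are pushed toward domain invariance while staying useful for classification.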

Citations

Dual Adversarial Co-Learning for Multi-Domain Text Classification

The approach learns shared-private networks for feature extraction and deploys dual adversarial regularizations to align features across different domains and between labeled and unlabeled data simultaneously under a discrepancy based co-learning framework, aiming to improve the classifiers' generalization capacity with the learned features.

Co-Regularized Adversarial Learning for Multi-Domain Text Classification

This work proposes a co-regularized adversarial learning (CRAL) mechanism for MDTC that constructs two diverse shared latent spaces, performs domain alignment in each of them, and punishes the disagreements of these two alignments with respect to the predictions on unlabeled data.
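The disagreement penalty might look like the following sketch; the symmetric-KL form over the two classifiers' predictions on unlabeled data is an illustrative assumption, not necessarily CRAL's exact loss:

```python
# Sketch of a prediction-disagreement penalty between two classifiers that
# operate on two separately aligned latent spaces.
import torch.nn.functional as F

def disagreement_loss(logits1, logits2):
    p1, p2 = F.softmax(logits1, dim=1), F.softmax(logits2, dim=1)
    # Symmetric KL between the two predictive distributions (assumed form)
    return 0.5 * (F.kl_div(p1.log(), p2, reduction="batchmean")
                  + F.kl_div(p2.log(), p1, reduction="batchmean"))
```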

Mixup Regularized Adversarial Networks for Multi-Domain Text Classification

  • Yuan Wu, D. Inkpen, Ahmed El-Roby
  • Computer Science
    ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2021
The domain and category mixup regularizations are introduced to enrich the intrinsic features in the shared latent space and enforce consistent predictions in-between training instances such that the learned features can be more domain-invariant and discriminative.
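The underlying mixup operation interpolates instance pairs and their labels; a minimal sketch (the Beta parameter is an assumed default):

```python
# Mixup: convex combinations of instance pairs and their one-hot labels.
import torch

def mixup(x, y_onehot, alpha=0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))          # random pairing within the batch
    return (lam * x + (1 - lam) * x[idx],
            lam * y_onehot + (1 - lam) * y_onehot[idx])
```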

Conditional Adversarial Networks for Multi-Domain Text Classification

This paper provides theoretical analysis for the CAN framework, showing that CAN's objective is equivalent to minimizing the total divergence among multiple joint distributions of shared features and label predictions, making CAN a theoretically sound adversarial network that discriminates over multiple distributions.
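One common way to let a discriminator see the joint of features and predictions is multilinear (outer-product) conditioning, as in CDAN; whether CAN uses exactly this form is an assumption made here for illustration:

```python
# Build one joint representation per example from shared features and
# class probabilities via an outer product, then flatten for the
# domain discriminator.
import torch

def joint_input(features, class_probs):
    # (B, d) x (B, c) -> (B, c, d) -> (B, c*d)
    outer = torch.bmm(class_probs.unsqueeze(2), features.unsqueeze(1))
    return outer.view(features.size(0), -1)
```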

A Label Proportions Estimation Technique for Adversarial Domain Adaptation in Text Classification

This study focuses on unsupervised domain adaptation of text classification with label shift and introduces a domain adversarial network with label proportions estimation (DAN-LPE) framework.

Multi-domain Transfer Learning for Text Classification

A generic dual-channel multi-task learning framework for multi-domain text classification, which can capture global-shared, local-shared, and private features simultaneously, and achieves better results than five state-of-the-art techniques.

A Curriculum Learning Approach for Multi-domain Text Classification Using Keyword Weight Ranking

The experimental results on the Amazon review and FDU-MTL datasets show that the curriculum learning strategy effectively improves the performance of multi-domain text classification models based on adversarial learning and outperforms state-of-the-art methods.

Maximum Batch Frobenius Norm for Multi-Domain Text Classification

  • Yuan Wu, D. Inkpen, Ahmed El-Roby
  • Computer Science
    ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2022
A maximum batch Frobenius norm (MBF) method is proposed to boost the feature discriminability for MDTC and experiments show that this approach can effectively advance state-of-the-art performance.
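A sketch of a batch-Frobenius-norm term: maximizing the F-norm of the batch prediction matrix pushes softmax outputs away from uniform and toward more confident, discriminable predictions (the scaling and weighting here are assumptions):

```python
# Negated so that minimizing this term maximizes the Frobenius norm of
# the (B, C) batch prediction matrix.
import torch

def mbf_loss(logits):
    probs = torch.softmax(logits, dim=1)
    return -torch.norm(probs, p="fro") / probs.size(0) ** 0.5
```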

Discovering and Controlling for Latent Confounds in Text Classification Using Adversarial Domain Adaptation

The approach first uses neural network-based topic modeling to discover potential confounds that differ between training and testing data, then uses adversarial training to fit a classification model that is invariant to these discovered confounds.

A Two-Stage Multi-task Learning-Based Method for Selective Unsupervised Domain Adaptation

A two-stage domain adaptation framework is proposed for MS-UDA, which not only outperforms unsupervised state-of-the-art competitors but also comes close to supervised methods, even surpassing them on some tasks.
...

References

Showing 1-10 of 35 references

Adversarial Multi-task Learning for Text Classification

This paper proposes an adversarial multi-task learning framework that prevents the shared and private latent feature spaces from interfering with each other, and shows that the shared knowledge learned can be regarded as off-the-shelf knowledge and easily transferred to new tasks.
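A common mechanism for keeping the two spaces apart is an orthogonality (difference) penalty on the shared and private feature matrices; a sketch, assuming the squared-Frobenius form used in shared-private models:

```python
# Penalize overlap between shared and private representations: the penalty
# is zero when the two batches of features are orthogonal.
import torch

def ortho_loss(shared_feats, private_feats):
    # shared_feats, private_feats: (B, d) batches from the two extractors
    return torch.norm(shared_feats.t() @ private_feats, p="fro") ** 2
```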

Adversarial Deep Averaging Networks for Cross-Lingual Sentiment Classification

An Adversarial Deep Averaging Network (ADAN) is proposed to transfer the knowledge learned from labeled data on a resource-rich source language to low-resource languages where only unlabeled data exist.
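A standard building block for this kind of adversarial transfer is a gradient reversal layer (ADAN's exact training procedure may differ): identity on the forward pass, negated and scaled gradient on the backward pass.

```python
# Gradient reversal layer: features flow forward unchanged, but gradients
# from the language/domain discriminator are flipped, so the feature
# extractor learns to confuse it.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # no gradient for lambd
```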

Multiple Source Domain Adaptation with Adversarial Training of Neural Networks

A new generalization bound for domain adaptation is proposed for the setting of multiple source domains with labeled instances and one target domain with unlabeled instances, which requires neither expert knowledge about the target distribution nor the optimal combination rule for multiple source domains.

Domain Separation Networks

The novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process.

Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification

This work extends to sentiment classification the recently-proposed structural correspondence learning (SCL) algorithm, reducing the relative error due to adaptation between domains by an average of 30% over the original SCL algorithm and 46% over a supervised baseline.

Are GANs Created Equal? A Large-Scale Study

A neutral, multi-faceted large-scale empirical study on state-of-the-art models and evaluation measures finds that most models can reach similar scores with enough hyperparameter optimization and random restarts, suggesting that improvements can arise more from a higher computational budget and tuning than from fundamental algorithmic changes.

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
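The resulting minimax objective, restated for reference:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\!\left[\log\left(1 - D(G(z))\right)\right]
```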

Least Squares Generative Adversarial Networks

This paper proposes the Least Squares Generative Adversarial Networks (LSGANs), which adopt the least squares loss function for the discriminator, and shows that minimizing the LSGAN objective amounts to minimizing the Pearson χ² divergence.
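The least squares objectives, with a, b, c the target codings for fake data, real data, and the value G wants D to assign its samples; choosing a = -1, b = 1, c = 0 gives the Pearson χ² result:

```latex
\min_D \; \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\text{data}}}\!\left[(D(x) - b)^2\right]
        + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[(D(G(z)) - a)^2\right],
\qquad
\min_G \; \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[(D(G(z)) - c)^2\right]
```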

Marginalized Denoising Autoencoders for Domain Adaptation

The mSDA approach marginalizes out the noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters; in fact, they are computed in closed form, which speeds up SDAs by two orders of magnitude.
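A NumPy sketch of the closed-form solve for one marginalized denoising layer; this follows the published formulas in outline, with regularization and edge cases simplified:

```python
# One mDA layer: W is obtained from corruption-marginalized second-order
# statistics rather than by gradient descent.
import numpy as np

def mda_layer(X, p):
    """X: (d, n) data matrix (features x examples); p: corruption prob."""
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])        # append a bias row
    q = np.full(d + 1, 1.0 - p)
    q[-1] = 1.0                                  # the bias is never corrupted
    S = Xb @ Xb.T                                # scatter matrix
    Q = S * np.outer(q, q)                       # E[Q]_{ij} = S_{ij} q_i q_j, i != j
    np.fill_diagonal(Q, q * np.diag(S))          # diagonal uses q_i, not q_i^2
    P = S[:d, :] * q                             # E[P]_{ij} = S_{ij} q_j
    W = np.linalg.solve(Q.T, P.T).T              # closed form: W = P Q^{-1}
    return np.tanh(W @ Xb)                       # hidden representation
```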

Deep Unordered Composition Rivals Syntactic Methods for Text Classification

This work presents a simple deep neural network that competes with and, in some cases, outperforms such models on sentiment analysis and factoid question answering tasks while taking only a fraction of the training time.
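A sketch of such a deep averaging network: average the word embeddings of an input, then pass the average through a small feed-forward stack. Dimensions and depth below are illustrative.

```python
# Deep averaging network: unordered composition (mean of embeddings)
# followed by a feed-forward classifier.
import torch
import torch.nn as nn

class DAN(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=300, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.ff = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes))

    def forward(self, token_ids):                # (B, T) integer ids
        return self.ff(self.emb(token_ids).mean(dim=1))
```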