Topics to Avoid: Demoting Latent Confounds in Text Classification

@inproceedings{Kumar2019TopicsTA,
  title={Topics to Avoid: Demoting Latent Confounds in Text Classification},
  author={Sachin Kumar and Shuly Wintner and Noah A. Smith and Yulia Tsvetkov},
  booktitle={EMNLP-IJCNLP},
  year={2019}
}
Despite impressive performance on many text classification tasks, deep neural networks tend to learn frequent superficial patterns that are specific to the training data and do not always generalize well. [...] We propose a method that represents the latent topical confounds, and a model that "unlearns" confounding features by predicting both the label of the input text and the confound; the two predictors are trained adversarially, in an alternating fashion, to learn a text representation that [...]
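The alternating adversarial scheme described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a linear encoder with two logistic heads on toy data, and the names (`W`, `u`, `v`, the adversarial weight `lam`) are illustrative. The confound head is fit to the frozen representation, while the encoder and label head are updated to predict the task label and *raise* the confound head's loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the task label depends on the first 5 features,
# the latent confound (e.g. a topic) on the last 5.
n, d, k = 200, 10, 4
X = rng.normal(size=(n, d))
y = (X[:, :5].sum(axis=1) > 0).astype(float)   # task label
c = (X[:, 5:].sum(axis=1) > 0).astype(float)   # latent confound

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def bce(p, t):
    eps = 1e-9
    return -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))

# Shared linear encoder plus two logistic heads.
W = rng.normal(scale=0.1, size=(k, d))   # encoder
u = rng.normal(scale=0.1, size=k)        # label head
v = rng.normal(scale=0.1, size=k)        # confound head

lr, lam = 0.1, 0.1                       # lam: adversarial weight (illustrative)
loss0 = bce(sigmoid(X @ W.T @ u), y)

for step in range(300):
    H = X @ W.T                          # representations
    p_y = sigmoid(H @ u)                 # label predictions
    p_c = sigmoid(H @ v)                 # confound predictions
    g_y = (p_y - y) / n                  # dL_label / dlogit
    g_c = (p_c - c) / n                  # dL_conf  / dlogit

    # (1) Train the confound head on the frozen representation.
    v -= lr * (g_c @ H)

    # (2) Train encoder + label head: fit the label while *increasing*
    #     the confound head's loss -- the adversarial "unlearning" step.
    u -= lr * (g_y @ H)
    W -= lr * (np.outer(u, g_y @ X) - lam * np.outer(v, g_c @ X))

loss1 = bce(sigmoid(X @ W.T @ u), y)
```

Because the label and confound here depend on disjoint features, the encoder can satisfy both objectives: label loss falls while the representation carries less confound signal. The paper's model replaces the linear encoder with a deep text encoder and infers the topical confounds latently.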
