Semi-Supervised Sequence Modeling with Cross-View Training

Kevin Clark, Minh-Thang Luong, Christopher D. Manning, Quoc V. Le
Unsupervised representation learning algorithms such as word2vec and ELMo improve the accuracy of many supervised NLP models, mainly because they can take advantage of large amounts of unlabeled text. However, the supervised models only learn from task-specific labeled data during the main training phase. We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data…
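The core idea sketched in the abstract can be illustrated with a minimal, framework-free loss computation. This is an assumption-laden schematic, not the authors' implementation: a supervised cross-entropy term on labeled examples is combined with a consistency term that pushes auxiliary prediction modules (which would see restricted views of each input) toward the full-view primary prediction on unlabeled examples. All function names here are hypothetical.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(probs, label):
    # Negative log-likelihood of the gold label.
    return -math.log(probs[label])

def kl_divergence(p, q):
    # KL(p || q): distance of an auxiliary view's prediction q
    # from the primary module's prediction p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def cvt_loss(primary_logits_labeled, labels,
             primary_logits_unlabeled, auxiliary_logits_unlabeled):
    # Supervised term: the primary module learns from labeled data.
    supervised = sum(cross_entropy(softmax(z), y)
                     for z, y in zip(primary_logits_labeled, labels))
    # Consistency term: on unlabeled data, each auxiliary module is
    # trained to match the primary prediction, which acts as a fixed
    # target (in practice its gradient would be stopped).
    consistency = 0.0
    for z_primary, views in zip(primary_logits_unlabeled,
                                auxiliary_logits_unlabeled):
        target = softmax(z_primary)
        for z_aux in views:
            consistency += kl_divergence(target, softmax(z_aux))
    return supervised + consistency
```

When every auxiliary view already agrees with the primary prediction, the consistency term vanishes and only the supervised loss remains; disagreement on unlabeled inputs adds training signal without requiring labels.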