Discourse-Aware Neural Rewards for Coherent Text Generation

@inproceedings{Bosselut2018DiscourseAwareNR,
  title={Discourse-Aware Neural Rewards for Coherent Text Generation},
  author={Antoine Bosselut and Asli Çelikyilmaz and Xiaodong He and Jianfeng Gao and Po-Sen Huang and Yejin Choi},
  booktitle={NAACL-HLT},
  year={2018}
}
In this paper, we investigate the use of discourse-aware rewards with reinforcement learning to guide a model to generate long, coherent text. In particular, we propose to learn neural rewards to model cross-sentence ordering as a means to approximate desired discourse structure. Empirical results demonstrate that a generator trained with the learned reward produces more coherent and less repetitive text than models trained with cross-entropy or with reinforcement learning with commonly used…
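
The training setup the abstract describes (a generator optimized with reinforcement learning against a reward that scores discourse structure) can be illustrated with a toy sketch. This is not the paper's implementation: the hand-crafted ordering reward below is a stand-in for the learned neural reward, and the per-position categorical "generator", `discourse_reward`, and `train` are all illustrative names and assumptions.

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for the learned neural reward: score a sequence of
# toy "sentence" tokens by the fraction of adjacent pairs in increasing
# order, mimicking a cross-sentence ordering score. The paper instead
# learns this reward with a neural model trained on sentence ordering.
def discourse_reward(seq):
    if len(seq) < 2:
        return 0.0
    return sum(a < b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def sample(probs):
    # Draw one token index from a categorical distribution.
    r, acc = random.random(), 0.0
    for tok, p in enumerate(probs):
        acc += p
        if r <= acc:
            return tok
    return len(probs) - 1

# REINFORCE with a moving-average baseline: the toy generator (one
# categorical distribution per output position) is nudged toward
# sequences that the reward function scores highly.
def train(steps=2000, lr=0.3, length=3, vocab=3):
    logits = [[0.0] * vocab for _ in range(length)]
    baseline = 0.0
    for _ in range(steps):
        probs = [softmax(row) for row in logits]
        seq = [sample(p) for p in probs]
        reward = discourse_reward(seq)
        advantage = reward - baseline
        baseline = 0.9 * baseline + 0.1 * reward
        for pos in range(length):
            for tok in range(vocab):
                indicator = 1.0 if seq[pos] == tok else 0.0
                # Gradient of log pi(seq[pos]) w.r.t. this logit is
                # (indicator - prob); scale it by the advantage.
                logits[pos][tok] += lr * advantage * (indicator - probs[pos][tok])
    return logits
```

In this sketch the reward is maximized by sequences whose "sentences" appear in order, which loosely mirrors using cross-sentence ordering as a proxy for coherent discourse structure; the real system replaces the toy generator with a neural text generator and the hand-crafted scorer with a learned reward network.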
