Learning to Write with Cooperative Discriminators

@inproceedings{Holtzman2018LearningTW,
  title={Learning to Write with Cooperative Discriminators},
  author={Ari Holtzman and Jan Buys and Maxwell Forbes and Antoine Bosselut and David Golub and Yejin Choi},
  booktitle={ACL},
  year={2018}
}
Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models, but when used to generate natural language their output tends to be overly generic, repetitive, and self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive enough to capture the notion of communicative goals described by linguistic principles such as Grice's Maxims. We propose learning a mixture of multiple…
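
The decoding objective hinted at in the abstract, a base language model score adjusted by a weighted combination of discriminator scores, can be sketched as below. This is a minimal illustrative sketch under assumed interfaces; lm_logprob, discriminators, and weights are hypothetical placeholders, not the authors' implementation, which learns the mixture weights and applies the combined score during beam search.

# Minimal sketch (not the paper's code): re-rank candidate continuations by
# mixing a language model's log-probability with weighted discriminator scores.
from typing import Callable, List, Tuple

Scorer = Callable[[str, str], float]  # (context, continuation) -> scalar score

def combined_score(context: str, continuation: str,
                   lm_logprob: Scorer,            # log P_LM(continuation | context)
                   discriminators: List[Scorer],  # each scores a communicative goal
                   weights: List[float]) -> float:
    """LM log-probability plus a weighted sum of discriminator scores."""
    score = lm_logprob(context, continuation)
    for w, disc in zip(weights, discriminators):
        score += w * disc(context, continuation)
    return score

def rerank(context: str, candidates: List[str],
           lm_logprob: Scorer,
           discriminators: List[Scorer],
           weights: List[float]) -> List[Tuple[str, float]]:
    """Order beam-search candidates by the combined objective, best first."""
    scored = [(c, combined_score(context, c, lm_logprob, discriminators, weights))
              for c in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)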

Citations

Publications citing this paper.

Learning to Explain: Answering Why-Questions via Rephrasing

Neural Text Generation with Unlikelihood Training

Sean Welleck, Ilia Kulikov, +3 authors Jason Weston. arXiv, 2019.

References

Publications referenced by this paper.

Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books

IEEE International Conference on Computer Vision (ICCV), 2015.

A Joint Speaker-Listener-Reinforcer Model for Referring Expressions

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.