Corpus ID: 203902487

Classification As Decoder: Trading Flexibility For Control In Neural Dialogue

@article{Shleifer2019ClassificationAD,
  title={Classification As Decoder: Trading Flexibility For Control In Neural Dialogue},
  author={Sam Shleifer and M. Chablani and Namit Katariya and A. Kannan and X. Amatriain},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.03476}
}
Generative seq2seq dialogue systems are trained to predict the next word in dialogues that have already occurred. They can learn from large unlabeled conversation datasets, build a deeper understanding of conversational context, and generate a wide variety of responses. This flexibility comes at the cost of control, a concerning tradeoff in doctor/patient interactions. Inaccuracies, typos, or undesirable content in the training data will be reproduced by the model at inference time. We trade a…
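
The abstract contrasts open-ended next-word generation with classification over a curated set of responses. The sketch below is only a minimal illustration of that architectural tradeoff, not the authors' implementation; the hidden size, vocabulary size, and response-set size are assumed placeholders.

```python
# Minimal sketch (illustrative, not the paper's model): a generative decoder head
# predicts the next token over the whole vocabulary, while a "classification as
# decoder" head scores a fixed, curated set of responses, so only vetted text
# can ever be emitted. All sizes are assumptions.
import torch
import torch.nn as nn

VOCAB_SIZE = 10_000      # assumed vocabulary size
NUM_RESPONSES = 500      # assumed size of the curated response set
HIDDEN = 256             # assumed encoder hidden size


class GenerativeDecoderHead(nn.Module):
    """Flexible but hard to control: distribution over next words."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, context_state):          # (batch, HIDDEN)
        return self.proj(context_state)         # logits over next tokens


class ClassificationDecoderHead(nn.Module):
    """Controlled: distribution over a fixed set of vetted responses."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(HIDDEN, NUM_RESPONSES)

    def forward(self, context_state):           # (batch, HIDDEN)
        return self.proj(context_state)          # logits over curated responses


if __name__ == "__main__":
    # Stand-in for an encoded dialogue context produced by any seq2seq encoder.
    context = torch.randn(2, HIDDEN)
    print(GenerativeDecoderHead()(context).shape)       # torch.Size([2, 10000])
    print(ClassificationDecoderHead()(context).shape)   # torch.Size([2, 500])
```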
