Challenging Neural Dialogue Models with Natural Data: Memory Networks Fail on Incremental Phenomena

@article{Shalyminov2017ChallengingND,
  title={Challenging Neural Dialogue Models with Natural Data: Memory Networks Fail on Incremental Phenomena},
  author={Igor Shalyminov and Arash Eshghi and Oliver Lemon},
  journal={CoRR},
  year={2017},
  volume={abs/1709.07840}
}
Natural, spontaneous dialogue proceeds incrementally on a word-by-word basis, and it contains many sorts of disfluency such as mid-utterance/sentence hesitations, interruptions, and self-corrections. But training data for machine learning approaches to dialogue processing is often either cleaned up or wholly synthetic in order to avoid such phenomena. The question then arises of how well systems trained on such clean data generalise to real spontaneous dialogue, or indeed whether they are…
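To make concrete what kinds of incremental phenomena the abstract refers to, the sketch below shows one way clean, synthetic utterances could be perturbed with hesitations and self-corrections in order to probe a model trained on clean data. The function names, disfluency templates, and sample utterance are illustrative assumptions, not the authors' actual data-generation procedure.

```python
import random

# Illustrative sketch: inject disfluencies of the kinds the abstract mentions
# (filled-pause hesitations and self-corrections) into a clean utterance.
HESITATION_MARKERS = ["uhm", "uh", "err"]

def insert_hesitation(tokens, rng=random):
    """Insert a filled pause at a random mid-utterance position."""
    if len(tokens) < 2:
        return tokens
    i = rng.randrange(1, len(tokens))
    return tokens[:i] + [rng.choice(HESITATION_MARKERS)] + tokens[i:]

def insert_self_correction(tokens, distractor, rng=random):
    """Replace one token with a 'wrong word ... no, I mean right word' repair."""
    if not tokens:
        return tokens
    i = rng.randrange(len(tokens))
    repair = [distractor, "no", "sorry", "I", "mean", tokens[i]]
    return tokens[:i] + repair + tokens[i + 1:]

if __name__ == "__main__":
    clean = "I would like an Italian restaurant in the centre".split()
    print(" ".join(insert_hesitation(clean)))
    print(" ".join(insert_self_correction(clean, distractor="French")))
```

A system that has only ever seen the clean form must still recover the intended meaning from the perturbed forms, which is the generalisation question the paper raises.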
