Improving extractive dialogue summarization by utilizing human feedback

Abstract

Automatic summarization systems are usually trained and evaluated in a particular domain with fixed data sets. When such a system is to be applied to slightly different input, labor- and cost-intensive annotations have to be created to retrain the system. We deal with this problem by providing users with a GUI which allows them to correct automatically…
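The truncated abstract suggests a feedback loop: users correct machine-produced extractive summaries in a GUI, and the corrections serve as new training signal. Below is a minimal sketch of one way such a loop could look, assuming extractive summarization is cast as sentence-level classification; the model choice (TF-IDF features with logistic regression) and all function names are hypothetical and are not taken from the paper.

# Hypothetical sketch of retraining an extractive summarizer from GUI corrections.
# The feature set and learner are assumptions; the paper's actual method is not
# visible in the truncated abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train_extractor(sentences, labels):
    """Fit a sentence-selection classifier: label 1 = include in summary."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(sentences)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    return vec, clf

def retrain_with_feedback(sentences, labels, corrected_sentences, corrected_labels):
    """Merge user corrections (sentences added to or removed from the summary)
    into the training data and refit, avoiding a fresh annotation round."""
    return train_extractor(sentences + corrected_sentences,
                           labels + corrected_labels)

The point of such a design is that each correction doubles as a labeled example, so adapting the system to a new domain requires no separate annotation effort.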

Cite this paper

@inproceedings{Mieskes2007ImprovingED,
  title     = {Improving extractive dialogue summarization by utilizing human feedback},
  author    = {Margot Mieskes and Christoph Mueller and Michael Strube},
  booktitle = {Artificial Intelligence and Applications},
  year      = {2007}
}