Single-document and multi-document summary evaluation using Relative Utility


We present a series of experiments to demonstrate the validity of Relative Utility (RU) as a measure for evaluating extractive summarization systems. Like some other evaluation metrics, it compares sentence selection between machine and reference summarizers. Additionally, RU is applicable to both single-document and multi-document summarization, extends to arbitrary compression rates with no extra annotation effort, and takes into account both random system performance and interjudge agreement. RU also provides an option for penalizing summaries that include sentences with redundant information. Our results are based on the JHU summary corpus and indicate that Relative Utility is a reasonable, and often superior, alternative to several common summary evaluation metrics. We also compare RU with several other well-known metrics with respect to their correlation with human judgments on the DUC corpus.
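The core idea of the metric can be sketched as follows. This is a minimal illustration based on the description above, not the paper's exact formulation: the function names, the per-judge utility scale, and the way random selection is used as a lower anchor are our simplifying assumptions, and the paper's redundancy penalty and interjudge-agreement upper bound are omitted.

```python
def relative_utility(utilities, selected):
    """Unnormalized RU sketch: total utility that all judges award to the
    system-selected sentences, divided by the best total achievable by
    any extract of the same length.

    utilities: list of per-judge score lists, utilities[j][i] = utility
               judge j assigns to sentence i (assumed 0..10 here).
    selected:  indices of the sentences the system extracted.
    """
    k = len(selected)
    system = sum(sum(u[i] for i in selected) for u in utilities)
    best = sum(sum(sorted(u, reverse=True)[:k]) for u in utilities)
    return system / best


def normalized_ru(utilities, selected):
    """Rescale so that random sentence selection scores 0 and a
    judge-optimal extract scores 1 (a hypothetical simplification of the
    paper's adjustment for random system performance)."""
    k = len(selected)
    system = sum(sum(u[i] for i in selected) for u in utilities)
    best = sum(sum(sorted(u, reverse=True)[:k]) for u in utilities)
    # Expected total utility of picking k sentences uniformly at random.
    rand = sum(k * sum(u) / len(u) for u in utilities)
    return (system - rand) / (best - rand)
```

For example, with two judges scoring four sentences as `[10, 8, 2, 0]` and `[9, 7, 3, 1]`, an extract of sentences 0 and 1 matches the judges' top choices and scores 1.0, while the extract {0, 2} earns 24 of the best-possible 34 utility points.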

Cite this paper

@inproceedings{Radev2003SingledocumentAM,
  title={Single-document and multi-document summary evaluation using Relative Utility},
  author={Dragomir R. Radev and Daniel Tam and G{\"u}ne{\c{s}} Erkan},
  year={2003}
}