The Effects of Human Variation in DUC Summarization Evaluation

@inproceedings{Harman2004TheEO,
  title={The Effects of Human Variation in DUC Summarization Evaluation},
  author={D. Harman and P. Over},
  year={2004}
}
There is a long history of research in automatic text summarization systems by both the text retrieval and the natural language processing communities, but evaluation of such systems’ output has always presented problems. One critical problem remains how to handle the unavoidable variability in human judgments at the core of all the evaluations. Sponsored by the DARPA TIDES project, NIST launched a new text summarization evaluation effort, called DUC, in 2001 with follow-on workshops in 2002…
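The stability question the abstract raises, whether systems can still be compared fairly when the human judgments at the core of the evaluation vary, is often checked by correlating the system rankings induced by different assessors, the approach taken for relevance judgments in the Voorhees reference below. The following is a minimal sketch of that check, not anything from the paper itself: the judge names, system names, and per-judge coverage scores in `scores_by_judge` are all hypothetical, and the hand-rolled Kendall's tau is just one reasonable choice of rank correlation.

```python
from itertools import combinations

# Hypothetical per-judge coverage scores for each system; these values are
# illustrative only, not data from the paper.
scores_by_judge = {
    "judge_A": {"sys1": 0.41, "sys2": 0.35, "sys3": 0.28, "baseline": 0.22},
    "judge_B": {"sys1": 0.38, "sys2": 0.37, "sys3": 0.25, "baseline": 0.20},
    "judge_C": {"sys1": 0.44, "sys2": 0.30, "sys3": 0.31, "baseline": 0.18},
}

def ranking(scores):
    """Order systems from best to worst by score."""
    return sorted(scores, key=scores.get, reverse=True)

def kendall_tau(rank_a, rank_b):
    """Kendall's tau between two tie-free rankings given as ordered lists."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    concordant = discordant = 0
    for x, y in combinations(rank_a, 2):
        # A pair is concordant when both judges order it the same way.
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0:
            concordant += 1
        else:
            discordant += 1
    n_pairs = len(rank_a) * (len(rank_a) - 1) // 2
    return (concordant - discordant) / n_pairs

# Compare the system ranking induced by each pair of judges: a high tau means
# the comparative evaluation is stable despite per-judge score variation.
for a, b in combinations(scores_by_judge, 2):
    tau = kendall_tau(ranking(scores_by_judge[a]), ranking(scores_by_judge[b]))
    print(f"{a} vs {b}: tau = {tau:.2f}")
```

With the toy numbers above, judges A and B induce identical rankings (tau = 1.00) while judge C swaps two mid-pack systems (tau = 0.67), even though no two judges assign the same absolute scores. `scipy.stats.kendalltau` would compute the same statistic; the standalone version merely keeps the sketch dependency-free.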
Citations

Overview of DUC 2005
Automatic Summary Evaluation without Human Models
DUC in context
Automatic Summarization

References

Manual and automatic evaluation of summaries
Summarization Evaluation Methods: Experiments and Analysis
Variations in relevance judgments and the measurement of retrieval effectiveness
Sentence Level Discourse Parsing using Syntactic and Lexical Information