The TUNA-REG Challenge 2009: Overview and Evaluation Results

@inproceedings{Gatt2009TheTC,
  title={The TUNA-REG Challenge 2009: Overview and Evaluation Results},
  author={Albert Gatt and A. Belz and Eric Kow},
  booktitle={ENLG},
  year={2009}
}
The GREC Task at REG '08 required participating systems to select coreference chains for the main subject of short encyclopaedic texts collected from Wikipedia. Three teams submitted a total of 6 systems, and we additionally created four baseline systems. Systems were tested automatically using a range of existing intrinsic metrics. We also evaluated systems extrinsically by applying coreference resolution tools to the outputs and measuring the success of the tools. In addition, systems were…
