Parser evaluation using textual entailments

@article{Yuret2013ParserEU,
  title={Parser evaluation using textual entailments},
  author={Deniz Yuret and Laura Rimell and Aydin Han},
  journal={Language Resources and Evaluation},
  year={2013},
  volume={47},
  pages={639--659}
}
Parser Evaluation using Textual Entailments (PETE) is a shared task in the SemEval-2010 Evaluation Exercises on Semantic Evaluation. The task involves recognizing textual entailments based on syntactic information alone. PETE introduces a new parser evaluation scheme that is formalism independent, less prone to annotation error, and focused on semantically relevant distinctions. This paper describes the PETE task, gives an error analysis of the top-performing Cambridge system, and introduces a…
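
The task's core idea, deciding entailment from syntax alone, can be pictured with a small sketch. The Python fragment below is purely illustrative and is not the scoring procedure used in the shared task or by any participating system: it assumes a parser has already produced dependency triples for the sentence and the hypothesis, and answers YES when the hypothesis's triples are contained in the sentence's parse. The triples, relation names, and the entails helper are hypothetical.

# Hypothetical sketch of a PETE-style entailment decision from syntax alone.
# Assumes a parser has already produced (relation, head, dependent) triples;
# the triples below are illustrative, not gold data from the task.

def entails(sentence_deps, hypothesis_deps):
    """Answer YES if every dependency of the hypothesis also appears
    in the sentence parse, else NO."""
    return "YES" if hypothesis_deps <= sentence_deps else "NO"

# "The man who I saw left."  should entail  "The man left."
sentence_deps = {
    ("nsubj", "left", "man"),
    ("det", "man", "the"),
    ("nsubj", "saw", "I"),
    ("dobj", "saw", "man"),   # relative-clause attachment
}
hypothesis_deps = {
    ("nsubj", "left", "man"),
    ("det", "man", "the"),
}

print(entails(sentence_deps, hypothesis_deps))  # prints: YES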