SemEval

Known as: Senseval, Word Sense Induction and Disambiguation task, Multilingual and Cross-lingual WSD
SemEval (Semantic Evaluation) is an ongoing series of evaluations of computational semantic analysis systems; it evolved from the Senseval word sense… (Wikipedia)

Papers overview

Semantic Scholar uses AI to extract papers important to this topic.
Highly Cited
2016
This paper describes the SemEval-2016 Task 3 on Community Question Answering, which we offered in English and Arabic. For English…
Highly Cited
2015
In this paper, we describe the 2015 iteration of the SemEval shared task on Sentiment Analysis in Twitter. This was the most…
Highly Cited
2012
Semantic Textual Similarity (STS) measures the degree of semantic equivalence between two texts. This paper presents the results…
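The STS output is a graded similarity score; the 2012 task annotated sentence pairs on a 0-5 scale and ranked systems mainly by Pearson correlation with the gold scores. As a rough illustration only (not the method of any participating system; the function names below are invented for this sketch), a bag-of-words cosine baseline can produce such a score:

```python
# Illustrative STS sketch: score a sentence pair by cosine similarity over
# token counts, rescaled to the 0-5 range used for STS gold annotations.
# This is a toy baseline, not an approach from the shared-task paper.
import math
from collections import Counter


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)


def sts_score(sent1: str, sent2: str) -> float:
    """Map token-overlap cosine similarity onto a 0-5 STS-style scale."""
    bow1 = Counter(sent1.lower().split())
    bow2 = Counter(sent2.lower().split())
    return 5.0 * cosine_similarity(bow1, bow2)


if __name__ == "__main__":
    print(sts_score("A man is playing a guitar.", "A man plays the guitar."))
```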
Highly Cited
2010
TempEval-2 comprises evaluation tasks for time expressions, events and temporal relations, the latter of which was split up in…
Highly Cited
2010
This paper describes Task 5 of the Workshop on Semantic Evaluation 2010 (SemEval-2010). Systems are to automatically assign…
Highly Cited
2010
This paper presents the description and evaluation framework of the SemEval-2010 Word Sense Induction & Disambiguation task, as well…
Highly Cited
2007
The “Affective Text” task focuses on the classification of emotions and valence (positive/negative polarity) in news headlines…
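As a toy illustration of the valence half of this task (a keyword-lookup heuristic, not a method from the paper; the word lists and labels below are invented for the sketch):

```python
# Toy valence classifier for headlines: count hits against small hand-made
# positive/negative word lists and emit a coarse polarity label.
POSITIVE = {"wins", "joy", "success", "peace", "celebrates"}
NEGATIVE = {"crash", "war", "fears", "death", "crisis"}


def headline_valence(headline: str) -> str:
    """Return a coarse polarity label for a news headline."""
    tokens = headline.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"


print(headline_valence("Markets crash as war fears grow"))  # -> negative
```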
Highly Cited
2007
This paper presents the coarse-grained English all-words task at SemEval-2007. We describe our experience in producing a coarse…
Highly Cited
2007
The TempEval task proposes a simple way to evaluate automatic extraction of temporal relations. It avoids the pitfalls of…
Highly Cited
2007
In this paper we describe the English Lexical Substitution task for SemEval. In the task, annotators and systems find an…