Disambiguating named entities in natural-language text maps mentions of ambiguous names onto canonical entities, such as people or places, registered in a knowledge base such as DBpedia or YAGO. This paper presents a robust method for collective disambiguation that harnesses context from knowledge bases and uses a new form of coherence graph. It unifies prior …
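The "collective" aspect can be illustrated with a toy sketch: instead of disambiguating each mention independently, a joint assignment is scored by local mention-entity fit plus pairwise entity-entity coherence. All names, scores, and the brute-force search below are hypothetical illustrations under those assumptions, not the paper's actual graph algorithm:

```python
from itertools import product

# Hypothetical toy instance: two ambiguous mentions, candidate entities,
# local mention-entity scores, and pairwise entity-entity coherence.
candidates = {
    "Page": ["Larry_Page", "Jimmy_Page"],
    "Plant": ["Robert_Plant", "Plant_(organism)"],
}
local = {
    ("Page", "Larry_Page"): 0.6, ("Page", "Jimmy_Page"): 0.5,
    ("Plant", "Robert_Plant"): 0.5, ("Plant", "Plant_(organism)"): 0.6,
}
coherence = {
    # Entities that are strongly related in the knowledge base cohere;
    # all other pairs default to 0.0.
    frozenset(["Jimmy_Page", "Robert_Plant"]): 0.9,
}

def best_assignment():
    """Collective disambiguation by exhaustive search over joint
    assignments, maximizing local fit plus pairwise coherence."""
    mentions = list(candidates)
    best, best_score = None, float("-inf")
    for combo in product(*(candidates[m] for m in mentions)):
        score = sum(local[(m, e)] for m, e in zip(mentions, combo))
        for i in range(len(combo)):
            for j in range(i + 1, len(combo)):
                score += coherence.get(frozenset([combo[i], combo[j]]), 0.0)
        if score > best_score:
            best, best_score = dict(zip(mentions, combo)), score
    return best
```

Scoring each mention in isolation would pick the locally best entities, while the coherence term pulls the assignment toward a mutually consistent pair, which is the point of doing disambiguation collectively.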
We present a syntactically enriched vector model that supports the computation of contextualized semantic representations in a quasi-compositional fashion. It employs a systematic combination of first- and second-order context vectors. We apply our model to two different tasks and show that (i) it substantially outperforms previous work on a paraphrase …
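The first-/second-order distinction can be sketched as follows: a first-order vector records a word's direct co-occurrences, while a second-order vector is (in one common construction) the centroid of the first-order vectors of the words it co-occurs with. The vectors and the componentwise combination below are hypothetical illustrations, not the paper's actual model:

```python
import numpy as np

# Hypothetical first-order vectors: co-occurrence counts over 3 features.
first_order = {
    "bank":  np.array([2.0, 5.0, 1.0]),
    "money": np.array([0.0, 6.0, 3.0]),
    "river": np.array([4.0, 0.0, 2.0]),
}

def second_order(contexts):
    """Second-order vector: centroid of the first-order vectors of the
    words the target co-occurs with (one common construction)."""
    return np.mean([first_order[c] for c in contexts], axis=0)

def combine(word, contexts):
    """One simple way to combine the two orders: componentwise product,
    so dimensions supported by both signals are emphasized."""
    return first_order[word] * second_order(contexts)
```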
Large-scale annotated corpora are a prerequisite to developing high-performance semantic role labeling systems. Unfortunately, such corpora are expensive to produce, limited in size, and may not be representative. Our work aims to reduce the annotation effort involved in creating resources for semantic role labeling via semi-supervised learning. The key …
We present a model that represents word meaning in context by vectors which are modified according to the words in the target's syntactic context. Contextualization of a vector is realized by reweighting its components, based on distributional information about the context words. Evaluation on a paraphrase ranking task derived from the SemEval 2007 Lexical …
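The reweighting idea can be sketched in a few lines: dimensions of the target vector that the context word also activates are amplified, shifting the target toward the contextually appropriate sense. The toy vectors and the particular weighting scheme below are assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

# Toy distributional vectors over 4 context-feature dimensions
# (hypothetical data; "coach" mixes sports and vehicle senses).
vectors = {
    "coach":   np.array([0.9, 0.1, 0.7, 0.2]),
    "train":   np.array([0.2, 0.8, 0.1, 0.9]),
    "athlete": np.array([0.8, 0.1, 0.9, 0.1]),
}

def contextualize(target, context_vec, alpha=1.0):
    """Reweight the target's components by how strongly the context
    word activates each dimension (one plausible reweighting scheme),
    then renormalize to unit length."""
    weights = 1.0 + alpha * context_vec
    v = vectors[target] * weights
    return v / np.linalg.norm(v)
```

In this sketch, "coach" contextualized by "athlete" ends up closer to the sports sense than "coach" contextualized by "train", which is the effect a paraphrase ranking task would reward.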
Unknown lexical items present a major obstacle to the development of broad-coverage semantic role labeling systems. We address this problem with a semi-supervised learning approach which acquires training instances for unseen verbs from an unlabeled corpus. Our method relies on the hypothesis that unknown lexical items will be structurally and semantically …
We present a method for learning syntax-semantics mappings for verbs from unannotated corpora. We learn linkings, i.e., mappings from the syntactic arguments and adjuncts of a verb to its semantic roles. By learning such linkings, we do not need to model individual semantic roles independently of one another, and we can exploit the relation between …
Meta-learning involves the construction of a classifier that predicts the performance of another classifier. Previously proposed approaches do this by making a single prediction (such as the expected accuracy) for a complete data set. We suggest modifying this framework so that the meta-classifier predicts for each data point in the data set whether a …
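The contrast between the two framings can be sketched numerically: a data-set-level meta-prediction yields one scalar (expected accuracy), whereas a per-point meta-classifier yields a reliability flag for every instance, enabling selective prediction. The base classifier, the error region, and the oracle-style meta rule below are all hypothetical illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the true concept is "x > 0"; the base classifier
# uses a biased threshold, so it errs exactly on points with 0 < x <= 0.2.
X = rng.uniform(-1.0, 1.0, 1000)
y = (X > 0).astype(int)
base_pred = (X > 0.2).astype(int)
correct = base_pred == y

# Data-set-level meta-learning: one scalar estimate for the whole set.
dataset_level_estimate = correct.mean()

# Per-point meta-classifier (an oracle-style rule for illustration):
# flag the region near the base model's biased threshold as unreliable.
meta_reliable = ~((X > 0.0) & (X <= 0.2))

# Selective prediction: base accuracy on points the meta-model trusts.
selective_acc = correct[meta_reliable].mean()
```

The per-point view lets a downstream system act differently on individual instances (e.g., abstain or fall back), which a single data-set-level accuracy estimate cannot support.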