We propose a solution to the challenge of the CoNLL 2008 shared task that uses a generative history-based latent variable model to predict the most likely derivation of a synchronous dependency parser for both syntactic and semantic dependencies. The submitted model yields 79.1% macro-average F1 on the joint task and 86.9% on syntactic dependencies…
This paper investigates a generative history-based parsing model that synchronises the derivation of non-planar graphs representing semantic dependencies with the derivation of dependency trees representing syntactic structures. To process non-planarity online, the semantic transition-based parser uses a new technique to dynamically reorder nodes during the…
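The online reordering idea above resembles a swap-style transition that moves a stack node back onto the buffer, so crossing (non-planar) arcs can be built in a single left-to-right pass. The sketch below is a hypothetical minimal transition executor, not the paper's exact transition system: the action names and stack/buffer conventions are assumptions made for illustration.

```python
# Minimal stack-based transition system with a SWAP action (hypothetical sketch,
# illustrating online node reordering; not the authors' exact system).

def run(transitions, n_tokens):
    """Execute SHIFT / SWAP / LA / RA transitions over tokens 0..n_tokens-1.

    Returns the set of arcs (head, dependent) built by the derivation.
    """
    stack, buffer, arcs = [], list(range(n_tokens)), set()
    for t in transitions:
        if t == "SHIFT":                      # move next buffer token onto the stack
            stack.append(buffer.pop(0))
        elif t == "SWAP":                     # reorder online: push the second-topmost
            buffer.insert(0, stack.pop(-2))   # stack node back onto the buffer
        elif t == "LA":                       # left-arc: stack top heads the node below
            dep = stack.pop(-2)
            arcs.add((stack[-1], dep))
        elif t == "RA":                       # right-arc: node below heads the stack top
            dep = stack.pop()
            arcs.add((stack[-1], dep))
    return arcs

# Crossing arcs (0, 2) and (1, 3) cannot be built by SHIFT/LA/RA alone,
# but one SWAP makes the derivation possible:
crossing = run(["SHIFT", "SHIFT", "SHIFT", "SWAP", "RA", "SHIFT", "SHIFT", "RA"], 4)
```

The SWAP step temporarily removes node 1 from the stack so that the arc (0, 2) can be completed first; node 1 then re-enters and attaches node 3, yielding the two crossing arcs.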
Dan Klein and Christopher D. Manning. Fast exact inference with a factored model for natural language parsing. Another, more practical line of activity includes an error analysis to identify the classes of errors made by the two algorithms, so that strategies to cope with them can be designed. For Collins' parsers this would imply the introduction of…
In this paper, we extend an existing parser to produce richer output annotated with function labels. We obtain state-of-the-art results both in function labelling and in parsing, by automatically relabelling the Penn Treebank trees. In particular, we obtain the best published results on semantic function labels. This suggests that current statistical…
In this paper, we explore two extensions to an existing statistical parsing model to produce richer parse trees, annotated with function labels. We achieve significant improvements in parsing by directly modelling the specific nature of function labels, both as expressions of the lexical semantic properties of a constituent and as syntactic elements whose…
This paper investigates transforms of split dependency grammars into unlexicalised context-free grammars annotated with hidden symbols. Our best unlexicalised grammar achieves an accuracy of 88% on the Penn Treebank dataset, which represents a 50% reduction in error over previously published results on unlexicalised dependency parsing.
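One standard way such a dependency-to-CFG transform works is to give each head word a projection nonterminal and turn every head-dependent attachment into a binary rule; the head annotation plays the part of a hidden symbol that is erased from the final output. The sketch below is a simplified illustration under that assumption, not the paper's actual transform (which operates on grammars, not single trees, and handles splitting):

```python
# Hypothetical sketch: convert one projective dependency tree into binary CFG
# rules whose nonterminals X[w] are annotated with the head word w.
# Assumes distinct word forms; a real transform would key on positions or tags.

def dep_to_cfg(words, heads):
    """heads[i] is the index of word i's head, or -1 for the root."""
    rules = set()
    root = heads.index(-1)
    rules.add(("S", ("X[%s]" % words[root],)))          # start rule
    for w in words:
        rules.add(("X[%s]" % w, (w,)))                  # lexical rule X[w] -> w
    for d, h in enumerate(heads):
        if h == -1:
            continue
        H, D = "X[%s]" % words[h], "X[%s]" % words[d]
        if d < h:
            rules.add((H, (D, H)))                      # left dependent
        else:
            rules.add((H, (H, D)))                      # right dependent
    return rules

# "the <- cat <- sleeps": both dependents attach leftward.
cfg = dep_to_cfg(["the", "cat", "sleeps"], [1, 2, -1])
```

Parsing with the resulting grammar and then deleting the head annotations recovers the original dependency structure, which is what makes accuracy on the CFG side comparable to dependency accuracy.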
Current investigations in data-driven models of parsing have shifted from purely syntactic analysis to richer semantic representations, showing that the successful recovery of the meaning of text requires structured analyses of both its grammar and its semantics. In this article, we report on a joint generative history-based model to predict the most likely…
In this paper, we report experiments that explore learning of syntactic and semantic representations. First, we extend a state-of-the-art statistical parser to produce a richly annotated tree that identifies and labels nodes with semantic role labels as well as syntactic labels. Second, we explore rule-based and learning techniques to extract…
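A rule-based extraction step of the kind described would read the role tags back off the richly annotated tree. The sketch below is a hypothetical illustration: the tuple-based tree encoding and the "NP-ARG0"-style tag convention are assumptions, not the paper's actual data format.

```python
# Hypothetical sketch: collect (role, yield) pairs from a parse tree whose node
# labels carry PropBank-style role tags, e.g. "NP-ARG0".
# A tree is (label, children); children are subtrees or word strings.

def extract_roles(tree):
    roles = []

    def walk(node):
        if isinstance(node, str):             # leaf: a word
            return [node]
        label, children = node
        words = [w for child in children for w in walk(child)]
        if "-ARG" in label:                   # role-tagged constituent
            roles.append((label.split("-", 1)[1], " ".join(words)))
        return words

    walk(tree)
    return roles

tree = ("S", [("NP-ARG0", ["John"]),
              ("VP", [("V", ["eats"]),
                      ("NP-ARG1", ["pizza"])])])
# extract_roles(tree) yields the ARG0 and ARG1 constituents with their spans.
```

A learned extractor would replace the simple label test with a classifier over tree features, but the traversal and span-collection logic stays the same.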
We integrate PropBank semantic role labels into an existing statistical parsing model, producing richer output. We show conclusive results on joint learning and inference of syntactic and semantic representations.
In this paper, we extend an existing statistical parsing model to produce richer output parse trees, annotated with PropBank semantic role labels. Our results show that the model can be robustly extended to produce more complex output parse trees without any loss in performance and suggest that joint inference of syntactic and semantic representations is a…