Learning Accurate, Compact, and Interpretable Tree Annotation

Abstract

We present an automatic approach to tree annotation in which basic nonterminal symbols are alternately split and merged to maximize the likelihood of a training treebank. Starting with a simple X-bar grammar, we learn a new grammar whose nonterminals are subsymbols of the original nonterminals. In contrast with previous work, we are able to split various nonterminals to different degrees, as appropriate to the actual complexity in the data. Our grammars automatically learn the kinds of linguistic distinctions exhibited in previous work on manual tree annotation. At the same time, our grammars are much more compact and substantially more accurate than those produced by previous work on automatic annotation. Despite its simplicity, our best grammar achieves an F1 of 90.2% on the Penn Treebank, higher than fully lexicalized systems.
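The split-merge cycle described above can be illustrated with a toy sketch. This is not the paper's implementation: the symbol names and the likelihood-gain scores below are illustrative stand-ins for quantities that would actually be estimated by EM over the treebank.

```python
# Toy sketch of a split-merge refinement cycle (illustrative only).
# Real likelihood gains would come from EM training on a treebank.

def split(symbols):
    """Split every nonterminal into two subsymbols (e.g. NP -> NP-0, NP-1)."""
    return [f"{s}-{i}" for s in symbols for i in (0, 1)]

def merge(symbols, gain, threshold=0.01):
    """Undo splits whose estimated likelihood gain falls below threshold."""
    kept, seen = [], set()
    for s in symbols:
        base = s.rsplit("-", 1)[0]
        if gain.get(base, 0.0) >= threshold:
            kept.append(s)        # keep both subsymbols of a useful split
        elif base not in seen:
            kept.append(base)     # collapse back to the unsplit symbol
            seen.add(base)
    return kept

symbols = ["NP", "VP", "PP"]
split_syms = split(symbols)                   # ['NP-0', 'NP-1', 'VP-0', ...]
gain = {"NP": 0.5, "VP": 0.3, "PP": 0.001}    # toy scores, not real EM output
refined = merge(split_syms, gain)
print(refined)                                # NP and VP stay split; PP merges back
```

Alternating these two steps lets different nonterminals end up with different numbers of subsymbols, matching the varying complexity of each category in the data.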



Cite this paper

@inproceedings{Petrov2006LearningAC,
  title     = {Learning Accurate, Compact, and Interpretable Tree Annotation},
  author    = {Slav Petrov and Leon Barrett and Romain Thibaux and Dan Klein},
  booktitle = {ACL},
  year      = {2006}
}