Simple Semi-supervised Dependency Parsing

Abstract

We present a simple and effective semi-supervised method for training dependency parsers. We focus on the problem of lexical representation, introducing features that incorporate word clusters derived from a large unannotated corpus. We demonstrate the effectiveness of the approach in a series of dependency parsing experiments on the Penn Treebank and Prague Dependency Treebank, and we show that the cluster-based features yield substantial gains in performance across a wide range of conditions. For example, in the case of English unlabeled second-order parsing, we improve from a baseline accuracy of 92.02% to 93.16%, and in the case of Czech unlabeled second-order parsing, we improve from a baseline accuracy of 86.13% to 87.13%. In addition, we demonstrate that our method also improves performance when small amounts of training data are available, and can roughly halve the amount of supervised data required to reach a desired level of performance.
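To make the central idea concrete, the following is a minimal sketch of how cluster-based features might be generated for a candidate dependency arc. It assumes a precomputed mapping from words to Brown-style cluster bit strings; the feature templates, prefix lengths, and function names are illustrative assumptions, not the paper's exact feature set.

# Sketch: cluster-based features for a first-order dependency arc (head, modifier).
# The cluster map would be induced from a large unannotated corpus.

from typing import Dict, List


def cluster_features(head: str,
                     modifier: str,
                     clusters: Dict[str, str],
                     prefix_lengths: List[int] = [4, 6]) -> List[str]:
    """Emit string-valued features for a candidate dependency arc."""
    feats = []
    h_bits = clusters.get(head, "UNK")      # full cluster bit string, e.g. "0110..."
    m_bits = clusters.get(modifier, "UNK")
    for k in prefix_lengths:
        # Short prefixes of the cluster bit string act like coarse,
        # automatically induced word classes.
        h_pref, m_pref = h_bits[:k], m_bits[:k]
        feats.append(f"HC{k}={h_pref}")
        feats.append(f"MC{k}={m_pref}")
        feats.append(f"HC{k}={h_pref}&MC{k}={m_pref}")
    # Hybrid features pairing one word with the other word's full cluster.
    feats.append(f"HW={head}&MC={m_bits}")
    feats.append(f"HC={h_bits}&MW={modifier}")
    return feats


if __name__ == "__main__":
    # Toy cluster map for illustration only.
    toy_clusters = {"the": "0010", "dog": "0111", "barked": "1101"}
    print(cluster_features("barked", "dog", toy_clusters))

These features would be added alongside the baseline lexical and part-of-speech features of a standard (e.g., second-order) dependency parser, which is the general recipe the abstract describes.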




Cite this paper

@inproceedings{Koo2008SimpleSD,
  title     = {Simple Semi-supervised Dependency Parsing},
  author    = {Terry Koo and Xavier Carreras and Michael Collins},
  booktitle = {ACL},
  year      = {2008}
}