Simple Semi-supervised Dependency Parsing

Abstract

We present a simple and effective semi-supervised method for training dependency parsers. We focus on the problem of lexical representation, introducing features that incorporate word clusters derived from a large unannotated corpus. We demonstrate the effectiveness of the approach in a series of dependency parsing experiments on the Penn Treebank and Prague Dependency Treebank, and we show that the cluster-based features yield substantial gains in performance across a wide range of conditions. For example, in the case of English unlabeled second-order parsing, we improve from a baseline accuracy of 92.02% to 93.16%, and in the case of Czech unlabeled second-order parsing, we improve from a baseline accuracy of 86.13% to 87.13%. In addition, we demonstrate that our method also improves performance when small amounts of training data are available, and can roughly halve the amount of supervised data required to reach a desired level of performance.
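The abstract describes replacing or augmenting sparse lexical features with features built from word clusters induced on unannotated text. As a minimal illustration of the idea, the sketch below maps words to Brown-style hierarchical cluster bit strings and generates head-modifier arc features at several prefix granularities. The cluster-file format, prefix lengths, and feature-template names here are assumptions for illustration, not the authors' exact templates.

```python
# Sketch of cluster-based feature generation for a head-modifier arc.
# The cluster format (bit string -> word) and the feature templates are
# illustrative assumptions, not the paper's exact specification.

def load_clusters(lines):
    """Parse 'bitstring word' lines (Brown-cluster style) into a dict."""
    clusters = {}
    for line in lines:
        bits, word = line.split()
        clusters[word] = bits
    return clusters

def arc_features(head, modifier, clusters, prefixes=(4, 6)):
    """Generate cluster-based features for a candidate dependency arc.

    Sparse lexical identities are replaced by coarse cluster-prefix
    identities at several granularities, so statistics are shared
    across words in the same cluster.
    """
    feats = []
    h_bits = clusters.get(head, "UNK")
    m_bits = clusters.get(modifier, "UNK")
    for p in prefixes:
        hp, mp = h_bits[:p], m_bits[:p]
        feats.append(f"hc{p}={hp}")             # head cluster prefix
        feats.append(f"mc{p}={mp}")             # modifier cluster prefix
        feats.append(f"hc{p}:mc{p}={hp}:{mp}")  # joint head-modifier feature
    return feats

# Toy clustering: verbs and animal nouns fall in separate subtrees.
clusters = load_clusters(["0010 saw", "0011 ate", "1100 dog", "1101 cat"])
print(arc_features("saw", "dog", clusters))
```

Because cluster prefixes are shared across many words, an arc like "ate → cat" fires the same coarse features as "saw → dog", which is what lets the parser generalize from limited supervised data.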
