Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network


We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger achieves 97.24% accuracy on the Penn Treebank WSJ, a 4.4% error reduction over the best previous single automatically learned tagging result.
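To make ideas (i), (ii), and (iv) concrete, the sketch below shows how a log-linear local score for a tag can condition on both the preceding and following tags as well as lexical and unknown-word features. The feature templates and weights are illustrative assumptions, not the paper's exact feature set.

```python
from collections import defaultdict

def extract_features(words, i, prev_tag, next_tag):
    """Hypothetical feature templates: both tag contexts (idea i),
    lexical context (idea ii), and unknown-word cues (idea iv)."""
    w = words[i]
    feats = [
        f"w={w}",
        f"prev_tag={prev_tag}",                      # preceding tag context
        f"next_tag={next_tag}",                      # following tag context
        f"prev_tag+next_tag={prev_tag}+{next_tag}",  # joint bidirectional context
        f"w-1={words[i-1] if i > 0 else '<S>'}",     # previous word
        f"w+1={words[i+1] if i + 1 < len(words) else '</S>'}",  # next word
    ]
    # fine-grained unknown-word features: suffixes and orthographic shape
    feats += [f"suffix={w[-k:]}" for k in (1, 2, 3) if len(w) > k]
    if w[:1].isupper():
        feats.append("init_cap")
    if any(c.isdigit() for c in w):
        feats.append("has_digit")
    return feats

def local_score(weights, feats):
    # unnormalized log-linear score: sum of weights of active features
    return sum(weights[f] for f in feats)

# toy weights purely for illustration
weights = defaultdict(float)
weights["prev_tag=DT"] = 1.2
weights["suffix=ing"] = 0.8

words = ["The", "running", "dog"]
feats = extract_features(words, 1, "DT", "NN")
print(local_score(weights, feats))  # 1.2 + 0.8 = 2.0
```

In the full model, such local scores over both tag neighbors form the conditional distributions of the cyclic dependency network, and inference (e.g. Viterbi-style search over tag sequences) combines them.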
