We present a constituent shift-reduce parser with a structured perceptron that finds the optimal parse in practical run-time. The key ideas are new feature templates that facilitate state merging in dynamic programming and A* search. Our system achieves 91.1 F1 on a standard English experiment, a level that other beam-based systems cannot reach.
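To make the state-merging idea concrete, here is a minimal runnable sketch, not the paper's implementation: it assumes a toy `score` function standing in for the structured perceptron and merges shift-reduce states that share a signature (buffer position plus coarse stack labels), so equivalent derivations occupy a single chart entry and exact dynamic-programming search becomes feasible.

```python
# Toy dynamic-programming shift-reduce parser: states with the same signature
# (buffer position, coarse stack labels) are merged, collapsing the search
# space from exponentially many derivations to a polynomial chart.
from collections import defaultdict

def score(sig, action):
    # Stand-in for the structured-perceptron score; any feature weights work.
    return -len(sig[1]) if action == "SHIFT" else 1.0

def dp_shift_reduce(tags):
    n = len(tags)
    # chart: signature -> (best score, backpointer); start with an empty stack.
    chart = {(0, ()): (0.0, None)}
    for _ in range(2 * n - 1):                 # a binary parse uses 2n-1 actions
        nxt = defaultdict(lambda: (float("-inf"), None))
        for (i, stack), (s, _) in chart.items():
            if i < n:                          # SHIFT the next tag
                sig = (i + 1, stack + (tags[i],))
                cand = s + score((i, stack), "SHIFT")
                if cand > nxt[sig][0]:
                    nxt[sig] = (cand, ((i, stack), "SHIFT"))
            if len(stack) >= 2:                # REDUCE top two into one constituent
                sig = (i, stack[:-2] + ("X",))
                cand = s + score((i, stack), "REDUCE")
                if cand > nxt[sig][0]:
                    nxt[sig] = (cand, ((i, stack), "REDUCE"))
        chart = nxt
    goal = (n, ("X",))                         # all words consumed, one constituent
    return chart.get(goal, (float("-inf"), None))

print(dp_shift_reduce(["DT", "NN", "VBD"]))    # best merged-chart score
```

Because merged states subsume whole families of derivations, an exact best-first or A* search over this chart can replace the approximate beam search used by comparable systems.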
Explaining syntactic variation and universals, including the constraints on that variation across the world's languages, is essential from both a theoretical and a practical point of view; it is in fact one of the main goals of linguistics. In computational linguistics, these kinds of syntactic regularities and constraints could be utilized as prior knowledge …
Universal Dependencies (UD) is becoming a standard annotation scheme cross-linguistically, but it has been argued that this scheme, which centers on content words, is harder to parse than the conventional one centering on function words. To improve the parsability of UD, we propose a back-and-forth conversion algorithm in which we preprocess the training treebank to …
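As a toy illustration of what a back-and-forth conversion can look like (my own simplification, not the paper's algorithm), the sketch below re-attaches each noun under its "case" adposition before training, so function words head the arc, and inverts the transformation after parsing to recover the content-head UD tree. The token fields and the temporary "pobj" label are assumptions made for the example.

```python
# Tokens are (id, form, upos, head, deprel); head 0 is the root.

def to_function_head(tree):
    out = [list(t) for t in tree]
    for tok in out:
        if tok[4] == "case":                    # adposition attached to a noun
            noun = next(u for u in out if u[0] == tok[3])
            tok[3], tok[4] = noun[3], noun[4]   # adposition takes the noun's arc
            noun[3], noun[4] = tok[0], "pobj"   # noun now depends on the adposition
    return [tuple(t) for t in out]

def to_content_head(tree):
    out = [list(t) for t in tree]
    for tok in out:
        if tok[4] == "pobj":                    # undo: the noun reclaims the arc
            adp = next(u for u in out if u[0] == tok[3])
            tok[3], tok[4] = adp[3], adp[4]
            adp[3], adp[4] = tok[0], "case"
    return [tuple(t) for t in out]

ud = [(1, "dogs", "NOUN", 2, "nsubj"),
      (2, "sit",  "VERB", 0, "root"),
      (3, "on",   "ADP",  4, "case"),
      (4, "mats", "NOUN", 2, "obl")]
converted = to_function_head(ud)
assert to_content_head(converted) == ud        # the conversion round-trips losslessly
```

The point of the "back-and-forth" design is exactly this losslessness: the parser trains and decodes on the (presumably easier) function-head trees, while evaluation still happens on genuine UD structures.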
We propose a new A* CCG parsing model in which the probability of a tree is decomposed into factors over CCG categories and their syntactic dependencies, both defined on bi-directional LSTMs. Our factored model allows the precomputation of all probabilities and runs very efficiently, while modeling sentence structure explicitly via dependencies. Our model …
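The efficiency claim rests on the score factoring over per-word terms, which makes the A* heuristic precomputable. The sketch below (with made-up probabilities, and showing only the category factor; the dependency factor is omitted) illustrates the idea: the heuristic for the remaining words is the sum of each word's best category log-probability, an optimistic upper bound, so the first complete item popped from the agenda is guaranteed optimal.

```python
# Factored A* over supertag assignments: the outside heuristic is fully
# precomputed as suffix sums of per-word maxima, so agenda operations are cheap.
import heapq, math

def astar_supertags(logp):
    """logp[i] maps each candidate category of word i to its log-probability."""
    n = len(logp)
    best = [max(logp[i].values()) for i in range(n)]
    suffix = [0.0] * (n + 1)                  # suffix[i]: optimistic score of words i..n-1
    for i in range(n - 1, -1, -1):
        suffix[i] = best[i] + suffix[i + 1]
    agenda = [(-suffix[0], 0, ())]            # (-(inside + heuristic), next word, categories)
    while agenda:
        neg_f, i, cats = heapq.heappop(agenda)
        if i == n:
            return cats, -neg_f               # heuristic is admissible: first goal is optimal
        inside = -neg_f - suffix[i]           # recover the inside score from the priority
        for cat, lp in logp[i].items():
            f = inside + lp + suffix[i + 1]
            heapq.heappush(agenda, (-f, i + 1, cats + (cat,)))

tags = [{"NP": math.log(0.9), "N": math.log(0.1)},
        {"(S\\NP)/NP": math.log(0.6), "S\\NP": math.log(0.4)},
        {"NP": math.log(0.8), "N/N": math.log(0.2)}]
print(astar_supertags(tags))
```

In the full model the same precomputation covers the dependency factor as well, since it too is evaluated once per word pair by the bi-directional LSTMs before search begins.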
Center-embedding is difficult to process and is known to be a rare syntactic construction across languages. In this paper we describe a method to incorporate this assumption into grammar induction by restricting the search space of a model to trees with limited center-embedding. The key idea is the tabulation of left-corner parsing, which captures …
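The link between left-corner parsing and center-embedding can be shown with a small recursion, my own simplified formulation rather than the paper's tabulated transition system: with eager composition, the left-corner stack stays bounded on purely left- or right-branching trees and grows by one with each nested center-embedding, so bounding the depth prunes exactly the trees the rarity assumption disfavors.

```python
def lc_depth(tree, predicted=False):
    """Max stack depth of an arc-eager left-corner parse of a binary tree.
    `predicted` is True when a prediction for this node already sits on the
    stack, letting the parser compose instead of pushing a new item."""
    if not isinstance(tree, tuple):            # leaf: a single shift
        return 1
    left, right = tree
    if predicted:                              # compose: no extra stack item needed
        return max(lc_depth(left), lc_depth(right, predicted=True))
    # No prediction below: the projected node waits on the stack while the
    # right child is parsed, costing one extra slot.
    return max(lc_depth(left), 1 + lc_depth(right, predicted=True))

right_branching = ("a", ("b", ("c", "d")))
center_embedded = ("a", (("b", "c"), "d"))
print(lc_depth(right_branching))               # 2: bounded however long the chain
print(lc_depth(center_embedded))               # 3: each nested embedding adds one

def allowed(tree, bound=2):
    """Search-space restriction: keep only trees with limited center-embedding."""
    return lc_depth(tree) <= bound
```

In the induction setting this filter is not applied tree by tree but folded into the chart, so the model never allocates probability mass to deeply center-embedded analyses in the first place.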