Yonatan Bisk

In this paper we present new state-of-the-art performance on CCG supertagging and parsing. Our model outperforms existing approaches by an absolute gain of 1.5%. We analyze the performance of several neural models and demonstrate that while feed-forward architectures can compete with bidirectional LSTMs on POS tagging, models that encode the complete …
The Natural Language Processing, Artificial Intelligence, and Robotics fields have made significant progress towards developing robust component technologies (speech recognition/synthesis, machine translation, image recognition); advanced inference mechanisms that accommodate uncertainty and noise; and autonomous driving systems that operate seamlessly on …
Work in grammar induction should help shed light on how much syntactic structure is discoverable from raw word or tag sequences. But since most current grammar induction algorithms produce unlabeled dependencies, it is difficult to analyze which types of constructions these algorithms can or cannot capture, and therefore to identify where …