What Do Recurrent Neural Network Grammars Learn About Syntax?

@inproceedings{Kuncoro2017WhatDR,
  title={What Do Recurrent Neural Network Grammars Learn About Syntax?},
  author={Adhiguna Kuncoro and Miguel Ballesteros and Lingpeng Kong and Chris Dyer and Graham Neubig and Noah A. Smith},
  booktitle={EACL},
  year={2017}
}
Recurrent neural network grammars (RNNG) are a recently proposed probabilistic generative modeling family for natural language. They show state-of-the-art language modeling and parsing performance. We investigate what information they learn, from a linguistic perspective, through various ablations to the model and the data, and by augmenting the model with an attention mechanism (GA-RNNG) to enable closer inspection. We find that explicit modeling of composition is crucial for achieving the best performance.
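
The composition step that the GA-RNNG makes inspectable can be illustrated with a small sketch. The Python snippet below shows the general idea of attention-weighted composition: each child vector of a constituent is scored against the nonterminal's embedding, and the phrase representation is the attention-weighted sum of the children. All names, shapes, and the scoring function here are illustrative assumptions, not the paper's exact parameterization.

import numpy as np

def softmax(x):
    # Numerically stable softmax over a vector of scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def compose_with_attention(children, nt_embedding, W):
    # children: (n_children, d) vectors for the constituent's children.
    # nt_embedding: (d,) embedding of the nonterminal being closed.
    # W: (d, d) learned parameter matrix (hypothetical name).
    scores = children @ (W @ nt_embedding)  # one relevance score per child
    weights = softmax(scores)               # attention distribution over children
    return weights @ children               # phrase vector = weighted sum of children

# Toy usage: compose a three-child constituent into one phrase vector.
rng = np.random.default_rng(0)
d = 8
children = rng.standard_normal((3, d))
nt_np = rng.standard_normal(d)
W = rng.standard_normal((d, d))
phrase = compose_with_attention(children, nt_np, W)

Inspecting the learned attention weights in such a scheme is what allows the kind of analysis the paper performs, e.g. asking which child the model effectively treats as the head of each phrase.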

Citations per year (Semantic Scholar estimate): 74 citations, accrued 2016–2018.
