Colorless green recurrent networks dream hierarchically

@article{Gulordava2018ColorlessGR,
  title={Colorless green recurrent networks dream hierarchically},
  author={Kristina Gulordava and Piotr Bojanowski and Edouard Grave and Tal Linzen and Marco Baroni},
  journal={ArXiv},
  year={2018},
  volume={abs/1803.11138}
}
Recurrent neural networks (RNNs) have achieved impressive results in a variety of linguistic processing tasks, suggesting that they can induce non-trivial properties of language. We investigate here to what extent RNNs learn to track abstract hierarchical syntactic structure. We test whether RNNs trained with a generic language modeling objective in four languages (Italian, English, Hebrew, Russian) can predict long-distance number agreement in various constructions. We include in our…
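The agreement test described in the abstract can be sketched as follows: given a sentence prefix whose head noun is separated from the verb by an intervening "attractor" noun of the opposite number, the language model passes if it assigns higher probability to the verb form that agrees with the head. The `toy_logprob` function below is a hypothetical stand-in for a trained model's scoring function (the paper's actual models are RNN language models), included only to make the comparison logic concrete.

```python
# Sketch of the long-distance number-agreement evaluation.
# A model "passes" an item if it prefers the verb form that agrees in
# number with the head noun ("keys"), despite the singular attractor
# ("cabinet") sitting between them.

def toy_logprob(prefix, word):
    """Hypothetical stand-in for a trained LM's log-probability;
    here just a hand-coded preference table for one example."""
    table = {
        ("the keys to the cabinet", "are"): -1.0,
        ("the keys to the cabinet", "is"): -3.0,
    }
    return table.get((prefix, word), -10.0)

def agreement_correct(prefix, correct_verb, wrong_verb, logprob=toy_logprob):
    """True if the model assigns higher probability to the agreeing verb."""
    return logprob(prefix, correct_verb) > logprob(prefix, wrong_verb)

# Plural head "keys" with singular attractor "cabinet":
print(agreement_correct("the keys to the cabinet", "are", "is"))  # True for this toy table
```

Accuracy over a set of such minimal pairs is the quantity the paper reports; any real evaluation would replace `toy_logprob` with per-token scores from a trained model.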

Citations

Publications citing this paper: 59 citations (estimated 95% coverage); five are shown below.

Targeted Syntactic Evaluation of Language Models

CITES BACKGROUND, RESULTS & METHODS
HIGHLY INFLUENCED

Assessing BERT's Syntactic Abilities

  • ArXiv
  • 2019
CITES BACKGROUND, RESULTS & METHODS
HIGHLY INFLUENCED

Jabberwocky Parsing: Dependency Parsing with Lexical Noise

CITES METHODS & BACKGROUND
HIGHLY INFLUENCED

The emergence of number and syntax units in LSTM language models

  • NAACL-HLT
  • 2019
CITES RESULTS, METHODS & BACKGROUND
HIGHLY INFLUENCED

Can LSTM Learn to Capture Agreement? The Case of Basque

CITES BACKGROUND & METHODS
HIGHLY INFLUENCED

CITATION STATISTICS

  • 19 highly influenced citations

  • Averaged 30 citations per year from 2018 through 2019
