Colorless green recurrent networks dream hierarchically

@inproceedings{Gulordava2018ColorlessGR,
  title={Colorless green recurrent networks dream hierarchically},
  author={Kristina Gulordava and Piotr Bojanowski and Edouard Grave and Tal Linzen and Marco Baroni},
  booktitle={NAACL-HLT},
  year={2018}
}
  • Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, Marco Baroni
  • Published in NAACL-HLT 2018
  • Computer Science
  • Recurrent neural networks (RNNs) have achieved impressive results in a variety of linguistic processing tasks, suggesting that they can induce non-trivial properties of language. We investigate here to what extent RNNs learn to track abstract hierarchical syntactic structure. We test whether RNNs trained with a generic language modeling objective in four languages (Italian, English, Hebrew, Russian) can predict long-distance number agreement in various constructions. We include in our…
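
The agreement test described in the abstract reduces to a forced-choice comparison: feed the sentence prefix up to the target verb into the language model and check which verb form it prefers. Below is a minimal Python/PyTorch sketch of that comparison, not the authors' code: the WordLSTMLM class and the toy vocabulary are hypothetical stand-ins for the trained word-level LSTM language models used in the paper.

# Sketch of the long-distance agreement test: score the sentence prefix
# with a word-level LSTM LM, then check whether the model assigns higher
# probability to the correctly inflected verb than to its opposite-number
# counterpart. Model and vocabulary are illustrative stand-ins.
import torch
import torch.nn as nn

# Hypothetical toy vocabulary; a real run would use the trained LM's vocab.
vocab = {w: i for i, w in enumerate(
    ["<unk>", "the", "colorless", "green", "ideas", "i", "ate",
     "with", "chair", "sleep", "sleeps", "furiously"])}

class WordLSTMLM(nn.Module):
    """A generic word-level LSTM language model (stand-in architecture)."""
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, num_layers=2, batch_first=True)
        self.decoder = nn.Linear(hid_dim, vocab_size)

    def forward(self, token_ids):
        hidden, _ = self.lstm(self.embed(token_ids))
        return self.decoder(hidden)  # next-token logits at every position

def agreement_correct(model, prefix, right_verb, wrong_verb):
    """True if the LM prefers the correctly inflected verb after the prefix."""
    ids = torch.tensor([[vocab.get(w, vocab["<unk>"]) for w in prefix]])
    with torch.no_grad():
        logits = model(ids)[0, -1]  # distribution over the next token
        log_probs = torch.log_softmax(logits, dim=-1)
    return bool(log_probs[vocab[right_verb]] > log_probs[vocab[wrong_verb]])

model = WordLSTMLM(len(vocab)).eval()
# Nonce sentence from the paper: the plural subject "ideas" must agree
# with "sleep" across the intervening singular attractor "chair".
prefix = "the colorless green ideas i ate with the chair".split()
print(agreement_correct(model, prefix, right_verb="sleep", wrong_verb="sleeps"))

Accuracy over a set of such minimal pairs gives the agreement score. The untrained stand-in model above will of course perform at chance; the paper reports that its language-model-trained RNNs make reliable long-distance agreement predictions.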
