The Importance of Being Recurrent for Modeling Hierarchical Structure

@inproceedings{Tran2018TheIO,
  title={The Importance of Being Recurrent for Modeling Hierarchical Structure},
  author={Ke M. Tran and Arianna Bisazza and Christof Monz},
  booktitle={EMNLP},
  year={2018}
}
  • Ke M. Tran, Arianna Bisazza, Christof Monz
  • Published in EMNLP 2018
  • Computer Science
  • Recent work has shown that recurrent neural networks (RNNs) can implicitly capture and exploit hierarchical information when trained to solve common natural language processing tasks such as language modeling (Linzen et al., 2016) and neural machine translation (Shi et al., 2016). In contrast, the ability to model structured data with non-recurrent neural networks has received little attention despite their success in many NLP tasks (Gehring et al., 2017; Vaswani et al., 2017). In this work, we…
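
The paper probes this question with targeted diagnostic tasks, such as subject-verb agreement and logical inference, where a model must track hierarchical structure rather than surface cues. As a rough, non-authoritative illustration of the agreement-style setup (this is not the authors' code; the toy vocabulary, model dimensions, and the single-layer attention encoder are assumptions made for this sketch), the following PyTorch snippet builds two interchangeable prefix encoders, one recurrent (LSTM) and one non-recurrent (self-attention), feeding the same classifier that predicts whether the upcoming verb should be singular or plural:

# Illustrative sketch only (not from the paper): a toy PyTorch setup contrasting
# a recurrent and a non-recurrent prefix encoder on a subject-verb agreement
# style prediction. Vocabulary, dimensions, and all details are assumptions.
import torch
import torch.nn as nn

VOCAB = ["the", "key", "keys", "to", "cabinet", "cabinets"]
stoi = {w: i for i, w in enumerate(VOCAB)}


class LSTMEncoder(nn.Module):
    """Recurrent encoder: the hidden state at the last word summarizes the prefix."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, x):                      # x: (batch, seq_len) word ids
        h, _ = self.rnn(self.emb(x))
        return h[:, -1]                        # hidden state at the final position


class AttentionEncoder(nn.Module):
    """Non-recurrent encoder: one self-attention layer plus position embeddings."""
    def __init__(self, vocab_size, dim=32, max_len=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.pos = nn.Embedding(max_len, dim)  # learned positions stand in for order
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, x):
        h = self.emb(x) + self.pos(torch.arange(x.size(1), device=x.device))
        h, _ = self.attn(h, h, h)              # every position attends to every other
        return h[:, -1]                        # representation at the final position


class NumberPredictor(nn.Module):
    """Binary head: will the upcoming verb be singular (0) or plural (1)?"""
    def __init__(self, encoder, dim=32):
        super().__init__()
        self.encoder, self.out = encoder, nn.Linear(dim, 2)

    def forward(self, x):
        return self.out(self.encoder(x))


if __name__ == "__main__":
    # "the keys to the cabinet [are/is]": the plural subject "keys" governs the
    # verb, while the singular attractor "cabinet" sits closer to the prediction point.
    prefix = torch.tensor([[stoi[w] for w in ["the", "keys", "to", "the", "cabinet"]]])
    for enc in (LSTMEncoder(len(VOCAB)), AttentionEncoder(len(VOCAB))):
        logits = NumberPredictor(enc)(prefix)
        print(type(enc).__name__, logits.softmax(-1).tolist())

Trained end to end on a corpus of such prefixes, the two encoders differ mainly in how word order and long-distance dependencies are represented, which is the contrast between recurrent and non-recurrent architectures that the paper analyzes.
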
    84 Citations (selected citing papers)

    • Ordered Memory
    • Modeling Recurrence for Transformer
    • Assessing the Ability of Self-Attention Networks to Learn Word Order
    • On the Practical Ability of Recurrent Neural Networks to Recognize Hierarchical Languages

    References

    Showing 1–10 of 19 references

    • Recurrent Memory Networks for Language Modeling (Tran et al., 2016)
    • Tree-Structured Composition in Neural Networks without Tree-Structured Architectures (Bowman et al., 2015)
    • Deep RNNs Encode Soft Hierarchical Syntax (Blevins et al., 2018)
    • Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling (Inan et al., 2017)
    • Colorless green recurrent networks dream hierarchically (Gulordava et al., 2018)
    • Using the Output Embedding to Improve Language Models (Press and Wolf, 2017)
    • Can Neural Networks Understand Logical Entailment? (Evans et al., 2018)
    • Attention Is All You Need (Vaswani et al., 2017)
    • Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies (Linzen et al., 2016)
    • Convolutional Sequence to Sequence Learning (Gehring et al., 2017)