LSTMs Exploit Linguistic Attributes of Data

@inproceedings{Liu2018LSTMsEL,
  title={LSTMs Exploit Linguistic Attributes of Data},
  author={Nelson F. Liu and Omer Levy and Roy Schwartz and Chenhao Tan and Noah A. Smith},
  booktitle={Rep4NLP@ACL},
  year={2018}
}
While recurrent neural networks have found success in a variety of natural language processing applications, they are general models of sequential data. We investigate how the properties of natural language data affect an LSTM's ability to learn a nonlinguistic task: recalling elements from its input. We find that models trained on natural language data are able to recall tokens from much longer sequences than models trained on non-language sequential data. Furthermore, we show that the LSTM…
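The "recall" task the abstract describes can be sketched as toy data generation: the model reads a token sequence and must reproduce the token at a queried position. This is a hypothetical illustration only; the excerpt does not give the paper's exact task definition, and the function name and parameters below are invented for the sketch.

```python
import random

def make_recall_example(vocab_size, seq_len, rng):
    """One toy 'recall' instance: the input is a random token sequence
    plus a query position; the target is the token at that position.
    (Hypothetical setup -- the paper's precise formulation is not
    reproduced in this excerpt.)"""
    seq = [rng.randrange(vocab_size) for _ in range(seq_len)]
    query = rng.randrange(seq_len)
    return seq, query, seq[query]

rng = random.Random(0)
seq, query, target = make_recall_example(vocab_size=50, seq_len=20, rng=rng)
print(len(seq), target == seq[query])  # 20 True
```

Training such a generator on natural-language token sequences versus uniformly random ones is the kind of contrast the abstract's finding rests on.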

    Citations

    Analysis Methods in Neural Language Processing: A Survey
    Language Models Learn POS First

    References

    Building a Large Annotated Corpus of English: The Penn Treebank
    Does String-Based Neural MT Learn Source Syntax?
    Why Neural Translations are the Right Length
    Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies