Corpus ID: 167217728

Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)

@article{Toneva2019InterpretingAI,
  title={Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)},
  author={Mariya Toneva and Leila Wehbe},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11833}
}
  • Mariya Toneva, Leila Wehbe
  • Published 2019 in ArXiv
  • Computer Science, Biology

Neural network models for NLP are typically implemented without the explicit encoding of language rules, and yet they are able to break one performance record after another. This has generated a lot of research interest in interpreting the representations learned by these networks. We propose here a novel interpretation approach that relies on the only processing system we have that does understand language: the human brain. We use brain imaging recordings of subjects reading complex natural…
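The approach the abstract sketches is, at its core, an encoding-model analysis: fit a regularized linear map from a network layer's representations of the stimulus text to the recorded brain responses, then score how well it predicts held-out recordings. Below is a minimal sketch of that kind of analysis, not the authors' released code; the arrays `embeddings` (one representation per stimulus, n_samples x n_dims) and `brain` (the matching recordings, n_samples x n_voxels) are hypothetical placeholders, and cross-validated ridge regression is one common choice of model.

# A sketch of a cross-validated encoding model, assuming hypothetical
# inputs: `embeddings` (n_samples x n_dims) from one layer of an NLP
# model, and `brain` (n_samples x n_voxels) of matching fMRI responses.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def encoding_score(embeddings, brain, n_folds=4):
    """Mean per-voxel Pearson correlation between held-out recordings
    and their ridge-regression predictions from the embeddings."""
    scores = np.zeros(brain.shape[1])
    for train_idx, test_idx in KFold(n_splits=n_folds).split(embeddings):
        model = RidgeCV(alphas=np.logspace(-1, 4, 10))
        model.fit(embeddings[train_idx], brain[train_idx])
        pred = model.predict(embeddings[test_idx])
        true = brain[test_idx]
        # Pearson correlation, computed column-wise (one score per voxel)
        pc = pred - pred.mean(axis=0)
        tc = true - true.mean(axis=0)
        scores += (pc * tc).sum(axis=0) / (
            np.linalg.norm(pc, axis=0) * np.linalg.norm(tc, axis=0) + 1e-12)
    return scores / n_folds

Comparing such scores across layers and models (e.g., BERT or ELMo representations of the same text) is what lets brain recordings serve as an interpretation signal for the networks' learned representations.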

    Citations

    Brain2Word: Decoding Brain Activity for Language Generation
    Human brain activity for machine attention (1 citation)
    Design of BCCWJ-EEG: Balanced Corpus with Human Electroencephalography
    Attention in Psychology, Neuroscience, and Machine Learning (3 citations)

    References

    Showing 1-10 of 49 references:

    Incorporating Context into Language Encoding Models for fMRI (26 citations)
    Aligning context-based statistical models of language with brain activity during reading (46 citations)
    Evaluating word embeddings with fMRI and eye-tracking (23 citations)
    Deep contextualized word representations (3,809 citations)
    BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (8,614 citations)
    AllenNLP: A Deep Semantic Natural Language Processing Platform (418 citations)
    Universal Sentence Encoder (460 citations)
    Rational Recurrences (17 citations)
    Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context (563 citations)