What do you learn from context? Probing for sentence structure in contextualized word representations

@article{Tenney2019WhatDY,
  title={What do you learn from context? Probing for sentence structure in contextualized word representations},
  author={Ian Tenney and Patrick Xia and Berlin Chen and Alex Wang and Adam Poliak and R. Thomas McCoy and Najoung Kim and Benjamin Van Durme and Samuel R. Bowman and Dipanjan Das and Ellie Pavlick},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.06316}
}
Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode…
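
The core method here, edge probing, trains a small auxiliary classifier on top of frozen contextual vectors: given one or two labeled spans in a sentence, the probe predicts a property of those spans (a part-of-speech or constituent label, whether two mentions corefer, and so on) from the span representations alone, so its accuracy reflects what the encoder already captures. Below is a minimal PyTorch sketch of this idea; the mean pooling and layer sizes are illustrative stand-ins (the paper's probe uses a projection layer and self-attentive span pooling), and all names are hypothetical.

import torch
import torch.nn as nn

class EdgeProbe(nn.Module):
    """Lightweight span-pair classifier over frozen encoder states."""

    def __init__(self, dim: int, n_labels: int):
        super().__init__()
        # Two pooled span vectors are concatenated and classified.
        self.classifier = nn.Sequential(
            nn.Linear(2 * dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_labels),
        )

    def pool(self, hidden, span):
        # Mean-pool the frozen contextual vectors over [start, end).
        start, end = span
        return hidden[start:end].mean(dim=0)

    def forward(self, hidden, span1, span2):
        x = torch.cat([self.pool(hidden, span1), self.pool(hidden, span2)])
        return self.classifier(x)

# Usage: `hidden` would be the output of a frozen encoder (ELMo, BERT,
# ...); a random tensor stands in for it here.
hidden = torch.randn(12, 768)           # [seq_len, dim]
probe = EdgeProbe(dim=768, n_labels=2)  # e.g. coreferent vs. not
logits = probe(hidden, (0, 2), (7, 9))

Only the probe's parameters are trained; the encoder stays fixed throughout.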

Key Quantitative Results

  • In particular, BERT-large improves on ELMo by 7.4 F1 points on OntoNotes coreference, more than a 40% relative reduction in error and nearly as large as the improvement of the ELMo encoder over its lexical baseline (the error-reduction arithmetic is sketched below).
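
For context on the error-reduction figure in the bullet above: relative error reduction is measured against the headroom remaining below perfect F1, not against the raw score. A minimal sketch, where the ELMo baseline value is a made-up placeholder (the paper reports the actual per-task scores):

def error_reduction(f1_base: float, f1_new: float) -> float:
    # Fraction of the remaining error removed, with error = 100 - F1.
    return (f1_new - f1_base) / (100.0 - f1_base)

elmo_f1 = 84.0              # hypothetical baseline, NOT from the paper
bert_f1 = elmo_f1 + 7.4     # the +7.4 F1-point gain quoted above
print(error_reduction(elmo_f1, bert_f1))  # 0.4625, i.e. a >40% reduction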

Citations

Publications citing this paper (a selection of the 53 total):

BERT Rediscovers the Classical NLP Pipeline

  • ACL 2019

How Does BERT Answer Questions? A Layer-Wise Analysis of Transformer Representations

Betty van Aken, Benjamin Winter, Alexander Löser, Felix A. Gers
  • ArXiv
  • 2019

Designing and Interpreting Probes with Control Tasks

  • IJCNLP 2019

Do Attention Heads in BERT Track Syntactic Dependencies?

  • 2019

Visualizing and Understanding the Effectiveness of BERT

  • IJCNLP 2019

Citation Statistics

  • 5 highly influenced citations
  • An average of 26 citations per year from 2018 through 2019

References

Publications referenced by this paper (a selection of the 55 total):

Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books

  • 2015 IEEE International Conference on Computer Vision (ICCV)

Deep contextualized word representations

  • Peters et al., NAACL 2018

Attention Is All You Need

  • Vaswani et al., NIPS 2017