Linguistic Knowledge and Transferability of Contextual Representations

@inproceedings{Liu2019LinguisticKA,
  title={Linguistic Knowledge and Transferability of Contextual Representations},
  author={Nelson F. Liu and Matt Gardner and Yonatan Belinkov and Matthew E. Peters and Noah A. Smith},
  booktitle={NAACL-HLT},
  year={2019}
}
Contextual word representations derived from large-scale neural language models are successful across a diverse set of NLP tasks, suggesting that they encode useful and transferable features of language. [...] Key result: language model pretraining on more data, however, gives the best results.

Citations

Selected publications citing this paper (showing 7 of 63):

  • Designing and Interpreting Probes with Control Tasks (cites methods & background; highly influenced)
  • SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems (cites background & methods; highly influenced)
  • AdaBERT: Task-Adaptive BERT Compression with Differentiable Neural Architecture Search (cites background & methods; highly influenced)
  • Analyzing the Structure of Attention in a Transformer Language Model (cites methods, results & background)
  • EntEval: A Holistic Evaluation Benchmark for Entity Representations (cites background; highly influenced)
  • Evaluating Commonsense in Pre-trained Language Models (cites background & methods; highly influenced)
  • Investigating Meta-Learning Algorithms for Low-Resource Natural Language Understanding Tasks (cites background, methods & results; highly influenced)

Citation statistics

  • 11 highly influenced citations

References

Selected publications referenced by this paper (showing 2 of 71):

  • Deep contextualized word representations
  • Improving Language Understanding by Generative Pre-Training (highly influential)