Understanding Learning Dynamics Of Language Models with SVCCA

@inproceedings{Saphra2019UnderstandingLD,
  title={Understanding Learning Dynamics Of Language Models with SVCCA},
  author={Naomi Saphra and Adam Lopez},
  booktitle={NAACL-HLT},
  year={2019}
}
Research has shown that neural models implicitly encode linguistic features, but there has been no research showing *how* these encodings arise as the models are trained. We present the first study on the learning dynamics of neural language models, using a simple and flexible analysis method called Singular Vector Canonical Correlation Analysis (SVCCA), which enables us to compare learned representations across time and across models, without the need to evaluate directly on annotated…
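
The abstract's core tool, SVCCA, reduces each set of activations with an SVD and then measures canonical correlations between the reduced subspaces. The following is a minimal numpy sketch of that two-step pipeline, not the authors' implementation; the function name, the 0.99 variance-retention threshold, and the QR-based CCA computation are all illustrative choices.

import numpy as np

def svcca(X, Y, keep_variance=0.99):
    """Mean SVCCA similarity between two activation matrices.

    X, Y: (num_datapoints, num_neurons) activations, e.g. the same layer
    at two training checkpoints. Returns a value in [0, 1].
    """
    # Center each neuron's activations over the datapoints.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)

    def sv_reduce(A):
        # SV step: keep the top singular directions that explain
        # `keep_variance` of the total variance (drops noise dimensions).
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), keep_variance)) + 1
        return U[:, :k] * s[:k]

    Xr, Yr = sv_reduce(X), sv_reduce(Y)

    # CCA step: the singular values of Qx^T Qy, where Qx and Qy are
    # orthonormal bases of the two reduced subspaces, are the canonical
    # correlations between them.
    Qx, _ = np.linalg.qr(Xr)
    Qy, _ = np.linalg.qr(Yr)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False).mean()

Called on activations saved at successive checkpoints (hypothetical arrays such as acts_step_1k and acts_final), svcca would trace how quickly a layer converges toward its final representation, which is the across-time comparison the abstract describes.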
37 Citations (selected)

  • Linguistic Profiling of a Neural Language Model
  • LSTMs Compose (and Learn) Bottom-Up
  • Word Interdependence Exposes How LSTMs Compose Representations
  • Investigating Multilingual NMT Representations at Scale
  • Do Neural Language Models Show Preferences for Syntactic Formalisms?
  • Similarity Analysis of Contextual Word Representation Models
  • Emergent linguistic structure in artificial neural networks trained by self-supervision
