Asking without Telling: Exploring Latent Ontologies in Contextual Representations

@article{Michael2020AskingWT,
  title={Asking without Telling: Exploring Latent Ontologies in Contextual Representations},
  author={Julian Michael and Jan A. Botha and Ian Tenney},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.14513}
}
  • Julian Michael, Jan A. Botha, Ian Tenney
  • Published 2020
  • Computer Science
  • ArXiv
  • The success of pretrained contextual encoders, such as ELMo and BERT, has brought a great deal of interest in what these models learn: do they, without explicit supervision, learn to encode meaningful notions of linguistic structure? If so, how is this structure encoded? To investigate this, we introduce latent subclass learning (LSL): a modification to existing classifier-based probing methods that induces a latent categorization (or ontology) of the probe's inputs. Without access to fine…
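
The abstract describes LSL only at a high level. As a rough illustration of the general idea (a sketch under my own assumptions, not the authors' released implementation), a classifier-based probe can be extended so that each observed label is split into several latent subclasses: the probe scores all subclasses jointly, the training loss uses the label probability obtained by marginalizing over that label's subclasses, and the argmax subclass at inference time yields the induced latent category. The names and hyperparameters below (LatentSubclassProbe, subclasses_per_label) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentSubclassProbe(nn.Module):
    """Illustrative latent-subclass probe (not the paper's code).

    A linear layer scores num_labels * k latent subclasses over frozen
    contextual representations; label probabilities are the marginals
    over each label's k subclasses.
    """

    def __init__(self, hidden_dim: int, num_labels: int, subclasses_per_label: int = 4):
        super().__init__()
        self.num_labels = num_labels
        self.k = subclasses_per_label
        # One logit per (label, subclass) pair.
        self.scorer = nn.Linear(hidden_dim, num_labels * subclasses_per_label)

    def forward(self, reps: torch.Tensor):
        # reps: (batch, hidden_dim) frozen contextual representations.
        logits = self.scorer(reps)                              # (batch, L * k)
        subclass_probs = F.softmax(logits, dim=-1)              # joint over all subclasses
        subclass_probs = subclass_probs.reshape(-1, self.num_labels, self.k)
        label_probs = subclass_probs.sum(dim=-1)                # marginalize subclasses -> (batch, L)
        return label_probs, subclass_probs

    def latent_category(self, reps: torch.Tensor) -> torch.Tensor:
        # The induced "ontology": the highest-probability subclass per input.
        _, subclass_probs = self.forward(reps)
        flat = subclass_probs.reshape(subclass_probs.size(0), -1)
        return flat.argmax(dim=-1)                              # index in [0, L * k)


def lsl_loss(label_probs: torch.Tensor, gold_labels: torch.Tensor) -> torch.Tensor:
    # Supervision uses only the coarse gold labels; subclass assignments stay latent.
    return F.nll_loss(torch.log(label_probs + 1e-12), gold_labels)
```

In this sketch, training is identical in spirit to a standard probing classifier because only the marginal label probability enters the loss; the subclass argmax is what exposes the latent categorization that the paper analyzes.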
    2 Citations
      • Finding Universal Grammatical Relations in Multilingual BERT
      • How Far Does BERT Look At: Distance-based Clustering and Analysis of BERT's Attention
