Latent Relation Language Models

@inproceedings{Hayashi2020LatentRL,
  title={Latent Relation Language Models},
  author={H. Hayashi and Zecong Hu and Chenyan Xiong and Graham Neubig},
  booktitle={AAAI},
  year={2020}
}
  • H. Hayashi, Zecong Hu, Chenyan Xiong, Graham Neubig
  • Published in AAAI 2020
  • Computer Science
  • In this paper, we propose Latent Relation Language Models (LRLMs), a class of language models that parameterizes the joint distribution over the words in a document and the entities that occur therein via knowledge graph relations. [...] Key result: qualitative analysis further demonstrates the proposed model's ability to learn to predict appropriate relations in context.
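
The sketch below is a minimal, illustrative reading of the abstract's modeling idea, not the authors' implementation: each span of text is generated either from the word vocabulary or by copying an entity surface form reachable through a knowledge-graph relation, and the latent choice of source and segmentation is summed out with a forward-style dynamic program. The toy knowledge graph, probabilities, and function names here are all assumptions made up for illustration; the paper uses neural parameterizations.

from typing import Dict, List, Tuple

# Tiny "knowledge graph" around a topic entity: relation -> entity surface forms.
# (Hypothetical example data, not from the paper.)
KG: Dict[str, List[str]] = {
    "genre": ["role-playing video game"],
    "developer": ["Square"],
}

# Toy word-level unigram model (stand-in for a neural LM's next-token distribution).
WORD_PROB: Dict[str, float] = {
    "is": 0.2, "a": 0.2, "by": 0.1, "made": 0.1,
    "role-playing": 0.01, "video": 0.01, "game": 0.01, "Square": 0.005,
}

# Mixture weights for the latent "source" of each span:
# generate a word from the vocabulary, or copy an alias via a KG relation.
P_SOURCE = {"word": 0.7, "genre": 0.2, "developer": 0.1}


def span_candidates(tokens: List[str], start: int) -> List[Tuple[int, str, float]]:
    """Enumerate spans starting at `start` as (end, latent_source, probability)."""
    cands = []
    # Word source: emit exactly one token from the vocabulary.
    tok = tokens[start]
    cands.append((start + 1, "word", P_SOURCE["word"] * WORD_PROB.get(tok, 1e-6)))
    # Relation sources: copy a full entity alias if it matches the text here.
    for rel, aliases in KG.items():
        for alias in aliases:
            alias_toks = alias.split()
            if tokens[start:start + len(alias_toks)] == alias_toks:
                # Uniform choice among the relation's aliases.
                cands.append((start + len(alias_toks), rel,
                              P_SOURCE[rel] / len(aliases)))
    return cands


def marginal_prob(tokens: List[str]) -> float:
    """Forward-style dynamic program: sum over all latent segmentations and sources."""
    alpha = [0.0] * (len(tokens) + 1)
    alpha[0] = 1.0
    for start in range(len(tokens)):
        if alpha[start] == 0.0:
            continue
        for end, _source, p in span_candidates(tokens, start):
            alpha[end] += alpha[start] * p
    return alpha[len(tokens)]


sentence = "is a role-playing video game by Square".split()
print(marginal_prob(sentence))

Under this toy setup the phrase "role-playing video game" can be scored either word by word or as a single copied alias via the "genre" relation; marginalizing over both paths is what makes the relation choice latent.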

    Citations

    Papers citing this work include:

    • Pre-trained Models for Natural Language Processing: A Survey
    • How Can We Know What Language Models Know?
    • Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training
    • On Importance Sampling-Based Evaluation of Latent Language Models
    • HittER: Hierarchical Transformers for Knowledge Graph Embeddings
    • Knowledge-Aware Language Model Pretraining
