Corpus ID: 221507798

KILT: a Benchmark for Knowledge Intensive Language Tasks

@article{Petroni2020KILTAB,
  title={KILT: a Benchmark for Knowledge Intensive Language Tasks},
  author={F. Petroni and Aleksandra Piktus and A. Fan and Patrick Lewis and Majid Yazdani and Nicola De Cao and J. Thorne and Yacine Jernite and Vassilis Plachouras and Tim Rocktäschel and Sebastian Riedel},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.02252}
}
Abstract

Challenging problems such as open-domain question answering, fact checking, slot filling and entity linking require access to large, external knowledge sources. While some models do well on individual tasks, developing general models is difficult as each task might require computationally expensive indexing of custom knowledge sources, in addition to dedicated infrastructure. To catalyze research on models that condition on specific information in large textual resources, we present a benchmark…
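To make the idea of a shared, unified benchmark concrete, the sketch below shows one plausible shape for a KILT-style task instance: an input string paired with one or more gold outputs, each grounded by provenance pointing into a single common Wikipedia snapshot. The field names mirror the format released with the benchmark as best as can be inferred here; treat them, and all example values, as illustrative assumptions rather than the authoritative schema.

```python
# Minimal sketch of a unified "KILT-style" record (assumed field names):
# every task instance pairs an input with gold outputs, each optionally
# grounded by provenance spans in one shared Wikipedia snapshot.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Provenance:
    wikipedia_id: str                      # page identifier in the shared snapshot
    title: str                             # page title
    start_paragraph_id: Optional[int] = None
    end_paragraph_id: Optional[int] = None


@dataclass
class Output:
    answer: str                            # gold answer / target text
    provenance: List[Provenance] = field(default_factory=list)


@dataclass
class KiltRecord:
    id: str                                # unique instance id
    input: str                             # question, claim, subject-relation pair, ...
    output: List[Output] = field(default_factory=list)


# Hypothetical open-domain QA instance (values are illustrative, not taken from the dataset):
example = KiltRecord(
    id="nq-dev-0001",
    input="who wrote the music for the film jaws",
    output=[
        Output(
            answer="John Williams",
            provenance=[Provenance(wikipedia_id="12345", title="Jaws (film)")],
        )
    ],
)
```

Because question answering, fact checking, slot filling and entity linking instances can all share this shape, a single index over the common knowledge source can serve every task, which is the engineering bottleneck the abstract points to.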
Citations

  • Autoregressive Entity Retrieval
  • Neural Databases
  • Bootleg: Chasing the Tail with Self-Supervised Named Entity Disambiguation
