Zero-Shot Relation Extraction via Reading Comprehension

@article{Levy2017ZeroShotRE,
  title={Zero-Shot Relation Extraction via Reading Comprehension},
  author={Omer Levy and Minjoon Seo and Eunsol Choi and Luke Zettlemoyer},
  journal={ArXiv},
  year={2017},
  volume={abs/1706.04115}
}
We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. [...] Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task.
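The reduction described in the abstract can be sketched in a few lines: each relation is mapped to one or more question templates, the templates are instantiated with the subject entity, and a reading-comprehension model answers the questions over the passage. The templates below and the `toy_reader` stand-in are illustrative assumptions, not the paper's actual crowdsourced questions or trained model.

```python
# Hedged sketch: relation extraction as reading comprehension.
# QUESTION_TEMPLATES and toy_reader are hypothetical placeholders for the
# paper's crowdsourced question templates and trained QA model.

QUESTION_TEMPLATES = {
    "educated_at": [
        "Where did {subject} study?",
        "Which university did {subject} attend?",
    ],
}

def relation_to_questions(relation, subject):
    """Instantiate every question template for a relation with the subject."""
    return [t.format(subject=subject) for t in QUESTION_TEMPLATES[relation]]

def toy_reader(question, passage):
    """Stand-in for a trained QA model: return a span after a cue phrase,
    or None -- the 'no answer' case the paper uses for negative examples."""
    for cue in ("studied at ", "attended "):
        if cue in passage:
            start = passage.index(cue) + len(cue)
            end = passage.find(".", start)
            return passage[start:end if end != -1 else len(passage)]
    return None

def extract(relation, subject, passage):
    """Fill the relation slot with the first non-null answer, if any."""
    for question in relation_to_questions(relation, subject):
        answer = toy_reader(question, passage)
        if answer is not None:
            return answer
    return None
```

Because the mapping from relations to questions is the only relation-specific component, posing questions for an unseen relation at test time is what yields the zero-shot setting the abstract describes.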
