Entity, Relation, and Event Extraction with Contextualized Span Representations

@article{Wadden2019EntityRA,
  title={Entity, Relation, and Event Extraction with Contextualized Span Representations},
  author={David Wadden and Ulme Wennberg and Yi Luan and Hannaneh Hajishirzi},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.03546}
}
We examine the capabilities of a unified, multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction. […] We perform experiments comparing different techniques to construct span representations. Contextualized embeddings like BERT perform well at capturing relationships among entities in the same or adjacent sentences, while dynamic span graph updates model long-range cross-sentence relationships. For instance, propagating span…
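To make the abstract's notion of a "span representation" concrete, here is a minimal NumPy sketch of one common construction: concatenating the contextualized embeddings of a span's endpoint tokens with a learned span-width embedding. The function name and the toy dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def span_representation(token_embs, start, end, width_embs):
    """Build a span vector from endpoint embeddings plus a width embedding.

    token_embs: (seq_len, d) contextualized token embeddings (e.g. from BERT)
    start, end: inclusive token indices delimiting the span
    width_embs: (max_width, d_w) lookup table of learned width embeddings
    """
    width = end - start + 1
    return np.concatenate(
        [token_embs[start], token_embs[end], width_embs[width - 1]]
    )

# Toy example: 5 tokens with 4-dim embeddings, width embeddings of dim 2.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 4))
widths = rng.normal(size=(5, 2))
rep = span_representation(tokens, 1, 3, widths)
print(rep.shape)  # (10,) = 4 (start) + 4 (end) + 2 (width)
```

Enumerating all spans up to a maximum width and scoring these vectors is the usual next step; graph-propagation variants then refine each span vector using messages from related spans.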


A Frustratingly Easy Approach for Entity and Relation Extraction
TLDR
This work presents a simple pipelined approach for entity and relation extraction, and establishes the new state-of-the-art on standard benchmarks, obtaining a 1.7%-2.8% absolute improvement in relation F1 over previous joint models with the same pre-trained encoders.
A Frustratingly Easy Approach for Joint Entity and Relation Extraction
TLDR
This work describes a very simple approach for joint entity and relation extraction, and establishes the new state-of-the-art on standard benchmarks (ACE04, ACE05, and SciERC).
Span-based Joint Entity and Relation Extraction with Attention-based Span-specific and Contextual Semantic Representations
TLDR
This work introduces a span-based joint extraction framework with attention-based semantic representations that outperforms previous systems and achieves state-of-the-art results on ACE2005, CoNLL2004 and ADE.
An End-to-end Model for Entity-level Relation Extraction using Multi-instance Learning
TLDR
A multi-task approach is followed that builds upon coreference resolution and gathers relevant signals via multi-instance learning with multi-level representations combining global entity and local mention information to achieve state-of-the-art relation extraction results on the DocRED dataset.
A Trigger-Sense Memory Flow Framework for Joint Entity and Relation Extraction
TLDR
A Trigger-Sense Memory Flow Framework (TriMF) is presented, which builds a memory module to remember category representations learned in entity recognition and relation extraction tasks and designs a multi-level memory flow attention mechanism to enhance the bi-directional interaction between entity recognition and relation extraction.
UniRE: A Unified Label Space for Entity Relation Extraction
TLDR
A unified classifier is applied to predict each cell's label, which unifies the learning of the two sub-tasks and achieves accuracy competitive with the best extractor while being faster.
Modeling Task Interactions in Document-Level Joint Entity and Relation Extraction
TLDR
This work addresses the two-way interaction between COREF and RE that has not been the focus by previous work, and proposes to introduce explicit interaction namely Graph Compatibility (GC) that is specifically designed to leverage task characteristics.
Injecting Knowledge Base Information into End-to-End Joint Entity and Relation Extraction and Coreference Resolution
TLDR
This work studies how to inject information from a knowledge base (KB) in a joint information extraction (IE) model, based on unsupervised entity linking, and reveals the advantage of using the attention-based approach.
Document-Level Event Role Filler Extraction using Multi-Granularity Contextualized Encoding
TLDR
This work proposes a novel multi-granularity reader to dynamically aggregate information captured by neural representations learned at different levels of granularity, and evaluates the models on the MUC-4 event extraction dataset, showing that the best system performs substantially better than prior work.
ENPAR: Enhancing Entity and Entity Pair Representations for Joint Entity Relation Extraction
TLDR
This work devises four novel objectives, i.e., masked entity typing, masked entity prediction, adversarial context discrimination, and permutation prediction, to pre-train an entity encoder and an entity pair encoder and improve joint extraction performance.

References

Showing 1-10 of 28 references
A general framework for information extraction using dynamic span graphs
TLDR
This framework significantly outperforms state-of-the-art on multiple information extraction tasks across multiple datasets reflecting different domains and is good at detecting nested span entities, with significant F1 score improvement on the ACE dataset.
Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction
TLDR
The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links and supports construction of a scientific knowledge graph, which is used to analyze information in scientific literature.
Joint Extraction of Events and Entities within a Document Context
TLDR
This paper proposes a novel approach that models the dependencies among variables of events, entities, and their relations, and performs joint inference of these variables across a document to enable access to document-level contextual information and facilitate context-aware predictions.
Incremental Joint Extraction of Entity Mentions and Relations
TLDR
An incremental joint framework to simultaneously extract entity mentions and relations using a structured perceptron with efficient beam search is presented; it significantly outperforms a strong pipelined baseline that attains better performance than the best-reported end-to-end system.
Cross-Sentence N-ary Relation Extraction with Graph LSTMs
TLDR
A general relation extraction framework based on graph long short-term memory networks (graph LSTMs) that can be easily extended to cross-sentence n-ary relation extraction is explored, demonstrating its effectiveness with both conventional supervised learning and distant supervision.
Jointly Extracting Event Triggers and Arguments by Dependency-Bridge RNN and Tensor-Based Argument Interaction
TLDR
A novel dependency-bridge recurrent neural network (dbRNN) is proposed, showing that simultaneously applying tree structure and sequence structure in an RNN brings much better performance than using a sequential RNN alone.
A Walk-based Model on Entity Graphs for Relation Extraction
TLDR
A novel graph-based neural network model for relation extraction is proposed that treats multiple pairs in a sentence simultaneously, considers interactions among them, and achieves performance comparable to state-of-the-art systems on the ACE 2005 dataset without using any external tools.
One for All: Neural Joint Modeling of Entities and Events
TLDR
This work proposes a novel model to jointly perform predictions for entity mentions, event triggers and arguments based on the shared hidden representations from deep learning, leading to the state-of-the-art performance for event extraction.
Graph Convolution over Pruned Dependency Trees Improves Relation Extraction
TLDR
An extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel is proposed, and a novel pruning strategy is applied to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold.
CoNLL-2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes
TLDR
The OntoNotes annotation (coreference and other layers) is described, along with the parameters of the shared task including the format, pre-processing information, and evaluation criteria; the results achieved by the participating systems are presented and discussed.