A Neural Edge-Editing Approach for Document-Level Relation Graph Extraction

@inproceedings{Makino2021ANE,
  title={A Neural Edge-Editing Approach for Document-Level Relation Graph Extraction},
  author={Kohei Makino and Makoto Miwa and Yutaka Sasaki},
  booktitle={Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021},
  year={2021}
}
In this paper, we propose a novel edge-editing approach to extract relation information from a document. In this approach, we treat the relations in a document as a relation graph among entities. The relation graph is iteratively constructed by editing the edges of an initial graph, which might be a graph extracted by another system or an empty graph. Edges are edited by classifying them in a close-first manner using the document and the temporally-constructed graph information; each edge is…
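
To make the procedure concrete, here is a minimal Python sketch of one plausible reading of the close-first editing loop. The helper classify_edge is a hypothetical stand-in for the paper's edge classifier, which combines document context from a pretrained transformer with graph context from a graph convolutional network.

from itertools import combinations

def edit_relation_graph(entities, initial_edges, classify_edge):
    """Iteratively (re)label edges, nearest entity pairs first.

    entities      : list of (entity_id, char_offset) tuples
    initial_edges : dict mapping (head, tail) -> relation label or None;
                    an empty dict corresponds to an empty initial graph
    classify_edge : callable(pair, graph_so_far) -> relation label
    """
    # Close-first ordering: pairs whose mentions are nearer in the
    # document are edited before more distant pairs.
    pairs = sorted(combinations(entities, 2),
                   key=lambda p: abs(p[0][1] - p[1][1]))
    graph = dict(initial_edges)
    for (head, _), (tail, _) in pairs:
        # Each decision conditions on the partially edited graph,
        # so earlier (closer) edits inform later (more distant) ones.
        graph[(head, tail)] = classify_edge((head, tail), graph)
    return graph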


References

Showing 1-10 of 29 references

Reasoning with Latent Structure Refinement for Document-Level Relation Extraction

TLDR
This work proposes a novel model that enables relational reasoning across sentences by automatically inducing a latent document-level graph, and develops a refinement strategy that lets the model incrementally aggregate relevant information for multi-hop reasoning.
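
As a rough illustration of such an induce-and-refine loop (not the paper's exact parameterization, which induces the latent graph with structured attention), one can alternate between building a soft adjacency matrix from the current node states and propagating over it:

import torch
import torch.nn as nn

class LatentGraphRefiner(nn.Module):
    """Illustrative induce-and-refine loop: a soft adjacency matrix is
    induced from node similarities, then node states are refined by
    propagating over it, for a fixed number of steps."""

    def __init__(self, dim, steps=2):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.w = nn.Linear(dim, dim)
        self.steps = steps

    def forward(self, nodes):  # nodes: (n_nodes, dim)
        h = nodes
        for _ in range(self.steps):
            # Induce a soft document-level graph from current states.
            adj = torch.softmax(self.q(h) @ self.k(h).T / h.size(-1) ** 0.5,
                                dim=-1)
            # Refine node states by aggregating over the induced graph.
            h = torch.relu(self.w(adj @ h)) + h
        return h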

Flow Graph Corpus from Recipe Texts

TLDR
This paper presents an attempt at annotating procedural texts with a flow graph as a representation of understanding, focusing on cooking recipes, and details the annotation framework and some statistics on the corpus.

Graph Convolution over Pruned Dependency Trees Improves Relation Extraction

TLDR
An extension of graph convolutional networks tailored for relation extraction is proposed, which pools information over arbitrary dependency structures efficiently in parallel; a novel pruning strategy is applied to the input trees, keeping only the words immediately around the shortest path between the two entities among which a relation might hold.
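
The pruning step can be sketched independently of the network. The illustrative function below keeps the tokens within distance k of the shortest dependency path between the two entity tokens, exploiting the fact that a dependency structure is a tree (in a tree, dist(e1, t) + dist(e2, t) equals the path length plus twice t's distance from the path):

from collections import deque

def path_centric_prune(dep_arcs, n_tokens, e1, e2, k=1):
    """Keep tokens within distance k of the shortest dependency path
    between entity tokens e1 and e2."""
    adj = [[] for _ in range(n_tokens)]
    for head, dep in dep_arcs:  # treat dependency arcs as undirected
        adj[head].append(dep)
        adj[dep].append(head)

    def bfs_dists(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    d1, d2 = bfs_dists(e1), bfs_dists(e2)
    path_len = d1[e2]
    # t lies within distance k of the path iff d1[t] + d2[t] <= path_len + 2k.
    return [t for t in range(n_tokens)
            if d1.get(t, n_tokens) + d2.get(t, n_tokens) <= path_len + 2 * k]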

HIN: Hierarchical Inference Network for Document-Level Relation Extraction

TLDR
A Hierarchical Inference Network (HIN) is proposed to make full use of the abundant information at the entity, sentence, and document levels, effectively aggregating inference information from these three granularities.
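
A heavily simplified sketch of three-granularity aggregation might look as follows; the real model's entity- and sentence-level inference layers are richer, and all names here are illustrative:

import torch
import torch.nn as nn

class ThreeGranularityClassifier(nn.Module):
    """Combine an entity-pair vector, attention-pooled sentence-level
    inference vectors, and a document vector before classifying the
    relation (a simplification of hierarchical inference)."""

    def __init__(self, dim, n_relations):
        super().__init__()
        self.sent_attn = nn.Linear(dim, 1)
        self.clf = nn.Linear(3 * dim, n_relations)

    def forward(self, pair_vec, sent_vecs, doc_vec):
        # pair_vec: (dim,)   sent_vecs: (n_sents, dim)   doc_vec: (dim,)
        weights = torch.softmax(self.sent_attn(sent_vecs).squeeze(-1), dim=0)
        sent_agg = weights @ sent_vecs  # attention-pooled sentence evidence
        return self.clf(torch.cat([pair_vec, sent_agg, doc_vec]))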

Distant Supervision for Relation Extraction beyond the Sentence Boundary

TLDR
This paper proposes the first approach for applying distant supervision to cross-sentence relation extraction with a graph representation that can incorporate both standard dependencies and discourse relations, thus providing a unifying way to model relations within and across sentences.
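
The following is a minimal sketch of such a unified document graph: intra-sentence dependency arcs plus simple adjacent-sentence links standing in for discourse relations (edge types are simplified, and the root-finding heuristic is illustrative):

def build_document_graph(sentences):
    """sentences: list of (tokens, dep_arcs), where dep_arcs are
    (head_idx, dep_idx) pairs local to each sentence. Returns typed
    edges over document-level token indices."""
    edges, offset, roots = [], 0, []
    for tokens, dep_arcs in sentences:
        dependents = {d for _, d in dep_arcs}
        # A token with no incoming arc serves as the sentence root.
        roots.append(offset + next(i for i in range(len(tokens))
                                   if i not in dependents))
        edges += [("dep", offset + h, offset + d) for h, d in dep_arcs]
        offset += len(tokens)
    # Cross-sentence links let relations span sentence boundaries.
    edges += [("next_sent", a, b) for a, b in zip(roots, roots[1:])]
    return edges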

Simultaneously Self-Attending to All Mentions for Full-Abstract Biological Relation Extraction

TLDR
A model that simultaneously predicts relationships between all mention pairs in a document is proposed, along with a new dataset that is an order of magnitude larger than existing human-annotated biological information extraction datasets and more accurate than distantly supervised alternatives.
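
Scoring all mention pairs at once can be sketched with head and tail projections feeding a bilinear relation scorer; this is only the general shape of the computation, not the published model's exact layers:

import torch
import torch.nn as nn

class AllPairScorer(nn.Module):
    """Produce a relation score for every (head, tail) mention pair in
    a document in one shot."""

    def __init__(self, dim, n_relations):
        super().__init__()
        self.head = nn.Linear(dim, dim)
        self.tail = nn.Linear(dim, dim)
        self.rel = nn.Bilinear(dim, dim, n_relations)

    def forward(self, mentions):  # mentions: (n_mentions, dim)
        m, d = mentions.shape
        h = self.head(mentions).unsqueeze(1).expand(m, m, d).reshape(-1, d)
        t = self.tail(mentions).unsqueeze(0).expand(m, m, d).reshape(-1, d)
        return self.rel(h, t).view(m, m, -1)  # (m, m, n_relations)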

Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling

TLDR
This paper proposes two novel techniques, adaptive thresholding and localized context pooling, to solve the multi-label and multi-entity problems in document-level relation extraction.
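
The adaptive-thresholding idea is compact at inference time: a dedicated threshold class is learned per entity pair, and a relation is predicted only when its logit exceeds the threshold class's logit. A minimal sketch of the prediction rule (the accompanying training loss is omitted):

import torch

def adaptive_threshold_predict(logits, th_index=0):
    """logits: (n_pairs, 1 + n_relations), with the learned threshold
    class at column th_index. Returns a boolean prediction mask."""
    th = logits[:, th_index].unsqueeze(-1)
    preds = logits > th           # relation beats its pair's threshold
    preds[:, th_index] = False    # never emit the threshold class itself
    return preds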

End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures

TLDR
A novel end-to-end neural model to extract entities and the relations between them is proposed; it compares favorably (in F1 score) to the state-of-the-art CNN-based model on nominal relation classification (SemEval-2010 Task 8).
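
A greatly simplified sketch of the shared-encoder design follows; the original model additionally stacks a tree-structured LSTM over dependency structures on top of the sequence LSTM, which is omitted here:

import torch
import torch.nn as nn

class JointExtractor(nn.Module):
    """A shared BiLSTM feeds both an entity tagger and a relation
    classifier over entity-pair states (heavily simplified)."""

    def __init__(self, vocab_size, dim, n_tags, n_relations):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.tagger = nn.Linear(2 * dim, n_tags)
        self.rel = nn.Linear(4 * dim, n_relations)

    def forward(self, token_ids, pair):  # token_ids: (1, seq); pair: (i, j)
        h, _ = self.lstm(self.emb(token_ids))     # (1, seq, 2 * dim)
        tag_logits = self.tagger(h)               # entity detection
        i, j = pair
        rel_logits = self.rel(torch.cat([h[0, i], h[0, j]]))
        return tag_logits, rel_logits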

Modeling Relational Data with Graph Convolutional Networks

TLDR
It is shown that factorization models for link prediction such as DistMult can be significantly improved through the use of an R-GCN encoder model to accumulate evidence over multiple inference steps in the graph, demonstrating a large improvement of 29.8% on FB15k-237 over a decoder-only baseline.
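
One R-GCN layer and the DistMult scorer can be sketched as follows; the basis decomposition the paper uses to control parameter growth over many relation types is omitted:

import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    """Relation-specific transforms summed over each node's typed
    neighbourhoods, plus a self-loop transform."""

    def __init__(self, dim, n_relations):
        super().__init__()
        self.w_rel = nn.Parameter(torch.randn(n_relations, dim, dim) * 0.01)
        self.w_self = nn.Linear(dim, dim)

    def forward(self, h, adj):  # h: (n, dim); adj: (n_relations, n, n)
        # adj[r] is the (row-normalised) adjacency matrix of relation r.
        msg = torch.einsum("rij,jd,rde->ie", adj, h, self.w_rel)
        return torch.relu(msg + self.w_self(h))

def distmult_score(head, rel_diag, tail):
    """DistMult link score <h, diag(r), t> = sum(h * r * t)."""
    return (head * rel_diag * tail).sum(-1)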

Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers

TLDR
This work focuses on extracting multiple relations by encoding the paragraph only once, building the solution on pre-trained self-attentive (Transformer) models; the approach is shown to be not only scalable but also state-of-the-art on the standard ACE 2005 benchmark.
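
The one-pass idea can be sketched as a single encoding followed by classification of every entity pair from pooled span representations; this is illustrative only, as the paper additionally makes the transformer's self-attention entity-aware:

import torch
import torch.nn as nn

class OnePassRelationExtractor(nn.Module):
    """Encode a paragraph once, then classify all entity pairs from
    mean-pooled span representations."""

    def __init__(self, encoder, dim, n_relations):
        super().__init__()
        self.encoder = encoder  # any module mapping ids -> (1, seq, dim)
        self.clf = nn.Linear(2 * dim, n_relations)

    def forward(self, token_ids, spans):  # spans: list of (start, end)
        h = self.encoder(token_ids)       # single encoding pass
        ents = [h[0, s:e].mean(0) for s, e in spans]
        pairs = [(i, j) for i in range(len(ents))
                 for j in range(len(ents)) if i != j]
        logits = torch.stack([self.clf(torch.cat([ents[i], ents[j]]))
                              for i, j in pairs])
        return pairs, logits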