Corpus ID: 219176743

Benchmarking BioRelEx for Entity Tagging and Relation Extraction

Abhinav Bhatt and Kaustubh D. Dhole

Extracting relationships and interactions between biological entities remains an extremely challenging problem, yet it has not received as much attention as extraction in more generic domains. Beyond the scarcity of annotated data, the lack of benchmarking is still a major reason for slow progress. To fill this gap, we compare multiple existing entity and relation extraction models on a recently introduced public dataset, BioRelEx, of sentences annotated with biological…
1 Citation


Joint Biomedical Entity and Relation Extraction with Knowledge-Enhanced Collective Inference

KECI takes a collective approach to link mention spans to entities by integrating global relational information into local representations using graph convolutional networks and fuses the initial span graph and the knowledge graph into a more refined graph using an attention mechanism.

Joint Extraction of Entities and Relations Based on a Novel Tagging Scheme

A novel tagging scheme is proposed that can convert the joint extraction task to a tagging problem, and different end-to-end models are studied to extract entities and their relations directly, without identifying entities and relations separately.
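To make the idea concrete, here is a minimal sketch of a joint tagging scheme of this kind: each token tag fuses a BIES position, a relation type, and an argument role, so a single sequence-labeling pass yields entities and relations together. The tag format, the example sentence, and the `BIND` relation label are illustrative, not the paper's exact specification.

```python
# Illustrative joint tagging: tag = position-relation-role, "O" for non-entity tokens.
def encode_joint_tags(tokens, triples):
    """triples: list of (head_span, relation, tail_span); spans are (start, end) token indices."""
    tags = ["O"] * len(tokens)
    for (h_start, h_end), rel, (t_start, t_end) in triples:
        for (start, end), role in (((h_start, h_end), "1"), ((t_start, t_end), "2")):
            for i in range(start, end):
                if end - start == 1:
                    pos = "S"          # single-token entity
                elif i == start:
                    pos = "B"          # begin
                elif i == end - 1:
                    pos = "E"          # end
                else:
                    pos = "I"          # inside
                tags[i] = f"{pos}-{rel}-{role}"
    return tags

tokens = ["RAD51", "binds", "BRCA2"]
print(encode_joint_tags(tokens, [((0, 1), "BIND", (2, 3))]))
# → ['S-BIND-1', 'O', 'S-BIND-2']
```

Decoding then pairs role-1 and role-2 spans that share a relation type, which is where such schemes run into trouble with overlapping triples.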

Entity, Relation, and Event Extraction with Contextualized Span Representations

This work examines the capabilities of a unified multi-task framework, DyGIE++, for three information extraction tasks: named entity recognition, relation extraction, and event extraction, and achieves state-of-the-art results across all three.

Jointly Identifying Entities and Extracting Relations in Encyclopedia Text via A Graphical Model Approach

This work proposes a joint discriminative probabilistic model with arbitrary graphical structure that optimizes all relevant subtasks simultaneously, together with a new inference method, collective iterative classification (CIC), to find the most likely assignments for both entities and relations.

BioRelEx 1.0: Biological Relation Extraction Benchmark

This paper introduces BioRelEx, a new dataset of fully annotated sentences from biomedical literature that capture binding interactions between proteins and/or biomolecules and defines a precise and transparent evaluation process, tools for error analysis and significance tests.
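A precise evaluation of this kind typically scores predicted entities strictly, counting a prediction only on an exact (span, type) match against gold. The following is a minimal sketch of such an entity-level metric, assuming set-of-tuples inputs; it is not the benchmark's official scorer.

```python
# Strict entity-level precision/recall/F1: a predicted entity counts
# only if its span boundaries AND type exactly match a gold entity.
def entity_f1(gold, pred):
    """gold, pred: sets of (start, end, entity_type) tuples."""
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0, 0.0, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

gold = {(0, 1, "protein"), (2, 3, "protein")}
pred = {(0, 1, "protein"), (2, 3, "chemical")}   # one type error
print(entity_f1(gold, pred))  # → (0.5, 0.5, 0.5)
```

Relation-level scoring works the same way over (head, relation, tail) tuples, which is why entity errors cascade directly into relation scores.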

Incremental Joint Extraction of Entity Mentions and Relations

An incremental joint framework that simultaneously extracts entity mentions and relations using a structured perceptron with efficient beam search is presented; it significantly outperforms a strong pipelined baseline and attains better performance than the best-reported end-to-end system.

Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction

The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links and supports construction of a scientific knowledge graph, which is used to analyze information in scientific literature.

GraphRel: Modeling Text as Relational Graphs for Joint Entity and Relation Extraction

GraphRel, an end-to-end relation extraction model that uses graph convolutional networks (GCNs) to jointly learn named entities and relations, outperforms previous work by 3.2% and 5.8% and achieves a new state of the art for relation extraction.
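The core building block here is a graph convolution over a word graph. Below is a minimal NumPy sketch of one such layer, assuming symmetric normalization with self-loops and a ReLU nonlinearity; the shapes, the chain-graph adjacency, and these particular design choices are illustrative, not GraphRel's exact configuration.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution layer.
    H: (n, d) node features; A: (n, n) adjacency; W: (d, d_out) weights."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)         # aggregate, project, ReLU

rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))                        # 3 tokens, 4-dim features
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)             # chain: token i ~ token i+1
W = rng.normal(size=(4, 4))
print(gcn_layer(H, A, W).shape)  # → (3, 4)
```

Stacking a few such layers lets each token representation absorb information from multi-hop neighbors before entity and relation classification.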

A general framework for information extraction using dynamic span graphs

This framework significantly outperforms the state of the art on multiple information extraction tasks across datasets reflecting different domains and is good at detecting nested span entities, with a significant F1 improvement on the ACE dataset.

Modeling Joint Entity and Relation Extraction with Table Representation

The experimental results demonstrate that a joint learning approach significantly outperforms a pipeline approach by incorporating global features and by selecting appropriate learning methods and search orders.
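The table representation can be pictured as an n×n grid over the sentence's tokens: diagonal cells hold entity tags and off-diagonal cells hold the relation (if any) between the corresponding token pair. This is a minimal illustrative sketch of that data structure, with made-up labels, not the paper's exact formulation.

```python
# n×n table over tokens: diagonal = entity tags, off-diagonal = relation labels.
def build_table(tokens, entities, relations):
    """entities: {token_index: tag}; relations: {(i, j): label}."""
    n = len(tokens)
    table = [["-"] * n for _ in range(n)]          # "-" marks empty cells
    for i, tag in entities.items():
        table[i][i] = tag                          # diagonal: entity tags
    for (i, j), label in relations.items():
        table[i][j] = label                        # off-diagonal: relations
    return table

tokens = ["RAD51", "binds", "BRCA2"]
table = build_table(tokens, {0: "PROT", 2: "PROT"}, {(0, 2): "BIND"})
print(table[0][0], table[0][2], table[1][1])  # → PROT BIND -
```

Joint decoding then amounts to filling this table cell by cell, which is what lets global features couple the entity and relation decisions.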

SciBERT: A Pretrained Language Model for Scientific Text

SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks and demonstrates statistically significant improvements over BERT.