Revisiting Few-shot Relation Classification: Evaluation Data and Classification Schemes

@article{sabo2021revisiting,
  title={Revisiting Few-shot Relation Classification: Evaluation Data and Classification Schemes},
  author={O. Sabo and Yanai Elazar and Yoav Goldberg and Ido Dagan},
  journal={Transactions of the Association for Computational Linguistics},
  year={2021}
}
  • O. Sabo, Yanai Elazar, Ido Dagan
  • Published 17 April 2021
  • Computer Science
  • Transactions of the Association for Computational Linguistics
We explore few-shot learning (FSL) for relation classification (RC). Focusing on the realistic scenario of FSL, in which a test instance might not belong to any of the target categories (none-of-the-above, NOTA), we first revisit the recent popular dataset structure for FSL, pointing out its unrealistic data distribution. To remedy this, we propose a novel methodology for deriving more realistic few-shot test data from available datasets for supervised RC, and apply it to the TACRED dataset…
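The NOTA setting described above changes how evaluation episodes are built: a query may be drawn from outside the N target relations. Below is a minimal sketch of such episode sampling; the function name, the `data` layout (relation label to list of instances), and the `nota_rate` parameter are illustrative assumptions, not the paper's actual sampling procedure.

```python
import random

def sample_episode(data, n_way=5, k_shot=1, nota_rate=0.5, rng=None):
    """Sample one N-way K-shot episode where the query may be NOTA.

    `data` maps relation label -> list of instances. With probability
    `nota_rate`, the query is drawn from a relation outside the N
    target classes, so its gold label is "NOTA".
    """
    rng = rng or random.Random()
    targets = rng.sample(sorted(data), n_way)
    support = {rel: rng.sample(data[rel], k_shot) for rel in targets}
    if rng.random() < nota_rate:
        outside = [r for r in data if r not in targets]
        gold = "NOTA"
        query = rng.choice(data[rng.choice(outside)])
    else:
        gold = rng.choice(targets)
        query = rng.choice(data[gold])
    return support, query, gold
```

A realistic benchmark would additionally control the NOTA rate per test set rather than per episode, which is part of what makes the data distribution realistic.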

Few-Shot Document-Level Relation Extraction

This work adapts the state-of-the-art sentence-level method MNAV to the document level, develops it further for improved domain adaptation, and finds few-shot document-level relation extraction (FSDLRE) to be a challenging setting with interesting new characteristics, such as the ability to sample NOTA instances from the support set.

From Examples to Rules: Neural Guided Rule Synthesis for Information Extraction

This work uses a transformer-based architecture to guide an enumerative search, showing that this reduces the number of search steps needed before a rule is found and achieves state-of-the-art performance on the 1-shot scenario of a few-shot relation classification task.

What Do You Mean by Relation Extraction? A Survey on Datasets and Study on Scientific Relation Classification

A comprehensive survey of RE datasets is provided, and the task definition and its adoption by the community are revisited, finding that cross-dataset and cross-domain setups are particularly lacking.

Few-Shot Document-Level Event Argument Extraction

FewDocAE, a Few-Shot Document-Level Event Argument Extraction benchmark based on DocEE, the largest document-level event extraction dataset, is presented; the task is closely related to practical use under low-resource regimes.

Do Transformer Networks Improve the Discovery of Rules from Text?

A novel take on the DIRT algorithm, in which the distributional hypothesis is implemented using the contextualized embeddings provided by BERT, a transformer-network-based language model; the resulting rules outperform the original algorithm in the question-answering-based evaluation proposed by Lin and Pantel (2001).

A Dataset for N-ary Relation Extraction of Drug Combinations

An expert-annotated dataset for extracting information about the efficacy of drug combinations from the scientific literature is presented; it poses a unique NLP challenge as the first relation extraction dataset consisting of variable-length relations.

AIFB-WebScience at SemEval-2022 Task 12: Relation Extraction First - Using Relation Extraction to Identify Entities

An end-to-end joint entity and relation extraction approach based on transformer-based language models that incorporates information from relation extraction into entity extraction, which means that the system can be trained even on data sets where only a subset of all valid entity spans is annotated.

Commonsense Knowledge-Aware Prompt Tuning for Few-Shot NOTA Relation Classification

The commonsense knowledge-aware prompt tuning (CKPT) method is proposed: a simple and effective prompt-learning method built by constructing relation-oriented templates, which can further stimulate the rich knowledge distributed in PLMs to better serve downstream tasks.
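Prompt-based relation classification of this kind reformulates the task as cloze filling: the input is wrapped in a relation-oriented template containing a mask slot for the PLM to complete. The sketch below shows the template-construction step only; the wording of the template and the function name are illustrative assumptions, not the templates used by CKPT.

```python
def build_cloze_prompt(sentence, head, tail, mask_token="[MASK]"):
    """Wrap a sentence in a relation-oriented cloze template.

    The masked language model is then asked to fill `mask_token`
    with a relation verbalizer (e.g. "founder", "birthplace").
    Template wording here is an illustrative choice.
    """
    return f"{sentence} The relation between {head} and {tail} is {mask_token}."
```

In a full system, each candidate relation is mapped to one or more label words, and the PLM's probability of each label word at the mask position scores that relation.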

PILED: An Identify-and-Localize Framework for Few-Shot Event Detection

This study employs cloze prompts to elicit event-related knowledge from pretrained language models and uses event definitions and keywords to pinpoint the trigger word, formulating event detection as an identify-then-localize procedure that enables the model to quickly adapt to event detection tasks for new types.



BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.

FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation

Empirical results show that even the most competitive few-shot learning models struggle on this task, especially as compared with humans, and indicate that few-shot relation classification remains an open problem and still requires further research.

Optimization as a Model for Few-Shot Learning

Position-aware Attention and Supervised Data Improve Slot Filling

An effective new model is proposed, which combines an LSTM sequence model with a form of entity position-aware attention that is better suited to relation extraction; the work also builds TACRED, a large supervised relation extraction dataset obtained via crowdsourcing and targeted toward TAC KBP relations.

Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation

The core idea is to use feature-wise transformation layers, which augment the image features with affine transforms to simulate various feature distributions under different domains during training, and to apply a learning-to-learn approach to search for the hyper-parameters of the feature-wise transformation layers.
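The feature-wise transformation named above is a simple mechanism: per channel, sample an affine scale and shift and apply them to the features, so training sees a spread of simulated feature distributions. A minimal sketch, assuming Gaussian-sampled parameters around the identity transform (the sampling distribution and `theta_*` names are illustrative):

```python
import random

def feature_wise_transform(features, theta_gamma=0.3, theta_beta=0.5, rng=None):
    """Augment a feature vector with a sampled channel-wise affine transform.

    For each channel, sample gamma ~ N(1, theta_gamma) and
    beta ~ N(0, theta_beta), then return gamma * x + beta.
    Larger theta_* values simulate feature distributions farther
    from the source domain; theta_* = 0 is the identity.
    """
    rng = rng or random.Random()
    return [rng.gauss(1.0, theta_gamma) * x + rng.gauss(0.0, theta_beta)
            for x in features]
```

The learning-to-learn step in the paper then tunes the `theta_*` hyper-parameters themselves so the augmentation best improves generalization to unseen domains.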

FewRel 2.0: Towards More Challenging Few-Shot Relation Classification

It is found that state-of-the-art few-shot relation classification models struggle on the two newly introduced challenges, domain adaptation and NOTA detection, and that the commonly used techniques for both still cannot handle them well.

Out-of-Domain Detection for Low-Resource Text Classification Tasks

Evaluations on real-world datasets show that the proposed solution outperforms state-of-the-art methods on the zero-shot OOD detection task, while maintaining competitive performance on the ID classification task.

Few-shot Text Classification with Distributional Signatures

This paper demonstrates that a model built on distributional signatures consistently outperforms prototypical networks learned on lexical knowledge, by a significant margin, in both few-shot text classification and relation classification across six benchmark datasets.
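The prototypical-network baseline referenced here classifies a query by its nearest class prototype, where each prototype is the mean of that class's support embeddings. A self-contained sketch with Euclidean distance (the embedding vectors here are toy stand-ins; real systems use learned encoders):

```python
import math

def prototype_classify(support, query):
    """Classify a query embedding by nearest class prototype.

    `support` maps label -> list of embedding vectors; each prototype
    is the element-wise mean of that class's support embeddings, and
    the query is assigned to the closest prototype by Euclidean distance.
    """
    def mean(vectors):
        return [sum(col) / len(vectors) for col in zip(*vectors)]

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    prototypes = {label: mean(vecs) for label, vecs in support.items()}
    return min(prototypes, key=lambda label: dist(prototypes[label], query))
```

The distributional-signatures model keeps this episodic setup but learns attention over word-importance statistics instead of relying on raw lexical representations.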

Model-Agnostic Meta-Learning for Relation Classification with Limited Supervision

A model-agnostic meta-learning protocol is proposed for training relation classifiers to achieve enhanced predictive performance in limited supervision settings and is demonstrated to improve the predictive performance of two state-of-the-art supervised relation classification models.
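MAML-style protocols like the one summarized above optimize a meta-parameter so that one inner gradient step adapts it well to each task. The toy sketch below uses scalar tasks with loss L_t(w) = (w - t)^2, where both the inner step and its derivative have closed forms; it illustrates the inner/outer loop structure only, not the paper's relation-classification models.

```python
def maml_scalar(tasks, w=0.0, inner_lr=0.1, outer_lr=0.05, steps=200):
    """Model-agnostic meta-learning on toy 1-D tasks.

    Each task t defines the loss L_t(w) = (w - t)^2. The inner loop takes
    one gradient step per task; the outer loop updates the meta-parameter
    w on the post-adaptation losses, including the second-order term
    (the inner update is differentiable in closed form here).
    """
    for _ in range(steps):
        meta_grad = 0.0
        for t in tasks:
            # Inner update: one gradient step on L_t, grad = 2 * (w - t).
            w_adapted = w - inner_lr * 2.0 * (w - t)
            # Outer gradient: d/dw (w_adapted - t)^2,
            # with dw_adapted/dw = 1 - 2 * inner_lr.
            meta_grad += 2.0 * (w_adapted - t) * (1.0 - 2.0 * inner_lr)
        w -= outer_lr * meta_grad / len(tasks)
    return w
```

For symmetric tasks such as {-1, 1}, the meta-optimum sits between them, so a single inner step can move toward either task's optimum.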

DocRED: A Large-Scale Document-Level Relation Extraction Dataset

Empirical results show that DocRED is challenging for existing RE methods, which indicates that document-level RE remains an open problem and requires further efforts.