Corpus ID: 229923303

Few-Shot Named Entity Recognition: A Comprehensive Study

@article{Huang2020FewShotNE,
  title={Few-Shot Named Entity Recognition: A Comprehensive Study},
  author={Jiaxin Huang and Chunyuan Li and Krishan Subudhi and Damien Jose and Shobana Balakrishnan and Weizhu Chen and Baolin Peng and Jianfeng Gao and Jiawei Han},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.14978}
}
This paper presents a comprehensive study of how to efficiently build named entity recognition (NER) systems when only a small amount of in-domain labeled data is available. Based upon recent Transformer-based self-supervised pre-trained language models (PLMs), we investigate three orthogonal schemes to improve the model's generalization ability in few-shot settings: (1) meta-learning to construct prototypes for different entity types, (2) supervised pre-training on noisy web data to extract entity-related… 
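As an illustration of scheme (1), the minimal PyTorch sketch below builds one prototype per entity type by averaging support-token embeddings and labels each query token with its nearest prototype. The helper names, the encoder-produced inputs, and the tensor shapes are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of scheme (1): prototype construction for few-shot NER.
# Assumes token embeddings come from any Transformer encoder (not shown);
# shapes and helper names are illustrative, not the paper's implementation.
import torch

def build_prototypes(support_emb: torch.Tensor,
                     support_labels: torch.Tensor,
                     num_types: int) -> torch.Tensor:
    """Average support-token embeddings per entity type.

    support_emb:    (n_tokens, d) embeddings of the support-set tokens
    support_labels: (n_tokens,)   integer type ids in [0, num_types)
    returns:        (num_types, d) one prototype per type
    """
    prototypes = torch.zeros(num_types, support_emb.size(1))
    for t in range(num_types):
        mask = support_labels == t
        if mask.any():
            prototypes[t] = support_emb[mask].mean(dim=0)
    return prototypes

def classify_tokens(query_emb: torch.Tensor,
                    prototypes: torch.Tensor) -> torch.Tensor:
    """Label each query token with its nearest prototype
    (negative squared Euclidean distance as similarity)."""
    dists = torch.cdist(query_emb, prototypes) ** 2  # (n_query, num_types)
    return (-dists).argmax(dim=1)
```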

Citations

A Prototype-Based Few-Shot Named Entity Recognition

This work proposes ClusLoss and ProEuroLoss, two losses that aim to enhance the model's ability to aggregate semantic information spatially, helping it better distinguish entity types.

CONTaiNER: Few-Shot Named Entity Recognition via Contrastive Learning

CONTaiNER optimizes a generalized objective of differentiating between token categories based on their Gaussian-distributed embeddings, which effectively alleviates overfitting issues originating from the training domains.
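A minimal sketch of such a Gaussian-embedding contrastive objective, under stated assumptions: a projection head (not shown) maps each token to diagonal-Gaussian parameters mu and log_var, the distance is a symmetrized KL divergence, and same-type tokens are pulled together. This is an illustrative reading of the summary above, not the authors' code.

```python
# Hedged sketch of a Gaussian-embedding contrastive objective in the spirit
# of CONTaiNER; assumes the batch contains at least two tokens of some type.
import torch

def gaussian_contrastive_loss(mu: torch.Tensor,
                              log_var: torch.Tensor,
                              labels: torch.Tensor) -> torch.Tensor:
    """mu, log_var: (n, d) per-token Gaussian parameters; labels: (n,)."""
    var = log_var.exp()
    mu_p, var_p = mu.unsqueeze(1), var.unsqueeze(1)  # (n, 1, d)
    mu_q, var_q = mu.unsqueeze(0), var.unsqueeze(0)  # (1, n, d)
    # KL( N(mu_p, var_p) || N(mu_q, var_q) ) for diagonal Gaussians
    kl = 0.5 * (var_p / var_q + (mu_q - mu_p) ** 2 / var_q
                - 1.0 + (var_q / var_p).log()).sum(-1)   # (n, n)
    dist = 0.5 * (kl + kl.transpose(0, 1))               # symmetrize
    sim = -dist                                          # similarity score
    eye = torch.eye(len(labels), dtype=torch.bool)
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    # softmax over all other tokens; raise probability of same-type tokens
    logp = sim.masked_fill(eye, float('-inf')).log_softmax(dim=1)
    return -logp[same].mean()
```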

Few-NERD: A Few-shot Named Entity Recognition Dataset

Few-NERD is presented, a large-scale human-annotated few-shot NER dataset with a hierarchy of 8 coarse-grained and 66 fine-grained entity types; it is believed to be the first few-shot NER dataset and the largest human-crafted NER dataset.

Few-Shot Class-Incremental Learning for Named Entity Recognition

This work reconstructs synthetic training data for the old classes using the trained NER model to augment the training of new classes, and develops a framework that distills from the existing model with both the synthetic data and real data from the current training set.

Few-shot Named Entity Recognition with Self-describing Networks

Self-describing Networks (SDNet) are designed, a Seq2Seq generation model which can universally describe mentions using concepts, automatically map novel entity types to concepts, and adaptively recognize entities on-demand.

Learning from Language Description: Low-shot Named Entity Recognition via Decomposed Framework

A novel NER framework is proposed, namely SpanNER, which learns from natural language supervision and enables the identification of never-seen entity classes without using in-domain labeled data.

TOKEN is a MASK: Few-shot Named Entity Recognition with Pre-trained Language Models

A novel few-shot approach to domain adaptation for Named Entity Recognition (NER) is presented: a two-step method consisting of a variable base module and a template module that leverages the knowledge captured in pre-trained language models through simple descriptive patterns.

Continual Few-Shot Named Entity Recognition via Data-Free Distillation

This work reconstructs synthetic training data for previously seen classes from the NER model and develops a framework that distills from the existing model with both synthetic data and real data from the current training set, alleviating catastrophic forgetting in continual few-shot learning.

Few-Shot Fine-Grained Entity Typing with Automatic Label Interpretation and Instance Generation

A novel framework for few-shot fine-grained entity typing is proposed: an entity-type label interpretation module automatically learns to relate type labels to the vocabulary by jointly leveraging few-shot instances and the label hierarchy, while a type-based contextualized instance generator produces new instances from the given ones to enlarge the training set for better generalization.

Few-shot Named Entity Recognition with Entity-level Prototypical Network Enhanced by Dispersedly Distributed Prototypes

EP-Net builds entity-level prototypes and treats text spans as candidate entities, so it no longer requires label dependencies, and it consistently outperforms previous strong models in overall performance.

References

SHOWING 1-10 OF 68 REFERENCES

Frustratingly Simple Few-Shot Named Entity Recognition with Structured Nearest Neighbor Learning

This work shows that combining structured decoding with nearest-neighbor learning achieves state-of-the-art performance on standard few-shot NER evaluation tasks, improving F1 scores by 6% to 16% absolute over prior meta-learning based systems.
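A hedged sketch of that idea: per-tag emission scores come from the nearest support token of each tag, then are combined with a tag-transition matrix via Viterbi decoding. How the transitions are estimated is abstracted away here; names and shapes are assumptions.

```python
# Hedged sketch of structured nearest-neighbor decoding for few-shot NER.
import torch

def nn_emissions(query_emb, support_emb, support_labels, num_tags):
    """Emission score of tag t for a query token = negative squared distance
    to the nearest support token carrying tag t."""
    d2 = torch.cdist(query_emb, support_emb) ** 2        # (n_q, n_s)
    emissions = torch.full((query_emb.size(0), num_tags), float('-inf'))
    for t in range(num_tags):
        mask = support_labels == t
        if mask.any():
            emissions[:, t] = -d2[:, mask].min(dim=1).values
    return emissions

def viterbi(emissions, transitions):
    """Standard Viterbi decoding; transitions[i, j] is the log-score of
    moving from tag i to tag j (its estimation is not shown here)."""
    n, _ = emissions.shape
    score, back = emissions[0].clone(), []
    for i in range(1, n):
        total = score.unsqueeze(1) + transitions + emissions[i].unsqueeze(0)
        score, idx = total.max(dim=0)
        back.append(idx)
    tags = [int(score.argmax())]
    for idx in reversed(back):
        tags.append(int(idx[tags[-1]]))
    return tags[::-1]
```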

Few-shot classification in named entity recognition task

This work tackles the Named Entity Recognition (NER) task using a Prototypical Network, a metric-learning technique that learns intermediate representations of words which cluster well into named entity classes.

Named Entity Recognition with Bidirectional LSTM-CNNs

A novel neural network architecture is presented that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering.
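A minimal PyTorch sketch of the hybrid architecture this reference describes: a character-level CNN produces sub-word features that are concatenated with word embeddings and fed to a bidirectional LSTM tagger. All dimensions and the class name are illustrative, not the paper's exact configuration.

```python
# Hedged sketch of a character-CNN + word-BiLSTM sequence tagger.
import torch
import torch.nn as nn

class CharCnnBiLstmTagger(nn.Module):
    def __init__(self, n_words, n_chars, n_tags,
                 word_dim=100, char_dim=25, char_filters=30, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # character CNN captures sub-word cues (prefixes, capitalization, ...)
        self.char_cnn = nn.Conv1d(char_dim, char_filters,
                                  kernel_size=3, padding=1)
        self.lstm = nn.LSTM(word_dim + char_filters, hidden,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, words, chars):
        # words: (B, T) word ids; chars: (B, T, C) character ids per word
        B, T, C = chars.shape
        ce = self.char_emb(chars).view(B * T, C, -1).transpose(1, 2)
        cf = torch.relu(self.char_cnn(ce)).max(dim=2).values  # (B*T, filters)
        x = torch.cat([self.word_emb(words), cf.view(B, T, -1)], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # (B, T, n_tags) per-token tag scores
```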

Diverse Few-Shot Text Classification with Multiple Metrics

This work proposes an adaptive metric learning approach that automatically determines the best weighted combination from a set of metrics obtained from meta-training tasks for a newly seen few-shot task.

Induction Networks for Few-Shot Text Classification

This paper proposes a novel Induction Network that learns a generalized class-wise representation of each class in the support set by leveraging the dynamic routing algorithm in meta-learning, and finds that the model is able to induce and generalize better.
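A minimal sketch of the dynamic-routing induction step this summary refers to, under stated assumptions: encoded support vectors of one class are transformed by a shared matrix W, then iteratively routed into a single class vector with a squash non-linearity. W and the iteration count are illustrative.

```python
# Hedged sketch of dynamic routing from support samples to a class vector.
import torch

def squash(v: torch.Tensor) -> torch.Tensor:
    """Capsule-style non-linearity that preserves direction and
    bounds the norm in [0, 1)."""
    norm2 = (v ** 2).sum(-1, keepdim=True)
    return (norm2 / (1.0 + norm2)) * v / (norm2.sqrt() + 1e-9)

def induce_class_vector(support: torch.Tensor, W: torch.Tensor,
                        iters: int = 3) -> torch.Tensor:
    """support: (k, d) encoded support samples of one class;
    W: (d, d) shared transform; returns the (d,) induced class vector."""
    e = squash(support @ W)            # transformed sample vectors, (k, d)
    b = torch.zeros(support.size(0))   # routing logits
    for _ in range(iters):
        coup = b.softmax(dim=0)                          # coupling coefficients
        c = squash((coup.unsqueeze(1) * e).sum(dim=0))   # candidate class vector
        b = b + e @ c                                    # routing by agreement
    return c
```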

FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation

Empirical results show that even the most competitive few-shot learning models struggle on this task, especially compared with humans, and indicate that few-shot relation classification remains an open problem requiring further research.

Self-Supervised Meta-Learning for Few-Shot Natural Language Classification Tasks

This paper proposes a self-supervised approach to generate a large, rich, meta-learning task distribution from unlabeled text, and shows that this meta-training leads to better few-shot generalization than language-model pre-training followed by finetuning.

Uncertainty-aware Self-training for Text Classification with Few Labels

This work proposes an approach to improve self-training by incorporating uncertainty estimates from the underlying neural network, leveraging recent advances in Bayesian deep learning, and proposes acquisition functions that select instances from the unlabeled pool using Monte Carlo (MC) Dropout.
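A hedged sketch of the MC Dropout part of this idea: dropout stays active at inference time, several stochastic forward passes estimate the predictive variance, and the most stable pseudo-labels are selected. `model` is a generic classifier stand-in; the paper's actual acquisition functions may differ.

```python
# Hedged sketch of MC-Dropout-based pseudo-label selection for self-training.
import torch

@torch.no_grad()
def mc_dropout_select(model, inputs, passes: int = 10, k: int = 32):
    """Return indices and pseudo-labels of the k lowest-variance predictions.

    model:  any classifier mapping `inputs` to (N, n_classes) logits
    inputs: a batch of N unlabeled examples
    """
    model.train()  # keep dropout layers stochastic at inference time
    probs = torch.stack([model(inputs).softmax(dim=-1)
                         for _ in range(passes)])       # (passes, N, C)
    mean = probs.mean(dim=0)                            # predictive mean
    pseudo = mean.argmax(dim=1)                         # pseudo-labels
    # variance of the predicted class across stochastic passes
    var = probs.var(dim=0).gather(1, pseudo.unsqueeze(1)).squeeze(1)
    chosen = var.argsort()[:k]                          # most stable first
    return chosen, pseudo[chosen]
```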

Few-shot Slot Tagging with Collapsed Dependency Transfer and Label-enhanced Task-adaptive Projection Network

A Label-enhanced Task-Adaptive Projection Network (L-TapNet) is proposed, built on the state-of-the-art few-shot classification model TapNet and leveraging label name semantics in representing labels.

Self-training Improves Pre-training for Natural Language Understanding

SentAugment is introduced: a data augmentation method that computes task-specific query embeddings from labeled data to retrieve sentences from a bank of billions of unlabeled sentences crawled from the web (a minimal retrieval sketch follows this reference list).
...
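A minimal sketch of that retrieval step, under stated assumptions: the task query is the mean of the labeled-sentence embeddings and bank sentences are ranked by cosine similarity. The sentence encoder that produces the embeddings is assumed and not shown.

```python
# Hedged sketch of SentAugment-style sentence retrieval.
import torch

def retrieve(labeled_emb: torch.Tensor, bank_emb: torch.Tensor,
             top_k: int = 1000) -> torch.Tensor:
    """labeled_emb: (n, d) embeddings of the labeled training sentences;
    bank_emb: (M, d) embeddings of the unlabeled sentence bank (M >= top_k).
    Returns indices of the top_k bank sentences by cosine similarity."""
    query = labeled_emb.mean(dim=0)        # task-specific query embedding
    query = query / query.norm()
    bank = bank_emb / bank_emb.norm(dim=1, keepdim=True)
    return (bank @ query).topk(top_k).indices
```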