Corpus ID: 207863530

TENER: Adapting Transformer Encoder for Named Entity Recognition

@article{Yan2019TENERAT,
  title={TENER: Adapting Transformer Encoder for Named Entity Recognition},
  author={Hang Yan and Bocao Deng and Xiaonan Li and Xipeng Qiu},
  journal={ArXiv},
  year={2019},
  volume={abs/1911.04474}
}
Bidirectional long short-term memory networks (BiLSTM) have been widely used as encoders in models for the named entity recognition (NER) task. Recently, the Transformer has been broadly adopted in various Natural Language Processing (NLP) tasks owing to its parallelism and strong performance. Nevertheless, the performance of the Transformer in NER is not as good as in other NLP tasks. In this paper, we propose TENER, a NER architecture adopting an adapted Transformer encoder to…
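Below is a minimal sketch of the general setup the abstract describes, a Transformer encoder feeding a per-token tag classifier for NER. It uses PyTorch's stock nn.TransformerEncoder rather than the adapted attention TENER proposes, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a Transformer-encoder token tagger for NER.
# NOTE: this uses PyTorch's stock nn.TransformerEncoder, not the adapted
# attention that TENER proposes; it only illustrates the general
# "Transformer encoder + per-token classifier" layout.
import torch
import torch.nn as nn


class TransformerTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(d_model, num_tags)  # per-token tag scores

    def forward(self, token_ids, pad_mask):
        # token_ids: (batch, seq_len); pad_mask: True where padding
        x = self.embed(token_ids)
        x = self.encoder(x, src_key_padding_mask=pad_mask)
        return self.classifier(x)  # (batch, seq_len, num_tags)


if __name__ == "__main__":
    model = TransformerTagger(vocab_size=1000, num_tags=9)
    ids = torch.randint(1, 1000, (2, 12))
    mask = torch.zeros(2, 12, dtype=torch.bool)
    print(model(ids, mask).shape)  # torch.Size([2, 12, 9])
```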

BiLSTM-IN-TRANS for Chinese NER

The experiments show that the BiLSTM-IN-TRANS model works better on the NER task than using either an LSTM or a Transformer alone.

A Residual BiLSTM Model for Named Entity Recognition

A new residual BiLSTM model, used together with a conditional random field (CRF) layer, is proposed for NER tasks and effectively improves the performance of both Chinese and English NER without introducing any external knowledge.
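As a rough illustration of the residual idea, the sketch below adds each BiLSTM layer's output back to its input before the next layer; the CRF decoding layer used in the cited model would sit on top of the final per-token scores and is omitted here, and all dimensions are assumptions.

```python
# Minimal sketch of a residual BiLSTM encoder: each BiLSTM layer's output is
# added to its input (a residual connection) before the next layer.
# A CRF layer, as in the cited model, would decode the final scores; omitted here.
import torch
import torch.nn as nn


class ResidualBiLSTMEncoder(nn.Module):
    def __init__(self, d_model=128, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.LSTM(d_model, d_model // 2, batch_first=True, bidirectional=True)
            for _ in range(num_layers)
        )

    def forward(self, x):          # x: (batch, seq, d_model)
        for lstm in self.layers:
            out, _ = lstm(x)
            x = x + out            # residual connection around each BiLSTM layer
        return x


if __name__ == "__main__":
    enc = ResidualBiLSTMEncoder()
    print(enc(torch.randn(2, 10, 128)).shape)  # torch.Size([2, 10, 128])
```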

Improving Low-Resource Named Entity Recognition using Joint Sentence and Token Labeling

A joint model that supports multi-class classification is presented, and a simple variant of self-attention that allows the model to learn scaling factors is introduced.
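The summary only states that the attention variant "allows the model to learn scaling factors"; one plausible reading, sketched below, replaces the fixed 1/sqrt(d_k) of standard scaled dot-product attention with a learnable per-head scale. The cited paper's exact formulation may differ.

```python
# Hypothetical sketch of self-attention with a learnable scaling factor,
# replacing the fixed 1/sqrt(d_k) of standard scaled dot-product attention.
import math
import torch
import torch.nn as nn


class LearnedScaleSelfAttention(nn.Module):
    def __init__(self, d_model, nhead):
        super().__init__()
        assert d_model % nhead == 0
        self.nhead, self.d_k = nhead, d_model // nhead
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One learnable scale per head, initialised to the usual 1/sqrt(d_k).
        self.scale = nn.Parameter(torch.full((nhead, 1, 1), 1.0 / math.sqrt(self.d_k)))

    def forward(self, x):
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape each to (batch, heads, time, d_k)
        q, k, v = (z.view(b, t, self.nhead, self.d_k).transpose(1, 2) for z in (q, k, v))
        attn = torch.softmax(self.scale * (q @ k.transpose(-2, -1)), dim=-1)
        return self.out((attn @ v).transpose(1, 2).reshape(b, t, -1))


if __name__ == "__main__":
    layer = LearnedScaleSelfAttention(d_model=64, nhead=4)
    print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```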

Hero-Gang Neural Model For Named Entity Recognition

A novel Hero-Gang Neural structure (HGN), consisting of a Hero module and a Gang module, is proposed to leverage both global and local information, which is critical for NER.

A Survey on Deep Learning for Named Entity Recognition

A comprehensive review of existing deep learning techniques for NER is presented, covering tagged NER corpora and off-the-shelf NER tools, and existing works are systematically categorized according to a taxonomy along three axes.

Enhancing Entity Boundary Detection for Better Chinese Named Entity Recognition

This paper proposes a boundary-enhanced approach for better Chinese NER, enhancing boundary information from two perspectives, including taking entity head-tail prediction as an auxiliary task.
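A hedged sketch of the auxiliary-task idea: alongside the main tag classifier, two extra heads predict whether each character is an entity head (start) or tail (end), and their losses are added to the main loss. The encoder choice, loss weighting, and label layout are assumptions, not the cited paper's exact design.

```python
# Sketch of boundary enhancement via auxiliary head/tail prediction:
# two binary heads predict entity start/end per character, and their
# cross-entropy losses are added to the main tagging loss.
import torch
import torch.nn as nn


class BoundaryEnhancedTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        self.encoder = nn.LSTM(d_model, d_model // 2, batch_first=True, bidirectional=True)
        self.tag_head = nn.Linear(d_model, num_tags)   # main NER tags
        self.start_head = nn.Linear(d_model, 2)        # is this char an entity head?
        self.end_head = nn.Linear(d_model, 2)          # is this char an entity tail?

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))
        return self.tag_head(h), self.start_head(h), self.end_head(h)


def joint_loss(model, token_ids, tags, starts, ends, aux_weight=0.5):
    # tags/starts/ends: (batch, seq_len) integer labels
    tag_logits, s_logits, e_logits = model(token_ids)
    ce = nn.CrossEntropyLoss()
    main = ce(tag_logits.flatten(0, 1), tags.flatten())
    aux = ce(s_logits.flatten(0, 1), starts.flatten()) + ce(e_logits.flatten(0, 1), ends.flatten())
    return main + aux_weight * aux
```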

KARL-Trans-NER: Knowledge Aware Representation Learning for Named Entity Recognition using Transformers

Experimental results show that the augmentation done using KARL can considerably boost the performance of the NER system and achieve significantly better results than existing approaches in the literature on three publicly available NER datasets, namely CoNLL 2003, CoNLL++, and OntoNotes v5.

Improving Named Entity Recognition with Attentive Ensemble of Syntactic Information

This paper improves NER by leveraging different types of syntactic information through an attentive ensemble, implemented with the proposed key-value memory networks, syntax attention, and gate mechanism for encoding, weighting, and aggregating such syntactic information, respectively.
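One plausible reading of that sentence is sketched below: a key-value memory stores a token's associated syntactic items, attention over the memory produces a syntax vector, and a sigmoid gate mixes it with the token's contextual representation. The embedding scheme and gating form are assumptions rather than the cited paper's exact components.

```python
# Hypothetical sketch of gated aggregation of syntactic information via a
# key-value memory: per-token syntactic items (e.g. POS tags or dependency
# heads) are embedded as keys/values, attended from the token's hidden state,
# and blended in through a learned gate.
import torch
import torch.nn as nn


class GatedSyntaxMemory(nn.Module):
    def __init__(self, d_model, syntax_vocab):
        super().__init__()
        self.key_embed = nn.Embedding(syntax_vocab, d_model, padding_idx=0)
        self.val_embed = nn.Embedding(syntax_vocab, d_model, padding_idx=0)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, h, syntax_ids):
        # h: (batch, seq, d); syntax_ids: (batch, seq, mem) syntactic items per token
        keys, vals = self.key_embed(syntax_ids), self.val_embed(syntax_ids)
        # attend from each token's hidden state over its memory slots
        scores = torch.einsum("bsd,bsmd->bsm", h, keys)
        syntax = torch.einsum("bsm,bsmd->bsd", torch.softmax(scores, dim=-1), vals)
        g = torch.sigmoid(self.gate(torch.cat([h, syntax], dim=-1)))
        return g * h + (1 - g) * syntax  # gated aggregation


if __name__ == "__main__":
    layer = GatedSyntaxMemory(d_model=64, syntax_vocab=50)
    h = torch.randn(2, 10, 64)
    ids = torch.randint(0, 50, (2, 10, 4))
    print(layer(h, ids).shape)  # torch.Size([2, 10, 64])
```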

Semantic Label Enhanced Named Entity Recognition with Incompletely Annotated Data

A novel semantic label enhanced named entity recognition model is proposed to tackle NER problems with incompletely annotated data.
...

References

Showing 1-10 of 44 references

CAN-NER: Convolutional Attention Network for Chinese Named Entity Recognition

A Convolutional Attention Network (CAN) for Chinese NER is investigated, which consists of a character-based convolutional neural network with a local-attention layer and a gated recurrent unit with a global self-attention layer to capture information from adjacent characters and sentence contexts.

Semi-supervised sequence tagging with bidirectional language models

A general semi-supervised approach is presented for adding pretrained context embeddings from bidirectional language models to NLP systems, applied to sequence labeling tasks and surpassing previous systems that use other forms of transfer or joint learning with additional labeled data and task-specific gazetteers.

CNN-Based Chinese NER with Lexicon Rethinking

This work proposes a faster approach to Chinese NER: a convolutional neural network (CNN)-based method that incorporates lexicons using a rethinking mechanism and can model all the characters and potential words that match the sentence in parallel.

A Lexicon-Based Graph Neural Network for Chinese NER

A lexicon-based graph neural network with global semantics is introduced, in which lexicon knowledge is used to connect characters to capture the local composition, while a global relay node can capture global sentence semantics and long-range dependency.

Empower Sequence Labeling with Task-Aware Neural Language Model

This study develops a neural framework to extract knowledge from raw texts and empower the sequence labeling task, and leverages character-level knowledge from self-contained order information of training sequences.

Robust Lexical Features for Improved Neural Network Named-Entity Recognition

This work proposes to embed words and entity types into a low-dimensional vector space trained from annotated data produced by distant supervision over Wikipedia, and to compute a feature vector representing each word, establishing a new state-of-the-art F1 score.

GRN: Gated Relation Network to Enhance Convolutional Neural Network for Named Entity Recognition

A simple but effective CNN-based network for NER, the gated relation network (GRN), is proposed; it is more capable than common CNNs at capturing long-term context and can achieve state-of-the-art performance with or without external knowledge.

Five-Stroke Based CNN-BiRNN-CRF Network for Chinese Named Entity Recognition

A five-stroke based CNN-BiRNN-CRF network is proposed for Chinese named entity recognition and the convolutional neural network is used to extract n-gram features, without involving hand-crafted features or domain-specific knowledge.

Named Entity Recognition with Bidirectional LSTM-CNNs

A novel neural network architecture is presented that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering.
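A minimal sketch of that hybrid architecture, assuming the usual layout: a character-level CNN is max-pooled into a per-word character feature, concatenated with the word embedding, and fed to a BiLSTM with a per-token classifier. Dimensions and kernel sizes are illustrative, not the cited paper's settings.

```python
# Sketch of a hybrid word/character NER encoder: char-CNN features per word,
# concatenated with word embeddings, encoded by a BiLSTM, classified per token.
import torch
import torch.nn as nn


class CharCNNWordBiLSTM(nn.Module):
    def __init__(self, word_vocab, char_vocab, num_tags,
                 word_dim=100, char_dim=30, char_filters=30):
        super().__init__()
        self.word_embed = nn.Embedding(word_vocab, word_dim, padding_idx=0)
        self.char_embed = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(word_dim + char_filters, 100,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(200, num_tags)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq); char_ids: (batch, seq, max_word_len)
        b, s, w = char_ids.shape
        chars = self.char_embed(char_ids).view(b * s, w, -1).transpose(1, 2)
        char_feats = self.char_cnn(chars).max(dim=-1).values.view(b, s, -1)  # max-pool over chars
        x = torch.cat([self.word_embed(word_ids), char_feats], dim=-1)
        h, _ = self.bilstm(x)
        return self.classifier(h)  # (batch, seq, num_tags)


if __name__ == "__main__":
    model = CharCNNWordBiLSTM(word_vocab=5000, char_vocab=100, num_tags=9)
    w = torch.randint(1, 5000, (2, 8))
    c = torch.randint(1, 100, (2, 8, 12))
    print(model(w, c).shape)  # torch.Size([2, 8, 9])
```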

Named Entity Recognition with Bilingual Constraints

A method is proposed that formulates the problem of exploring signals on unannotated bilingual text as a simple Integer Linear Program, which encourages entity tags to agree via bilingual constraints and can improve strong baselines for both Chinese and English.