De-biasing Distantly Supervised Named Entity Recognition via Causal Intervention

@article{Zhang2021DebiasingDS,
  title={De-biasing Distantly Supervised Named Entity Recognition via Causal Intervention},
  author={Wenkai Zhang and Hongyu Lin and Xianpei Han and Le Sun},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.09233}
}
Distant supervision tackles the data bottleneck in NER by automatically generating training instances via dictionary matching. Unfortunately, DS-NER learning is severely dictionary-biased: it suffers from spurious correlations that undermine both the effectiveness and the robustness of the learned models. In this paper, we fundamentally explain the dictionary bias via a Structural Causal Model (SCM), categorize the bias into intra-dictionary and inter-dictionary biases, and …
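
For concreteness, the dictionary-matching step that generates the distant labels can be sketched as follows. This is a minimal, illustrative sketch: the helper name distant_label and the toy dictionary are assumptions for exposition, not the paper's implementation. It also shows how an incomplete dictionary yields the false-negative "O" labels behind the dictionary bias.

# Minimal sketch of distant supervision for NER via dictionary matching
# (illustrative only; not the paper's code).
def distant_label(tokens, dictionary):
    """Greedily match dictionary entries against the token sequence and
    emit BIO labels; unmatched tokens default to 'O'."""
    labels = ["O"] * len(tokens)
    for start in range(len(tokens)):
        for end in range(len(tokens), start, -1):
            span = " ".join(tokens[start:end])
            if span in dictionary and all(l == "O" for l in labels[start:end]):
                entity_type = dictionary[span]
                labels[start] = "B-" + entity_type
                for i in range(start + 1, end):
                    labels[i] = "I-" + entity_type
                break
    return labels

# An incomplete dictionary: "Marie Curie" is missing, so that mention is
# wrongly labeled 'O' -- the kind of dictionary-induced false negative that
# introduces spurious correlations into DS-NER training data.
toy_dict = {"Albert Einstein": "PER", "Princeton": "LOC"}
tokens = "Albert Einstein met Marie Curie in Princeton".split()
print(distant_label(tokens, toy_dict))  # ['B-PER', 'I-PER', 'O', 'O', 'O', 'O', 'B-LOC']
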
2 Citations

Fine-grained Entity Typing via Label Reasoning
  • Qing Liu, Hongyu Lin, Xinyan Xiao, Xianpei Han, Le Sun, Hua Wu
  • Computer Science
  • 2021
Conventional entity typing approaches are based on independent classification paradigms, which makes it difficult for them to recognize interdependent, long-tailed, and fine-grained entity types. In this …
Honey or Poison? Solving the Trigger Curse in Few-shot Event Detection via Causal Intervention
  • Jiawei Chen, Hongyu Lin, Xianpei Han, Le Sun
  • Computer Science
  • 2021
Event detection has long been troubled by the trigger curse: overfitting the trigger will harm the generalization ability, while underfitting it will hurt the detection performance. This problem is …

References

SHOWING 1-10 OF 35 REFERENCES
Learning Named Entity Tagger using Domain-Specific Dictionary
After identifying the nature of noisy labels in distant supervision, a novel and more effective neural model, AutoNER, is proposed with a new Tie or Break scheme, and how to refine distant supervision for better NER performance is discussed.
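
The Tie or Break scheme mentioned above can be illustrated with a small, hedged sketch; the span format (start, end, type) and the function name are assumptions for illustration, not AutoNER's data structures, and the full scheme also has an Unknown case for uncertain positions. Instead of per-token BIO tags, each boundary between adjacent tokens is labeled Tie if both tokens belong to the same matched span and Break otherwise.

# Hedged sketch of a Tie-or-Break encoding (illustrative span format).
def tie_or_break(num_tokens, spans):
    """Label each boundary between adjacent tokens: 'Tie' if the two tokens
    lie inside the same matched span, else 'Break'."""
    labels = ["Break"] * (num_tokens - 1)
    for start, end, _type in spans:        # end is exclusive
        for i in range(start, end - 1):    # boundaries inside the span
            labels[i] = "Tie"
    return labels

# "New York City is large" with the matched span (0, 3, "LOC"):
print(tie_or_break(5, [(0, 3, "LOC")]))    # ['Tie', 'Tie', 'Break', 'Break']
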
BOND: BERT-Assisted Open-Domain Named Entity Recognition with Distant Supervision
A new computational framework, BOND, is proposed, which leverages the power of pre-trained language models to improve the prediction performance of NER models; experiments demonstrate the superiority of BOND over existing distantly supervised NER methods.
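
BOND's second stage is a teacher-student self-training loop. The sketch below shows the generic pseudo-labeling idea on a toy classifier, under assumed data and an assumed confidence threshold, rather than BOND's actual pre-trained-LM pipeline.

# Hedged sketch of teacher-student self-training with pseudo-labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(100, 5))
y_labeled = (X_labeled[:, 0] > 0).astype(int)          # stand-in for distant labels
X_unlabeled = rng.normal(size=(1000, 5))

teacher = LogisticRegression().fit(X_labeled, y_labeled)
for _ in range(3):                                     # self-training rounds
    probs = teacher.predict_proba(X_unlabeled)
    keep = probs.max(axis=1) > 0.9                     # high-confidence pseudo-labels only
    student = LogisticRegression().fit(
        np.vstack([X_labeled, X_unlabeled[keep]]),
        np.concatenate([y_labeled, probs.argmax(axis=1)[keep]]))
    teacher = student                                  # the student becomes the new teacher
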
Denoising Distantly Supervised Named Entity Recognition via a Hypergeometric Probabilistic Model
Hypergeometric Learning, a denoising algorithm for distantly supervised NER that takes both the noise distribution and instance-level confidence into consideration, is proposed; it can effectively denoise the weakly labeled data retrieved from distant supervision and therefore yields significant improvements in the trained models.
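
As a purely illustrative (and assumed) reading of the title: if N distantly labeled mentions contain K noisy ones, the number of noisy mentions in a batch of n drawn without replacement follows a hypergeometric distribution. The snippet below only evaluates that distribution; it is not the paper's training procedure.

# Illustrative only: how many noisy labels land in a batch under sampling
# without replacement.
from scipy.stats import hypergeom

N, K, n = 10_000, 3_000, 32               # mentions in corpus, noisy mentions, batch size
batch_noise = hypergeom(M=N, n=K, N=n)    # scipy parameterization: M=population, n=successes, N=draws
print(batch_noise.pmf(10))                # probability of exactly 10 noisy labels in a batch
print(batch_noise.mean())                 # expected noisy labels per batch: n * K / N = 9.6
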
Distantly Supervised Named Entity Recognition using Positive-Unlabeled Learning
A novel PU (positive-unlabeled) learning algorithm is proposed to perform named entity recognition using only unlabeled data and named-entity dictionaries; it greatly reduces the requirement on the quality of the dictionaries and generalizes well even with quite simple dictionaries.
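
In this setting, dictionary-matched tokens are treated as positives and all remaining tokens as unlabeled. A common way to train with such data is a non-negative positive-unlabeled risk estimator (in the style of Kiryo et al., 2017); the sketch below shows that generic estimator on toy scores and is not necessarily the exact variant used in the paper.

# Hedged sketch of a non-negative PU risk on toy classifier scores.
import numpy as np

def nn_pu_risk(scores_pos, scores_unl, prior, loss=lambda z: np.log1p(np.exp(-z))):
    """Non-negative PU risk: prior * R_p^+ + max(0, R_u^- - prior * R_p^-)."""
    risk_pos = loss(scores_pos).mean()            # positives treated as positive
    risk_pos_as_neg = loss(-scores_pos).mean()    # positives treated as negative
    risk_unl_as_neg = loss(-scores_unl).mean()    # unlabeled treated as negative
    return prior * risk_pos + max(0.0, risk_unl_as_neg - prior * risk_pos_as_neg)

rng = np.random.default_rng(0)
scores_dict_matched = rng.normal(1.0, 1.0, size=500)   # scores for dictionary-matched tokens
scores_unlabeled = rng.normal(-0.5, 1.0, size=5000)    # scores for everything else
print(nn_pu_risk(scores_dict_matched, scores_unlabeled, prior=0.2))
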
Distantly Supervised NER with Partial Annotation Learning and Reinforcement Learning
This paper proposes a novel approach which can partially solve the problems of distant supervision for NER, applying partial annotation learning to reduce the effect of unknown character labels in incomplete and noisy annotations.
Neural Relation Extraction with Selective Attention over Instances
A sentence-level attention-based model for relation extraction is proposed, which employs convolutional neural networks to embed the semantics of sentences and dynamically reduces the weights of noisy instances.
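
The selective-attention step can be written compactly: each sentence in a bag receives a weight from its compatibility with a relation query, and the bag representation is the weighted sum. Below is a hedged sketch with random vectors standing in for the CNN sentence embeddings; the original scores sentences with a bilinear form, while a plain dot product is used here for brevity.

# Hedged sketch of selective attention over the sentences in a bag.
import numpy as np

def selective_attention(sentence_embs, relation_query):
    """Softmax-weight each sentence in the bag by its score against the
    relation query, then return the weighted bag representation."""
    scores = sentence_embs @ relation_query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ sentence_embs

rng = np.random.default_rng(0)
bag = rng.normal(size=(4, 8))                   # 4 sentence embeddings (stand-ins for CNN outputs)
query = rng.normal(size=8)                      # relation query vector
print(selective_attention(bag, query).shape)    # (8,)
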
Representation Learning via Invariant Causal Mechanisms
A novel self-supervised objective, Representation Learning via Invariant Causal Mechanisms (ReLIC), is proposed; it enforces invariant prediction of proxy targets across augmentations through an invariance regularizer, which yields improved generalization guarantees.
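
The invariance regularizer can be illustrated as a symmetrized KL penalty between the model's predictive distributions under two augmentations of the same inputs; the sketch below is a generic form under assumed shapes, not ReLIC's full objective (which also includes a contrastive term).

# Hedged sketch of an invariance regularizer across two augmented views.
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def invariance_penalty(logits_aug1, logits_aug2, eps=1e-12):
    """Symmetrized KL between predictive distributions under two augmentations."""
    p, q = softmax(logits_aug1), softmax(logits_aug2)
    kl_pq = (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)
    kl_qp = (q * (np.log(q + eps) - np.log(p + eps))).sum(axis=-1)
    return 0.5 * (kl_pq + kl_qp).mean()

rng = np.random.default_rng(0)
print(invariance_penalty(rng.normal(size=(16, 10)), rng.normal(size=(16, 10))))
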
Sequence-to-Nuggets: Nested Entity Mention Detection via Anchor-Region Networks
This paper proposes Anchor-Region Networks (ARNs), a sequence-to-nuggets architecture for nested mention detection that first identifies the anchor words of all mentions and then recognizes the mention boundaries for each anchor word by exploiting regular phrase structures.
Low-Resource Name Tagging Learned with Weakly Labeled Data
A novel neural model for name tagging based solely on weakly labeled (WL) data is proposed, so that it can be applied in any low-resource setting; it demonstrates superior performance as well as efficiency.
Distant supervision for relation extraction without labeled data
This work investigates an alternative paradigm that does not require labeled corpora, avoiding the domain dependence of ACE-style algorithms and allowing the use of corpora of any size.