Meta-Learning for Neural Relation Classification with Distant Supervision

@inproceedings{Li2020MetaLearningFN,
  title={Meta-Learning for Neural Relation Classification with Distant Supervision},
  author={Zhenzhen Li and Jian-Yun Nie and Benyou Wang and Pan Du and Yu-Hui Zhang and Lixin Zou and Dongsheng Li},
  booktitle={Proceedings of the 29th ACM International Conference on Information \& Knowledge Management},
  year={2020}
}
  • Published 19 October 2020
Distant supervision provides a means to create a large amount of weakly labeled data at low cost for relation classification. However, the resulting labeled instances are very noisy, containing data with wrong labels. Many approaches have been proposed to select a subset of reliable instances for neural model training, but they still suffer from the noisy-labeling problem or underutilize the weakly labeled data. To better select more reliable training instances, we introduce a small amount…


CETA: A Consensus Enhanced Training Approach for Denoising in Distantly Supervised Relation Extraction

This paper proposes a denoising theorem and its corresponding implementation, named the Consensus Enhanced Training Approach (CETA), and demonstrates that CETA significantly outperforms previous methods, achieving new state-of-the-art results.

DaMSTF: Domain Adversarial Learning Enhanced Meta Self-Training for Domain Adaptation

  • 2022
A new self-training framework for domain adaptation, namely the Domain adversarial learning enhanced Self-Training Framework (DaMSTF), which employs domain adversarial learning as a heuristic neural network initialization method that helps the meta-learning module converge to a better optimum.

Few Clean Instances Help Denoising Distant Supervision

It is shown that besides getting a more convincing evaluation of models, a small clean dataset also helps to build more robust denoising models and proposes a new criterion for clean instance selection based on influence functions.

MiDTD: A Simple and Effective Distillation Framework for Distantly Supervised Relation Extraction

A simple and effective Multi-instance Dynamic Temperature Distillation (MiDTD) framework is proposed; it is model-agnostic and mainly involves two modules: multi-instance target fusion (MiTF) and dynamic temperature regulation (DTR).
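The soft-label mechanism underlying temperature distillation can be sketched as a temperature-scaled softmax; the sketch below only illustrates this standard scaling step, not MiDTD's per-instance dynamic temperature schedule, and the logit values are made up for illustration.

```python
import numpy as np

def softmax_T(logits, T):
    """Temperature-scaled softmax: higher T yields softer targets."""
    z = np.exp((logits - logits.max()) / T)
    return z / z.sum()

logits = np.array([4.0, 2.0, 1.0])  # illustrative teacher logits
hard = softmax_T(logits, 1.0)       # sharp distribution (T = 1)
soft = softmax_T(logits, 4.0)       # softened distillation targets
```

Raising the temperature flattens the distribution, so the student receives more information about the relative ranking of non-argmax relations.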

ARNOR: Attention Regularization based Noise Reduction for Distant Supervision Relation Classification

This paper proposes ARNOR, a novel Attention Regularization based NOise Reduction framework for distant supervision relation classification that assumes that a trustable relation label should be explained by the neural attention model.

Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting

Synthetic and real experiments substantiate the method's ability to learn proper weighting functions in class-imbalance and noisy-label cases, both under the common settings of traditional methods and in more complicated scenarios beyond conventional cases.
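The core idea — an explicit weighting net mapping each example's loss to a weight — can be sketched as a tiny MLP; the layer sizes, random initialization, and loss values below are illustrative stand-ins, and in the actual method the net's parameters are meta-learned on a small clean validation set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weighting net: one hidden layer, sigmoid output in [0, 1],
# the general shape used by Meta-Weight-Net (sizes here are arbitrary).
W1 = rng.normal(scale=0.5, size=(1, 100))
b1 = np.zeros(100)
W2 = rng.normal(scale=0.5, size=(100, 1))
b2 = np.zeros(1)

def weight_net(losses):
    """Map per-example losses (shape [n]) to weights in [0, 1]."""
    h = np.maximum(losses[:, None] @ W1 + b1, 0.0)      # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))[:, 0]   # sigmoid output

losses = np.array([0.1, 0.5, 2.0, 8.0])  # large loss often signals a noisy label
weights = weight_net(losses)
weighted_loss = np.sum(weights * losses) / np.sum(weights)
```

Learning the loss-to-weight mapping explicitly, rather than fixing it by hand, is what lets the same machinery cover both class imbalance (upweight hard examples) and label noise (downweight them).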

Looking Beyond Label Noise: Shifted Label Distribution Matters in Distantly Supervised Relation Extraction

This paper develops a simple yet effective adaptation method for DS-trained models, bias adjustment, which updates models learned over the source domain with a label distribution estimated on the target domain and achieves consistent performance gains on DS-trained models.
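A minimal sketch of the prior-shift correction this line of work relies on: shift each relation's score by the log-ratio of the estimated target prior to the source (distantly supervised) prior. The priors and logits below are made-up illustrative numbers, not values from the paper.

```python
import numpy as np

p_source = np.array([0.70, 0.20, 0.10])  # label prior under distant supervision
p_target = np.array([0.40, 0.35, 0.25])  # prior estimated on the target domain

def adjust(logits):
    """Shift logits by log(p_target / p_source) to correct label-distribution shift."""
    return logits + np.log(p_target) - np.log(p_source)

logits = np.array([2.0, 1.5, 1.0])       # scores from a DS-trained model
adjusted = adjust(logits)
probs = np.exp(adjusted) / np.exp(adjusted).sum()
```

With these numbers the adjustment flips the prediction from the over-represented source label (class 0) to class 1, illustrating how a shifted label distribution alone can change model decisions.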

DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction

Inspired by Generative Adversarial Networks, an adversarial learning framework named DSGAN is introduced to learn a sentence-level true-positive generator, which regards the positive samples produced by the generator as negative samples to train the discriminator.

Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning

A deep reinforcement learning strategy is explored to generate the false-positive indicator, where it is argued that incorrectly labeled candidate sentences must be treated with a hard decision rather than handled with soft attention weights.

Reinforcement Learning for Relation Classification From Noisy Data

Experimental results show that the proposed model can deal with data noise effectively and obtains better performance for relation classification at the sentence level.

Learning to Reweight Examples for Robust Deep Learning

This work proposes a novel meta-learning algorithm that learns to assign weights to training examples based on their gradient directions that can be easily implemented on any type of deep network, does not require any additional hyperparameter tuning, and achieves impressive performance on class imbalance and corrupted label problems where only a small amount of clean validation data is available.
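The gradient-direction idea can be sketched in one step: weight each training example by how well its gradient aligns with the gradient on a small clean validation set, clipping negative alignments to zero and normalizing. The toy linear model and data below are illustrative stand-ins for the paper's setting, not its actual experiments.

```python
import numpy as np

# Toy linear model y = w*x with squared loss; the third training label
# is deliberately corrupted, while the validation set is clean (true w = 1).
w = np.array([0.5])
x_train = np.array([[1.0], [2.0], [3.0], [4.0]])
y_train = np.array([1.0, 2.0, -3.0, 4.0])    # index 2 has a wrong label
x_val = np.array([[1.5], [2.5]])
y_val = np.array([1.5, 2.5])

def grad(w, x, y):
    """Per-example gradient of 0.5*(w.x - y)^2 with respect to w."""
    return (x @ w - y)[:, None] * x

g_train = grad(w, x_train, y_train)           # one gradient row per example
g_val = grad(w, x_val, y_val).mean(axis=0)    # clean-set gradient direction

# Weight = positive part of the alignment with the clean gradient, normalized.
align = g_train @ g_val
weights = np.maximum(align, 0.0)
weights = weights / (weights.sum() + 1e-12)
```

The corrupted example's gradient points away from the clean-set gradient, so its alignment is negative and its weight is clipped to exactly zero — the mechanism by which noisy examples are excluded from the update.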

Position-aware Attention and Supervised Data Improve Slot Filling

An effective new model is proposed that combines an LSTM sequence model with a form of entity position-aware attention better suited to relation extraction; the paper also builds TACRED, a large supervised relation extraction dataset obtained via crowdsourcing and targeted toward TAC KBP relations.

Neural Relation Extraction with Selective Attention over Instances

A sentence-level attention-based model for relation extraction that employs convolutional neural networks to embed the semantics of sentences and dynamically reduces the weights of noisy instances.
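The selective-attention step can be sketched as scoring each sentence in an entity-pair bag against a relation query vector, softmaxing the scores, and taking the weighted sum; the random embeddings below stand in for CNN encoder outputs and a learned query, so the specific numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

d = 8
bag = rng.normal(size=(3, d))   # 3 sentence embeddings for one entity pair
query = rng.normal(size=d)      # learned query vector for the target relation

scores = bag @ query                    # one relevance score per sentence
alpha = np.exp(scores - scores.max())
alpha = alpha / alpha.sum()             # softmax attention weights
bag_repr = alpha @ bag                  # attention-weighted bag representation
```

Sentences that poorly express the relation receive small softmax weights, so a single noisy sentence contributes little to the bag representation used for classification.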

Relation Classification via Convolutional Deep Neural Network

This paper exploits a convolutional deep neural network to extract lexical and sentence-level features from the output of pre-existing natural language processing systems, significantly outperforming the state-of-the-art methods.