TIER-A: Denoising Learning Framework for Information Extraction
@article{Li2022TIERADL,
  title   = {TIER-A: Denoising Learning Framework for Information Extraction},
  author  = {Yongkang Li and Ming Zhang},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2211.11527}
}
With the development of deep neural language models, great progress has been made in information extraction recently. However, deep learning models often overfit to noisy data points, leading to poor performance. In this work, we examine the role of information entropy in the overfitting process and draw a key insight: overfitting is a process of growing overconfidence and decreasing entropy. Motivated by these properties, we propose a simple yet effective co-regularization joint-training framework…
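The co-regularization idea in the abstract can be illustrated with a minimal numpy sketch (not the paper's implementation): two identically structured classifiers with different initializations are trained jointly, and an agreement term penalizes divergent predictions so that neither model can overconfidently fit noisy labels alone. All data, hyperparameters, and the simplified agreement gradient below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=1).mean()

# Toy 2-class data with 20% flipped (noisy) labels.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
flip = rng.random(200) < 0.2
y[flip] = 1 - y[flip]
Y = np.eye(2)[y]

# Two linear classifiers with identical structure, different initialization.
W1 = rng.normal(scale=0.1, size=(5, 2))
W2 = rng.normal(scale=0.1, size=(5, 2))

lam, lr = 1.0, 0.5
for step in range(300):
    p1, p2 = softmax(X @ W1), softmax(X @ W2)
    # Gradient of softmax cross-entropy w.r.t. logits is (p - Y); the
    # agreement term is approximated here by pulling each model's
    # probabilities toward the other's.
    g1 = X.T @ ((p1 - Y) + lam * (p1 - p2)) / len(X)
    g2 = X.T @ ((p2 - Y) + lam * (p2 - p1)) / len(X)
    W1 -= lr * g1
    W2 -= lr * g2

# After joint training the two models should largely agree.
agree = (p1.argmax(1) == p2.argmax(1)).mean()
print(f"agreement: {agree:.2f}, mean entropy: {mean_entropy(p1):.3f}")
```

The agreement penalty keeps the ensemble's predictive entropy from collapsing, which is exactly the overconfidence symptom the abstract attributes to overfitting.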
References
Learning from Noisy Labels for Entity-Centric Information Extraction
- Computer Science · EMNLP · 2021
A simple co-regularization framework for entity-centric information extraction is proposed; it consists of several neural models with identical structures but different parameter initializations, which prevents overfitting on noisy labels.
Training deep neural-networks using a noise adaptation layer
- Computer Science · ICLR · 2017
This study presents a neural-network approach that optimizes the same likelihood function as optimized by the EM algorithm but extended to the case where the noisy labels are dependent on the features in addition to the correct labels.
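A noise adaptation layer can be pictured as a row-stochastic transition matrix appended after the clean-class posteriors; training then maximizes the likelihood of the observed noisy labels. The sketch below shows only the simple label-conditioned case (the cited paper additionally makes the noise depend on features), with all numbers invented for illustration.

```python
import numpy as np

# Hypothetical clean class posteriors from a base classifier (2 samples, 2 classes).
p_clean = np.array([[0.9, 0.1],
                    [0.2, 0.8]])

# Noise adaptation layer: a learnable row-stochastic transition matrix
# T[c, j] = P(noisy label = j | true class = c), fixed here for illustration.
T = np.array([[0.8, 0.2],
              [0.3, 0.7]])

# The layer maps clean posteriors to noisy-label posteriors; the loss is
# then the cross-entropy against the *observed* noisy labels.
p_noisy = p_clean @ T
print(p_noisy)
```

Because the transition matrix rows sum to one, the output rows remain valid probability distributions.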
Combating Noisy Labels by Agreement: A Joint Training Method with Co-Regularization
- Computer Science · 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
A robust learning paradigm called JoCoR is proposed, which aims to reduce the diversity of two networks during training: both networks make predictions on the same mini-batch, and a joint loss with co-regularization is calculated for each training example.
Symmetric Cross Entropy for Robust Learning With Noisy Labels
- Computer Science · 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
The proposed Symmetric cross entropy Learning (SL) approach simultaneously addresses both the under-learning and overfitting problems of CE in the presence of noisy labels, and empirically SL outperforms state-of-the-art methods.
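The symmetric cross entropy loss combines standard CE with a reverse CE term in which the log of the one-hot label is clipped to a constant. A small numpy sketch of that combination, with illustrative weights and probabilities (not values from the paper):

```python
import numpy as np

A = -4.0  # clip value standing in for log(0) in the reverse term

def sce_loss(p, q_onehot, alpha=0.1, beta=1.0):
    """Symmetric cross entropy: alpha * CE(q, p) + beta * reverse CE(p, q)."""
    ce = -(q_onehot * np.log(np.clip(p, 1e-7, 1.0))).sum(axis=1)
    # Reverse CE: log of the one-hot label is 0 for the true class and
    # clipped to the constant A everywhere else.
    log_q = np.where(q_onehot > 0, 0.0, A)
    rce = -(p * log_q).sum(axis=1)
    return alpha * ce + beta * rce

p = np.array([[0.9, 0.1],   # confident, matches the label
              [0.5, 0.5]])  # uncertain prediction
q = np.array([[1.0, 0.0],
              [1.0, 0.0]])
print(sce_loss(p, q))
```

The reverse term grows with the probability mass placed off the labeled class, which is what counteracts CE's tendency to overfit hard (possibly wrong) labels.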
How does Disagreement Help Generalization against Label Corruption?
- Computer Science · ICML · 2019
A robust learning paradigm called Co-teaching+ is proposed, which bridges the "Update by Disagreement" strategy with the original Co-teaching and is much superior to many state-of-the-art methods in the robustness of trained models.
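The two-step selection behind this paradigm can be sketched in a few lines of numpy: first keep the samples the peer networks disagree on, then each network picks the small-loss samples among them for its peer's update. All losses, predictions, and the keep rate below are invented for illustration.

```python
import numpy as np

# Hypothetical per-sample losses and predicted classes from two peer
# networks on one mini-batch (in practice these come from the live models).
loss_a = np.array([0.1, 0.9, 0.3, 0.8, 0.2])
loss_b = np.array([0.7, 0.2, 0.4, 0.1, 0.6])
pred_a = np.array([0, 1, 0, 1, 1])
pred_b = np.array([0, 0, 1, 1, 0])

# Step 1 ("update by disagreement"): keep only samples the two networks
# currently disagree on.
disagree = np.flatnonzero(pred_a != pred_b)

# Step 2 (co-teaching): within the disagreement set, each network selects
# its small-loss samples and feeds them to its *peer* for the update.
keep = max(1, int(0.5 * len(disagree)))            # illustrative keep rate
for_b = disagree[np.argsort(loss_a[disagree])[:keep]]  # A selects for B
for_a = disagree[np.argsort(loss_b[disagree])[:keep]]  # B selects for A
print(disagree, for_a, for_b)
```

Small-loss samples are more likely to be correctly labeled, and the disagreement filter keeps the two networks from collapsing into identical (and identically wrong) selections.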
On Calibration of Modern Neural Networks
- Computer Science · ICML · 2017
It is discovered that modern neural networks, unlike those from a decade ago, are poorly calibrated, and on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
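Temperature scaling fits a single scalar T > 0 that divides the validation logits before the softmax, minimizing validation NLL. The sketch below uses invented logits and a grid search standing in for the usual LBFGS fit:

```python
import numpy as np

def softmax_T(logits, T):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    p = softmax_T(logits, T)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

# Hypothetical overconfident validation logits and their true labels
# (the second sample is a confident mistake).
logits = np.array([[4.0, 0.0], [3.5, 0.5], [0.2, 3.8], [2.0, 1.8]])
labels = np.array([0, 1, 1, 0])

# A single scalar temperature is fitted on the validation set.
grid = np.linspace(0.5, 10.0, 96)
T_best = grid[np.argmin([nll(logits, labels, T) for T in grid])]
print(f"fitted temperature: {T_best:.2f}")
```

Because T rescales all logits uniformly, the arg-max (and hence accuracy) is unchanged; only the confidence is softened.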
Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning
- Computer Science · ACL · 2018
A deep reinforcement learning strategy is explored to generate the false-positive indicator, where it is argued that incorrectly-labeled candidate sentences must be treated with a hard decision, rather than being dealt with soft attention weights.
MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels
- Computer Science · ICML · 2018
Experimental results demonstrate that the proposed technique of learning another neural network, called MentorNet, to supervise the training of a base deep network (StudentNet) can significantly improve the generalization performance of deep networks trained on corrupted training data.
Training Convolutional Networks with Noisy Labels
- Computer Science · ICLR · 2014
An extra noise layer is introduced into the network which adapts the network outputs to match the noisy label distribution; it can be estimated as part of the training process and requires only simple modifications to current training infrastructures for deep networks.
Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks
- Computer Science · EMNLP · 2015
This paper proposes a novel model dubbed Piecewise Convolutional Neural Networks (PCNNs) with multi-instance learning to address the wrong-label problem when using distant supervision for relation extraction; it adopts a convolutional architecture with piecewise max pooling to automatically learn relevant features.
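Piecewise max pooling splits the convolution output along the sentence axis at the two entity positions and max-pools each of the three segments separately, yielding a fixed-size vector that preserves coarse positional structure. A small numpy sketch with invented dimensions:

```python
import numpy as np

def piecewise_max_pool(feature_map, e1, e2):
    """Split the convolution output at the two entity positions (e1 < e2)
    and max-pool each segment.

    feature_map: (seq_len, n_filters) convolution output for one sentence.
    Returns a (3 * n_filters,) vector.
    """
    segments = [feature_map[:e1 + 1],
                feature_map[e1 + 1:e2 + 1],
                feature_map[e2 + 1:]]
    # Concatenate the per-segment maxima into one fixed-size vector.
    return np.concatenate([seg.max(axis=0) for seg in segments])

rng = np.random.default_rng(1)
fmap = rng.normal(size=(12, 4))           # 12 tokens, 4 convolution filters
vec = piecewise_max_pool(fmap, e1=3, e2=8)
print(vec.shape)                          # (12,) = 3 segments x 4 filters
```

Compared with global max pooling, each segment (before, between, and after the entity pair) contributes its own maximum, so the relation-relevant middle span cannot be drowned out by the rest of the sentence.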