Learning to Learn and Predict: A Meta-Learning Approach for Multi-Label Classification

@inproceedings{Wu2019LearningTL,
  title={Learning to Learn and Predict: A Meta-Learning Approach for Multi-Label Classification},
  author={Jiawei Wu and Wenhan Xiong and William Yang Wang},
  booktitle={Conference on Empirical Methods in Natural Language Processing},
  year={2019}
}
Many tasks in natural language processing can be viewed as multi-label classification problems. However, most of the existing models are trained with the standard cross-entropy loss function and use a fixed prediction policy (e.g., a threshold of 0.5) for all the labels, which completely ignores the complexity and dependencies among different labels. In this paper, we propose a meta-learning method to capture these complex label dependencies. More specifically, our method utilizes a meta… 
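The fixed prediction policy the abstract criticizes can be illustrated with a small sketch that contrasts a global 0.5 cutoff with per-label thresholds tuned on held-out data. This is an illustration of the general idea only, not the paper's meta-learning method; the scores, labels, and function name are hypothetical.

```python
import numpy as np

# Hypothetical sigmoid scores for 4 samples over 3 labels, with ground truth.
scores = np.array([
    [0.9, 0.4, 0.6],
    [0.2, 0.8, 0.7],
    [0.45, 0.3, 0.1],
    [0.5, 0.9, 0.4],
])
y_true = np.array([
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
    [1, 1, 0],
])

# Fixed policy: the same 0.5 cutoff for every label.
fixed_pred = (scores >= 0.5).astype(int)

def tune_thresholds(scores, y_true, grid=np.linspace(0.1, 0.9, 17)):
    """Pick, per label, the cutoff that maximizes label-wise accuracy."""
    thresholds = np.empty(scores.shape[1])
    for j in range(scores.shape[1]):
        accs = [((scores[:, j] >= t).astype(int) == y_true[:, j]).mean()
                for t in grid]
        thresholds[j] = grid[int(np.argmax(accs))]
    return thresholds

thr = tune_thresholds(scores, y_true)
tuned_pred = (scores >= thr).astype(int)
```

On this toy data the per-label thresholds recover a true positive that the fixed 0.5 cutoff misses (label 0, score 0.45); per-label tuning is itself still a per-label policy and does not model dependencies among labels, which is the gap the paper's meta-learning approach targets.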

MetaSLRCL: A Self-Adaptive Learning Rate and Curriculum Learning Based Framework for Few-Shot Text Classification

A novel meta-learning framework that obtains different learning rates for different tasks and neural network layers, enabling the learner to quickly adapt to new training data, together with a task-oriented curriculum learning mechanism that helps the meta-learner achieve better generalization by learning from tasks of increasing difficulty.

A flexible class of dependence-aware multi-label loss functions

A class of loss functions able to capture the important aspect of label dependence is introduced, using the mathematical framework of non-additive measures and integrals.

Improving Pretrained Models for Zero-shot Multi-label Text Classification through Reinforced Label Hierarchy Reasoning

A Reinforced Label Hierarchy Reasoning (RLHR) approach to encourage interdependence among labels in the hierarchies during training is proposed and a rollback algorithm is designed that can remove logical errors from predictions during inference.

Meta Learning and Its Applications to Natural Language Processing

This tutorial aims to help researchers in the NLP community better understand meta-learning, one of the most important new techniques in machine learning in recent years, and to promote more research studies using it.

kFolden: k-Fold Ensemble for Out-Of-Distribution Detection

This work proposes a simple yet effective framework, kFolden, which mimics the behaviors of OOD detection during training without the use of any external data, and develops benchmarks for OOD detection using existing text classification datasets.

Meta Learning for Natural Language Processing: A Survey

The goal with this survey paper is to offer researchers pointers to relevant meta-learning works in NLP and attract more attention from the NLP community to drive future innovation.

A Label Dependence-aware Sequence Generation Model for Multi-level Implicit Discourse Relation Recognition

This paper considers multi-level IDRR as a conditional label sequence generation task and proposes a Label Dependence-aware Sequence Generation Model (LDSGM), which designs a label-attentive encoder to learn the global representation of an input instance and its level-specific contexts.

Learning to Learn to Disambiguate: Meta-Learning for Few-Shot Word Sense Disambiguation

This paper proposes a meta-learning framework for few-shot word sense disambiguation (WSD), where the goal is to learn to disambiguate unseen words from only a few labeled instances.

MATCH: Metadata-Aware Text Classification in A Large Hierarchy

This paper presents the MATCH solution, an end-to-end framework that leverages both metadata and hierarchy information, and proposes different ways to regularize the parameters and output probability of each child label by its parents.

References

Showing 1-10 of 49 references

Maximizing Subset Accuracy with Recurrent Neural Networks in Multi-label Classification

This paper replaces classifier chains with recurrent neural networks, a sequence-to-sequence prediction algorithm which has recently been successfully applied to sequential prediction tasks in many domains, and compares different ways of ordering the label set, and gives some recommendations on suitable ordering strategies.

Multi-label Text Categorization with Joint Learning Predictions-as-Features Method

A novel joint learning algorithm is proposed that allows feedback to be propagated from the classifiers for later labels to the classifier for the current label; the predictions-as-features models trained by the algorithm outperform the original models.

ML-KNN: A lazy learning approach to multi-label learning
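ML-KNN's core idea, lazily predicting a test instance's labels from the label counts of its k nearest training neighbors, can be sketched as below. For simplicity this sketch uses a plain majority vote rather than the paper's MAP estimate over neighbor label counts; the data and function name are illustrative.

```python
import numpy as np

def knn_multilabel_predict(X_train, Y_train, x, k=3):
    """Lazy multi-label prediction: assign a label when more than half of
    the k nearest training neighbors carry it (ML-KNN itself refines this
    majority vote with a MAP estimate learned from neighbor statistics)."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nn = np.argsort(dists)[:k]                    # indices of k nearest
    votes = Y_train[nn].sum(axis=0)               # per-label neighbor counts
    return (votes * 2 > k).astype(int)            # strict majority per label

# Toy data: two clusters, each associated with a different label set.
X_train = np.array([[0., 0.], [0., 1.], [1., 0.],
                    [5., 5.], [5., 6.], [6., 5.]])
Y_train = np.array([[1, 0], [1, 0], [1, 1],
                    [0, 1], [0, 1], [0, 1]])

pred = knn_multilabel_predict(X_train, Y_train, np.array([0.2, 0.2]), k=3)
```

A query near the first cluster is assigned only its dominant label; because each label is decided independently from neighbor counts, this baseline, like the fixed-threshold policy in the abstract, does not model dependencies among labels.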

Reinforced Co-Training

This approach uses Q-learning to learn a data selection policy with a small labeled dataset, and then exploits this policy to train the co-training classifiers automatically, and can obtain more accurate text classification results.

Fine-Grained Entity Type Classification by Jointly Learning Representations and Label Embeddings

This work proposes a neural network model that jointly learns entity mentions and their context representations to eliminate the use of hand-crafted features, and outperforms previous state-of-the-art methods on two publicly available datasets.

Deep Learning for Extreme Multi-label Text Classification

This paper presents the first attempt at applying deep learning to XMTC, with a family of new Convolutional Neural Network models which are tailored for multi-label classification in particular.

Neural Architectures for Fine-grained Entity Type Classification

This work investigates several neural network architectures for fine-grained entity type classification and establishes that the attention mechanism learns to attend over syntactic heads and the phrase containing the mention, both of which are known to be strong hand-crafted features for this task.

Multi-instance Multi-label Learning for Relation Extraction

This work proposes a novel approach to multi-instance multi-label learning for RE, which jointly models all the instances of a pair of entities in text and all their labels using a graphical model with latent variables that performs competitively on two difficult domains.

A Review on Multi-Label Learning Algorithms

This paper aims to provide a timely review on this area with emphasis on state-of-the-art multi-label learning algorithms with relevant analyses and discussions.

Multi-instance multi-label learning