Hierarchical Text Classification with Reinforced Label Assignment

@article{Mao2019HierarchicalTC,
  title={Hierarchical Text Classification with Reinforced Label Assignment},
  author={Yuning Mao and Jingjing Tian and Jiawei Han and Xiang Ren},
  journal={ArXiv},
  year={2019},
  volume={abs/1908.10419}
}
While existing hierarchical text classification (HTC) methods attempt to capture label hierarchies for model training, they either make local decisions regarding each label or completely ignore the hierarchy information during inference. To resolve the mismatch between training and inference and to model label dependencies in a more principled way, we formulate HTC as a Markov decision process and propose to learn a Label Assignment Policy via deep reinforcement learning to determine where…
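The abstract describes label assignment as a Markov decision process: the state is the set of labels placed so far, and at each step a policy either places one more label from the hierarchy or stops. As a rough illustration only (the paper learns this policy with deep RL; the hierarchy, labels, and scoring function below are invented for the sketch):

```python
# Hypothetical sketch of top-down label assignment as an MDP: the state is
# the set of labels placed so far, and the "policy" either places a child
# of an already-placed label or chooses STOP.  A real implementation would
# learn the policy network with deep RL; here a fixed score table stands in.

HIERARCHY = {            # parent -> children (toy taxonomy, not from the paper)
    "root": ["news", "sports"],
    "news": ["politics", "tech"],
    "sports": ["soccer"],
}

def candidate_actions(placed):
    """Children of already-placed labels that are not yet placed, plus STOP."""
    acts = [c for p in placed for c in HIERARCHY.get(p, []) if c not in placed]
    return acts + ["STOP"]

def assign_labels(score, max_steps=10):
    """Roll out the (stub) policy from the root until it chooses STOP."""
    placed = {"root"}
    for _ in range(max_steps):
        action = max(candidate_actions(placed), key=score)
        if action == "STOP":
            break
        placed.add(action)
    return placed - {"root"}

# Stub scores standing in for a learned policy network's output.
scores = {"news": 0.9, "tech": 0.8, "politics": 0.1, "sports": 0.2,
          "soccer": 0.05, "STOP": 0.3}
print(sorted(assign_labels(scores.get)))  # → ['news', 'tech']
```

Because the rollout walks the hierarchy top-down and stops explicitly, the assigned label set is always consistent with the taxonomy, which is the inference-time property the abstract contrasts with purely local decisions.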
Hierarchy-Aware Global Model for Hierarchical Text Classification
A novel end-to-end hierarchy-aware global model (HiAGM) with two variants, both of which achieve significant and consistent improvements on three benchmark datasets.
Hierarchy-Aware T5 with Path-Adaptive Mask Mechanism for Hierarchical Text Classification
  • Wei Huang, Chen Liu, +4 authors Guiquan Liu
  • Computer Science
  • ArXiv
  • 2021
A novel PAMM-HiA-T5 model for HTC is proposed: a hierarchy-aware T5 model with a path-adaptive mask mechanism that not only builds the knowledge of upper-level labels into lower-level ones but also introduces path dependency information into label prediction.
Efficient Strategies for Hierarchical Text Classification: External Knowledge and Auxiliary Tasks
The combination of the auxiliary task and the additional input of class definitions significantly enhances classification accuracy and outperforms previous studies, using a drastically reduced number of parameters, on two well-known English datasets.
Multi-Label Text Classification using Attention-based Graph Neural Network
A graph attention network-based model is proposed to capture the attentive dependency structure among the labels in multi-label text classification, achieving similar or better performance compared to previous state-of-the-art models.
MATCH: Metadata-Aware Text Classification in A Large Hierarchy
This paper presents MATCH, an end-to-end framework that leverages both metadata and hierarchy information, and proposes different ways to regularize the parameters and output probability of each child label by its parents.
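One concrete way to regularize a child label's output probability by its parent, sketched here as an assumption rather than MATCH's actual mechanism, is to clip each child's probability so it never exceeds its parent's, propagated top-down through the hierarchy (the label names below are invented):

```python
# Hedged sketch (not MATCH's code): enforce p(child) <= p(parent) by
# clipping probabilities top-down through the label hierarchy.

PARENT = {"politics": "news", "tech": "news", "news": None}  # toy hierarchy

def clip_to_parent(probs):
    """Return probabilities adjusted so each child never exceeds its parent."""
    out = {}
    def resolve(label):
        if label in out:
            return out[label]
        p = probs[label]
        parent = PARENT[label]
        if parent is not None:
            p = min(p, resolve(parent))  # child capped by (clipped) parent
        out[label] = p
        return p
    for label in probs:
        resolve(label)
    return out

print(clip_to_parent({"news": 0.4, "politics": 0.7, "tech": 0.2}))
# → {'news': 0.4, 'politics': 0.4, 'tech': 0.2}
```

The clip guarantees monotonically non-increasing probabilities along every root-to-leaf path, so thresholding the outputs can never predict a child without its parent.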
ToHRE: A Top-Down Classification Strategy with Hierarchical Bag Representation for Distantly Supervised Relation Extraction
This work formulates DSRE as a hierarchical classification task and proposes a novel hierarchical classification framework that extracts the relation in a top-down manner and uses a hierarchically refined representation method to achieve hierarchy-specific representations.
Concept-Based Label Embedding via Dynamic Routing for Hierarchical Text Classification
  • Xuepeng Wang, Li Zhao, Bing Liu, Tao Chen, Feng Zhang, Di Wang
  • Computer Science
  • ACL/IJCNLP
  • 2021
This paper proposes a novel concept-based label embedding method that can explicitly represent the concepts and model the sharing mechanism among classes for hierarchical text classification, and shows that the proposed model outperforms several state-of-the-art methods.
Hierarchical Text Classification of Urdu News using Deep Neural Network
The results show that the proposed method is very effective for hierarchical text classification: it outperforms baseline methods significantly and also achieves good results compared to deep neural models.
Joint Learning of Hyperbolic Label Embeddings for Hierarchical Multi-label Classification
The results show that joint learning improves over the baseline that employs label co-occurrence-based pre-trained hyperbolic embeddings, and the proposed classifiers achieve state-of-the-art generalization on standard benchmarks.
Multi-document Summarization with Maximal Marginal Relevance-guided Reinforcement Learning
RL-MMR, Maximal Marginal Relevance-guided Reinforcement Learning for MDS, is presented; it unifies advanced neural SDS methods and statistical measures used in classical MDS, and shows the benefits of incorporating MMR into end-to-end learning when adapting SDS to MDS, in terms of both learning effectiveness and efficiency.

References

Showing 1–10 of 40 references
Weakly-Supervised Neural Text Classification
This paper proposes a weakly-supervised method that addresses the lack of training data in neural text classification, achieves inspiring performance without requiring excessive training data, and outperforms baseline methods significantly.
Deep Learning for Extreme Multi-label Text Classification
This paper presents the first attempt at applying deep learning to XMTC, with a family of new Convolutional Neural Network models which are tailored for multi-label classification in particular.
Hierarchical Multi-Label Classification Networks
A novel neural network architecture for HMC called HMCN is proposed, capable of simultaneously optimizing local and global loss functions for discovering local hierarchical class relationships and global information from the entire class hierarchy while penalizing hierarchical violations.
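The HMCN summary combines three ingredients: per-level local losses, a global loss over all labels, and a penalty on hierarchy violations. A minimal sketch of how such a combined objective could look (the functional form and all names here are assumptions for illustration, not the paper's equations):

```python
# Assumed-form sketch of an HMCN-style objective: local losses + global
# loss + a hinge penalty whenever a child's score exceeds its parent's.

def hmcn_style_loss(local_losses, global_loss, scores, edges, lam=1.0):
    """Sum local and global losses plus a squared-hinge hierarchy penalty."""
    violation = sum(max(0.0, scores[child] - scores[parent]) ** 2
                    for parent, child in edges)
    return sum(local_losses) + global_loss + lam * violation

edges = [("news", "politics"), ("news", "tech")]          # toy hierarchy
scores = {"news": 0.3, "politics": 0.6, "tech": 0.1}      # toy label scores
loss = hmcn_style_loss([0.2, 0.1], 0.4, scores, edges)
# the only violation term comes from politics (0.6) > news (0.3)
```

The hinge term is zero whenever scores are monotone along the hierarchy, so the penalty only activates on actual violations.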
Bayes-Optimal Hierarchical Multilabel Classification
  • Wei Bi, J. Kwok
  • Computer Science
  • IEEE Transactions on Knowledge and Data Engineering
  • 2015
This work proposes hierarchical extensions of the Hamming loss and ranking loss which take the mistake at every node of the label hierarchy into consideration, and develops Bayes-optimal predictions that minimize the corresponding risks with the trained model.
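To make the idea of a hierarchy-aware Hamming loss concrete, here is one illustrative variant (an assumption for exposition, not the exact loss from the paper): a mistake is charged only at the highest node where prediction and truth diverge, so errors below an already-wrong node are not double-counted.

```python
# Illustrative hierarchy-aware Hamming loss: count a mismatched node only
# if its parent (when it has one) was predicted correctly.

PARENT = {"news": None, "politics": "news", "tech": "news"}  # toy hierarchy

def hierarchical_hamming(truth, pred):
    """Count mismatched nodes whose parent (if any) was predicted correctly."""
    loss = 0
    for node, parent in PARENT.items():
        if truth[node] != pred[node]:
            if parent is None or truth[parent] == pred[parent]:
                loss += 1
    return loss

truth = {"news": 1, "politics": 1, "tech": 0}
pred  = {"news": 0, "politics": 0, "tech": 0}
print(hierarchical_hamming(truth, pred))  # → 1, charged only at "news"
```

A plain Hamming loss would count 2 mistakes here; the hierarchical variant charges 1, reflecting that missing "politics" is a consequence of already missing "news".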
Large-Scale Hierarchical Text Classification with Recursively Regularized Deep Graph-CNN
A graph-CNN based deep learning model is proposed to first convert texts to graph-of-words, and then use graph convolution operations to convolve the word graph and regularize the deep architecture with the dependency among labels.
Recursive regularization for large-scale classification with hierarchical and graphical dependencies
This paper proposes a regularization framework for large-scale hierarchical classification that incorporates the hierarchical dependencies between the class labels into the regularization structure of the parameters, thereby encouraging classes nearby in the hierarchy to share similar model parameters.
End-to-End Reinforcement Learning for Automatic Taxonomy Induction
Experiments show that the novel end-to-end reinforcement learning approach to automatic taxonomy induction from a set of terms outperforms prior state-of-the-art methods by up to 19.6% on ancestor F1.
Recurrent Convolutional Neural Networks for Text Classification
A recurrent convolutional neural network is introduced for text classification without human-designed features to capture contextual information as far as possible when learning word representations, which may introduce considerably less noise compared to traditional window-based neural networks.
Learning hierarchical multi-category text classification models
This work presents an efficient optimization algorithm based on incremental conditional gradient ascent in single-example subspaces spanned by the marginal dual variables that can feasibly optimize training sets of thousands of examples and classification hierarchies consisting of hundreds of nodes.
Decision trees for hierarchical multi-label classification
HMC trees outperform HSC and SC trees along three dimensions: predictive accuracy, model size, and induction time; it is concluded that HMC trees should definitely be considered for HMC tasks where interpretable models are desired.