Corpus ID: 238354126

TENT: Text Classification Based on ENcoding Tree Learning

@article{Zhang2021TENTTC,
  title={TENT: Text Classification Based on ENcoding Tree Learning},
  author={Chong Zhang and Junran Wu and He Zhu and Ke Xu},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.02047}
}
Text classification is a primary task in natural language processing (NLP). Recently, graph neural networks (GNNs) have developed rapidly and been applied to text classification tasks. Although more complex models tend to achieve better performance, such research depends heavily on the computing power of the device used. In this article, we propose TENT to obtain better text classification performance and reduce the reliance on computing power. Specifically, we first establish a dependency analysis…
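As a rough illustration of the first step the abstract describes (establishing a dependency analysis of the input text), the sketch below parses a sentence and extracts a word-level graph whose edges follow the dependency arcs. The excerpt does not name a parsing toolkit or a graph representation, so the use of spaCy and the simple (nodes, edges) scheme here are assumptions for illustration, not the paper's exact construction.

```python
# Minimal sketch: turn raw text into a word-level dependency graph.
# Assumption: spaCy with an English pipeline; the paper's excerpt does not
# specify which dependency parser or graph format TENT actually uses.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed English pipeline

def text_to_dependency_graph(text):
    """Return (nodes, edges): tokens as nodes, dependency arcs as labeled edges."""
    doc = nlp(text)
    nodes = [token.text for token in doc]
    # One edge per dependency arc: (head index, dependent index, relation label).
    # The root token is its own head in spaCy, so we skip that self-loop.
    edges = [(token.head.i, token.i, token.dep_)
             for token in doc if token.head.i != token.i]
    return nodes, edges

nodes, edges = text_to_dependency_graph("Graph neural networks classify text.")
print(nodes)   # e.g. ['Graph', 'neural', 'networks', 'classify', 'text', '.']
print(edges)   # e.g. [(2, 0, 'compound'), (2, 1, 'amod'), (3, 2, 'nsubj'), ...]
```

Such a per-document graph could then serve as input to the encoding tree learning stage named in the title; that stage is not covered by the truncated abstract, so it is omitted here.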
