Text Level Graph Neural Network for Text Classification

@inproceedings{Huang2019TextLG,
  title={Text Level Graph Neural Network for Text Classification},
  author={Lianzhe Huang and Dehong Ma and Sujian Li and Xiaodong Zhang and Houfeng Wang},
  booktitle={Conference on Empirical Methods in Natural Language Processing},
  year={2019}
}
Recently, research has explored graph neural network (GNN) techniques for text classification, since GNNs handle complex structures well and preserve global information. However, previous GNN-based methods face two practical problems: a fixed corpus-level graph structure, which does not support online testing, and high memory consumption. To tackle these problems, we propose a new GNN-based model that builds a graph for each input text with globally shared parameters…
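
The core idea is easy to sketch. Below is a minimal illustration, not the authors' code: every input text gets its own small graph, while node embeddings and edge weights are looked up from globally shared tables, so parameters are reused across texts and a new document can be classified online. The window size, embedding dimension, and toy vocabulary are assumptions made for the example.

```python
import numpy as np

WINDOW = 2          # connect each word to neighbors within 2 positions (assumed value)
EMBED_DIM = 8       # toy embedding size
rng = np.random.default_rng(0)

# Globally shared parameters: one embedding per vocabulary word and one
# scalar weight per directed word-pair edge, reused across all texts.
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3}
node_embeddings = rng.normal(size=(len(vocab), EMBED_DIM))
edge_weights = {}   # (src word id, dst word id) -> shared scalar weight

def build_text_graph(tokens):
    """Build a small graph for one input text: nodes are token positions,
    edges link each token to the tokens within WINDOW positions of it."""
    ids = [vocab[t] for t in tokens]
    edges = []
    for i in range(len(ids)):
        for j in range(max(0, i - WINDOW), min(len(ids), i + WINDOW + 1)):
            if i == j:
                continue
            key = (ids[i], ids[j])
            # The same word pair gets the same (trainable) weight in
            # every text -- this is the global parameter sharing.
            if key not in edge_weights:
                edge_weights[key] = rng.normal()
            edges.append((i, j, edge_weights[key]))
    return ids, edges

ids, edges = build_text_graph(["the", "movie", "was", "great"])

# One message-passing step: each node aggregates neighbor embeddings
# scaled by the shared edge weights (mean reduction here for brevity;
# the paper's message passing uses a max-over-neighbors reduction).
agg = np.zeros((len(ids), EMBED_DIM))
deg = np.zeros(len(ids))
for i, j, w in edges:
    agg[i] += w * node_embeddings[ids[j]]
    deg[i] += 1
agg /= deg[:, None]
print(agg.shape)    # (4, 8): one updated representation per token
```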

Citations

A text classification method based on LSTM and graph attention network

This model builds a separate graph from the syntactic structure of each document, generates word embeddings with contextual information using an LSTM, then learns inductive word representations with a GAT, and finally fuses all nodes in the graph into the document embedding.

Graph topology enhancement for text classification

An asynchronous weighted propagation scheme is proposed, which selectively fuses the topological features with the original features of the word nodes and integrates document features to predict the final results.

Text Graph Transformer for Document Classification

This work introduces a novel Transformer-based heterogeneous graph neural network, namely Text Graph Transformer (TG-Transformer), and proposes a mini-batch text graph sampling method that significantly reduces computing and memory costs when handling large-sized corpora.

Text Classification with Attention Gated Graph Neural Network

A novel graph-based model where every document is represented as a text graph and an attention gated graph neural network (AGGNN) is devised to propagate and update the semantic information of each word node from its 1-hop neighbors.

Deep Attention Diffusion Graph Neural Networks for Text Classification

A Deep Attention Diffusion Graph Neural Network (DADGNN) model is proposed to learn text representations, bridging the interaction gap between a word and its distant neighbors.

TW-TGNN: Two Windows Graph-Based Model for Text Classification

A new GNN-based model, namely the two-window text GNN (TW-TGNN), is proposed, which captures adequate global information for short texts and thereby helps overcome the insufficient contextual information in short text classification.

A semantic hierarchical graph neural network for text classification

A new hierarchical graph neural network (HieGNN) is proposed, which extracts information at the word, sentence, and document levels respectively and is thus able to obtain more useful information for classification from each sample.

A Sequential Graph Neural Network for Short Text Classification

This work proposes an improved sequence-based feature propagation scheme, which makes full use of word representations and document-level word interactions and overcomes the limitations of textual features in short texts.

Every Document Owns Its Structure: Inductive Text Classification via Graph Neural Networks

This work proposes TextING for inductive text classification via GNN, which first builds an individual graph for each document and then uses a GNN to learn fine-grained word representations based on their local structure; it can also effectively produce embeddings for unseen words in new documents.

Word Order is Considerable: Contextual Position-aware Graph Neural Network for Text Classification

  • Ying Tan, Junli Wang
  • 2022 International Joint Conference on Neural Networks (IJCNN), 2022
The Contextual Position-aware Graph Neural Network for text classification is proposed; it includes a Position-aware Graph Attention module and a Contextual Fusion module that enables the Context-fused Graph LSTM to learn contextual word representations.
...

References

Showing 10 of 31 references

Graph Convolutional Networks for Text Classification

This work builds a single text graph for a corpus based on word co-occurrence and document-word relations, then learns a Text Graph Convolutional Network (Text GCN) for the corpus, which jointly learns embeddings for both words and documents under supervision from the known document class labels.
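
As a concrete illustration of that graph construction, the sketch below builds word-word edges from positive PMI over sliding windows and document-word edges from TF-IDF, which is the scheme Text GCN describes; the tiny corpus and window size are invented for the example.

```python
import math
from collections import Counter
from itertools import combinations

docs = [["good", "movie"], ["bad", "movie"], ["good", "plot", "bad", "acting"]]
WINDOW = 2   # sliding-window width for co-occurrence counting (assumed)

# Word-word edges: positive PMI over sliding windows.
word_count, pair_count, n_windows = Counter(), Counter(), 0
for doc in docs:
    for start in range(max(1, len(doc) - WINDOW + 1)):
        win = doc[start:start + WINDOW]
        n_windows += 1
        for w in set(win):
            word_count[w] += 1
        for a, b in combinations(sorted(set(win)), 2):
            pair_count[(a, b)] += 1

word_word = {}
for (a, b), n_ab in pair_count.items():
    pmi = math.log(n_ab * n_windows / (word_count[a] * word_count[b]))
    if pmi > 0:          # Text GCN keeps only positive-PMI word pairs
        word_word[(a, b)] = pmi

# Document-word edges: TF-IDF.
n_docs = len(docs)
df = Counter(w for doc in docs for w in set(doc))
doc_word = {}
for d, doc in enumerate(docs):
    for w, tf in Counter(doc).items():
        doc_word[(d, w)] = tf * math.log(n_docs / df[w])

print(len(word_word), "word-word edges;", len(doc_word), "doc-word edges")
```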

Large-Scale Hierarchical Text Classification with Recursively Regularized Deep Graph-CNN

A graph-CNN based deep learning model is proposed to first convert texts to graph-of-words, and then use graph convolution operations to convolve the word graph and regularize the deep architecture with the dependency among labels.

Recurrent Neural Network for Text Classification with Multi-Task Learning

This paper uses the multi-task learning framework to jointly learn across multiple related tasks based on recurrent neural networks, and proposes three different mechanisms for sharing information to model text with task-specific and shared layers.

Graph Attention Networks

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
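
For reference, here is a minimal numpy sketch of a single GAT attention head using the standard formulation e_ij = LeakyReLU(a^T [W h_i || W h_j]) followed by a softmax over each node's neighborhood; the toy graph and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, F_IN, F_OUT = 4, 5, 3                      # toy sizes
H = rng.normal(size=(N, F_IN))                # input node features
adj = np.array([[1, 1, 0, 0],                 # adjacency with self-loops
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=float)
W = rng.normal(size=(F_IN, F_OUT))            # shared linear transform
a = rng.normal(size=(2 * F_OUT,))             # attention vector

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

Wh = H @ W
# e_ij = LeakyReLU(a^T [Wh_i || Wh_j]) decomposes into a source term
# and a destination term, so it can be computed with broadcasting.
src = Wh @ a[:F_OUT]
dst = Wh @ a[F_OUT:]
e = leaky_relu(src[:, None] + dst[None, :])

# Mask non-edges, then softmax over each node's neighborhood.
e = np.where(adj > 0, e, -np.inf)
alpha = np.exp(e - e.max(axis=1, keepdims=True))
alpha /= alpha.sum(axis=1, keepdims=True)

H_out = alpha @ Wh                            # attention-weighted aggregation
print(H_out.shape)                            # (4, 3)
```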

Graph Convolutional Encoders for Syntax-aware Neural Machine Translation

We present a simple and effective approach to incorporating syntactic structure into neural attention-based encoder-decoder models for machine translation. We rely on graph-convolutional networks (GCNs), a recent class of neural networks developed for modeling graph-structured data.

Sentence-State LSTM for Text Representation

This work investigates an alternative LSTM structure for encoding text, which consists of a parallel state for each word, and shows that the proposed model has strong representational power, giving highly competitive performance compared to stacked BiLSTM models with similar parameter counts.
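
A heavily simplified sketch of that information flow, with plain averaging standing in for the paper's LSTM-style gates (so this shows only the parallel local/global state exchange, not the authors' equations):

```python
import numpy as np

rng = np.random.default_rng(3)
T, D = 6, 8                          # sentence length and state size (toy)
words = rng.normal(size=(T, D))      # one state per word, updated in parallel
g = words.mean(axis=0)               # sentence-level state

for _ in range(3):                   # a few parallel exchange steps
    left = np.roll(words, 1, axis=0)     # np.roll wraps at the ends;
    right = np.roll(words, -1, axis=0)   # real S-LSTM pads instead
    words = (words + left + right + g) / 4.0   # local + global mixing
    g = (g + words.mean(axis=0)) / 2.0         # sentence state update

print(words.shape, g.shape)
```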

Deep Convolutional Networks on Graph-Structured Data

This paper develops an extension of Spectral Networks that incorporates a Graph Estimation procedure, which is tested on large-scale classification problems, matching or improving over Dropout Networks with far fewer parameters to estimate.

Baselines and Bigrams: Simple, Good Sentiment and Topic Classification

It is shown that the inclusion of word bigram features gives consistent gains on sentiment analysis tasks, and a simple but novel SVM variant using NB log-count ratios as feature values consistently performs well across tasks and datasets.
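
The NB log-count ratio feature trick is short enough to show directly; a small sketch, assuming binarized n-gram indicator features on toy data:

```python
import numpy as np

X = np.array([[1, 0, 1, 0],    # binarized n-gram indicator features (toy)
              [1, 1, 0, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1]], dtype=float)
y = np.array([1, 1, 0, 0])     # binary sentiment labels
alpha = 1.0                    # smoothing

p = alpha + X[y == 1].sum(axis=0)          # smoothed positive counts
q = alpha + X[y == 0].sum(axis=0)          # smoothed negative counts
r = np.log((p / p.sum()) / (q / q.sum()))  # NB log-count ratio

X_nb = X * r   # element-wise scaling; train a linear SVM or LR on X_nb
print(r)
```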

Inductive Representation Learning on Large Graphs

GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
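
A hedged sketch of the GraphSAGE mean aggregator with fixed-size neighbor sampling; the toy graph, sample size, and dimensions are assumptions, and real GraphSAGE stacks several such layers with trained weights:

```python
import numpy as np

rng = np.random.default_rng(2)
features = rng.normal(size=(5, 4))   # node feature matrix (toy graph)
neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1], 4: [2]}
SAMPLE = 2                           # neighbors sampled per node (assumed)
W = rng.normal(size=(2 * 4, 4))      # weights for [self || aggregated]

def sage_layer(v):
    """One GraphSAGE step for node v: sample a fixed number of neighbors,
    mean-aggregate their features, concatenate with v's own features,
    then apply a linear map and ReLU."""
    nbrs = neighbors[v]
    sampled = rng.choice(nbrs, size=min(SAMPLE, len(nbrs)), replace=False)
    aggregated = features[sampled].mean(axis=0)
    h = np.concatenate([features[v], aggregated]) @ W
    return np.maximum(h, 0.0)

print(sage_layer(0))                 # embeddings work for unseen nodes too
```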