A Comprehensive Survey on Graph Neural Networks
- Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, Philip S. Yu
- IEEE Transactions on Neural Networks and Learning…
- 2019
This article provides a comprehensive overview of graph neural networks (GNNs) in the data mining and machine learning fields and proposes a new taxonomy that divides state-of-the-art GNNs into four categories: recurrent GNNs, convolutional GNNs, graph autoencoders, and spatial–temporal GNNs.
Graph WaveNet for Deep Spatial-Temporal Graph Modeling
- Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Chengqi Zhang
- International Joint Conference on Artificial…
- 31 May 2019
This paper proposes a novel graph neural network architecture, Graph WaveNet, for spatial-temporal graph modeling; it develops a novel adaptive dependency matrix, learned through node embeddings, which can precisely capture the hidden spatial dependencies in the data.
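As a minimal sketch of the adaptive dependency matrix idea described above: two learnable node-embedding tables can be multiplied and normalized to yield a data-driven adjacency matrix. The embedding shapes, the ReLU-then-softmax normalization, and all variable names here are illustrative assumptions, not a definitive reproduction of the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical learnable source/target node-embedding tables
# (in a real model these would be trained parameters)
rng = np.random.default_rng(0)
num_nodes, dim = 5, 8
E1 = rng.standard_normal((num_nodes, dim))  # source-node embeddings
E2 = rng.standard_normal((num_nodes, dim))  # target-node embeddings

# Adaptive adjacency: pairwise similarities, rectified, then
# row-normalized so each row is a distribution over neighbors
A_adapt = softmax(np.maximum(E1 @ E2.T, 0.0), axis=1)
print(A_adapt.shape)  # (5, 5)
```

Because the matrix is built from trained embeddings rather than a fixed graph, it can recover dependencies that are absent from (or misstated in) any predefined adjacency structure.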
Adversarially Regularized Graph Autoencoder for Graph Embedding
- Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Lina Yao, Chengqi Zhang
- International Joint Conference on Artificial…
- 13 February 2018
A novel adversarial graph embedding framework for graph data that encodes the topological structure and node content in a graph to a compact representation, on which a decoder is trained to reconstruct the graph structure.
Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks
- Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Xiaojun Chang, Chengqi Zhang
- Knowledge Discovery and Data Mining
- 24 May 2020
This paper proposes a general graph neural network framework designed specifically for multivariate time series data; it outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets and achieves on-par performance with other approaches on two traffic datasets that provide extra structural information.
DiSAN: Directional Self-Attention Network for RNN/CNN-free Language Understanding
- Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, Chengqi Zhang
- AAAI Conference on Artificial Intelligence
- 14 September 2017
A novel attention mechanism is proposed in which the attention between elements from input sequence(s) is directional and multi-dimensional (i.e., feature-wise), along with a light-weight neural net based solely on the proposed attention, without any RNN/CNN structure, which outperforms complicated RNN models in both prediction quality and time efficiency.
Attributed Graph Clustering: A Deep Attentional Embedding Approach
- Chun Wang, Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Chengqi Zhang
- International Joint Conference on Artificial…
- 1 June 2019
This paper proposes a goal-directed deep learning approach to graph clustering, Deep Attentional Embedded Graph Clustering (DAEGC), which focuses on attributed graphs to sufficiently explore the two sides of information in graphs.
MGAE: Marginalized Graph Autoencoder for Graph Clustering
- C. Wang, Shirui Pan, Guodong Long, Xingquan Zhu, Jing Jiang
- International Conference on Information and…
- 6 November 2017
A marginalized graph convolutional network is proposed that corrupts network node content, allowing node content to interact with network features, and marginalizes the corrupted features in a graph autoencoder context to learn graph feature representations.
Learning Graph Embedding With Adversarial Training Methods
- Shirui Pan, Ruiqi Hu, S. Fung, Guodong Long, Jing Jiang, Chengqi Zhang
- IEEE Transactions on Cybernetics
- 4 January 2019
This article presents a novel adversarially regularized framework for graph embedding, employing the graph convolutional network as an encoder, that embeds the topological information and node content into a vector representation, from which a graph decoder is further built to reconstruct the input graph.
Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling
- Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, Chengqi Zhang
- International Joint Conference on Artificial…
- 31 January 2018
An RNN/CNN-free sentence-encoding model, the "reinforced self-attention network" (ReSAN), based solely on reinforced self-attention (ReSA), is proposed; it achieves state-of-the-art performance on both the Stanford Natural Language Inference and the Sentences Involving Compositional Knowledge datasets.
Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling
- Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
- International Conference on Learning…
- 3 April 2018
This paper proposes a model called the "bi-directional block self-attention network" (Bi-BloSAN) for RNN/CNN-free sequence encoding that matches or improves upon state-of-the-art accuracy and shows a better efficiency-memory trade-off than existing RNN, CNN, and SAN models.
...