Corpus ID: 47018956

Representation Learning on Graphs with Jumping Knowledge Networks

@inproceedings{Xu2018RepresentationLO,
  title={Representation Learning on Graphs with Jumping Knowledge Networks},
  author={Keyulu Xu and Chengtao Li and Yonglong Tian and Tomohiro Sonobe and Ken-ichi Kawarabayashi and Stefanie Jegelka},
  booktitle={ICML},
  year={2018}
}
Recent deep learning approaches for representation learning on graphs follow a neighborhood aggregation procedure. To adapt to local neighborhood properties and tasks, we explore an architecture, jumping knowledge (JK) networks, that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representation. In a number of experiments on social, bioinformatics and citation networks, we demonstrate that our model achieves state-of-the-art performance…
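The core idea the abstract describes can be sketched briefly: run several neighborhood-aggregation layers, keep every layer's output, and let each node "jump" all of its intermediate representations to the final layer, combined by concatenation or element-wise max. The NumPy sketch below is illustrative only; the layer form, ReLU nonlinearity, and normalization are simplifying assumptions, not the paper's exact architecture.

```python
import numpy as np

def gcn_layer(adj_norm, h, w):
    """One neighborhood-aggregation layer: propagate features along
    normalized adjacency, transform, apply ReLU."""
    return np.maximum(adj_norm @ h @ w, 0.0)

def jumping_knowledge(adj_norm, x, weights, mode="concat"):
    """Jumping-knowledge combination (sketch): collect every layer's
    node representations, then aggregate them per node.

    mode="concat" concatenates all layer outputs per node;
    mode="max" takes an element-wise max across layers.
    """
    layer_outputs = []
    h = x
    for w in weights:
        h = gcn_layer(adj_norm, h, w)
        layer_outputs.append(h)
    if mode == "concat":
        return np.concatenate(layer_outputs, axis=1)
    if mode == "max":
        return np.maximum.reduce(layer_outputs)
    raise ValueError(f"unknown mode: {mode}")
```

Because each node's final representation draws on all layers, nodes in dense regions can rely on small neighborhood ranges while peripheral nodes can use deeper ones.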

Citations

Graphs, Entities, and Step Mixture

A new graph neural network that considers both edge-based neighborhood relationships and node-based entity features, i.e., Graph Entities with Step Mixture via random walk (GESM), which achieves state-of-the-art or comparable performance on eight benchmark graph datasets comprising transductive and inductive learning tasks.

Local Augmentation for Graph Neural Networks

Local augmentation is a general framework that can be applied to any GNN model in a plug-and-play manner and samples feature vectors associated with each node from the learned conditional distribution as additional input for the backbone model at each training iteration.

Learning How to Propagate Messages in Graph Neural Networks

This paper presents learning to propagate, a general learning framework that not only learns the GNN parameters for prediction but, more importantly, can explicitly learn interpretable and personalized propagation strategies for different nodes and various types of graphs.

Learning Robust Node Representations on Graphs

This work introduces the stability of node representations, in addition to smoothness and identifiability, and develops a novel method called contrastive graph neural networks (CGNN) that learns robust node representations in an unsupervised manner.

NAFS: A Simple yet Tough-to-beat Baseline for Graph Representation Learning

NAFS is presented, a simple non-parametric method that constructs node representations without parameter learning; it outperforms state-of-the-art GNNs on these tasks and mitigates two key limitations of most learning-based GNN counterparts.

GraphAIR: Graph Representation Learning with Neighborhood Aggregation and Interaction

Graph Neural Networks with Feature and Structure Aware Random Walk

This paper generalizes the graph Laplacian to digraphs based on the proposed Feature-Aware PageRank algorithm, which simultaneously considers graph directionality and long-distance feature similarity between nodes, and develops a model that adaptively learns the directionality of the graph and exploits the underlying long-distance correlations between nodes.

Active Learning for Graph Neural Networks via Node Feature Propagation

A new method is proposed that uses node feature propagation followed by K-Medoids clustering of the nodes for instance selection in active learning on node classification tasks; it consistently and significantly outperforms other representative baseline methods.

Discovering Localized Information for Heterogeneous Graph Node Representation Learning

This paper proposes a model without any a priori selection of meta-paths, using the classical Graph Convolution Network (GCN) model as a tool to aggregate node features and then aggregate the context graph feature vectors to produce the target node's feature representation.

Subgraph Neural Networks

A novel subgraph routing mechanism propagates neural messages between the subgraph's components and randomly sampled anchor patches from the underlying graph, yielding highly accurate subgraph representations; the work also contributes a series of new synthetic and real-world subgraph datasets.
...

References

Showing 1-10 of 33 references

node2vec: Scalable Feature Learning for Networks

In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
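The biased random walk that node2vec describes can be sketched as a second-order walk: the transition weight from the previous node t through the current node v to a candidate x is 1/p for returning to t, 1 for nodes adjacent to t, and 1/q otherwise. A minimal sketch under those assumptions (the alias-sampling optimization of the real implementation is omitted):

```python
import random

def node2vec_walk(neighbors, start, length, p=1.0, q=1.0, rng=None):
    """Second-order biased random walk (node2vec sketch).

    neighbors: dict mapping node -> list of adjacent nodes.
    p controls the likelihood of revisiting the previous node;
    q interpolates between BFS-like and DFS-like exploration.
    """
    rng = rng or random.Random(0)
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = neighbors[cur]
        if not nbrs:
            break
        if len(walk) == 1:
            walk.append(rng.choice(nbrs))  # first step is uniform
            continue
        prev = walk[-2]
        weights = []
        for x in nbrs:
            if x == prev:
                weights.append(1.0 / p)          # return to previous node
            elif prev in neighbors[x]:
                weights.append(1.0)              # stay close (BFS-like)
            else:
                weights.append(1.0 / q)          # move outward (DFS-like)
        walk.append(rng.choices(nbrs, weights=weights, k=1)[0])
    return walk
```

The resulting walks are fed to a skip-gram model to learn the node embeddings.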

Inductive Representation Learning on Large Graphs

GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
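The aggregation step GraphSAGE describes can be sketched with its mean aggregator: average neighbor features, transform self and neighbor parts separately, and apply a nonlinearity. This NumPy sketch uses full neighborhoods for simplicity; the fixed-size neighbor sampling of the actual method is an omitted detail.

```python
import numpy as np

def sage_mean_layer(neighbors, h, w_self, w_neigh):
    """GraphSAGE mean-aggregator layer (sketch): for each node, average
    neighbor features, combine transformed self and neighbor parts,
    then apply ReLU."""
    out = []
    for v in range(h.shape[0]):
        nbrs = neighbors[v]
        agg = h[nbrs].mean(axis=0) if nbrs else np.zeros(h.shape[1])
        out.append(h[v] @ w_self + agg @ w_neigh)
    return np.maximum(np.array(out), 0.0)
```

Because the layer is defined by learned aggregator weights rather than a fixed embedding table, it applies inductively to nodes unseen during training.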

Gated Graph Sequence Neural Networks

This work studies feature learning techniques for graph-structured inputs and achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.

Semi-Supervised Classification with Graph Convolutional Networks

A scalable approach for semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operate directly on graphs; it outperforms related methods by a significant margin.
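The propagation rule of that GCN variant is H' = ReLU(D̃^(-1/2) Ã D̃^(-1/2) H W), where Ã is the adjacency matrix with self-loops and D̃ its degree matrix. A minimal dense NumPy sketch (real implementations use sparse matrices):

```python
import numpy as np

def gcn_propagate(adj, h, w):
    """One GCN layer (sketch): add self-loops, symmetrically normalize
    the adjacency matrix, propagate features, apply ReLU."""
    a_tilde = adj + np.eye(adj.shape[0])          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))
    a_hat = a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_hat @ h @ w, 0.0)
```

Stacking k such layers mixes information from each node's k-hop neighborhood, which is exactly the fixed-range behavior the JK architecture above is designed to relax.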

Weisfeiler-Lehman Graph Kernels

A family of efficient kernels for large graphs with discrete node labels based on the Weisfeiler-Lehman test of isomorphism on graphs that outperform state-of-the-art graph kernels on several graph classification benchmark data sets in terms of accuracy and runtime.
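One refinement step of the Weisfeiler-Lehman test underlying these kernels can be sketched directly: each node's new label compresses its own label together with the sorted multiset of its neighbors' labels. A minimal sketch (the kernel itself, which counts label occurrences across iterations, is omitted):

```python
def wl_iteration(neighbors, labels):
    """One Weisfeiler-Lehman refinement step (sketch): build each node's
    signature from its label and its neighbors' labels, then compress
    signatures to fresh integer labels."""
    signatures = {
        v: (labels[v], tuple(sorted(labels[u] for u in neighbors[v])))
        for v in neighbors
    }
    mapping = {}
    new_labels = {}
    for v, sig in signatures.items():
        if sig not in mapping:
            mapping[sig] = len(mapping)  # assign a fresh compressed label
        new_labels[v] = mapping[sig]
    return new_labels
```

This relabeling is also the scheme that neighborhood-aggregation GNNs implicitly mimic, which is why it appears among the paper's references.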

Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering

This work presents a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs.

DeepWalk: online learning of social representations

DeepWalk is an online learning algorithm that builds useful incremental results and is trivially parallelizable, which makes it suitable for a broad class of real-world applications such as network classification and anomaly detection.

LINE: Large-scale Information Network Embedding

A novel network embedding method called "LINE," which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted; it optimizes a carefully designed objective function that preserves both the local and global network structures.

Geometric Deep Learning on Graphs and Manifolds Using Mixture Model CNNs

This paper proposes a unified framework that generalizes CNN architectures to non-Euclidean domains (graphs and manifolds) and learns local, stationary, and compositional task-specific features; the proposed method is tested on standard tasks from image, graph, and 3D shape analysis and consistently outperforms previous approaches.

Collective Classification in Network Data

This article introduces four of the most widely used inference algorithms for classifying networked data and empirically compares them on both synthetic and real-world data.