Extrapolating paths with graph neural networks

@article{Cordonnier2019ExtrapolatingPW,
  title={Extrapolating paths with graph neural networks},
  author={Jean-Baptiste Cordonnier and Andreas Loukas},
  journal={ArXiv},
  year={2019},
  volume={abs/1903.07518}
}
We consider the problem of path inference: given a path prefix, i.e., a partially observed sequence of nodes in a graph, we want to predict which nodes are in the missing suffix. In particular, we focus on natural paths occurring as a by-product of the interaction of an agent with a network: a driver on the transportation network, an information seeker in Wikipedia, or a client in an online shop. Our interest is sparked by the realization that, in contrast to shortest-path problems, natural…
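
To make the task concrete, here is a minimal sketch of path-suffix prediction, a toy stand-in rather than the authors' architecture: one generic mean-aggregation round produces node embeddings, and the neighbors of the prefix's last node are ranked against a summary of the observed prefix. The graph, embedding size, and function names are all illustrative assumptions.

import random

def message_pass(adj, emb):
    # One round of mean aggregation over neighbors (a generic GNN step).
    new_emb = {}
    for v, neighbors in adj.items():
        msgs = [emb[u] for u in neighbors] + [emb[v]]
        new_emb[v] = [sum(x) / len(msgs) for x in zip(*msgs)]
    return new_emb

def score_suffix_candidates(adj, emb, prefix):
    # Rank each neighbor of the prefix's last node by dot product with
    # the mean embedding of the observed prefix.
    state = [sum(x) / len(prefix) for x in zip(*(emb[v] for v in prefix))]
    return {u: sum(a * b for a, b in zip(state, emb[u]))
            for u in adj[prefix[-1]]}

random.seed(0)
toy_graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}  # toy example
emb = {v: [random.gauss(0, 1) for _ in range(8)] for v in toy_graph}
emb = message_pass(toy_graph, emb)
print(score_suffix_candidates(toy_graph, emb, prefix=[0, 1]))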

Citations

Graph Neural Networks: Taxonomy, Advances, and Trends

TLDR
A novel taxonomy for graph neural networks is provided, and up to 327 relevant works are referenced to show the panorama of graph neural networks.

Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs

TLDR
This work uses a neural network to parametrize a probability distribution over sets and shows that when the network is optimized w.r.t. a suitably chosen loss, the learned distribution contains, with controlled probability, a low-cost integral solution that obeys the constraints of the combinatorial problem.

Principled Simplicial Neural Networks for Trajectory Prediction

TLDR
A simple convolutional architecture, rooted in tools from algebraic topology, is proposed for the problem of trajectory prediction, and is shown to obey all three of the desired properties when an odd, nonlinear activation function is used.
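
As a rough illustration of this style of architecture (an assumption-laden sketch, not necessarily the paper's exact model): a signal on edges, such as a trajectory's flow, is filtered with the Hodge 1-Laplacian L1 = B1^T B1 + B2 B2^T built from incidence matrices, then passed through an odd activation (tanh). The toy complex below is invented for illustration.

import numpy as np

# Toy complex: nodes {0,1,2,3}, edges e0=(0,1), e1=(0,2), e2=(1,2), e3=(1,3),
# and one filled triangle {0,1,2}.
B1 = np.array([[-1, -1,  0,  0],    # node-to-edge incidence matrix
               [ 1,  0, -1, -1],
               [ 0,  1,  1,  0],
               [ 0,  0,  0,  1]], dtype=float)
B2 = np.array([[1], [-1], [1], [0]], dtype=float)  # edge-to-triangle incidence
L1 = B1.T @ B1 + B2 @ B2.T                         # Hodge 1-Laplacian

rng = np.random.default_rng(0)
flow = rng.normal(size=4)             # a signal living on the four edges
theta0, theta1 = rng.normal(size=2)   # stand-ins for learned filter weights
print(np.tanh(theta0 * flow + theta1 * (L1 @ flow)))  # one simplicial conv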

SQALER: Scaling Question Answering by Decoupling Multi-Hop and Logical Reasoning

TLDR
This paper shows that multi-hop and more complex logical reasoning can be accomplished separately without losing expressive power, and proposes an approach to multi-hop reasoning that scales linearly with the number of relation types in the graph, which is usually significantly smaller than the number of edges or nodes.

Wikipedia Reader Navigation: When Synthetic Data Is Enough

TLDR
This study systematically quantifies the differences between real navigation sequences and synthetic sequences generated from clickstream data, across 6 analyses and 8 Wikipedia language versions, and finds that the differences are statistically significant but have small effect sizes.

Simplicial Attention Neural Networks

The aim of this work is to introduce Simplicial Attention Neural Networks (SANs), i.e., novel neural architectures that operate on data defined on simplicial complexes, leveraging masked self-attentional layers.

Generative Models for City-Specific Vehicle Trajectories

TLDR
This work proposes a method of generating new samples of vehicle trajectory data for a given city, leveraging existing GAN models together with novel data representations of vehicle trajectories in cities.

Data integration in systems genetics and aging research

TLDR
A large collection of “omics” and phenotype data derived from the BXD mouse genetic diversity panel is used to explore how good data management practices, as fostered by the FAIR principles, paired with an explainable artificial intelligence framework, can provide solutions to decipher the complex roots of age-related diseases.

References

Showing 1-10 of 27 references

node2vec: Scalable Feature Learning for Networks

TLDR
In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
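
The biased walk itself is compact; below is a direct, unoptimized transcription of node2vec's second-order transition rule with return parameter p and in-out parameter q (real implementations precompute alias tables; the toy graph is illustrative).

import random

def biased_walk(adj, start, length, p=1.0, q=1.0):
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        if len(walk) == 1:                  # first step: uniform choice
            walk.append(random.choice(adj[cur]))
            continue
        prev = walk[-2]
        weights = []
        for nxt in adj[cur]:
            if nxt == prev:                 # return to previous node
                weights.append(1.0 / p)
            elif nxt in adj[prev]:          # stays near prev (BFS-like)
                weights.append(1.0)
            else:                           # moves away from prev (DFS-like)
                weights.append(1.0 / q)
        walk.append(random.choices(adj[cur], weights=weights)[0])
    return walk

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}  # toy graph
print(biased_walk(adj, start=0, length=6, p=0.5, q=2.0))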

Inductive Representation Learning on Large Graphs

TLDR
GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
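
A sketch of the mean-aggregator variant described in the paper: each layer concatenates a node's own features with the mean of its neighbors' features and applies a learned projection, so embeddings for previously unseen nodes follow directly from their features. The weights below are random stand-ins for learned parameters.

import numpy as np

rng = np.random.default_rng(0)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # toy graph
feats = rng.normal(size=(4, 5))          # one feature row per node
W = rng.normal(size=(10, 5))             # maps [self ; mean(neigh)] -> 5 dims

def sage_layer(adj, h, W):
    out = np.zeros((len(adj), W.shape[1]))
    for v, neigh in adj.items():
        agg = h[neigh].mean(axis=0)                              # neighbor mean
        out[v] = np.maximum(np.concatenate([h[v], agg]) @ W, 0)  # ReLU
    return out

print(sage_layer(adj, feats, W))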

How Powerful are Graph Neural Networks?

TLDR
This work characterizes the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, shows that they cannot learn to distinguish certain simple graph structures, and develops a simple architecture that is provably the most expressive among the class of GNNs.
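
That architecture is the Graph Isomorphism Network (GIN); a sketch of its layer follows, with random weights standing in for the learned MLP. Sum aggregation, unlike mean or max, preserves multiset information about the neighborhood.

import numpy as np

rng = np.random.default_rng(5)
adj = {0: [1, 2], 1: [0], 2: [0]}        # toy graph
h = rng.normal(size=(3, 4))              # node features
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 4))
eps = 0.1                                # learnable in the paper; fixed here

def gin_layer(adj, h, eps, W1, W2):
    out = np.zeros_like(h)
    for v, neigh in adj.items():
        agg = (1 + eps) * h[v] + h[neigh].sum(axis=0)  # injective sum update
        out[v] = np.maximum(agg @ W1, 0) @ W2          # two-layer MLP
    return out

print(gin_layer(adj, h, eps, W1, W2))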

Human wayfinding in information networks

TLDR
A large-scale study of human wayfinding is presented, in which, given a network of links between the concepts of Wikipedia, people play a game of finding a short path from a given start to a given target concept by following hyperlinks.

Relational inductive biases, deep learning, and graph networks

TLDR
It is argued that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective.

DeepWalk: online learning of social representations

TLDR
DeepWalk is an online learning algorithm that builds useful incremental results and is trivially parallelizable, which makes it suitable for a broad class of real-world applications such as network classification and anomaly detection.
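
A sketch of DeepWalk's first stage: truncated uniform random walks, which are then fed, like sentences, to a skip-gram model. The gensim call in the trailing comment is an assumption about tooling, not part of the paper.

import random

def random_walks(adj, num_walks, walk_len, seed=0):
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        nodes = list(adj)
        rng.shuffle(nodes)                # one pass over shuffled nodes
        for start in nodes:
            walk = [start]
            while len(walk) < walk_len:
                walk.append(rng.choice(adj[walk[-1]]))
            walks.append([str(v) for v in walk])
    return walks

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}  # toy graph
walks = random_walks(adj, num_walks=2, walk_len=5)
print(walks[:2])
# from gensim.models import Word2Vec
# model = Word2Vec(walks, vector_size=16, window=4, min_count=0, sg=1)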

Semi-Supervised Classification with Graph Convolutional Networks

TLDR
A scalable approach for semi-supervised learning on graph-structured data is presented, based on an efficient variant of convolutional neural networks that operate directly on graphs; it outperforms related methods by a significant margin.
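
The paper's layer-wise propagation rule, H' = sigma(D^{-1/2} (A + I) D^{-1/2} H W) with D the degree matrix of A + I, fits in a few lines; the weights and features below are random stand-ins for learned and observed quantities.

import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy 3-node path
H = rng.normal(size=(3, 4))              # node features
W = rng.normal(size=(4, 2))              # learned in practice

A_hat = A + np.eye(3)                    # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
print(np.maximum(norm @ H @ W, 0))       # one GCN layer with ReLU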

Modeling Trajectories with Recurrent Neural Networks

TLDR
Two RNN-based models are designed that take full advantage of the strength of RNNs to capture variable-length sequences while addressing the constraints that topological structure imposes on trajectory modeling.
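
A sketch of the general idea, not the paper's exact models: next-node probabilities produced from an RNN state are masked by the adjacency structure, so that only topologically feasible successors receive probability mass. Names and the toy graph are illustrative.

import numpy as np

def masked_next_node_probs(logits, adj, current):
    mask = np.full_like(logits, -np.inf)
    mask[adj[current]] = 0.0                 # only graph neighbors feasible
    z = logits + mask
    z -= z.max()                             # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}          # toy graph
logits = np.random.default_rng(4).normal(size=4)  # e.g., from an RNN state
print(masked_next_node_probs(logits, adj, current=0))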

Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering

TLDR
This work presents a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs.
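
The key trick is that a degree-K Chebyshev polynomial in the rescaled Laplacian gives an exactly K-hop-localized filter without an eigendecomposition; a sketch with K = 3 and random stand-in coefficients follows.

import numpy as np

rng = np.random.default_rng(2)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # toy star graph
D = np.diag(A.sum(axis=1))
L = D - A                                     # combinatorial Laplacian
lmax = np.linalg.eigvalsh(L).max()            # cheaply approximated in practice
L_tilde = 2 * L / lmax - np.eye(3)            # rescale spectrum to [-1, 1]

x = rng.normal(size=3)                        # a graph signal
theta = rng.normal(size=3)                    # K = 3 filter coefficients
Tx = [x, L_tilde @ x]                         # T_0 x, T_1 x
Tx.append(2 * L_tilde @ Tx[1] - Tx[0])        # T_2 x via the recurrence
print(sum(t * v for t, v in zip(theta, Tx)))  # filtered signal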

Long Short-Term Memory

TLDR
A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
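
A single step of a modern LSTM cell (including the forget gate, a later addition to the 1997 design) makes the carousel visible: the cell state c is updated additively, so gradients can flow across long lags. Weights are random stand-ins.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    z = W @ np.concatenate([x, h]) + b      # all four gates in one matmul
    i, f, o, g = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # additive cell update
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(3)
n_in, n_hid = 4, 3
W = rng.normal(size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
print(h, c)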