Goal-directed graph construction using reinforcement learning

Victor-Alexandru Darvariu, Stephen Hailes and Mirco Musolesi. Goal-directed graph construction using reinforcement learning. Proceedings of the Royal Society A.
Graphs can be used to represent and reason about systems, and a variety of metrics have been devised to quantify their global characteristics. However, little is currently known about how to construct a graph, or improve an existing one, given a target objective. In this work, we formulate graph construction as a decision-making process in which a central agent creates topologies by trial and error and receives rewards proportional to the value of the target objective. By means of this…
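As a rough illustration of this decision-making formulation, the sketch below builds a graph edge by edge, using the improvement in a target objective (global efficiency, one of the objectives studied in this line of work) as the reward signal. The sampling-based greedy agent is a hypothetical stand-in for the trained RL agent; all function names are illustrative and not taken from the paper's code.

```python
import itertools
import random
from collections import deque

def global_efficiency(adj):
    """Mean of 1/d(u, v) over all ordered node pairs (0 for unreachable pairs)."""
    n = len(adj)
    if n < 2:
        return 0.0
    total = 0.0
    for src in range(n):
        # BFS shortest-path distances from src
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))

def construct_graph(n, edge_budget, objective, n_samples=10, seed=0):
    """Build a graph one edge at a time: at each step, sample candidate
    edges and keep the one whose reward (objective gain) is largest."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]  # adjacency as list of neighbour sets
    for _ in range(edge_budget):
        candidates = [(u, v) for u, v in itertools.combinations(range(n), 2)
                      if v not in adj[u]]
        if not candidates:
            break
        sampled = rng.sample(candidates, min(n_samples, len(candidates)))

        def score(edge):
            # Tentatively add the edge, evaluate the objective, then undo.
            u, v = edge
            adj[u].add(v); adj[v].add(u)
            value = objective(adj)
            adj[u].remove(v); adj[v].remove(u)
            return value

        u, v = max(sampled, key=score)
        adj[u].add(v); adj[v].add(u)
    return adj

adj = construct_graph(n=8, edge_budget=10, objective=global_efficiency)
```

In the paper's setting, the greedy scoring step is replaced by a policy learned by reinforcement learning, so that edge choices account for their long-term effect on the objective rather than the one-step gain shown here.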


Planning Spatial Networks with Monte Carlo Tree Search
The Monte Carlo Tree Search framework is adopted for planning in this domain, prioritizing the optimality of final solutions over the speed of policy evaluation, and the suitability of this approach is demonstrated for improving the global efficiency and attack resilience of a variety of synthetic and real-world networks, including Internet backbone networks and metro systems.
Challenges and Opportunities in Deep Reinforcement Learning with Graph Neural Networks: A Comprehensive Review of Algorithms and Applications
A comprehensive review of the applicability and benefits of fusing GNNs with DRL for graph-structured environments, especially in terms of increasing generalizability and reducing computational complexity.
Dynamic Network Reconfiguration for Entropy Maximization using Deep Reinforcement Learning
The general ability of the proposed method to obtain better entropy gains than random rewiring on synthetic and real-world graphs while being computationally inexpensive, as well as to generalize to larger graphs than those seen during training, is demonstrated.


GraphOpt: Learning Optimization Models of Graph Formation
An end-to-end framework that jointly learns an implicit model of graph structure formation and discovers an underlying optimization mechanism in the form of a latent objective function that can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
Graph Convolutional Policy Network for Goal-Directed Molecular Graph Generation
Graph Convolutional Policy Network (GCPN) is proposed, a general graph convolutional network based model for goal-directed graph generation through reinforcement learning that can achieve a 61% improvement on chemical property optimization over state-of-the-art baselines while resembling known molecules, and a 184% improvement on the constrained property optimization task.
GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models
The experiments show that GraphRNN significantly outperforms all baselines, learning to generate diverse graphs that match the structural characteristics of a target set, while also scaling to graphs 50 times larger than previous deep models.
Learning Combinatorial Optimization Algorithms over Graphs
This paper proposes a unique combination of reinforcement learning and graph embedding that behaves like a meta-algorithm incrementally constructing a solution, with each action determined by the output of a graph embedding network capturing the current state of the solution.
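The incremental construction loop described in that summary can be sketched with a hand-coded scoring rule standing in for the learned graph-embedding Q-function. This is a minimal, hypothetical illustration on vertex cover, not the paper's actual method:

```python
def greedy_vertex_cover(edges):
    """Incrementally construct a solution: repeatedly pick the node that
    covers the most uncovered edges. The count-based score plays the role
    of the Q-values produced by the graph embedding network."""
    uncovered = set(frozenset(e) for e in edges)
    cover = []
    while uncovered:
        counts = {}
        for e in uncovered:
            for v in e:
                counts[v] = counts.get(v, 0) + 1
        best = max(counts, key=counts.get)  # greedy action selection
        cover.append(best)
        uncovered = {e for e in uncovered if best not in e}
    return cover
```

The key idea of the cited approach is that the scoring function is learned end to end, so the same construction loop transfers across problem instances instead of relying on a fixed heuristic like the one above.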
Learning Deep Generative Models of Graphs
This work is the first and most general approach for learning generative models over arbitrary graphs, and opens new directions for moving away from restrictions of vector- and sequence-like knowledge representations, toward more expressive and flexible relational data structures.
Efficient Graph Generation with Graph Recurrent Attention Networks
A new family of efficient and expressive deep generative models of graphs, called Graph Recurrent Attention Networks (GRANs), which better captures the auto-regressive conditioning between the already-generated and to-be-generated parts of the graph using Graph Neural Networks (GNNs) with attention.
Neural Combinatorial Optimization with Reinforcement Learning
A framework to tackle combinatorial optimization problems using neural networks and reinforcement learning, and Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes.
Adversarial Attack on Graph Structured Data
This paper proposes a reinforcement learning based attack method that learns the generalizable attack policy, while only requiring prediction labels from the target classifier, and uses both synthetic and real-world data to show that a family of Graph Neural Network models are vulnerable to adversarial attacks.
Relational inductive biases, deep learning, and graph networks
It is argued that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective.
GNNExplainer: Generating Explanations for Graph Neural Networks
GNNExplainer is proposed, the first general, model-agnostic approach for providing interpretable explanations for predictions of any GNN-based model on any graph-based machine learning task.