Corpus ID: 233181947

Graph Partitioning and Sparse Matrix Ordering using Reinforcement Learning

@article{Gatti2021GraphPA,
  title={Graph Partitioning and Sparse Matrix Ordering using Reinforcement Learning},
  author={Alice Gatti and Zhixiong Hu and Pieter Ghysels and Esmond G. Ng and Tess E. Smidt},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.03546}
}
We present a novel method for graph partitioning, based on reinforcement learning and graph convolutional neural networks. The new reinforcement learning based approach is used to refine a given partitioning obtained on a coarser representation of the graph, and the algorithm is applied recursively. The neural network is implemented using graph attention layers, and trained using an advantage actor critic (A2C) agent. We present two variants, one for finding an edge separator that minimizes the… 
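The coarsen-then-refine recursion sketched in the abstract can be illustrated with a minimal multilevel bisection loop. Everything below (the matching-based coarsening, the greedy stand-in for the RL refinement agent, and helper names such as `coarsen` and `refine_with_policy`) is an illustrative assumption, not the authors' implementation.

```python
import networkx as nx

def coarsen(G):
    """Crude coarsening: contract a maximal matching of edges."""
    G = G.copy()
    matched = set()
    for u, v in list(G.edges()):
        if u not in matched and v not in matched:
            G = nx.contracted_nodes(G, u, v, self_loops=False)
            matched.update((u, v))
    return G

def refine_with_policy(G, part, max_imbalance=2):
    """Greedy stand-in for the RL refinement agent: flip a node's side if it
    reduces the edge cut while keeping the two sides roughly balanced."""
    def cut(p):
        return nx.cut_size(G, {n for n, s in p.items() if s == 0})
    for n in G.nodes():
        flipped = dict(part)
        flipped[n] = 1 - flipped[n]
        sizes = list(flipped.values())
        if abs(sizes.count(0) - sizes.count(1)) <= max_imbalance \
                and cut(flipped) < cut(part):
            part = flipped
    return part

def split_in_half(G):
    nodes = list(G.nodes())
    return {n: int(i >= len(nodes) // 2) for i, n in enumerate(nodes)}

def partition(G, min_size=8):
    """Recursive multilevel bisection: coarsen, partition the coarse graph,
    project the coarse labels back, then refine at the current level."""
    Gc = coarsen(G)
    if G.number_of_nodes() <= min_size or Gc.number_of_nodes() == G.number_of_nodes():
        part = split_in_half(G)
    else:
        coarse_part = partition(Gc, min_size)
        # Surviving nodes keep their coarse label; contracted-away nodes
        # fall back to side 0 (a deliberately crude projection).
        part = {n: coarse_part.get(n, 0) for n in G.nodes()}
    return refine_with_policy(G, part)

G = nx.grid_2d_graph(6, 6)
part = partition(G)
print("edge cut:", nx.cut_size(G, {n for n, s in part.items() if s == 0}))
```

In the paper's setting the greedy flip rule would be replaced by a trained graph-attention policy acting on the partition state at each level.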

References

Showing 1-10 of 60 references
Graph Attention Networks
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
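For orientation, a single-head graph attention forward pass might be sketched as below; the shapes, the ReLU output nonlinearity, and the random example graph are illustrative choices, not the reference implementation.

```python
import numpy as np

def gat_layer(H, A, W, a, slope=0.2):
    """Single-head GAT forward pass. H: (N, F) features, A: (N, N) adjacency
    with self-loops, W: (F, Fp) weights, a: (2*Fp,) attention vector."""
    Wh = H @ W                                   # (N, Fp) transformed features
    Fp = Wh.shape[1]
    # e[i, j] = LeakyReLU(a^T [Wh_i || Wh_j]), split into two dot products
    e = (Wh @ a[:Fp])[:, None] + (Wh @ a[Fp:])[None, :]
    e = np.where(e > 0, e, slope * e)            # LeakyReLU
    e = np.where(A > 0, e, -1e9)                 # mask non-neighbours
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)   # softmax over each node's neighbours
    return np.maximum(0, att @ Wh)               # aggregate (paper uses ELU; ReLU here)

rng = np.random.default_rng(0)
N, F, Fp = 5, 4, 3
A = (rng.random((N, N)) > 0.6).astype(float)
A = np.minimum(1.0, A + A.T + np.eye(N))         # symmetrize, add self-loops
out = gat_layer(rng.normal(size=(N, F)), A,
                rng.normal(size=(F, Fp)), rng.normal(size=2 * Fp))
print(out.shape)  # (5, 3)
```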
An Approximate Minimum Degree Ordering Algorithm
An approximate minimum degree (AMD) ordering algorithm for preordering a symmetric sparse matrix prior to numerical factorization is presented; it produces orderings comparable in quality with the best orderings from other minimum degree algorithms.
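The preordering step this refers to can be demonstrated in SciPy, which does not expose AMD itself; the sketch below substitutes reverse Cuthill-McKee as the fill-reducing permutation purely to show the order-before-factorize workflow, with bandwidth as a rough proxy for the benefit.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Symmetric test matrix: 2-D Laplacian on an 8x8 grid, then randomly permuted
# so that the natural ordering is no longer banded.
n = 8
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsr()
rng = np.random.default_rng(0)
scramble = rng.permutation(A.shape[0])
A = A[scramble][:, scramble]

# Preordering (RCM as a stand-in for AMD), applied symmetrically
# before a factorization would be computed.
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
Ap = A[perm][:, perm]

def bandwidth(M):
    coo = M.tocoo()
    return int(np.max(np.abs(coo.row - coo.col)))

print("bandwidth scrambled:", bandwidth(A), " reordered:", bandwidth(Ap))
```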
Simple statistical gradient-following algorithms for connectionist reinforcement learning
This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement, in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, without explicitly computing gradient estimates.
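A minimal REINFORCE-style update for a softmax policy on a two-armed bandit makes the "weight adjustments along the gradient of expected reinforcement" idea concrete; the bandit, step size, and running baseline are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                          # preferences for the two actions
true_means = np.array([0.2, 0.8])
lr, baseline = 0.1, 0.0

for t in range(2000):
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()                     # softmax policy
    a = rng.choice(2, p=probs)
    r = rng.normal(true_means[a], 0.1)       # stochastic reward
    baseline += 0.01 * (r - baseline)        # running-average baseline
    grad_logp = -probs
    grad_logp[a] += 1.0                      # d/dtheta log pi(a)
    theta += lr * (r - baseline) * grad_logp # REINFORCE update

print("learned action probabilities:", np.round(probs, 3))
```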
An overview of SuperLU: Algorithms, implementation, and user interface
An overview of the algorithms, design philosophy, and implementation techniques in SuperLU, a software library for solving sparse unsymmetric linear systems, along with examples of how the solver has been used in large-scale scientific applications and a discussion of its performance.
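SciPy's `scipy.sparse.linalg.splu` is backed by SuperLU, so a small solve through that interface gives a feel for the solver; the test matrix and the COLAMD ordering choice here are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu   # splu is backed by SuperLU

n = 100
A = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n), format="csc")
A = (A + sp.random(n, n, density=0.01, random_state=0)).tocsc()  # make it unsymmetric
b = np.ones(n)

lu = splu(A, permc_spec="COLAMD")   # column preordering option passed to SuperLU
x = lu.solve(b)
print("residual norm:", np.linalg.norm(A @ x - b))
```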
Combining Reinforcement Learning with Lin-Kernighan-Helsgaun Algorithm for the Traveling Salesman Problem
A variable strategy reinforced approach, denoted VSR-LKH, is proposed, which combines three reinforcement learning methods (Q-learning, Sarsa, and Monte Carlo) with the well-known Lin-Kernighan-Helsgaun (LKH) algorithm for the TSP.
Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs
This work uses a neural network to parametrize a probability distribution over sets and shows that when the network is optimized w.r.t. a suitably chosen loss, the learned distribution contains, with controlled probability, a low-cost integral solution that obeys the constraints of the combinatorial problem.
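A toy sketch of the underlying probabilistic-method idea: per-node probabilities define a distribution over sets, the expected objective has a closed form, and the method of conditional expectation recovers an integral solution at least as good as the expectation. Max-cut stands in for the combinatorial objective and random probabilities stand in for the network output; none of this is the paper's pipeline.

```python
import numpy as np
import networkx as nx

def expected_cut(G, p):
    """Expected cut size when node v is on side 1 independently with prob p[v]."""
    return sum(p[u] * (1 - p[v]) + p[v] * (1 - p[u]) for u, v in G.edges())

rng = np.random.default_rng(0)
G = nx.gnm_random_graph(30, 90, seed=0)
p = {v: rng.random() for v in G.nodes()}      # stands in for the network output

# Method of conditional expectation: fix nodes one at a time, never letting
# the expected cut drop, so the final integral cut >= the initial expectation.
start = expected_cut(G, p)
for v in G.nodes():
    p0, p1 = {**p, v: 0.0}, {**p, v: 1.0}
    p = p1 if expected_cut(G, p1) >= expected_cut(G, p0) else p0

cut = nx.cut_size(G, {v for v, x in p.items() if x == 1.0})
print(f"expected cut {start:.1f} -> integral cut {cut}")
```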
A State Aggregation Approach for Solving Knapsack Problem with Deep Reinforcement Learning
The results demonstrate that the proposed model with the state aggregation strategy not only gives better solutions but also learns in fewer timesteps than the one without state aggregation.
MFEM: a modular finite element methods library
SciPy 1.0: fundamental algorithms for scientific computing in Python
An overview of the capabilities and development practices of SciPy 1.0 is provided and some recent technical developments are highlighted.
An efficient heuristic procedure for partitioning graphs
A heuristic method for partitioning arbitrary graphs is presented that is both effective in finding optimal partitions and fast enough to be practical for solving large problems.
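This is the classic Kernighan-Lin bisection heuristic; networkx ships an implementation, used below on a small random graph purely for illustration.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

G = nx.gnm_random_graph(40, 120, seed=1)
part_a, part_b = kernighan_lin_bisection(G, seed=1)   # balanced two-way split
print("cut edges:", nx.cut_size(G, part_a, part_b),
      "sizes:", len(part_a), len(part_b))
```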
…