Graph Colouring Meets Deep Learning: Effective Graph Neural Network Models for Combinatorial Problems

@article{Lemos2019GraphCM,
  title={Graph Colouring Meets Deep Learning: Effective Graph Neural Network Models for Combinatorial Problems},
  author={Henrique Lemos and Marcelo O. R. Prates and Pedro H. C. Avelar and L. Lamb},
  journal={2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)},
  year={2019},
  pages={879-885}
}
Deep learning has consistently defied state-of-the-art techniques in many fields over the last decade. However, we are just beginning to understand the capabilities of neural learning in symbolic domains. Deep learning architectures that employ parameter sharing over graphs can produce models which can be trained on complex properties of relational data. These include highly relevant NP-Complete problems, such as SAT and TSP. In this work, we showcase how Graph Neural Networks (GNN) can be… 
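The parameter sharing the abstract describes can be illustrated with a minimal message-passing sketch (hypothetical, not the paper's model): every node updates its embedding from an aggregate of its neighbours' embeddings using one shared weight matrix, which is what lets the same trained model run on graphs of any size.

```python
import numpy as np

rng = np.random.default_rng(0)

def message_passing_step(h, adj, W):
    """h: (n, d) node embeddings; adj: (n, n) 0/1 adjacency; W: (d, d) shared weights."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)   # avoid divide-by-zero on isolated nodes
    msgs = adj @ h / deg                               # mean over each node's neighbours
    return np.tanh(msgs @ W)                           # the SAME update applied at every node

# Toy 4-cycle graph.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
h = rng.normal(size=(4, 8))        # initial node embeddings
W = rng.normal(size=(8, 8)) * 0.1  # shared across all nodes and all graphs

for _ in range(3):                 # a few rounds of iterative refinement
    h = message_passing_step(h, adj, W)

print(h.shape)  # (4, 8): one embedding per node, independent of graph size
```

Because `W` does not depend on the number of nodes, the same parameters transfer across problem instances, which is the property the surveyed works exploit for SAT, TSP, and colouring.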
Deep Learning Chromatic and Clique Numbers of Graphs
TLDR
Deep learning models are developed to predict the chromatic number and maximum clique size of graphs, both of which represent classical NP-complete combinatorial optimization problems encountered in graph theory.
Can Graph Neural Networks Learn to Solve MaxSAT Problem?
TLDR
Two kinds of GNN models are built to learn the solution of MaxSAT instances from benchmarks, and it is shown that GNNs have attractive potential to solve the MaxSAT problem through experimental evaluation and theoretical explanation of the effect.
Learning to solve NP-complete problems
TLDR
This work shows that Graph Neural Networks are powerful enough to solve NP-Complete problems which combine symbolic and numeric data, in addition to proposing a modern reformulation of the meta-model.
Learning the Satisfiability of Pseudo-Boolean Problem with Graph Neural Networks
TLDR
A GNN-based classification model to learn the satisfiability of pseudo-Boolean (PB) problem is proposed and experiments indicate that GNN has great potential in solving constraint satisfaction problems with numerical coefficients.
Computing Steiner Trees using Graph Neural Networks
TLDR
This paper tackles the Steiner Tree Problem and suggests that the out-of-the-box application of GNN methods does worse than the classic 2-approximation method, but when combined with a greedy shortest path construction, it even does slightly better than the 2-approximation algorithm.
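The classic 2-approximation that this baseline refers to builds the metric closure over the terminal vertices with shortest paths, then takes a minimum spanning tree of that complete terminal graph. A stdlib-only sketch (function names are illustrative, not from the paper):

```python
import heapq

def dijkstra(graph, src):
    """graph: dict vertex -> list of (neighbour, weight). Returns shortest distances from src."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def steiner_2approx_weight(graph, terminals):
    """Weight of the metric-closure MST over the terminals (<= 2x optimal Steiner tree)."""
    closure = {t: dijkstra(graph, t) for t in terminals}  # terminal-to-terminal distances
    terminals = list(terminals)
    in_tree, total = {terminals[0]}, 0
    while len(in_tree) < len(terminals):                  # Prim's MST on the closure
        u, v, w = min(((a, b, closure[a][b])
                       for a in in_tree for b in terminals if b not in in_tree),
                      key=lambda e: e[2])
        in_tree.add(v)
        total += w
    return total

# Toy graph: unit-weight path a-b-c-d; terminals a and d.
graph = {"a": [("b", 1)], "b": [("a", 1), ("c", 1)],
         "c": [("b", 1), ("d", 1)], "d": [("c", 1)]}
print(steiner_2approx_weight(graph, ["a", "d"]))  # 3
```

The 2x guarantee follows because doubling the optimal Steiner tree yields a terminal tour whose shortcutted edges all appear in the metric closure.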
Combinatorial Optimization with Physics-Inspired Graph Neural Networks
TLDR
The graph neural network optimizer performs on par with or outperforms existing solvers, with the ability to scale beyond the state of the art to problems with millions of variables.
HyperGCN: A New Method For Training Graph Convolutional Networks on Hypergraphs
TLDR
This work proposes HyperGCN, a novel GCN for SSL on attributed hypergraphs, and shows how it can be used as a learning-based approach for combinatorial optimisation on NP-hard hypergraph problems.
HyperGCN: Hypergraph Convolutional Networks for Semi-Supervised Classification
TLDR
This work proposes HyperGCN, an SSL method which uses a layer-wise propagation rule for convolutional neural networks operating directly on hypergraphs, which is the first principled adaptation of GCNs to hypergraphs.
TilinGNN: Learning to Tile with Self-Supervised Graph Neural Network
TLDR
This work introduces the first neural optimization framework to solve a classical instance of the tiling problem, and builds a graph convolutional neural network, coined TilinGNN, to progressively propagate and aggregate features over graph edges and predict tile placements.
Deep Learning-based Approximate Graph-Coloring Algorithm for Register Allocation
Graph-coloring is an NP-hard problem which has a myriad of applications. Register allocation, which is a crucial phase of a good optimizing compiler, relies on graph coloring. Hence, an efficient
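The greedy heuristic that register allocators traditionally build on can be sketched as follows (a hedged illustration, not the paper's learned method): colour vertices of the interference graph in order of decreasing degree, giving each the smallest colour, i.e. register, unused by its neighbours.

```python
def greedy_colour(adj):
    """adj: dict vertex -> set of interfering vertices. Returns vertex -> colour index."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)  # highest degree first
    colour = {}
    for v in order:
        used = {colour[u] for u in adj[v] if u in colour}  # colours taken by neighbours
        c = 0
        while c in used:                                   # smallest free colour
            c += 1
        colour[v] = c
    return colour

# Interference graph for 4 virtual registers: a triangle r1-r2-r3 plus pendant r4.
interference = {"r1": {"r2", "r3"},
                "r2": {"r1", "r3"},
                "r3": {"r1", "r2", "r4"},
                "r4": {"r3"}}
colours = greedy_colour(interference)
print(max(colours.values()) + 1)  # 3 physical registers suffice for this graph
```

Greedy colouring gives no optimality guarantee in general, which is exactly the gap that learned approximations target.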

References

Showing 1-10 of 39 references
Typed Graph Networks
TLDR
The original Graph Neural Network model is revisited and it is shown that it generalises many of the recent models, which in turn benefit from the insight of thinking about vertex types.
Learning to Solve NP-Complete Problems - A Graph Neural Network for the Decision TSP
TLDR
This paper shows that GNNs can learn to solve the decision variant of the Traveling Salesperson Problem (TSP), a highly relevant NP-Complete problem.
A new model for learning in graph domains
TLDR
A new neural model, called graph neural network (GNN), capable of directly processing graphs, which extends recursive neural networks and can be applied on most of the practically useful kinds of graphs, including directed, undirected, labelled and cyclic graphs.
Relational inductive biases, deep learning, and graph networks
TLDR
It is argued that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective.
Recurrent Relational Networks for Complex Relational Reasoning
TLDR
Recurrent relational networks are introduced which increase the suite of solvable tasks to those that require an order of magnitude more steps of relational reasoning and are applied to the bAbI textual QA dataset, solving 19/20 tasks.
The Graph Neural Network Model
TLDR
A new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains, and implements a function τ(G, n) ∈ ℝ^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space.
Neural Message Passing for Quantum Chemistry
TLDR
Using MPNNs, state of the art results on an important molecular property prediction benchmark are demonstrated and it is believed future work should focus on datasets with larger molecules or more accurate ground truth labels.
Learning a SAT Solver from Single-Bit Supervision
TLDR
Although it is not competitive with state-of-the-art SAT solvers, NeuroSAT can solve problems that are substantially larger and more difficult than it ever saw during training by simply running for more iterations.
Frozen development in graph coloring
TLDR
The 'frozen development' of coloring random graphs is defined, and theoretical and empirical evidence is given to show that the size of the smallest uncolorable subgraphs of threshold graphs becomes large as the number of nodes in graphs increases.
Machine Learning for Combinatorial Optimization: a Methodological Tour d'Horizon
TLDR
A main point of the paper is seeing generic optimization problems as data points and inquiring what is the relevant distribution of problems to use for learning on a given task.