# A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs

@article{Wang2021ABF, title={A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs}, author={Runzhong Wang and Zhigang Hua and Gan Liu and Jiayi Zhang and Junchi Yan and Feng Qi and Shuang Yang and Jun Zhou and Xiaokang Yang}, journal={ArXiv}, year={2021}, volume={abs/2106.04927} }

Combinatorial Optimization (CO) has long been a challenging research topic, characterized by its NP-hard nature. Traditionally, such problems are approximately solved with heuristic algorithms, which are usually fast but may sacrifice solution quality. Machine learning for combinatorial optimization (MLCO) has recently become a trending research topic, but most existing MLCO methods treat CO as a single-level optimization by directly learning end-to-end solutions, which are hard to…

## 8 Citations

### A General Framework for Evaluating Robustness of Combinatorial Optimization Solvers on Graphs

- Computer Science
- 2021

The first practically feasible robustness metric for general combinatorial optimization solvers is developed; it provides a no-worse-than-optimal cost guarantee and thus does not require optimal solutions, and the non-differentiability challenge is tackled by resorting to black-box adversarial attack methods.

### One Model, Any CSP: Graph Neural Networks as Fast Global Search Heuristics for Constraint Satisfaction

- Computer Science, ArXiv
- 2022

This work proposes a universal Graph Neural Network architecture which can be trained as an end-to-end search heuristic for any Constraint Satisfaction Problem (CSP) and outperforms prior approaches for neural combinatorial optimization by a substantial margin.

### Subgraph Matching via Query-Conditioned Subgraph Matching Neural Networks and Bi-Level Tree Search

- Computer Science, ArXiv
- 2022

N-BLS is proposed, with two innovations to tackle the challenges of subgraph matching: a novel encoder-decoder neural network architecture that dynamically computes the matching information between the query and target graphs at each search state, and a Monte Carlo Tree Search-enhanced bi-level search framework for training the policy and value networks.

### Unsupervised Learning for Combinatorial Optimization with Principled Objective Design

- Computer Science
- 2022

This work proposes an unsupervised learning framework for CO problems that follows a standard relaxation-plus-rounding approach and adopts neural networks to parameterize the relaxed solutions so that simple back-propagation can train the model end-to-end.
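The relaxation-plus-rounding recipe described in this entry can be illustrated on a toy problem. The sketch below is hypothetical (not the paper's code, and it uses plain gradient ascent in place of a neural network): binary max-cut variables are relaxed to probabilities, the relaxed objective is optimized by back-propagation-style gradient steps, and the result is rounded to an integral solution.

```python
# Toy sketch of relaxation-plus-rounding for max-cut (hypothetical example):
# relax x_i in {0,1} to p_i in [0,1], ascend the relaxed cut objective,
# then round p back to a binary assignment.

EDGES = [(0, 1), (1, 2), (0, 2)]  # a triangle; the optimal cut value is 2

def relaxed_cut(p):
    # Expected cut under independent Bernoulli(p_i) node assignments
    return sum(p[i] + p[j] - 2 * p[i] * p[j] for i, j in EDGES)

def grad(p):
    # Analytic gradient of the relaxed objective
    g = [0.0] * len(p)
    for i, j in EDGES:
        g[i] += 1 - 2 * p[j]
        g[j] += 1 - 2 * p[i]
    return g

p = [0.6, 0.4, 0.5]                # relaxed starting point
for _ in range(100):               # gradient ascent on the relaxation
    g = grad(p)
    p = [min(1.0, max(0.0, pi + 0.1 * gi)) for pi, gi in zip(p, g)]

x = [1 if pi > 0.5 else 0 for pi in p]          # rounding step
cut = sum(1 for i, j in EDGES if x[i] != x[j])  # integral cut value
print(cut)  # 2, the optimal triangle cut
```

In the paper's actual framework, a neural network parameterizes the relaxed solution and the gradient comes from back-propagation through a principled relaxed objective; the toy above only shows why the relax-optimize-round pipeline is trainable end-to-end.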

### Neural Topological Ordering for Computation Graphs

- Computer Science, ArXiv
- 2022

This paper considers the problem of finding an optimal topological order on a directed acyclic graph, with a focus on the memory minimization problem which arises in compilers, and proposes an end-to-end machine learning based approach for topological ordering using an encoder-decoder framework.

### Unsupervised Learning for Combinatorial Optimization with Principled Objective Relaxation

- Computer Science, ArXiv
- 2022

This work proposes an unsupervised learning framework for CO problems that follows a standard relaxation-plus-rounding approach and adopts neural networks to parameterize the relaxed solutions so that simple back-propagation can train the model end-to-end.

### LeNSE: Learning To Navigate Subgraph Embeddings for Large-Scale Combinatorial Optimisation

- Business, ICML
- 2022

Combinatorial Optimisation problems arise in several application domains and are often formulated in terms of graphs. Many of these problems are NP-hard, but exact solutions are not always needed.…

### Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness

- Computer Science, ICLR
- 2022

It is shown empirically that the assessed neural solvers do not generalize well w.r.t. small perturbations of the problem instance, and that under such perturbations a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.

## References

Showing 1–10 of 77 references

### Generalize a Small Pre-trained Model to Arbitrarily Large TSP Instances

- Computer Science, AAAI
- 2021

This paper trains a small-scale model that can be repeatedly used to build heat maps for TSP instances of arbitrarily large size, based on a series of techniques such as graph sampling, graph converting, and heat-map merging.

### An Efficient Graph Convolutional Network Technique for the Travelling Salesman Problem

- Computer Science, ArXiv
- 2019

This paper introduces a new learning-based approach for approximately solving the Travelling Salesman Problem on 2D Euclidean graphs. We use deep Graph Convolutional Networks to build efficient TSP…

### Attention, Learn to Solve Routing Problems!

- Computer Science, ICLR
- 2019

A model based on attention layers with benefits over the Pointer Network is proposed and it is shown how to train this model using REINFORCE with a simple baseline based on a deterministic greedy rollout, which is more efficient than using a value function.
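The training idea in this entry, REINFORCE with a deterministic greedy-rollout baseline, can be sketched on a minimal example. The code below is a hedged toy, not the paper's model: a two-"tour" softmax policy is trained to prefer the cheaper tour, with the cost of the greedy (argmax-probability) choice serving as the baseline.

```python
# Toy REINFORCE with a greedy-rollout baseline (hypothetical example):
# a softmax policy over two candidate "tours" with fixed costs.
import math
import random

random.seed(0)
costs = [1.0, 2.0]        # tour costs; action 0 is the better tour
logits = [0.0, 0.0]

def softmax(ls):
    m = max(ls)
    e = [math.exp(l - m) for l in ls]
    s = sum(e)
    return [x / s for x in e]

lr = 0.1
for _ in range(1000):
    p = softmax(logits)
    a = 0 if random.random() < p[0] else 1   # sample a tour from the policy
    b = costs[p.index(max(p))]               # baseline: cost of greedy rollout
    adv = costs[a] - b                       # advantage (cost minus baseline)
    for k in range(2):                       # REINFORCE gradient step
        logits[k] -= lr * adv * ((1.0 if k == a else 0.0) - p[k])

p = softmax(logits)
print(round(p[0], 3))   # the policy concentrates on the cheaper tour
```

Because the baseline requires no learned value function (it is just another rollout of the same policy, decoded greedily), the variance reduction comes essentially for free, which is the efficiency argument made in the paper.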

### Graph edit distance as a quadratic program

- Computer Science, 2016 23rd International Conference on Pattern Recognition (ICPR)
- 2016

This paper proposes a binary quadratic programming problem whose global minimum corresponds to the exact GED and adapts the integer projected fixed point algorithm, initially designed for the QAP, to efficiently compute an approximate GED by finding an interesting local minimum.

### Semi-Supervised Classification with Graph Convolutional Networks

- Computer Science, ICLR
- 2017

A scalable approach for semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operate directly on graphs, which outperforms related methods by a significant margin.
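The GCN variant in this reference propagates features with the layer rule H' = ReLU(D̂^(-1/2) Â D̂^(-1/2) H W), where Â = A + I adds self-loops and D̂ is its degree matrix. A minimal pure-Python sketch of one such layer on a 3-node path graph (toy weights and features, not the paper's code):

```python
# One GCN layer, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W), on a 3-node path graph.
import math

A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]                 # path graph 0-1-2
A_hat = [[A[i][j] + (i == j) for j in range(3)] for i in range(3)]  # add self-loops
deg = [sum(row) for row in A_hat]                     # degrees incl. self-loops

# Symmetric normalization: D^-1/2 A_hat D^-1/2
norm = [[A_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(3)]
        for i in range(3)]

H = [[1.0], [2.0], [3.0]]   # 1-dimensional node features (toy values)
W = [[1.0]]                 # 1x1 weight matrix (toy value)

# H_out[i] = ReLU(sum_k norm[i][k] * H[k] * W)
H_out = [[max(0.0, sum(norm[i][k] * H[k][0] * W[0][0] for k in range(3)))]
         for i in range(3)]
print(H_out)  # each node's feature is a normalized average over its neighborhood
```

Each output feature mixes a node's own feature with its neighbors', which is what makes a stack of such layers a cheap, scalable approximation to spectral graph convolutions.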

### Pointer Networks

- Computer Science, NIPS
- 2015

A new neural architecture, Ptr-Net, learns the conditional probability of an output sequence whose elements are discrete tokens corresponding to positions in an input sequence, using a recently proposed mechanism of neural attention; it not only improves over sequence-to-sequence with input attention, but also generalizes to variable-size output dictionaries.

### An effective implementation of the Lin-Kernighan traveling salesman heuristic

- Computer Science, Eur. J. Oper. Res.
- 2000

### TAP-Net: Transport-and-Pack using Reinforcement Learning

- Art
- 2020

Fig. 1. Given an initial spatial configuration of boxes (a), our neural network, TAP-Net, iteratively transports and packs (b) the boxes compactly into a target container (c). TAP-Net is trained to…

### Learning to Dispatch for Job Shop Scheduling via Deep Reinforcement Learning

- Computer Science, NeurIPS
- 2020

This paper proposes to automatically learn priority dispatching rules (PDRs) via an end-to-end deep reinforcement learning agent, exploiting the disjunctive graph representation of JSSP, and proposes a Graph Neural Network-based scheme to embed the states encountered during solving.

### Erdős Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs

- Computer Science, NeurIPS
- 2020

This work uses a neural network to parametrize a probability distribution over sets and shows that when the network is optimized w.r.t. a suitably chosen loss, the learned distribution contains, with controlled probability, a low-cost integral solution that obeys the constraints of the combinatorial problem.