Corpus ID: 22787957

A Note on Learning Algorithms for Quadratic Assignment with Graph Neural Networks

@article{Nowak2017ANO,
  title={A Note on Learning Algorithms for Quadratic Assignment with Graph Neural Networks},
  author={Alex W. Nowak and Soledad Villar and Afonso S. Bandeira and Joan Bruna},
  journal={ArXiv},
  year={2017},
  volume={abs/1706.07450}
}
Inverse problems correspond to a certain type of optimization problem formulated over an appropriate input distribution. Recently, there has been a growing interest in understanding the computational hardness of these optimization problems, not only in the worst case, but in an average-complexity sense under this same input distribution. In this revised note, we are interested in studying another aspect of hardness, related to the ability to learn how to solve a problem by simply observing a…
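
As a concrete reference point, the quadratic assignment problem (QAP) at the center of this note asks for the permutation matrix P maximizing trace(A P B Pᵀ) given adjacency matrices A and B. A minimal brute-force numpy sketch, illustrative only (the note studies learned GNN solvers, not enumeration):

```python
import itertools

import numpy as np

def qap_brute_force(A, B):
    """Maximize trace(A @ P @ B @ P.T) over all n! permutation
    matrices P. Exponential in n: only meant to illustrate the
    objective that learned solvers approximate."""
    n = A.shape[0]
    best_score, best_perm = -np.inf, None
    for perm in itertools.permutations(range(n)):
        P = np.eye(n)[list(perm)]
        score = np.trace(A @ P @ B @ P.T)
        if score > best_score:
            best_score, best_perm = score, perm
    return best_perm, best_score

# Graph matching as a special case: B is a relabelled copy of A,
# so an optimal permutation aligns all 6 adjacency entries.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
sigma = [2, 0, 3, 1]
P_true = np.eye(4)[sigma]
B = P_true.T @ A @ P_true
perm, score = qap_brute_force(A, B)  # score == 6.0 here
```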

Annealed Training for Combinatorial Optimization on Graphs

This work proposes a simple but effective annealed training framework for CO problems that transforms them into unbiased energy-based models (EBMs), carefully selecting the penalty terms so as to make the EBMs as smooth as possible.

Erdős Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs

This work uses a neural network to parametrize a probability distribution over sets and shows that when the network is optimized w.r.t. a suitably chosen loss, the learned distribution contains, with controlled probability, a low-cost integral solution that obeys the constraints of the combinatorial problem.
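
The probabilistic-method recipe summarized above can be sketched for minimum vertex cover: minimize an expected cost plus penalized expected constraint violations over a product distribution, then derandomize via the method of conditional expectations. A hypothetical numpy sketch (the node probabilities would come from a trained network; here a constant stand-in is used):

```python
import numpy as np

def erdos_loss(p, edges, beta=2.0):
    """Expected vertex-cover cost under the product distribution p:
    E[|S|] + beta * E[#uncovered edges]. With beta > 1, a low loss
    certifies (probabilistic method) that a small feasible cover exists."""
    cover_size = p.sum()
    uncovered = sum((1 - p[u]) * (1 - p[v]) for u, v in edges)
    return cover_size + beta * uncovered

def derandomize(p, edges, beta=2.0):
    """Method of conditional expectations: fix one coordinate at a
    time to whichever value (0 or 1) gives the smaller expected loss."""
    p = p.astype(float).copy()
    for i in range(len(p)):
        candidates = []
        for v in (0.0, 1.0):
            q = p.copy()
            q[i] = v
            candidates.append(erdos_loss(q, edges, beta))
        p[i] = float(np.argmin(candidates))
    return p

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle
p = np.full(4, 0.5)                       # stand-in for a GNN's output
x = derandomize(p, edges)                 # a feasible cover of size 2
```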

Combinatorial Optimization with Graph Convolutional Networks and Guided Tree Search

Experimental results demonstrate that the presented approach substantially outperforms recent deep learning work, and performs on par with highly optimized state-of-the-art heuristic solvers for some NP-hard problems.

Graph convolutional neural networks for the travelling salesman problem

This paper introduces a novel deep learning approach for approximately solving the Travelling Salesman Problem, one of the most studied NP-hard problems; it focuses on the 2D Euclidean TSP and uses Graph Convolutional Neural Networks and beam search to predict a valid TSP tour given an input graph with up to 100 nodes.

Solve routing problems with a residual edge-graph attention neural network

Boosting Graph Search with Attention Network for Solving the General Orienteering Problem

This paper proposes a novel combination of a variant of beam search and a learned heuristic for solving the general orienteering problem; the heuristic is an attention network that takes the distances among nodes as input and is trained via a reinforcement learning framework.

Reversible Action Design for Combinatorial Optimization with Reinforcement Learning

A general RL framework is proposed that not only exhibits state-of-the-art empirical performance but also generalizes to a wide class of COPs, and performance improvements are achieved against a set of learning-based and heuristic baselines.

Learning Permutations with Sinkhorn Policy Gradient

The empirical results show that agents trained with SPG can perform competitively on sorting, the Euclidean TSP, and matching tasks, and observe that SPG is significantly more data efficient at the matching task than the baseline methods, which indicates that SPG is conducive to learning representations that are useful for reasoning about permutations.
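
The Sinkhorn operator underlying SPG maps a real score matrix to a (near-)doubly-stochastic relaxation of a permutation, which is what lets policy gradients flow through an otherwise discrete choice. A minimal numpy sketch (illustrative; not the authors' implementation):

```python
import numpy as np

def sinkhorn(scores, tau=1.0, n_iters=50):
    """Alternate row and column normalization of exp(scores / tau),
    mapping real scores to a (near-)doubly-stochastic matrix -- a
    differentiable relaxation of a hard permutation matrix."""
    M = np.exp(scores / tau)
    for _ in range(n_iters):
        M = M / M.sum(axis=1, keepdims=True)  # rows sum to 1
        M = M / M.sum(axis=0, keepdims=True)  # columns sum to 1
    return M

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(5, 5)))
# A hard permutation can then be recovered with a matching step,
# e.g. scipy.optimize.linear_sum_assignment on -P.
```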

Projected power iteration for network alignment

This work proposes the algorithm Projected Power Alignment, which is a projected power iteration version of EigenAlign, a fast spectral method with convergence guarantees for Erdős–Rényi graphs, and describes the theory that may be used to provide performance guarantees for Projected Power Alignment.
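
The projected power iteration pattern can be sketched as alternating a power step A X B with a projection back onto permutation matrices. A simplified numpy sketch of that pattern (not the paper's exact algorithm), with a greedy matching standing in for the exact linear-assignment projection:

```python
import numpy as np

def greedy_perm(scores):
    """Greedy stand-in for projecting a score matrix onto the set of
    permutation matrices (an exact projection would solve a linear
    assignment problem)."""
    n = scores.shape[0]
    S = scores.astype(float).copy()
    P = np.zeros((n, n))
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(S), S.shape)
        P[i, j] = 1.0
        S[i, :] = -np.inf
        S[:, j] = -np.inf
    return P

def projected_power_alignment(A, B, n_iters=20, seed=0):
    """Alternate a power step A @ X @ B (proportional to the gradient
    of trace(A X B X^T) for symmetric A, B) with a projection back
    onto permutation matrices."""
    X = np.random.default_rng(seed).random(A.shape)
    for _ in range(n_iters):
        X = greedy_perm(A @ X @ B)
    return X

# Two isomorphic 4-node graphs
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
P_true = np.eye(4)[[2, 0, 3, 1]]
B = P_true.T @ A @ P_true
X = projected_power_alignment(A, B)  # X is a permutation matrix
```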

Learning the travelling salesperson problem requires rethinking generalization

This work presents an end-to-end neural combinatorial optimization pipeline that identifies the inductive biases, model architectures and learning algorithms that promote generalization to instances larger than those seen in training, and provides the first principled investigation into such zero-shot generalization.
...

References

SHOWING 1-10 OF 37 REFERENCES

Learning Combinatorial Optimization Algorithms over Graphs

This paper proposes a unique combination of reinforcement learning and graph embedding that behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution.

Graph Matching: Relax at Your Own Risk

It is proved that an indefinite relaxation (when solved exactly) almost always discovers the optimal permutation, while a common convex relaxation almost always fails to discover the optimal permutation.

Projected power iteration for network alignment

This work proposes the algorithm Projected Power Alignment, which is a projected power iteration version of EigenAlign, a fast spectral method with convergence guarantees for Erdős–Rényi graphs, and describes the theory that may be used to provide performance guarantees for Projected Power Alignment.

Supervised Community Detection with Line Graph Neural Networks

This work presents a novel family of Graph Neural Networks (GNNs) for solving community detection problems in a supervised learning setting and shows that, in a data-driven manner and without access to the underlying generative models, they can match or even surpass the performance of the belief propagation algorithm on binary and multi-class stochastic block models.

Semidefinite programming approach for the quadratic assignment problem with a sparse graph

  • José F. S. Bravo Ferreira, Y. Khoo, A. Singer
  • Computer Science
    Comput. Optim. Appl.
  • 2018
A new SDP relaxation involving a number of positive semidefinite matrices of dimension O(n) produces strong bounds on quadratic assignment problems where one of the graphs is sparse, with reduced computational complexity and running times, and can be used in the context of nuclear magnetic resonance spectroscopy to tackle the assignment problem.

Graph matching: relax or not?

It is proved that for friendly graphs, the convex relaxation is guaranteed to find the exact isomorphism or certify its nonexistence, and that in many cases the graph matching problem can be further harmlessly relaxed to a convex quadratic program with only n separable linear equality constraints, which is substantially more efficient than the standard relaxation involving 2n equality and n^2 inequality constraints.
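
The convex relaxation discussed above replaces permutations with doubly stochastic matrices, e.g. minimizing ||A X − X B||²_F over the Birkhoff polytope. A hypothetical numpy sketch using projected gradient descent, where a heuristic row/column normalization stands in for the exact Euclidean projection:

```python
import numpy as np

def approx_birkhoff_proj(X, n_iters=50):
    """Heuristic projection onto doubly stochastic matrices: clip
    negatives, then alternate row/column normalization (NOT the
    exact Euclidean projection onto the Birkhoff polytope)."""
    X = np.clip(X, 1e-12, None)
    for _ in range(n_iters):
        X = X / X.sum(axis=1, keepdims=True)
        X = X / X.sum(axis=0, keepdims=True)
    return X

def relaxed_graph_match(A, B, steps=500, lr=0.01):
    """Projected gradient descent on the convex relaxation
    min ||A X - X B||_F^2 over doubly stochastic X."""
    n = A.shape[0]
    X = np.full((n, n), 1.0 / n)
    for _ in range(steps):
        R = A @ X - X @ B
        X = approx_birkhoff_proj(X - lr * 2 * (A.T @ R - R @ B.T))
    return X

# Two isomorphic graphs: the relaxation's optimum value is 0.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
P_true = np.eye(4)[[2, 0, 3, 1]]
B = P_true.T @ A @ P_true
X = relaxed_graph_match(A, B)
# The uniform start X = J/4 has residual 0.5; descent should reduce it.
residual = np.linalg.norm(A @ X - X @ B) ** 2
```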

Spectral Alignment of Networks

This paper proposes a network alignment framework that uses an orthogonal relaxation of the underlying QAP in a maximum weight bipartite matching optimization, and generalizes the objective function of the network alignment problem to consider both matched and mismatched interactions in a standard QAP formulation.

Neural Networks with Finite Intrinsic Dimension have no Spurious Valleys

Focusing on a class of two-layer neural networks defined by smooth activation functions, it is proved that as soon as the hidden layer size matches the intrinsic dimension of the reproducing space, defined as the linear functional space generated by the activations, no spurious valleys exist, thus guaranteeing the existence of descent directions.

Neural Combinatorial Optimization with Reinforcement Learning

This work presents a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning; Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes.

Community Detection with Graph Neural Networks

This work embeds the resulting class of algorithms within a generic family of graph neural networks and shows that they can reach detection thresholds in a purely data-driven manner, without access to the underlying generative models and with no parameter assumptions.