The Machine Learning for Combinatorial Optimization Competition (ML4CO): Results and Insights

@article{Gasse2022TheML,
  title={The Machine Learning for Combinatorial Optimization Competition (ML4CO): Results and Insights},
  author={Maxime Gasse and Quentin Cappart and Jonas Charfreitag and Laurent Charlin and Didier Chételat and Antonia Chmiela and Justin Dumouchelle and Ambros M. Gleixner and Aleksandr M. Kazachkov and Elias Boutros Khalil and Pawel Lichocki and Andrea Lodi and Miles Lubin and Chris J. Maddison and Christopher Morris and Dimitri J. Papageorgiou and Augustin Parjadis and Sebastian Pokutta and Antoine Prouvost and Lara Scavuzzo and Giulia Zarpellon and Linxin Yang and Sha Lai and Akang Wang and Xiaodong Luo and Xiang Zhou and Haohan Huang and Sheng Cheng Shao and Yuanming Zhu and Dong Zhang and Tao Manh Quan and Zixuan Cao and Yang Xu and Zhewei Huang and Shuchang Zhou and Cheng Binbin and He Minggui and Hao Hao and Zhang Zhiyu and An Zhiwu and Mao Kun},
  journal={ArXiv},
  year={2022},
  volume={abs/2203.02433}
}
Combinatorial optimization is a well-established area in operations research and computer science. Until recently, its methods have focused on solving problem instances in isolation, ignoring that they often stem from related data distributions in practice. However, recent years have seen a surge of interest in using machine learning as a new approach for solving combinatorial problems, either directly as solvers or by enhancing exact solvers. Based on this context, the ML4CO aims at improving… 
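The abstract distinguishes two ways machine learning enters combinatorial optimization: as a solver in its own right, or as a learned component plugged into an exact solver. As a loose illustration of the second mode only, the sketch below uses the open-source Ecole library (the platform the ML4CO environments build on) to let a placeholder policy make branching decisions inside the SCIP branch-and-bound loop. The set-cover instance generator, the bipartite-graph observation, and the trivial first-candidate policy are illustrative assumptions, not the competition's official configuration; a real entry would substitute a trained model for the policy function.

import ecole

# Synthetic set-cover instances from Ecole's built-in generator (illustrative choice).
instances = ecole.instance.SetCoverGenerator(n_rows=500, n_cols=1000)

# Branching environment: SCIP runs branch-and-bound and, at each node,
# asks the agent which fractional variable to branch on.
env = ecole.environment.Branching(
    observation_function=ecole.observation.NodeBipartite()
)

def policy(observation, action_set):
    # Placeholder rule: pick the first branching candidate.
    # A learned policy would instead score candidates from the
    # bipartite-graph observation of the node LP.
    return action_set[0]

for _ in range(3):
    instance = next(instances)
    observation, action_set, reward_offset, done, info = env.reset(instance)
    while not done:
        action = policy(observation, action_set)
        observation, action_set, reward, done, info = env.step(action)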
