Corpus ID: 243847852

Mixed-Integer Optimization with Constraint Learning

@article{Maragno2021MixedIntegerOW,
  title={Mixed-Integer Optimization with Constraint Learning},
  author={Donato Maragno and Holly M. Wiberg and Dimitris Bertsimas and Ş. İlker Birbil and Dick den Hertog and Adejuyigbe O. Fajemisin},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.04469}
}
We establish a broad methodological foundation for mixed-integer optimization with learned constraints. We propose an end-to-end pipeline for data-driven decision making in which constraints and objectives are directly learned from data using machine learning, and the trained models are embedded in an optimization formulation. We exploit the mixed-integer optimization-representability of many machine learning methods, including linear models, decision trees, ensembles, and multi-layer… 
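As a toy illustration of the pipeline this abstract describes (the data, threshold, and variable names below are hypothetical, not the authors' code), one can learn a linear constraint from data and then embed it in a small integer program, here solved by plain enumeration rather than a MIP solver:

```python
# Sketch of "optimization with constraint learning" on a one-feature toy problem:
# 1) learn a linear model y = w*x + b from data by least squares,
# 2) embed it as the constraint w*x + b <= cap in a tiny integer program,
# 3) optimize by enumeration (a real pipeline would hand the learned
#    constraint to a MIP solver instead).

data = [(0, 1.0), (1, 3.0), (2, 5.0), (3, 7.0)]  # exactly y = 2*x + 1

n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)
w = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # least-squares slope
b = (sy - w * sx) / n                          # least-squares intercept

cap = 8.0  # the learned response must stay below this threshold
# Maximize x over integers 0..10 subject to the learned constraint w*x + b <= cap.
feasible = [x for x in range(11) if w * x + b <= cap]
best = max(feasible)
print(w, b, best)  # 2.0 1.0 3
```

The learned model fits the data exactly, so the embedded constraint 2x + 1 <= 8 cuts the feasible set at x <= 3.5 and the best integer decision is x = 3.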

Tightness of prescriptive tree-based mixed-integer optimization formulations

It is proved that fractional extreme points are removed when there are multiple splits on the same feature, and at an extreme, this results in ideal formulations for tree ensembles modeling a one-dimensional feature vector.

Optimization with Constraint Learning: A Framework and Survey

The Good, the Bad, and the Outliers: A Testing Framework for Decision Optimization Model Learning

This work introduces an open-source framework designed for large-scale testing and solution-quality analysis of DO model-learning algorithms, a novel way to generate DO ground truth, and a first-of-its-kind, generic, cloud-distributed Ray and Rayvens architecture.

Optimizing over an ensemble of neural networks

Experimental evaluations of the solution methods suggest that using ensembles of neural networks yields more stable and higher-quality solutions than single neural networks, and that the optimization algorithm outperforms a state-of-the-art approach in computational time and optimality gaps.

When Deep Learning Meets Polyhedral Theory: A Survey

This survey covers the main topics emerging from this fast-paced area of work, which bring a fresh perspective both to understanding neural networks in more detail and to applying linear optimization techniques to train, verify, and reduce the size of such networks.

Exact solving scheduling problems accelerated by graph neural networks

This paper applies a graph convolutional neural network from the literature on speeding up a general branch-and-bound solver by learning its branching decisions to the augmented solver, and discusses how far NP-hard problem solving can be accelerated in light of known limits and impossibility results in AI.

Counterfactual Explanations Using Optimization With Constraint Learning

This work discusses how to leverage an optimization-with-constraint-learning framework for generating counterfactual explanations, shows how components of this framework readily map to established counterfactual criteria, and proposes two novel modeling approaches to address data-manifold closeness and diversity.

Distributed Solution of Mixed-Integer Programs by ADMM with Closed Duality Gap

This paper introduces a new method to efficiently solve distributed mixed-integer programs (MIP) as arising in problems of distributed machine learning or distributed control. The method is based on

Encoding Carbon Emission Flow in Energy Management: A Compact Constraint Learning Approach

Decarbonizing the energy supply is essential and urgent to mitigate the increasingly visible climate change. Its basis is identifying emission responsibility during power allocation by the carbon

Optimization of Tree Ensembles

This paper theoretically examines the strength of the formulation, provides a hierarchy of approximate formulations with bounds on approximation quality and exploits the structure of the problem to develop two large-scale solution methods, one based on Benders decomposition and one based upon iteratively generating tree split constraints.

Embedding Decision Trees and Random Forests in Constraint Programming

This paper proposes three approaches based on converting a decision tree (DT) into a Multi-valued Decision Diagram (MDD), which is then fed to an MDD constraint, and shows how to embed in CP a Random Forest, a powerful type of ensemble classifier based on DTs.

Strong mixed-integer programming formulations for trained neural networks

A generic framework is presented that provides a way to construct sharp or ideal formulations for the maximum of d affine functions over arbitrary polyhedral input domains; computational results corroborate this, showing that these formulations offer substantial improvements in solve time on verification tasks for image-classification networks.
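The formulations this entry refers to build on the standard big-M mixed-integer encoding of a single ReLU activation. A minimal sketch, assuming known pre-activation bounds L <= x <= U (the function and variable names are illustrative, not from the paper):

```python
# Big-M MIP encoding of y = ReLU(x) = max(x, 0), given bounds L <= x <= U:
#   y >= x,  y >= 0,  y <= x - L*(1 - z),  y <= U*z,  z in {0, 1}.
# For any fixed x, exactly one choice of the binary z leaves a feasible
# (and in fact unique) value of y, which equals max(x, 0).

def relu_feasible_ys(x, L, U):
    """For each binary choice z, return the feasible interval for y, or None."""
    out = {}
    for z in (0, 1):
        lo = max(x, 0.0)                    # lower bounds: y >= x and y >= 0
        hi = min(x - L * (1 - z), U * z)    # the two big-M upper bounds
        out[z] = (lo, hi) if lo <= hi + 1e-9 else None
    return out

L, U = -4.0, 4.0
for x in (-2.0, 3.0):
    # The only feasible y collapses to max(x, 0): the encoding is exact.
    print(x, relu_feasible_ys(x, L, U))
```

Tightening the bounds L and U shrinks the LP relaxation; the "sharp or ideal" formulations in the paper go further by removing fractional extreme points entirely.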

One-class synthesis of constraints for Mixed-Integer Linear Programming with C4.5 decision trees

ReLU Networks as Surrogate Models in Mixed-Integer Linear Programs

Auction optimization using regression trees and linear models as integer programs

Optimal classification trees

Optimal classification trees are presented: a novel formulation of the decision-tree problem using modern MIO techniques that yields the optimal decision tree for axis-aligned splits. Synthetic tests demonstrate that these methods recover the true decision tree more closely than heuristics, refuting the notion that optimal methods overfit the training data.

Optimizing Objective Functions Determined from Random Forests

This work models the problem of optimizing a tree-based ensemble objective, where the feasible decisions lie in a polyhedral set, as a Mixed-Integer Linear Program (MILP), and shows it can be solved to optimality efficiently using Pareto-optimal Benders cuts.
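The tree-ensemble formulations in the entries above share a common core: one binary indicator per leaf, with exactly one leaf active per tree, and an objective that sums the active leaf values. A hypothetical single-feature sketch (toy trees and names of my own; real formulations add split constraints and call a MIP solver rather than enumerating):

```python
# Leaf-indicator view of optimizing over a toy ensemble of two regression
# trees on one integer feature x in {0,...,4}. Each leaf is a pair of
# (feasible x-range, leaf value); the ranges of each tree partition {0,...,4},
# which is the MILP's "exactly one leaf active per tree" constraint.
trees = [
    [((0, 1), 2.0), ((2, 4), 5.0)],  # tree 1: x <= 1 -> 2.0, else 5.0
    [((0, 3), 1.0), ((4, 4), 3.0)],  # tree 2: x <= 3 -> 1.0, else 3.0
]

def ensemble_value(x):
    # Sum the value of the single active leaf in each tree.
    return sum(v for tree in trees for (lo, hi), v in tree if lo <= x <= hi)

# Enumeration stands in for the solver's branch-and-bound search.
best_x = max(range(5), key=ensemble_value)
print(best_x, ensemble_value(best_x))  # x = 4 gives 5.0 + 3.0 = 8.0
```

The point of the exact MILP formulations is that the solver reaches this optimum without enumerating all inputs, using the leaf indicators and split constraints to prune the search.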
...