Corpus ID: 248666093

Optimizing over an ensemble of neural networks

Keliang Wang, Leonardo Lozano, Carlos Henrique Cardonha, David Bergman
We study optimization problems where the objective function is modeled through feedforward neural networks with rectified linear unit (ReLU) activation. Recent literature has explored the use of a single neural network to model either uncertain or complex elements within an objective function. However, it is well known that ensembles of neural networks produce more stable predictions and generalize better than single networks, which motivates the investigation of…
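The ensemble objective the abstract refers to can be illustrated with a minimal numpy sketch (network sizes and weights are arbitrary, invented here purely for illustration): the ensemble's prediction is the plain average of each member network's ReLU forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(n_in=2, n_hidden=4):
    """Random one-hidden-layer ReLU network (illustrative weights only)."""
    return {
        "W1": rng.normal(size=(n_hidden, n_in)),
        "b1": rng.normal(size=n_hidden),
        "w2": rng.normal(size=n_hidden),
    }

def forward(net, x):
    """ReLU hidden layer followed by a linear output."""
    hidden = np.maximum(net["W1"] @ x + net["b1"], 0.0)
    return float(net["w2"] @ hidden)

def ensemble_predict(nets, x):
    """Ensemble prediction: the average of the member networks' outputs."""
    return sum(forward(net, x) for net in nets) / len(nets)

nets = [make_net() for _ in range(5)]
x = np.array([0.3, -0.7])
print(ensemble_predict(nets, x))
```

Optimizing this averaged objective over x is what requires embedding every member network in one model, rather than a single network as in the earlier literature.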

Ensembling neural networks: Many could be better than all

Output Range Analysis for Deep Feedforward Neural Networks

An efficient range estimation algorithm is presented that alternates between an expensive global combinatorial search, formulated as a mixed-integer linear program, and a relatively inexpensive local optimization that repeatedly seeks a local optimum of the function represented by the NN.
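The inexpensive local half of such an iteration can be sketched as projected subgradient ascent on a tiny ReLU network's output (the global MILP step is omitted, and the network is hand-picked here so that f(x) = |x| on [-1, 2], purely for illustration):

```python
import numpy as np

# Hand-picked one-hidden-layer ReLU net: f(x) = relu(x) + relu(-x) = |x|.
W1 = np.array([[1.0], [-1.0]])
b1 = np.zeros(2)
w2 = np.array([1.0, 1.0])
lo, hi = -1.0, 2.0  # box domain for x

def f(x):
    """Network output at scalar input x."""
    return float(w2 @ np.maximum(W1 @ np.array([x]) + b1, 0.0))

def subgrad(x):
    """A subgradient of f at x (units active where pre-activation > 0)."""
    pre = W1 @ np.array([x]) + b1
    return float(w2 @ ((pre > 0) * W1[:, 0]))

# Projected subgradient ascent: the cheap local phase of the iteration.
x = 0.5
for _ in range(20):
    x = min(hi, max(lo, x + 0.5 * subgrad(x)))

print(x, f(x))  # climbs to the boundary local optimum x = 2, f(x) = 2
```

In the full algorithm, a local optimum found this way is then handed to the expensive MILP phase, which either certifies it as global or escapes it.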

A Lagrangian propagator for artificial neural networks in constraint programming

A new network-level propagator is proposed, based on a non-linear Lagrangian relaxation solved with a subgradient algorithm; it is capable of dramatically reducing the search tree size on a thermal-aware dispatching problem on multicore CPUs.

Deterministic Global Optimization with Artificial Neural Networks Embedded

The proposed method applies McCormick relaxations in a reduced space, employing the convex and concave envelopes of the nonlinear activation functions, for deterministic global optimization of problems with artificial neural networks embedded.
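For ReLU activations these envelopes are especially simple: ReLU is convex, so over an interval [l, u] it is its own convex envelope, while its concave envelope is the chord through the interval's endpoints. A minimal sketch (function names are hypothetical):

```python
def relu(x):
    return max(0.0, x)

def relu_concave_envelope(x, l, u):
    """Concave over-estimator of ReLU on [l, u]: the chord through the
    endpoints (l, relu(l)) and (u, relu(u)). Assumes l < u."""
    slope = (relu(u) - relu(l)) / (u - l)
    return relu(l) + slope * (x - l)

# relu itself serves as the convex under-estimator; together the pair
# brackets the activation, which is what relaxation-based global
# optimization exploits.
l, u = -1.0, 2.0
for x in (-1.0, 0.0, 1.0, 2.0):
    assert relu_concave_envelope(x, l, u) >= relu(x) - 1e-12
```

Smooth activations like tanh need genuinely nonlinear envelopes, which is where the McCormick machinery above does its work.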

Mixed-Integer Optimization with Constraint Learning


ReLU Networks as Surrogate Models in Mixed-Integer Linear Programs

ENTMOOT: A Framework for Optimization over Ensemble Tree Models

Partition-based formulations for mixed-integer optimization of trained ReLU neural networks

This paper introduces a class of mixed-integer formulations for trained ReLU neural networks by partitioning node inputs into a number of groups and forming the convex hull over the partitions via disjunctive programming.

Research and development of neural network ensembles: a survey

Different approaches to the development of NNE and the latest studies are summarized, followed by detailed descriptions of methods for generating the individual neural networks, methods for combining their conclusions, and fusion based on granular computing and NNE.

Strong mixed-integer programming formulations for trained neural networks

A generic framework is presented for constructing sharp or ideal formulations for the maximum of d affine functions over arbitrary polyhedral input domains. This is corroborated computationally, showing that these formulations offer substantial improvements in solve time on verification tasks for image classification networks.
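As a baseline against which such sharp formulations are measured, the standard big-M encoding of a single ReLU neuron can be sketched with SciPy's MILP interface (the tiny instance, weights, and bounds are invented here for illustration): y = max(0, w*x + b) is modeled with a binary activation indicator z and valid pre-activation bounds L, U.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Maximize y = relu(w*x + b) for x in [-1, 2] (toy instance, invented here).
w, b = 1.0, -0.5
L, U = -1.5, 1.5          # valid bounds on the pre-activation w*x + b

# Decision variables v = [x, y, z]; milp minimizes c @ v, so c = [0, -1, 0].
c = np.array([0.0, -1.0, 0.0])

# Big-M constraints linking y, x, and the binary indicator z:
#   w*x - y        <= -b       (y >= w*x + b)
#   -w*x + y - L*z <= b - L    (y <= w*x + b - L*(1 - z))
#   y - U*z        <= 0        (y <= U*z, forcing y = 0 when z = 0)
A = np.array([
    [w, -1.0, 0.0],
    [-w, 1.0, -L],
    [0.0, 1.0, -U],
])
cons = LinearConstraint(A, -np.inf, np.array([-b, b - L, 0.0]))

bounds = Bounds([-1.0, 0.0, 0.0], [2.0, np.inf, 1.0])  # x; y >= 0; z in [0,1]
res = milp(c, constraints=cons, integrality=[0, 0, 1], bounds=bounds)
print(res.x, -res.fun)  # optimum at x = 2 with y = relu(2 - 0.5) = 1.5
```

The three inequalities are the textbook big-M linearization; sharp and ideal formulations of the kind studied in the paper tighten exactly this relaxation.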