Generalization in portfolio-based algorithm selection

@inproceedings{Balcan2020GeneralizationIP,
  title={Generalization in portfolio-based algorithm selection},
  author={Maria-Florina Balcan and Tuomas Sandholm and Ellen Vitercik},
  booktitle={AAAI},
  year={2020}
}
Portfolio-based algorithm selection has seen tremendous practical success over the past two decades. This algorithm configuration procedure works by first selecting a portfolio of diverse algorithm parameter settings, and then, on a given problem instance, using an algorithm selector to choose a parameter setting from the portfolio with strong predicted performance. Oftentimes, both the portfolio and the algorithm selector are chosen using a training set of typical problem instances from the… 
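
As a rough illustration of the two-stage procedure just described, the sketch below greedily builds a portfolio from a training set and then trains a selector that maps instance features to the portfolio member with the best predicted performance. The helpers cost() and features(), the greedy construction, and the random-forest selector are illustrative assumptions, not the paper's method.

```python
# Rough sketch of portfolio-based algorithm selection (illustrative only).
# `cost(config, instance)` is measured performance (lower is better);
# `features(instance)` returns a numeric feature vector. Both are hypothetical helpers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def greedy_portfolio(candidate_configs, train_instances, cost, k):
    """Stage 1: pick k parameter settings that together minimize per-instance (oracle) cost."""
    portfolio, remaining = [], list(candidate_configs)
    best = {inst: float("inf") for inst in train_instances}
    for _ in range(min(k, len(remaining))):
        def cost_if_added(cfg):
            return sum(min(best[inst], cost(cfg, inst)) for inst in train_instances)
        cfg = min(remaining, key=cost_if_added)
        remaining.remove(cfg)
        portfolio.append(cfg)
        best = {inst: min(best[inst], cost(cfg, inst)) for inst in train_instances}
    return portfolio

def train_selector(portfolio, train_instances, cost, features):
    """Stage 2: learn to map instance features to the index of the best portfolio member."""
    X = np.array([features(inst) for inst in train_instances])
    y = np.array([int(np.argmin([cost(cfg, inst) for cfg in portfolio])) for inst in train_instances])
    return RandomForestClassifier(n_estimators=100).fit(X, y)

def select(portfolio, selector, instance, features):
    """At test time, run only the parameter setting the selector predicts to perform best."""
    idx = int(selector.predict(np.array(features(instance)).reshape(1, -1))[0])
    return portfolio[idx]
```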

Citations

How much data is sufficient to learn high-performing algorithms? Generalization guarantees for data-driven algorithm design

This work provides a broadly applicable theory for deriving generalization guarantees that bound the difference between the algorithm’s average performance over the training set and its expected performance on the unknown distribution, and it uncovers a unifying structure that is used to prove extremely general guarantees.
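
For context, guarantees of this kind typically take the following uniform-convergence form, shown here only as a generic illustration rather than the paper's exact statement. Write $u_\rho(x)$ for the performance of parameter setting $\rho$ on problem instance $x$, and $\mathcal{D}$ for the unknown instance distribution.

```latex
% Generic uniform-convergence form of such a guarantee (illustrative only):
% with probability at least 1 - \delta over N training instances x_1, \dots, x_N drawn i.i.d. from \mathcal{D},
\forall \rho : \quad
\left| \frac{1}{N} \sum_{i=1}^{N} u_\rho(x_i) \;-\; \mathbb{E}_{x \sim \mathcal{D}}\!\left[ u_\rho(x) \right] \right| \;\le\; \epsilon .
```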

Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning

PoSH Auto-sklearn is developed; it enables AutoML systems to work well on large datasets under rigid time limits through a new, simple, meta-feature-free meta-learning technique and a successful bandit strategy for budget allocation.
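
The bandit strategy in question is successive halving (the "SH" in PoSH). Below is a generic sketch of successive halving for budget allocation, with evaluate(candidate, budget) as a hypothetical stand-in for fitting a candidate pipeline under a resource cap; it is not Auto-Sklearn's implementation.

```python
# Generic successive halving for budget allocation (illustrative sketch).
def successive_halving(candidates, evaluate, min_budget=1, eta=2):
    """Keep the best 1/eta fraction of candidates each round while multiplying the budget by eta."""
    budget, survivors = min_budget, list(candidates)
    while len(survivors) > 1:
        # Evaluate every surviving candidate under the current budget (higher score = better).
        survivors.sort(key=lambda c: evaluate(c, budget), reverse=True)
        survivors = survivors[: max(1, len(survivors) // eta)]
        budget *= eta
    return survivors[0]
```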

References

Showing 1-10 of 43 references.

Hydra: Automatically Configuring Algorithms for Portfolio-Based Selection

Hydra is a novel technique for combining automatic algorithm configuration and portfolio-based algorithm selection, thereby realizing the benefits of both; it is primarily intended for problem domains in which an adequate set of candidate solvers does not already exist.
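
Concretely, the combination works roughly as a greedy loop: in each round an algorithm configurator searches for a parameter setting that most improves the portfolio's per-instance oracle performance on the training set, and that setting is added to the portfolio. A minimal sketch under that reading, with configure() and cost() as hypothetical stand-ins rather than Hydra's actual components:

```python
# Rough sketch of a Hydra-style greedy portfolio-construction loop (illustrative only).
# `configure(objective)` stands in for an algorithm configurator (a black-box optimizer over configs);
# `cost(config, instance)` stands in for measured performance (lower is better).
def hydra_style_portfolio(train_instances, configure, cost, rounds):
    portfolio = []
    for _ in range(rounds):
        def portfolio_cost_with(config):
            # Total cost of the portfolio if `config` were added:
            # each instance is handled by its best member (old members or the new config).
            total = 0.0
            for inst in train_instances:
                old_best = min((cost(c, inst) for c in portfolio), default=float("inf"))
                total += min(old_best, cost(config, inst))
            return total
        new_config = configure(portfolio_cost_with)  # configurator minimizes the portfolio's total cost
        portfolio.append(new_config)
    return portfolio
```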

A PAC Approach to Application-Specific Algorithm Selection

Concepts from statistical and online learning theory are adapted to reason about application-specific algorithm selection, and dimension notions from statistical learning theory, historically used to measure the complexity of classes of binary- and real-valued functions, are shown to be relevant in a much broader algorithmic context.
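
A representative sample-complexity bound of this flavor, stated generically (the particular dimension notion and constants vary by setting; this is an illustrative form, not the paper's theorem): if performance is bounded in $[0, H]$ and the relevant function class has pseudo-dimension $d$, then

```latex
% Generic pseudo-dimension sample-complexity bound (illustrative form only):
N \;=\; O\!\left( \left( \frac{H}{\epsilon} \right)^{2} \left( d + \ln \frac{1}{\delta} \right) \right)
% training instances suffice so that, with probability at least 1 - \delta, every algorithm's
% empirical performance is within \epsilon of its expected performance on the distribution.
```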

Automatic construction of optimal static sequential portfolios for AI planning and beyond

Learning to Optimize Computational Resources: Frugal Training with Generalization Guarantees

This work provides an algorithm that learns a finite set of promising parameters from within an infinite set; this set can help compile a configuration portfolio or serve as the input to a configuration algorithm designed for finite parameter spaces.

Dispersion for Data-Driven Algorithm Design, Online Learning, and Private Optimization

This work provides upper and lower bounds on regret for algorithm selection in online settings, and presents general techniques for optimizing the sum or average of piecewise Lipschitz functions when the underlying functions satisfy a sufficient and general condition called dispersion.
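
Informally, dispersion asks that the discontinuities of the utility functions not concentrate anywhere in the parameter space. A paraphrased version of the $(w, k)$-dispersion condition used in this line of work is given below; see the paper for the precise definition.

```latex
% Paraphrased (w, k)-dispersion condition for piecewise L-Lipschitz functions u_1, ..., u_T
% defined on a parameter space \mathcal{C}:
\text{for every ball } B \subseteq \mathcal{C} \text{ of radius } w:\quad
\bigl|\{\, t \in \{1, \dots, T\} \;:\; u_t \text{ is not } L\text{-Lipschitz on } B \,\}\bigr| \;\le\; k .
```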

The IBaCoP Planning System: Instance-Based Configured Portfolios

This work creates a per-instance configurable portfolio that adapts itself to each planning task and defines different portfolio strategies for combining the knowledge generated by the models.

Learning to Branch

It is shown how to use machine learning to determine, from samples of the instance distribution at hand, an optimal weighting of any set of partitioning procedures, and it is proved that the resulting reduction in search tree size can even be exponential.
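
As a toy illustration of learning such a weighting (a drastic simplification: two scoring rules, a grid search over the mixture weight, and a hypothetical tree_size oracle; this is not the paper's algorithm):

```python
# Toy sketch: pick the convex-combination weight mu for two branching scoring rules by
# minimizing average branch-and-bound tree size on sampled training instances.
# `tree_size(mu, instance)` is a hypothetical stand-in for running branch-and-bound with the rule
#   score(var) = mu * rule_1(var) + (1 - mu) * rule_2(var).
import numpy as np

def learn_branching_weight(train_instances, tree_size, grid=np.linspace(0.0, 1.0, 101)):
    avg_sizes = [np.mean([tree_size(mu, inst) for inst in train_instances]) for mu in grid]
    return float(grid[int(np.argmin(avg_sizes))])
```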

Algorithm runtime prediction: Methods & evaluation

How much data is sufficient to learn high-performing algorithms? Generalization guarantees for data-driven algorithm design

This work provides a broadly applicable theory for deriving generalization guarantees that bound the difference between the algorithm’s average performance over the training set and its expected performance on the unknown distribution, and it uncovers a unifying structure that is used to prove extremely general guarantees.

SATzilla: Portfolio-based Algorithm Selection for SAT

SATzilla is described: an automated approach for constructing per-instance algorithm portfolios for SAT that use so-called empirical hardness models to choose among their constituent solvers. SATzilla is improved by integrating local search solvers as candidate solvers, by predicting a performance score instead of runtime, and by using hierarchical hardness models that take different types of SAT instances into account.
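
A minimal sketch of the empirical-hardness-model idea: one regression model per solver predicts log runtime from instance features, and the solver with the best prediction is run. The feature extractor, runtime oracle, and choice of regressor below are placeholders, not SATzilla's actual components.

```python
# Minimal sketch of empirical hardness models for per-instance solver selection (illustrative only).
# `runtime(solver, instance)` and `features(instance)` are hypothetical helpers; runtimes are assumed positive.
import numpy as np
from sklearn.linear_model import Ridge

def train_hardness_models(solvers, train_instances, runtime, features):
    """Fit one regression model per solver, predicting log runtime from instance features."""
    X = np.array([features(inst) for inst in train_instances])
    models = {}
    for s in solvers:
        y = np.log([runtime(s, inst) for inst in train_instances])
        models[s] = Ridge(alpha=1.0).fit(X, y)
    return models

def select_solver(models, instance, features):
    """Choose the solver with the smallest predicted (log) runtime on this instance."""
    x = np.array(features(instance)).reshape(1, -1)
    return min(models, key=lambda s: models[s].predict(x)[0])
```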