A Principled Method for the Creation of Synthetic Multi-fidelity Data Sets

@article{Fare2022APM,
  title={A Principled Method for the Creation of Synthetic Multi-fidelity Data Sets},
  author={Clyde Fare and Peter Fenner and Edward O. Pyzer-Knapp},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.05667}
}
Multi-fidelity and multi-output optimisation algorithms are of active interest in many areas of computational design, as they allow cheaper computational proxies to be used intelligently to aid experimental searches for high-performing species. Characterisation of these algorithms involves benchmarks that typically use either analytic functions or existing multi-fidelity datasets. However, analytic functions are often not representative of relevant problems, while pre-existing datasets do not allow…

References

Showing 1–10 of 20 references

MF2: A Collection of Multi-Fidelity Benchmark Functions in Python

The field of (evolutionary) optimization algorithms often works with expensive black-box optimization problems. However, for the development of novel algorithms and approaches, real-world problems…
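A classic example of the analytic multi-fidelity benchmark pairs that collections like MF2 provide is the 1D Forrester function, where the low-fidelity version is a scaled and shifted copy of the high-fidelity one. A minimal sketch (not MF2's actual API; the function definitions follow the standard Forrester formulation):

```python
import numpy as np

def forrester_high(x):
    """High-fidelity Forrester function on [0, 1]: (6x-2)^2 * sin(12x-4)."""
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def forrester_low(x, A=0.5, B=10.0, C=-5.0):
    """Low-fidelity approximation: a scaled and linearly perturbed copy
    of the high-fidelity function, with the usual constants A, B, C."""
    return A * forrester_high(x) + B * (x - 0.5) + C

x = np.linspace(0.0, 1.0, 101)
# The two fidelities are correlated but disagree in value, which is
# exactly the structure multi-fidelity benchmarks are meant to exercise.
corr = np.corrcoef(forrester_high(x), forrester_low(x))[0, 1]
```

Pairs like this are cheap to evaluate, which makes them convenient for benchmarking, but, as the paper above argues, they are often not representative of real design problems.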

Multi-fidelity Bayesian Optimisation with Continuous Approximations

This work develops a Bayesian optimisation method, BOCA, that achieves better regret than strategies which ignore the approximations, and outperforms several other baselines in synthetic and real experiments.

HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO

HPOBench allows running this extensible set of multi-fidelity HPO benchmarks in a reproducible way by isolating and packaging the individual benchmarks in containers, and provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations.

Sequential kriging optimization using multiple-fidelity evaluations

In the proposed extension of the sequential kriging optimization method, surrogate systems are exploited to reduce the total evaluation cost; the extended method manifests sensible search patterns, robust performance, and an appreciable reduction in total evaluation cost compared to the original method.

A Generic Test Suite for Evolutionary Multifidelity Optimization

Simulation results indicate that the use of changing fidelity can enhance the performance and reduce the computational cost of the PSO, which is desirable when solving expensive optimization problems.

A General Framework for Multi-fidelity Bayesian Optimization with Gaussian Processes

This paper proposes MF-MI-Greedy, a principled algorithmic framework for addressing multi-fidelity Bayesian optimization with complex structural dependencies among multiple outputs, and proposes a simple notion of regret which incorporates the cost of different fidelities.

Spectral Mixture Kernels for Multi-Output Gaussian Processes

A parametric family of complex-valued cross-spectral densities is proposed, building on Cramér's Theorem to provide a principled approach to designing multivariate covariance functions.

Multi-task Gaussian Process Prediction

A model that learns a shared covariance function on input-dependent features and a "free-form" covariance matrix over tasks allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training.
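The "free-form task covariance" idea above can be sketched as an intrinsic coregionalization model, in which the joint covariance is the Kronecker product of a learned task matrix B and a shared input kernel k. A minimal numpy illustration (the RBF lengthscale and the task factor L are illustrative values, not parameters from the paper):

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    """Shared squared-exponential kernel over 1D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2)

# "Free-form" task covariance for 2 tasks, parameterised as B = L L^T
# so that B is guaranteed positive semi-definite.
L = np.array([[1.0, 0.0],
              [0.8, 0.6]])
B = L @ L.T

X = np.linspace(0.0, 1.0, 5)
# Joint multi-task covariance: tasks couple through B, inputs through k.
K = np.kron(B, rbf(X, X))
```

The Kronecker structure is what lets the model share data across tasks: off-diagonal blocks of K, scaled by B[i, j], transfer information between task i and task j.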

COCO: a platform for comparing continuous optimizers in a black-box setting

Fundamental concepts underlying COCO are detailed, such as the definition of a problem as a function instance, the use of target values, and runtime, defined as the number of function calls, as the central performance measure.

Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks

This work presents a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP), which works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients.