Corpus ID: 2963444

Bayesian Hyperparameter Optimization for Ensemble Learning

@article{Levesque2016BayesianHO,
  title={Bayesian Hyperparameter Optimization for Ensemble Learning},
  author={Julien-Charles Levesque and Christian Gagn{\'e} and R. Sabourin},
  journal={ArXiv},
  year={2016},
  volume={abs/1605.06394}
}
In this paper, we bridge the gap between hyperparameter optimization and ensemble learning by performing Bayesian optimization of an ensemble with regard to its hyperparameters. Our method consists of building a fixed-size ensemble, optimizing the configuration of one classifier of the ensemble at each iteration of the hyperparameter optimization algorithm, taking into consideration the interaction with the other models when evaluating potential performances. We also consider the case where…
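The loop described in the abstract can be pictured with a short sketch. The code below is not the authors' implementation: random sampling stands in for the Bayesian acquisition step, a majority-vote SVM ensemble is an illustrative model class, and the ensemble size and search space are assumptions. It only illustrates the idea of scoring each candidate configuration by the validation accuracy of the ensemble it would produce, rather than by its individual accuracy.

```python
# Illustrative sketch only (not the paper's code): score each candidate
# hyperparameter configuration by the accuracy of the ensemble it would form.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

M = 5                      # fixed ensemble size (illustrative choice)
ensemble = []              # list of (fitted model, validation predictions)

def ensemble_accuracy(members):
    """Accuracy of the majority vote over the members' validation predictions."""
    votes = np.mean([preds for _, preds in members], axis=0)
    return accuracy_score(y_val, (votes >= 0.5).astype(int))

for it in range(50):
    # Propose a configuration (stand-in for maximizing a Bayesian acquisition function).
    C, gamma = 10 ** rng.uniform(-2, 2), 10 ** rng.uniform(-4, 0)
    model = SVC(C=C, gamma=gamma).fit(X_tr, y_tr)
    candidate = (model, model.predict(X_val))

    if len(ensemble) < M:
        ensemble.append(candidate)
        continue

    # Re-optimize one slot: swap in the candidate wherever it most improves
    # the ensemble's validation accuracy, accounting for the other members.
    current = ensemble_accuracy(ensemble)
    best_gain, best_slot = 0.0, None
    for slot in range(M):
        trial = ensemble[:slot] + [candidate] + ensemble[slot + 1:]
        gain = ensemble_accuracy(trial) - current
        if gain > best_gain:
            best_gain, best_slot = gain, slot
    if best_slot is not None:
        ensemble[best_slot] = candidate

print("ensemble validation accuracy:", ensemble_accuracy(ensemble))
```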
Citations

Simultaneous Ensemble Generation and Hyperparameter Optimization for Regression
TLDR: A method for simultaneously tuning hyperparameters and generating an ensemble by explicitly optimizing parameters in an ensemble context is devised; the resulting ensembles consistently outperform single optimized models and can outperform or match the performance of state-of-the-art ensemble generation techniques.
Bayesian hyperparameter optimization: overfitting, ensembles and conditional spaces
TLDR: It is shown that there is indeed overfitting during the optimization of hyperparameters, even with cross-validation strategies, and that it can be reduced by methods such as reshuffling the training and validation splits at every iteration of the optimization.
Thesis Proposal: Modeling Diversity in the Machine Learning Pipeline
Randomness is a foundation on which many aspects of the machine learning pipeline are built, from training models with stochastic gradient descent to tuning hyperparameters with random search…
BOHB: Robust and Efficient Hyperparameter Optimization at Scale
TLDR: This work proposes a new practical state-of-the-art hyperparameter optimization method, which consistently outperforms both Bayesian optimization and Hyperband on a wide range of problem types, including high-dimensional toy functions, support vector machines, feed-forward neural networks, Bayesian neural networks, deep reinforcement learning, and convolutional neural networks.
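For context, BOHB combines model-based (Bayesian) sampling of configurations with Hyperband's budget allocation. The sketch below shows only the successive-halving ingredient that Hyperband repeatedly invokes, on a toy objective; the model-based sampler and the full Hyperband bracket schedule are omitted, and the budgets, objective, and function names are illustrative assumptions.

```python
# Successive halving sketch: evaluate many configurations cheaply, keep the
# best fraction, and give survivors a larger budget. Toy objective only.
import numpy as np

rng = np.random.default_rng(0)

def evaluate(config, budget):
    """Placeholder objective: loss estimates get less noisy as the budget grows."""
    return config["x"] ** 2 + rng.normal(scale=1.0 / budget)

def successive_halving(n_configs=27, min_budget=1, eta=3):
    configs = [{"x": rng.uniform(-3, 3)} for _ in range(n_configs)]
    budget = min_budget
    while len(configs) > 1:
        losses = [evaluate(c, budget) for c in configs]
        keep = max(1, len(configs) // eta)          # keep the best 1/eta of configs
        order = np.argsort(losses)[:keep]
        configs = [configs[i] for i in order]
        budget *= eta                               # survivors get eta times more budget
    return configs[0]

print(successive_halving())
```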
An empirical study on hyperparameter tuning of decision trees
TLDR: This paper provides a comprehensive approach for investigating the effects of hyperparameter tuning on three decision tree induction algorithms, CART, C4.5 and CTree, and finds that tuning a specific small subset of hyperparameters contributes most of the achievable optimal predictive performance.
Hyper-Parameter Optimization Using MARS Surrogate for Machine-Learning Algorithms
TLDR: A novel efficient hyper-parameter optimization algorithm (called MARSAOP) is proposed, in which multivariate spline functions are used as the surrogate and a dynamic coordinate search approach is employed to generate candidate points.
A Comparative Study on Automatic Model and Hyper-Parameter Selection in Classifier Ensembles
TLDR: Findings indicate that the use of hyper-parameter selection applied to Random Forest might generate more accurate systems compared to model and hyper-parameter selection.
A study of model and hyper-parameter selection strategies for classifier ensembles: a robust analysis on different optimization algorithms and extended results
TLDR: A wide and robust comparative analysis of both approaches for classifier ensembles indicates that the use of a hyper-parameter selection method provides the most accurate classifier ensembles, although this improvement was not detected by the statistical test.
Deep Learning on Active Sonar Data Using Bayesian Optimization for Hyperparameter Tuning
  • H. Berg, K. Hjelmervik
  • Computer Science
  • 2020 25th International Conference on Pattern Recognition (ICPR)
  • 2021
TLDR: Bayesian optimization is used to search for good values for some of the hyperparameters, like topology and training parameters, resulting in performance superior to earlier trial-and-error based training.
A Novel Evolutionary Algorithm for Automated Machine Learning Focusing on Classifier Ensembles
TLDR: This work proposes a new evolutionary algorithm for the Auto-ML task of automatically selecting the best ensemble of classifiers and their hyper-parameter settings for an input dataset, and obtains significantly smaller classification error rates than the Auto-WEKA version it is compared against.

References

SHOWING 1-10 OF 27 REFERENCES
Initializing Bayesian Hyperparameter Optimization via Meta-Learning
TLDR: This paper mimics a strategy used by human domain experts, speeding up optimization by starting from promising configurations that performed well on similar datasets, and substantially improves the state of the art for the more complex combined algorithm selection and hyperparameter optimization problem.
Sequential Model-Based Ensemble Optimization
TLDR: This paper proposes an extension of SMBO methods that automatically constructs ensembles of learned models, building on a recently proposed ensemble construction paradigm known as agnostic Bayesian learning; experiments confirm the success of the proposed approach, which is able to outperform model selection with SMBO.
Practical Bayesian Optimization of Machine Learning Algorithms
TLDR: This work describes new algorithms that take into account the variable cost of learning-algorithm experiments and that can leverage multiple cores for parallel experimentation; the proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
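A minimal sketch of the kind of GP-based loop this line of work builds on, assuming a toy 1-D objective, a Matern kernel, and the expected-improvement acquisition function; it is not the paper's implementation (which also models evaluation cost and supports parallel experiments), and the grid of candidates is an illustrative shortcut.

```python
# GP + expected improvement on a toy 1-D minimization problem (illustrative only).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    return np.sin(3 * x) + 0.1 * x ** 2          # toy function to minimize

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(3, 1))              # small initial design
y = objective(X).ravel()
candidates = np.linspace(-3, 3, 500).reshape(-1, 1)

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()
    # Expected improvement (for minimization) over the candidate grid.
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmin(y)], "best value:", y.min())
```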
Agnostic Bayesian Learning of Ensembles
TLDR: This approach uses a prior directly on the performance of predictors taken from a finite set of candidates and attempts to infer which one is best; it has the advantage of not requiring that the predictors be probabilistic themselves.
Predictive Entropy Search for Efficient Global Optimization of Black-box Functions
TLDR: This work proposes a novel information-theoretic approach to Bayesian optimization, called Predictive Entropy Search (PES), which expresses an otherwise intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution.
Scalable Bayesian Optimization Using Deep Neural Networks
TLDR: This work shows that performing adaptive basis function regression with a neural network as the parametric form performs competitively with state-of-the-art GP-based approaches, but scales linearly with the number of data points rather than cubically, which allows for a previously intractable degree of parallelism.
Freeze-Thaw Bayesian Optimization
TLDR: This paper develops a dynamic form of Bayesian optimization for machine learning models with the goal of rapidly finding good hyperparameter settings and provides an information-theoretic framework to automate the decision process.
Ensemble selection from libraries of models
TLDR: A method is presented for constructing ensembles from libraries of thousands of models using forward stepwise selection, which can be optimized for a performance metric such as accuracy, cross-entropy, mean precision, or ROC area.
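The forward stepwise procedure referenced here can be sketched briefly. The function below is an illustrative reading of ensemble selection, assuming a library of cached 0/1 validation predictions for a binary task and accuracy as the metric; the bagging of the library and other refinements discussed in the related papers are omitted, and the function name is hypothetical.

```python
# Greedy forward stepwise ensemble selection over a library of cached predictions.
import numpy as np
from sklearn.metrics import accuracy_score

def ensemble_selection(library_preds, y_val, n_steps=20):
    """library_preds: list of per-model 0/1 validation prediction arrays."""
    chosen = []
    for _ in range(n_steps):
        best_score, best_idx = -np.inf, None
        for idx, preds in enumerate(library_preds):
            trial = chosen + [preds]
            vote = (np.mean(trial, axis=0) >= 0.5).astype(int)   # majority vote
            score = accuracy_score(y_val, vote)
            if score > best_score:
                best_score, best_idx = score, idx
        chosen.append(library_preds[best_idx])   # selection with replacement
    return chosen, best_score
```

Selection with replacement lets strong models appear several times, which acts as a simple weighting scheme and is one reason the procedure tends to be robust to large libraries.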
Diversity in search strategies for ensemble feature selection
TLDR: It is shown that, in some cases, the ensemble feature selection process can be sensitive to the choice of the diversity measure, and that the question of the superiority of a particular measure depends on the context of the use of diversity and on the data being processed.
Bagging Ensemble Selection
TLDR: A novel variant of ensemble selection, bagging ensemble selection, is presented, and three variations of the proposed algorithm are compared to the original ensemble selection algorithm and other ensemble algorithms.