Corpus ID: 9069671

Selecting Near-Optimal Learners via Incremental Data Allocation

@inproceedings{Sabharwal2016SelectingNL,
  title={Selecting Near-Optimal Learners via Incremental Data Allocation},
  author={Ashish Sabharwal and Horst Samulowitz and Gerald Tesauro},
  booktitle={AAAI},
  year={2016}
}
We study a novel machine learning (ML) problem setting of sequentially allocating small subsets of training data amongst a large set of classifiers. The goal is to select a classifier that will give near-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples. This is motivated by large modern datasets and ML toolkits with many combinations of learning algorithms and hyper-parameters. Inspired by the principle of "optimism under uncertainty," we propose…
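To make the "optimism under uncertainty" idea concrete, here is a minimal Python sketch of incremental data allocation: each candidate learner is trained on growing subsets of the training data, and the next allocation always goes to the learner whose optimistically extrapolated full-data accuracy is highest. This is an illustration under stated assumptions, not the paper's exact procedure (the full paper develops a method called DAUB); the learner pool, allocation sizes, and the two-point slope heuristic in `optimistic_estimate` are choices made up for this example.

```python
# A minimal sketch, assuming scikit-learn. The learner pool and the
# `optimistic_estimate` heuristic are illustrative, not the paper's method.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
N = len(X_tr)

learners = {
    "logreg": LogisticRegression(max_iter=1000),
    "nb": GaussianNB(),
    "tree": DecisionTreeClassifier(random_state=0),
}
history = {name: [] for name in learners}   # (allocation size, val accuracy)
alloc = {name: 400 for name in learners}    # next allocation per learner

def evaluate(name, n):
    """Train learner `name` on the first n points; record validation accuracy."""
    acc = learners[name].fit(X_tr[:n], y_tr[:n]).score(X_val, y_val)
    history[name].append((n, acc))
    return acc

def optimistic_estimate(name):
    """Optimistic projection of full-data accuracy: latest accuracy plus the
    most recent improvement rate extrapolated to all N samples, capped at 1."""
    (n0, a0), (n1, a1) = history[name][-2], history[name][-1]
    slope = max((a1 - a0) / (n1 - n0), 0.0)
    return min(a1 + slope * (N - n1), 1.0)

# Bootstrap: give every learner two small allocations so slopes are defined.
for name in learners:
    evaluate(name, 100)
    evaluate(name, 200)

# Main loop: repeatedly grow the allocation of the most promising learner.
while True:
    best = max(learners, key=optimistic_estimate)
    n = min(alloc[best], N)
    evaluate(best, n)
    alloc[best] = int(n * 1.5)   # geometric growth bounds wasted samples
    if n == N:                   # a learner reached the full training set
        break

print("selected:", best, "final (n, accuracy):", history[best][-1])
```

The geometric growth of allocations is the key design choice: by a standard doubling-style argument, the total data spent on learners that are eventually discarded stays within a constant factor of what the winning learner consumes, which is the sense in which the cost of misallocated samples is controlled.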
Citations

Population Based Training of Neural Networks
An empirical study on hyperparameter tuning of decision trees
Tuning Hyperparameters without Grad Students: Scaling up Bandit Optimisation
A System for Massively Parallel Hyperparameter Tuning
Hyper-parameter Tuning under a Budget Constraint
Massively Parallel Hyperparameter Tuning
...
