Corpus ID: 220935975

Rethinking Default Values: a Low Cost and Efficient Strategy to Define Hyperparameters

@article{Mantovani2020RethinkingDV,
  title={Rethinking Default Values: a Low Cost and Efficient Strategy to Define Hyperparameters},
  author={Rafael Gomes Mantovani and Andr{\'e} Luis Debiaso Rossi and Edesio Alcobaça and Jadson Castro Gertrudes and Sylvio Barbon Junior and Andr{\'e} Carlos Ponce de Leon Ferreira de Carvalho},
  journal={ArXiv},
  year={2020},
  volume={abs/2008.00025}
}
Machine Learning (ML) algorithms have been successfully employed by a vast range of practitioners with different backgrounds. One of the reasons for ML popularity is its capability to consistently deliver accurate results, which can be further boosted by adjusting hyperparameters (HP). However, some practitioners have limited knowledge about the algorithms and do not take advantage of suitable HP settings. In general, HP values are defined by trial and error, tuning, or by using default… 
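
To make the contrast in the abstract concrete, here is a minimal sketch comparing library defaults against a small tuning budget. The scikit-learn SVM, the toy dataset, and the random-search budget are illustrative assumptions, not the paper's experimental setup:

# Defaults vs. tuning: the two HP strategies the abstract contrasts.
from scipy.stats import loguniform
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import RandomizedSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Strategy 1: the library defaults (C=1.0, gamma='scale' in scikit-learn).
default_score = cross_val_score(SVC(), X, y, cv=5).mean()

# Strategy 2: tuning with a small random-search budget.
search = RandomizedSearchCV(
    SVC(),
    param_distributions={"C": loguniform(1e-2, 1e3),
                         "gamma": loguniform(1e-4, 1e1)},
    n_iter=25, cv=5, random_state=0,
)
search.fit(X, y)

print(f"default: {default_score:.3f}  tuned: {search.best_score_:.3f}")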

Meta-learning for symbolic hyperparameter defaults

TLDR
This work proposes a zero-shot method to meta-learn symbolic default hyperparameter configurations, expressed in terms of properties of the dataset, which enables a much faster, but still data-dependent, configuration of the ML algorithm compared to standard hyperparameter optimization approaches.
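
A minimal sketch of what a symbolic default looks like in practice: a formula over dataset meta-features evaluated at fit time, with no search. The concrete formula below, gamma = 1/p, is just libsvm's classic default, used for illustration; the formulas learned in the cited work may differ:

# Zero-shot, data-dependent configuration via a symbolic formula.
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)

# Meta-features of the dataset at hand.
n, p = X.shape

# The symbolic default: a formula, not a fixed constant and not a search.
gamma = 1.0 / p
score = cross_val_score(SVC(gamma=gamma), X, y, cv=5).mean()
print(f"symbolic default gamma={gamma:.4f}, accuracy={score:.3f}")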

Knowing and combating the enemy: a brief review on SARS-CoV-2 and computational approaches applied to the discovery of drug candidates

TLDR
Computer-aided drug design (CADD) approaches can be useful tools for the design and discovery of novel potential antiviral inhibitors against SARS-CoV-2, as drug repurposing and discovery remain a challenge.

Predicting the level of anemia among Ethiopian pregnant women using homogeneous ensemble machine learning algorithm

TLDR
The researchers decided to use the CatBoost algorithm with a one-versus-rest scheme for further use in the development of artifacts, model deployment, risk factor analysis, and rule generation, because it registered the best performance, with 97.6% accuracy.
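
A minimal sketch of the described setup, CatBoost wrapped in a one-versus-rest scheme. The catboost package and the toy dataset are assumptions; the study's anemia data and preprocessing are not reproduced:

# One binary CatBoost model per class, combined one-vs-rest.
from catboost import CatBoostClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)

model = OneVsRestClassifier(CatBoostClassifier(iterations=200, verbose=0))
print(cross_val_score(model, X, y, cv=5).mean())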

Diabetes Disease Prediction Model Deployment on Heroku-based Cloud Computing Platforms using Homogeneous Ensemble Machine Learning Algorithms

TLDR
The researchers decided to use the CatBoost algorithm for further use in the development of artifacts, model deployment, risk factor analysis, and rule generation, because it registered the best performance, with 90.32% accuracy.

References

SHOWING 1-10 OF 67 REFERENCES

Importance of Tuning Hyperparameters of Machine Learning Algorithms

TLDR
The results show that leaving particular hyperparameters at their default value is non-inferior to tuning these hyperparameters, and in some cases, leaving the hyperparameter at its default value even outperforms tuning it using a search procedure with a limited number of iterations.

Learning multiple defaults for machine learning algorithms

TLDR
It is shown that sets of defaults can improve performance compared to random search and Bayesian optimization while being easier to deploy than more complex methods.
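
A minimal sketch of why a set of defaults is easy to deploy: try a short fixed list of configurations and keep the best, with no search machinery. The three configurations below are illustrative, not the learned sets from the cited work:

# A portfolio of complementary defaults, evaluated like any fixed pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

portfolio = [
    {"C": 1.0, "gamma": "scale"},
    {"C": 100.0, "gamma": 0.001},
    {"C": 0.1, "gamma": 0.01},
]
scores = [cross_val_score(SVC(**cfg), X, y, cv=5).mean() for cfg in portfolio]
best = max(range(len(portfolio)), key=lambda i: scores[i])
print(f"best of {len(portfolio)} defaults: {scores[best]:.3f} with {portfolio[best]}")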

Tunability: Importance of Hyperparameters of Machine Learning Algorithms

TLDR
Tunability is defined as the amount of performance gain that can be achieved by setting the considered hyperparameter to the best possible value instead of the default value.
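
A minimal sketch of that measure: the gap between the best configuration found and the default, computed per dataset. Random search stands in for the paper's large-scale evaluations, and the datasets and search space are illustrative assumptions:

# Tunability: gain of the best found value over the default value.
from scipy.stats import loguniform
from sklearn.datasets import load_breast_cancer, load_wine
from sklearn.model_selection import RandomizedSearchCV, cross_val_score
from sklearn.svm import SVC

for loader in (load_breast_cancer, load_wine):
    X, y = loader(return_X_y=True)
    default = cross_val_score(SVC(), X, y, cv=5).mean()
    search = RandomizedSearchCV(
        SVC(), {"C": loguniform(1e-2, 1e3), "gamma": loguniform(1e-4, 1e1)},
        n_iter=25, cv=5, random_state=0).fit(X, y)
    print(f"{loader.__name__}: tunability = {search.best_score_ - default:.3f}")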

Sequential Model-Free Hyperparameter Tuning

TLDR
This work adapts the sequential model-based optimization by replacing its surrogate model and acquisition function with one policy that is optimized for the task of hyperparameter tuning and proposes a similarity measure for data sets that yields more comprehensible results than those using meta-features.

Meta-learning Recommendation of Default Hyper-parameter Values for SVMs in Classification Tasks

TLDR
The use of meta-learning to recommend default values for the induction of Support Vector Machine models on a new classification dataset is investigated, and the meta-models can accurately predict whether tool-suggested or optimized default values should be used.

Collaborative hyperparameter tuning

TLDR
A generic method to incorporate knowledge from previous experiments when simultaneously tuning a learning algorithm on new problems at hand is proposed and demonstrated in two experiments, where it outperforms standard tuning techniques and single-problem surrogate-based optimization.

Practical Bayesian Optimization of Machine Learning Algorithms

TLDR
This work describes new algorithms that take into account the variable cost of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation and shows that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
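
A minimal sketch of Bayesian optimization of hyperparameters, using scikit-optimize's gp_minimize as a stand-in for the paper's own GP-based procedure. The package, objective, and search space are assumptions for illustration:

# GP-based sequential optimization of SVM hyperparameters.
from skopt import gp_minimize
from skopt.space import Real
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

def objective(params):
    C, gamma = params
    # Negate accuracy because gp_minimize minimizes.
    return -cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

space = [Real(1e-2, 1e3, prior="log-uniform", name="C"),
         Real(1e-4, 1e1, prior="log-uniform", name="gamma")]
result = gp_minimize(objective, space, n_calls=20, random_state=0)
print(f"best accuracy: {-result.fun:.3f} at C={result.x[0]:.3g}, gamma={result.x[1]:.3g}")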

Using Metalearning to Predict When Parameter Optimization Is Likely to Improve Classification Accuracy

TLDR
It is shown that a relatively simple and efficient landmarker carries significant predictive power, and that meta-learning for algorithm selection should be carried out in two phases: the first determines whether parameter optimization is likely to increase accuracy, and the second performs the actual algorithm selection.
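
A minimal sketch of a landmarker: the cross-validated score of a cheap learner, used as a meta-feature. The choice of 1-NN as the landmarker and the dataset are assumptions for illustration; a meta-model would consume this value alongside other meta-features:

# A landmarker is fast to compute yet correlated with dataset difficulty.
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
landmarker = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=5).mean()
print(f"1-NN landmarker: {landmarker:.3f}")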

Meta learning for defaults: symbolic defaults

TLDR
This work proposes to automatically learn sets of symbolic default hyperparameter configurations, i.e., formulas containing meta-features, from a large set of prior evaluations of numeric hyperparameters on multiple data sets via symbolic regression and optimization.
...