First, the covariance matrix adaptation (CMA) with rank-one update is introduced into the (1+1)-evolution strategy. An improved implementation of the 1/5-th success rule is proposed for step size adaptation, which replaces cumulative path length control. Second, an incremental Cholesky update for the covariance matrix is developed, replacing the…
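The classic 1/5-th success rule mentioned here is simple enough to sketch. The following toy (1+1)-ES minimizing a sphere function is a hypothetical illustration of the traditional rule (enlarge the step size after a successful mutation, shrink it after a failure, so the success probability hovers near 1/5); it is not the improved implementation the abstract proposes.

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=1000, seed=0):
    """Minimal (1+1)-ES with the classic 1/5-th success rule.

    Elitist: the parent is replaced only if the offspring is not worse.
    Step size control: multiply sigma by 1/c on success and by c**(1/4)
    on failure (c < 1), which is balanced exactly at a 1/5 success rate.
    """
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    c = 0.817  # contraction factor commonly attributed to Schwefel
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:            # success: accept offspring, enlarge step
            x, fx = y, fy
            sigma /= c
        else:                   # failure: shrink step
            sigma *= c ** 0.25
    return x, fx
```

On the 2-D sphere `f(x) = x1**2 + x2**2` started at (3, 3), the step size settles into the self-adapted regime and the best function value decreases monotonically because selection is elitist.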
Randomized direct search algorithms for continuous domains, such as Evolution Strategies, are basic tools in machine learning. They are especially needed when the gradient of an objective function (e.g., loss, energy, or reward function) cannot be computed or estimated efficiently. Application areas include supervised and reinforcement learning as well as…
Designing supervised learning systems is in general a multi-objective optimization problem. It requires finding appropriate trade-offs between several objectives, for example between model complexity and accuracy or sensitivity and specificity. We consider the adaptation of kernel and regularization parameters of support vector machines (SVMs) by means of…
The evaluation of a standard Gaussian process regression model takes time linear in the number of training data points. In this paper, the models are approximated in the feature space after training. It is empirically shown that the time required for evaluation can be drastically reduced without considerable loss in performance.
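The linear evaluation cost stated here follows directly from the form of the GP predictive mean: training fixes one weight per training point, and each prediction then needs one kernel evaluation per point. A minimal sketch (the RBF kernel and all names are illustrative, not the paper's setup):

```python
import math

def gp_mean(x_star, X_train, alpha, kernel):
    """GP predictive mean m(x*) = sum_i alpha_i * k(x_i, x*), where
    alpha = (K + sigma^2 I)^{-1} y is precomputed at training time.
    The sum runs over all n training points: O(n) kernel calls per query.
    """
    return sum(a * kernel(x_i, x_star) for a, x_i in zip(alpha, X_train))

rbf = lambda x, y: math.exp(-(x - y) ** 2)  # toy 1-D RBF kernel
```

For example, with training inputs 0 and 1 and weights (1, 1), the mean at x* = 0 is 1 + e^{-1} ≈ 1.368.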
The multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES) combines a mutation operator that adapts its search distribution to the underlying optimization problem with multi-criteria selection. Here, a generational and two steady-state selection schemes for the MO-CMA-ES are compared. Further, a recently proposed method for…
Computer vision for object detection often relies on complex classifiers and large feature sets to achieve high detection rates. But when real-time constraints have to be met, for example in driver assistance systems, fast classifiers are required. Here we consider the design of a computationally efficient system for pedestrian detection. We propose an…
Trained support vector machines (SVMs) classify slowly at run time if the classification problem is noisy and the sample data set is large. Approximating the SVM by a sparser function has been proposed to solve this problem. In this study, different variants of approximation algorithms are empirically compared. It is shown that gradient…