Thomas Philip Runarsson

Penalty functions are often used in constrained optimization. However, it is very difficult to strike the right balance between objective and penalty functions. This paper introduces a novel approach to balance objective and penalty functions stochastically, i.e., stochastic ranking, and presents a new view on penalty function methods in terms of the …
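A minimal sketch of the stochastic ranking idea in Python, assuming minimization: adjacent individuals are compared on the objective when both are feasible or, with some probability p_f, regardless of feasibility, and on the total constraint violation otherwise. The population and function interfaces here are illustrative assumptions, not the paper's exact implementation:

```python
import random

def stochastic_ranking(population, f, phi, p_f=0.45):
    """Rank candidates with a bubble-sort-like sweep.

    population: list of candidate solutions
    f:          objective function (to be minimized)
    phi:        total constraint violation (0 means feasible)
    p_f:        probability of comparing by objective when infeasible
    """
    idx = list(range(len(population)))
    fv = [f(x) for x in population]
    pv = [phi(x) for x in population]
    for _ in range(len(idx)):            # at most N sweeps
        swapped = False
        for j in range(len(idx) - 1):
            a, b = idx[j], idx[j + 1]
            if (pv[a] == 0 and pv[b] == 0) or random.random() < p_f:
                swap = fv[a] > fv[b]     # compare by objective
            else:
                swap = pv[a] > pv[b]     # compare by constraint violation
            if swap:
                idx[j], idx[j + 1] = b, a
                swapped = True
        if not swapped:                  # stop early once sorted
            break
    return [population[i] for i in idx]
```

The single parameter p_f replaces problem-specific penalty coefficients; values just below 0.5 keep a mild bias towards feasible solutions.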
A common approach to constraint handling in evolutionary optimization is to apply a penalty function to bias the search towards a feasible solution. It has been proposed that the subjective setting of various penalty parameters can be avoided using a multi-objective formulation. This paper analyses and explains in depth why and when the multi-objective …
Two learning methods for acquiring position evaluation for small Go boards are studied and compared. In each case the function to be learned is a position-weighted piece counter and only the learning method differs. The methods studied are temporal difference learning (TDL) using the self-play gradient-descent method and coevolutionary learning, using an …
This paper compares the use of temporal difference learning (TDL) versus co-evolutionary learning (CEL) for acquiring position evaluation functions for the game of Othello. The paper provides important insights into the strengths and weaknesses of each approach. The main findings are that for Othello, TDL learns much faster than CEL, but that properly …
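In both of these studies the evaluator is linear in the board features, so one TDL step is a small gradient update. A sketch of a TD(0) update for a tanh-squashed position-weighted piece counter; the names, squashing function, and parameter values are illustrative assumptions rather than the papers' exact settings:

```python
import numpy as np

def td0_update(w, x_s, x_next, reward, alpha=0.01, gamma=1.0, terminal=False):
    """One TD(0) step for a linear evaluator V(s) = tanh(w . x(s)).

    w:       weight vector, one weight per board square
    x_s:     feature vector of the current position (e.g. +1/-1/0 per square)
    x_next:  feature vector of the successor position
    reward:  game outcome at terminal states, 0 otherwise
    """
    x_s = np.asarray(x_s, dtype=float)
    x_next = np.asarray(x_next, dtype=float)
    v_s = np.tanh(w @ x_s)
    v_next = 0.0 if terminal else np.tanh(w @ x_next)
    delta = reward + gamma * v_next - v_s       # TD error
    # gradient of tanh(w.x) w.r.t. w is (1 - tanh^2) * x
    w += alpha * delta * (1.0 - v_s ** 2) * x_s
    return w
```

In self-play TDL the same update is applied along each game trajectory, with the reward supplied only when the game ends.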
The paper describes an evolutionary algorithm for the general nonlinear programming problem using a surrogate model. Surrogate models are used in optimization when model evaluation is expensive. Two surrogate models are implemented, one for the objective function and another for a penalty function based on the constraint violations. The proposed method uses …
The paper describes the approximation of an evolution strategy using stochastic ranking for nonlinear programming. The aim of the approximation is to reduce the number of function evaluations needed during search. This is achieved using two surrogate models, one for the objective function and another for a penalty function based on the constraint …
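Both of these papers pair one surrogate for the objective with a second surrogate for a penalty built from the constraint violations. As a sketch of the general pattern, here is a cheap stand-in model; the nearest-neighbour regressor is an illustrative assumption, not necessarily the model used in the papers:

```python
import numpy as np

class NearestNeighbourSurrogate:
    """Cheap stand-in for an expensive function: predicts by averaging
    the k nearest previously evaluated points. Instantiate one for the
    objective and one for the penalty (sum of constraint violations)."""

    def __init__(self, k=3):
        self.k, self.X, self.y = k, [], []

    def add(self, x, y):
        """Store one true (expensive) evaluation."""
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(float(y))

    def predict(self, x):
        """Average the k nearest stored values (requires at least one add)."""
        d = [np.linalg.norm(np.asarray(x, dtype=float) - xi) for xi in self.X]
        nearest = np.argsort(d)[: self.k]
        return float(np.mean([self.y[i] for i in nearest]))
```

During search, offspring are first ranked on the two surrogate predictions and only the most promising candidates are evaluated exactly; each true evaluation is then added back to the archive via add(), refining the surrogates as the search proceeds.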
Coevolution is a natural choice for learning in problem domains where one agent's behaviour is directly related to the behaviour of other agents. However, there is a known tendency for coevolution to produce mediocre solutions. One of the main reasons for this is cycling, caused by intransitivities among a set of players. In this paper we explore the link …
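Cycling arises when the "beats" relation among players is not transitive. A small sketch that spot-checks a win/loss payoff matrix for an intransitive triple, using rock-paper-scissors as the canonical example; the encoding of the matrix is an illustrative assumption:

```python
import numpy as np

def has_intransitive_triple(payoff):
    """Detect a cycle a beats b, b beats c, c beats a.
    payoff[i][j] > 0 means player i beats player j."""
    n = len(payoff)
    for a in range(n):
        for b in range(n):
            for c in range(n):
                if payoff[a][b] > 0 and payoff[b][c] > 0 and payoff[c][a] > 0:
                    return True
    return False

# rock-paper-scissors: the canonical intransitive game
rps = np.array([[0, -1, 1],
                [1,  0, -1],
                [-1, 1,  0]])
print(has_intransitive_triple(rps))   # True
```

Under such a payoff structure no single best player exists, so a coevolving population can chase its own tail instead of converging.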
Effective use of support vector machines (SVMs) in classification necessitates the appropriate choice of a kernel. Designing problem-specific kernels involves the definition of a similarity measure, with the condition that kernels are positive semi-definite (PSD). An alternative approach, which places no such restrictions on the similarity measure, is to …
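A quick way to probe whether a hand-designed similarity measure could serve as a kernel is to test its Gram matrix on a data sample for positive semi-definiteness; passing on one sample is necessary but not sufficient for validity on the whole input space. A minimal sketch, with illustrative function names:

```python
import numpy as np

def is_psd_gram(similarity, X, tol=1e-9):
    """Check whether a similarity measure yields a PSD Gram matrix
    on the sample X, a necessary condition for a valid kernel."""
    n = len(X)
    K = np.array([[similarity(X[i], X[j]) for j in range(n)]
                  for i in range(n)])
    K = 0.5 * (K + K.T)                  # symmetrize against round-off
    return np.linalg.eigvalsh(K).min() >= -tol

# the Gaussian kernel is PSD, so this spot check should pass
gaussian = lambda a, b: np.exp(-np.sum((a - b) ** 2))
print(is_psd_gram(gaussian, np.random.rand(20, 3)))   # True
```

A similarity measure that fails this check on any sample cannot be a valid kernel, which is what motivates alternative approaches that drop the PSD requirement altogether.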