The Trade-off between Solution Quality and Computing Times of Intelligent Algorithms − A Computational Study on the Role of Parameters and Time Budgets

Abstract

Many heuristic search methods have been derived by analogy from natural processes and have recently been applied to practical optimization problems. Given the variety of methods available, the question arises of how "good" or "bad" the methods are in relation to each other. To answer this question, a large series of test runs was performed. The underlying application problem is a sequencing problem. The methods under study are simulated annealing, tabu search, threshold accepting, genetic algorithms, and parallel recombinative simulated annealing. It is shown that the performance of all methods, in terms of both solution quality and computing time, depends strongly on their parameters. Since the computing times needed vary considerably, a second test series was designed in which all methods were allowed the same runtime budget. The obvious trade-off between the solution quality desired and the computing time allowed is discussed.

1 Purpose of the Study

In this paper, several heuristic methods are examined with respect to a sequencing problem. Examples of sequencing problems occurring in practice are job-shop scheduling, the traveling salesman problem, and the placement of standard cells on VLSI chips. The methods under study were mostly derived by analogy from nature: simulated annealing, tabu search, threshold accepting, genetic algorithms, and a hybrid called parallel recombinative simulated annealing.

For prospective users in practice it may be difficult to find out which method is the "best" one. When only limited time to compute a solution is available, the question of optimality has to be looked at from a different angle. In a leitstand system [8] for short-term production planning, for example, a new schedule has to be available within seconds or minutes. Operations cannot wait for hours or days until an optimal solution is computed; thus a "good" solution obtained quickly is superior to an optimal one available too late. Furthermore, no method can be judged as such, because the behavior of any method depends strongly on its parameters. Therefore, the following questions will be addressed:

1. What effects do the parameters of the methods have on their behavior?
2. Taking time constraints and probabilistic behavior into consideration, is it better to run a slow method just a few times, or is it better to run a fast method many times?
3. How stable are the methods (i.e. can it be expected that solution quality and/or CPU time will be about the same whenever the same method is applied)?

To compare the methods and to answer these questions, a computational study was performed. The underlying sequencing problem is the placement of standard cells in VLSI design, which has been described in detail in the literature [10, 11]. For a given number of cells with their dimensions and a given netlist (i.e. which cells have to be connected?), the task is to place the cells on the given layout in such a way that the total length of the interconnecting wires is minimal [11]; a minimal sketch of such an objective function is given below. The results described subsequently are based on approximately 220,000 test runs performed over a period of about eight months (elapsed computing time on four Sun SparcStation 20 machines running in parallel). A set of eight test scenarios was used, comprising standard cell placement problems of various sizes (from 20 to 100 cells) and various complexities (from simple to complicated netlists).
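The paper does not state which wire-length model was used in the objective function. The following sketch assumes the common half-perimeter estimate, in which every net contributes the half-perimeter of the bounding box spanned by its cells; function and variable names are hypothetical.

    def half_perimeter_wire_length(placement, netlist):
        # placement: maps a cell id to the (x, y) position of that cell.
        # netlist: list of nets; each net is the list of cell ids it connects.
        total = 0.0
        for net in netlist:
            xs = [placement[cell][0] for cell in net]
            ys = [placement[cell][1] for cell in net]
            # Each net contributes the half-perimeter of its bounding box,
            # a standard estimate of the wire needed to connect its cells.
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
        return total

All of the heuristics discussed below can then be compared by how small a value of such a function they reach within a given time budget.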
2 Effects of Parameters on Performance of the Methods

In this section, the main effects of the methods' parameters on their performance are briefly discussed; an extensive study is presented elsewhere [7]. Performance includes both solution quality, i.e. the value of the objective function (wire length), and computing time. Illustrative code sketches of the mechanisms discussed in items (1) to (5) follow at the end of this section.

(1) In simulated annealing [1, 6], the user has to set parameters which refer to the cooling schedule. The cooling factors used in our tests were 0.99 (SA-1), 0.999 (SA-2), 0.9999 (SA-3), and 0.99999 (SA-4). Each version was run 20 times for each of the 8 scenarios. Both the solution quality and the computing times exhibit a wide distribution. SA-1 and SA-2 are quite fast, whereas SA-3 and SA-4 take rather long; however, the solutions calculated by the latter are much better. This effect is shown in fig. 1.

(2) Tabu search: important parameters are the number of neighbors to be searched in one iteration and the length of the tabu list [4, 13]. Furthermore, the number of iterations without change (i.e. after how many iterations without improvement of the current solution should the algorithm terminate?) has to be set. We tested six combinations of parameter values (TS-1 to TS-6). TS-3 to TS-6 were found to produce the best solutions, but TS-1 and TS-2 were faster by orders of magnitude (see fig. 1). Again, a trade-off between the computing time allowed and the solution quality obtained is evident.

(3) Threshold accepting [3] uses a threshold value to decide whether a new solution will be kept or not. When a new solution has been accepted, the threshold is reduced. Parameters are thus the initial threshold, the reduction factor, and, in addition, the number of neighbors to be examined in one iteration and the number of iterations without change of the objective function, i.e. the stop criterion. Seven combinations of parameters (TA-1 to TA-7) were tested. As illustrated in fig. 1, some versions again need a lot of computing time but produce good solutions, whereas others are fast at the expense of worse solution quality.

(4) Genetic algorithms [2, 5] have quite a large number of parameters. In the context of this paper, only three are considered: the crossover rate, the population size, and the number of generations. Based on combinations of those parameters, four versions (GA-1 to GA-4) were tested. All of them performed relatively badly, but the trade-off between solution quality and computing time can still be observed in fig. 1.

(5) Parallel recombinative simulated annealing (PRSA) is a hybrid method based on concepts of simulated annealing and genetic algorithms [9]. Since the construction of new populations is based on Boltzmann trials, a selection strategy (i.e. which candidates should compete against each other, and how the winners of the trials are determined) has to be chosen.

[Fig. 1: Average deviation from the best average wire length in per cent (vertical axis, 0% to 30%), plotted against computing time for the method variants.]
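A minimal sketch of simulated annealing with the geometric cooling schedule implied by the cooling factors above; the start and end temperatures are illustrative assumptions, as the paper does not report them.

    import math
    import random

    def simulated_annealing(initial, neighbor, cost,
                            t_start=100.0, cooling_factor=0.99, t_end=0.01):
        current = initial
        current_cost = cost(current)
        best, best_cost = current, current_cost
        t = t_start
        while t > t_end:
            candidate = neighbor(current)
            delta = cost(candidate) - current_cost
            # Metropolis criterion: always accept improvements; accept a
            # deterioration with probability exp(-delta / T).
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
            # Geometric cooling: T shrinks by the cooling factor each step,
            # as with the factors 0.99 (SA-1) to 0.99999 (SA-4).
            t *= cooling_factor
        return best, best_cost

Under these assumed temperatures, a factor of 0.99 yields roughly 900 cooling steps while 0.99999 yields roughly 900,000, which illustrates why SA-4 is orders of magnitude slower than SA-1 yet examines far more candidate solutions.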
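A sketch of the tabu search loop showing the three parameters varied in TS-1 to TS-6. For simplicity the tabu list stores complete solutions; practical implementations typically store attributes of recent moves instead. All default values are illustrative.

    from collections import deque

    def tabu_search(initial, neighbor, cost,
                    n_neighbors=20, tabu_length=10, max_no_improve=100):
        current = initial
        best, best_cost = current, cost(current)
        tabu = deque(maxlen=tabu_length)  # oldest entries fall off automatically
        no_improve = 0
        while no_improve < max_no_improve:
            # Sample a fixed number of neighbors and rank them by wire length.
            candidates = sorted((neighbor(current) for _ in range(n_neighbors)),
                                key=cost)
            # Take the best non-tabu candidate; a tabu candidate may still be
            # taken if it beats the best solution found so far (aspiration).
            chosen = next((c for c in candidates
                           if c not in tabu or cost(c) < best_cost),
                          candidates[0])
            tabu.append(chosen)
            current = chosen
            if cost(current) < best_cost:
                best, best_cost = current, cost(current)
                no_improve = 0
            else:
                no_improve += 1
        return best, best_cost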
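Threshold accepting as described above, sketched with the four parameters named in the text (initial threshold, reduction factor, neighbors per iteration, stop criterion); the default values are assumptions, not the TA-1 to TA-7 settings.

    def threshold_accepting(initial, neighbor, cost,
                            threshold=50.0, reduction=0.95,
                            n_neighbors=10, max_no_change=100):
        current, current_cost = initial, cost(initial)
        best, best_cost = current, current_cost
        no_change = 0
        while no_change < max_no_change:
            accepted = False
            for _ in range(n_neighbors):
                candidate = neighbor(current)
                candidate_cost = cost(candidate)
                # Deterministic rule: keep the candidate if it is less than
                # `threshold` worse than the current solution.
                if candidate_cost - current_cost < threshold:
                    current, current_cost = candidate, candidate_cost
                    threshold *= reduction  # reduce after each acceptance
                    accepted = True
                    break
            if accepted and current_cost < best_cost:
                best, best_cost = current, current_cost
                no_change = 0
            else:
                no_change += 1
        return best, best_cost

Unlike simulated annealing, the acceptance decision involves no randomness; the threshold plays the role that the temperature plays there.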
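A permutation-encoded genetic algorithm sketch in which the three parameters studied (crossover rate, population size, number of generations) appear explicitly. The order-preserving crossover and truncation selection used here are illustrative choices; the paper does not say which operators GA-1 to GA-4 used.

    import random

    def genetic_algorithm(cost, n_cells, pop_size=50, generations=200,
                          crossover_rate=0.8, mutation_rate=0.05):
        def order_crossover(p1, p2):
            # Copy a random slice from parent 1, fill the remaining positions
            # with the missing cells in the order they appear in parent 2.
            a, b = sorted(random.sample(range(n_cells), 2))
            segment = p1[a:b]
            rest = [g for g in p2 if g not in segment]
            return rest[:a] + segment + rest[a:]

        def mutate(perm):
            # Swap two positions with probability mutation_rate.
            if random.random() < mutation_rate:
                i, j = random.sample(range(n_cells), 2)
                perm[i], perm[j] = perm[j], perm[i]
            return perm

        pop = [random.sample(range(n_cells), n_cells) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)               # rank by wire length
            survivors = pop[:pop_size // 2]  # truncation selection
            children = []
            while len(survivors) + len(children) < pop_size:
                p1, p2 = random.sample(survivors, 2)
                child = (order_crossover(p1, p2)
                         if random.random() < crossover_rate else p1[:])
                children.append(mutate(child))
            pop = survivors + children
        return min(pop, key=cost)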
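The PRSA description breaks off in the source, so the following Boltzmann trial sketch follows the published PRSA formulation [9] rather than the authors' exact variant: two candidates compete, and the probability that the better one wins grows as the temperature falls.

    import math
    import random

    def boltzmann_trial(cost_i, cost_j, temperature):
        # Candidate i wins with logistic probability: at high temperature the
        # outcome is close to a coin flip, while at low temperature the
        # candidate with the lower wire length wins almost surely.
        p_i_wins = 1.0 / (1.0 + math.exp((cost_i - cost_j) / temperature))
        return "i" if random.random() < p_i_wins else "j"

Which pairs enter such trials (parent versus child, random pairs, etc.) is exactly the selection-strategy choice mentioned above.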

