RUHUL SARKER
JOARDER KAMRUZZAMAN
GSCIT, Monash University, Churchill, Victoria, Australia
CHARLES NEWTON
School of ITEE, University of New South Wales, ADFA Campus
Canberra 2600, Australia
1. Introduction
Over the last two decades, Evolutionary Computation (EC), which is based on
the principle of evolution (survival of the fittest), has shown tremendous success in
solving complex real-world problems.1,2 Engineers and scientists with quite different
backgrounds have worked together to tackle some of the most difficult problems
using a promising set of stochastic search algorithms: Evolutionary Algorithms
(EAs). Evolutionary computation is one of the current hot topics in the areas of
computer science and optimization. Within the last decade, solving optimization
problems using EAs has become an important application area of the evolutionary
computation field, owing in part to the parallelism of EAs and to intelligent
properties such as self-adaptation.
2. What is EC?
Evolutionary computation is the study of computational systems which use ideas
from natural evolution and adaptation. Many evolutionary computation techniques
get their ideas and inspirations from molecular evolution, population genetics, im-
munology, etc. Some of the terminology used in evolutionary computation has been
borrowed from these fields to reflect their connections, such as genetic algorithms,
crossover, mutation, genotypes, phenotypes, species, etc. Although the research into
evolutionary computation could help us understand some biological phenomena
better, its primary aim is not to build biologically plausible models. The primary
aim is to study and develop robust and efficient computational systems for solving
complex real-world problems.
Evolutionary algorithms (EAs) are known as stochastic search procedures. Be-
fore we describe the general framework of EAs, let us present the general framework
of a classical stochastic search procedure:

(1) Generate an initial solution;
(2) REPEAT
    (a) Perturb the current solution to obtain a new solution;
    (b) Accept or reject the new solution according to an acceptance criterion;
(3) UNTIL a stopping criterion is satisfied.

Different algorithms use different strategies for the generation of the initial so-
lution, and for the perturbation and acceptance of the current solution.
EAs have two prominent features which distinguish them from other search
algorithms. First, they are all population-based (they consider a set of possible
solution points, in contrast to the single solution point of conventional
optimization). Second, there is communication and information exchange among
the individuals in a population. A general framework of evolutionary algorithms
can be summarized as follows.
(1) Set i = 0;
(2) Generate the initial population P (i) at random;
(3) REPEAT
(a) Evaluate the fitness of each individual in P (i);
(b) Select parents from P (i) based on their fitness;
(c) Apply search operators to the parents and produce generation
P (i + 1);
(d) Set i = i + 1;
(4) UNTIL the population converges or the maximum time is reached.
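To make the framework concrete, the following is a minimal Python sketch of steps (1)-(4) above. The sphere fitness function, binary tournament selection, uniform crossover and bounded Gaussian mutation are illustrative choices of ours, not operators prescribed by this paper.

    import random

    POP_SIZE, DIM, GENERATIONS = 30, 5, 100
    LOW, HIGH = -10.0, 10.0

    def fitness(x):                       # step (a): evaluate an individual
        return sum(v * v for v in x)      # sphere function (to be minimized)

    def select(pop):                      # step (b): binary tournament selection
        a, b = random.sample(pop, 2)
        return a if fitness(a) < fitness(b) else b

    def crossover(p1, p2):                # step (c): uniform crossover
        return [random.choice(pair) for pair in zip(p1, p2)]

    def mutate(x, rate=0.1):              # step (c): bounded Gaussian mutation
        return [min(HIGH, max(LOW, v + random.gauss(0.0, 0.5)))
                if random.random() < rate else v for v in x]

    # steps (1) and (2): i = 0, random initial population P(i)
    population = [[random.uniform(LOW, HIGH) for _ in range(DIM)]
                  for _ in range(POP_SIZE)]

    # steps (3) and (4): evolve until the generation budget is exhausted
    for _ in range(GENERATIONS):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POP_SIZE)]

    best = min(population, key=fitness)
    print(best, fitness(best))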
The search operators (such as crossover and mutation) and the selection
procedures will be discussed in a later section. It can be noted here that
there is no strict rule requiring crossover and mutation to generate new solutions
in evolutionary computation.6 In principle, one may use any search procedure to
generate new solutions that increases the probability of finding a global opti-
mum. This is also true for selection. This gives an opportunity for researchers in
Operations Research and Optimization to contribute to the field of evolutionary
computation.
Evolutionary computation can be divided into four major branches, which are
briefly discussed in the following section. One may group together
evolutionary programming and genetic algorithms, since they use almost the same
computational framework. This is the view taken by this paper.
Conventional optimization techniques are supported by well-developed theories
for convergence and optimality. In this section, the suitability of using EC for op-
timization will be explored, indicating the advantages and disadvantages from the
optimization point of view.
Properties of Functions
Consideration of the convexity/concavity and continuity of functions is not neces-
sary in EC; however, these are a real concern in most mathematical programming
techniques. In EC, the initial population (which is a set of solutions) is generated
randomly and then the subsequent generations (a new set of solutions) are usually
produced numerically (using simple rules) from their previous populations. It does
not matter whether the function is differentiable or not. Although both domains
share the same generate-and-test concept, the EC search procedure is not directly
comparable (there is no clear similarity) to conventional derivative-free search
methods such as the cyclic coordinate method, the method of Hooke and Jeeves,
and Rosenbrock's method (for these methods see Ref. 15). However, there is no
restriction on incorporating a conventional search procedure into EC (say, hybrid
EC) to make the algorithm more efficient.
Infeasibility
The mathematical programming techniques cannot, apparently, help in decision
making when the problem is infeasible. However, the incorporation of penalty
functions allows the identification of the infeasible constraints and hence helps to
support decision making even when the problem is infeasible.
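A penalty approach can be sketched as follows. Infeasible solutions are not discarded; instead they are scored by the objective plus a weighted sum of constraint violations, and the violated constraints can be listed for the decision maker. The objective, constraints and penalty weight below are hypothetical.

    def objective(x):                     # hypothetical objective to minimize
        return (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2

    def constraints(x):                   # hypothetical constraints, g(x) <= 0 form
        return [x[0] + x[1] - 10.0,       # x0 + x1 <= 10
                1.0 - x[0]]               # x0 >= 1

    def penalized_objective(x, weight=1000.0):
        # Infeasible points are penalized in proportion to their violation,
        # so the search can still rank them instead of rejecting them outright.
        violations = [max(0.0, g) for g in constraints(x)]
        return objective(x) + weight * sum(violations)

    def violated_constraints(x, tol=1e-9):
        # The diagnostic mentioned in the text: which constraints does a
        # best-found (possibly infeasible) solution violate?
        return [i for i, g in enumerate(constraints(x)) if g > tol]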
Domain Knowledge
It is not difficult to implement EAs, because they do not require rich domain
knowledge. However, domain knowledge can be incorporated into EC techniques.6
For example, concepts from conventional search techniques can replace the search
component in EAs. In addition, the final EA solutions can be refined using lo-
cal search techniques, as sketched below. The generation of the initial population
and the selection procedure can also be guided by domain knowledge.
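As an illustration of refining a final EA solution, the coordinate-wise descent below is our own illustrative choice of local search, not a method from the paper.

    import itertools

    def local_refine(x, f, step=0.1, rounds=50):
        # Greedy coordinate-wise descent started from an EA's best solution x;
        # f is the objective to minimize, step and rounds are illustrative.
        best, best_val = list(x), f(x)
        for _ in range(rounds):
            improved = False
            for i, delta in itertools.product(range(len(best)), (step, -step)):
                trial = list(best)
                trial[i] += delta
                if f(trial) < best_val:
                    best, best_val, improved = trial, f(trial), True
            if not improved:
                step /= 2.0               # shrink the neighbourhood and retry
        return best, best_val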
Robustness of EC Algorithms
Theoretically, one common structure of evolutionary algorithms can be imple-
mented for many single-objective constrained mathematical programming models.
Any single penalty function-based evolutionary algorithm can be used to solve many
linear, integer and different classes of nonlinear programming models.17,20 However,
the algorithm may require different algorithmic parameters to solve different mod-
els. On the other hand, we need to learn a number of different and complex tech-
niques to solve these models using conventional optimization theory. For example,
the simplex method and interior point method for linear programs,21,22 and the
branch and bound method and cutting plane method for integer programs.23 There
are many different methods available for different classes of nonlinear programs.24
Computational Time
The most favourable point of EAs is that they provide quick approximate solu-
tions. In most cases, EAs make a significant improvement within the first few
generations.
Multiobjective Optimization
EAs are more suitable for multiobjective optimization because of their capability
of simultaneously optimizing conflicting objective functions and generating a
number of alternative solutions in a single run.27–29 Some multi-objective problems
can be solved using conventional optimization techniques by developing a composite
objective, in which a linear combination of weights (assigned to the individual ob-
jective functions) provides the alternative solutions on the Pareto-frontier.30 These
weighting methods require one run per weight setting; moreover, if the Pareto-
frontier is non-uniform (higher density of solutions in one region than in others),
or the functions are complex, periodic or multimodal, then EAs are the better
option.31
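The weighted-sum approach can be sketched as follows: each weight vector yields one candidate Pareto-optimal solution, and sweeping the weights traces out alternative solutions. The two conflicting objectives and the crude grid search below are hypothetical.

    def f1(x): return x ** 2              # first objective
    def f2(x): return (x - 2.0) ** 2      # second, conflicting objective

    def weighted_sum_solution(w1, w2):
        # Minimize the composite objective over a crude grid of candidates.
        grid = [i / 100.0 for i in range(-100, 301)]
        return min(grid, key=lambda x: w1 * f1(x) + w2 * f2(x))

    # Sweeping the weights produces a set of alternative (Pareto) solutions.
    for w in range(11):
        w1 = w / 10.0
        x = weighted_sum_solution(w1, 1.0 - w1)
        print(f"w1={w1:.1f}  x={x:.2f}  f1={f1(x):.2f}  f2={f2(x):.2f}")

Note that a weighted sum can reach only the convex portions of the Pareto-frontier, which is a further reason why EAs are often preferred when the frontier is non-uniform or non-convex.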
Starting Solution
Unlike conventional search techniques, EC methods do not require any specialized
methodology to generate the initial solutions (for example, the North-West Corner
Rule for the Transportation Problem). The initial population is usually generated
randomly in EC.
Harder Problems
In conventional optimization domains, integer programming is more complex than
linear programming, and nonlinear programming is even harder. In the EC domain,
hard problems are not much harder than easy ones. When solving op-
timization problems using EC, integer requirements, a combinatorial nature,
nonlinearity and other such properties do not add much complexity (in general
terms) compared to the linear programming problem. However, solving LPs using
EC is not appropriate, because of the power of existing conventional optimization
algorithms. In the case of multi-modal problems, there is less chance of being
trapped in local optima32 when using EC.
We have identified the following cases where EC can contribute significantly to
the field of optimization and operations research.
6. Disadvantages of Using EC
There are several drawbacks to using EC based methodology for solving optimiza-
tion problems. They are briefly discussed below.
Parameter Tuning: The performance of EAs depends on the setting of a number
of algorithmic parameters, and there is no systematic way to choose appropriate
parameter values for a given problem.38
Convergence of EAs: The convergence pattern and time complexity of EC-based
techniques for solving optimization problems are also case dependent. Although
some bounds on the convergence rate of GAs have been derived for specific
cases, these cannot be generalized to all problems or to all GAs applied to
those problems.39–41
Mathematical Insight: EAs do not provide insight into the mathematical structure
of the optimization problems being solved, which could otherwise provide
additional information to the decision makers.
Sensitivity Analysis: Sensitivity analysis may be performed for all types of
models over the desired range, though it would not be as efficient as the
sensitivity analysis available with LP.
In one reported comparison, GA outperformed Simulated Annealing (SA) on all
the test problems in terms of quality of solutions; however, SA required much
shorter computational time than GA. Varela et al.25 used GAs for solving job
shop scheduling problems with bottlenecks. For a given problem, they compared
the solutions produced after three different numbers of generations (Table 7,
Ref. 25). The percentage deviations of the fitness from the known best solution
are 0.8330%, 0.0969% and 0.01259% for 50, 200 and 1,000 generations
respectively. The first 50 generations provide a very good approximate solution
(within 1%), which is a strong argument for using GAs for quick approximate
solutions. It is well known for EAs that most of the improvement is made within
the first few generations, and that progress is very slow after 50 generations.
Chu and Beasley44 developed a genetic algorithm for solving the generalized
assignment problem, a well-known NP-complete combinatorial optimization prob-
lem. They mainly investigated the quality of the solutions produced rather than
the computational expense. Although the GA is superior in terms of solution
quality, the computational times of some heuristics may be significantly shorter
than those reported for the GA. Beasley and Chu45 reported similar results with
a GA for the set covering problem, for which no polynomial-time algorithm is
known.
Integer programming problems are known as hard problems in the Oper-
ations Research/Optimization literature. Sakawa and Kato26 proposed a GA for
solving such problems and compared its performance with the branch-and-bound
(B&B) technique. In one set of experimental studies, they considered five instances
of a multidimensional 0–1 knapsack problem with three different degrees of strict-
ness of the constraints. The sizes of the problem instances are: (i) 30 variables
and 10 constraints, (ii) 50 variables and 20 constraints, (iii) 100 variables and 30
constraints, (iv) 150 variables and 40 constraints and (v) 200 variables and 50 con-
straints. We compare the objective function values and the computational times of
these problem instances for a degree of strictness of the constraints equal to 0.75.
The details of the parameters used for the GA, the LP package, the computer
platform and other information can be found in Ref. 26. As shown in Table 1, the
GA provides optimal solutions for the smaller problems (problem instances 1 and
2) and the quality of solution deteriorates (slowly) as the size of the problem
instance increases. However, the computational time of the GA, relative to B&B,
becomes increasingly attractive as the size of the problem instance increases. This
is a very favourable point for using EAs in solving hard optimization problems.
However, further experimentation is required to generalize these findings.

Table 1. Comparing GA and the branch-and-bound technique for integer programming.
As we can see in Table 1, the computational times required by the GA for
problem instances 1, 2, 3, 4 and 5 are 69.33, 16.50, 0.422, 0.021 and 0.028 times
the computational time required by the B&B technique, respectively.
We consider a two-stage transportation problem in which m sources, n inter-
mediate/exchange points and k destinations exist. The sources have limited supply,
the exchange points have limited capacity to handle the supplied goods, and the
destinations have known demands. All the supplies from the sources are transported
to the destinations via the exchange points. The problem is to minimize the overall
transportation cost while satisfying the supply, capacity, demand and flow
constraints. This problem can be formulated as a simple linear program (LP) as
follows.
Notations: Cij is the unit transportation cost from source i to exchange point j,
and Cjk the unit cost from exchange point j to destination k; xij and xjk are the
corresponding quantities transported; Si is the supply available at source i, CPj
the capacity of exchange point j, and Dk the demand of destination k.

Mathematical Model:
Minimize  Z = \sum_i \sum_j C_{ij} x_{ij} + \sum_j \sum_k C_{jk} x_{jk}

subject to

\sum_j x_{ij} \le S_i                    for all i   (supply)
\sum_i x_{ij} \le CP_j                   for all j   (capacity)
\sum_j x_{jk} \ge D_k                    for all k   (demand)
\sum_i x_{ij} - \sum_k x_{jk} \ge 0      for all j   (flow)
x_{ij}, x_{jk} \ge 0.
The objective of the problem is to minimize the overall transportation cost. The
first, second and third constraints ensure the supply limitation, exchange point
capacity and demand requirements, respectively. The fourth (flow) constraint
indicates that the total amount shipped out of each exchange point in the second
stage of the network must be less than or equal to the amount shipped into it in
the first stage.
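For reference, this LP can also be solved directly with an off-the-shelf solver. The sketch below uses scipy's linprog (not the LINDO/LINGO package used later in this paper) on a small made-up instance; the cost, supply, capacity and demand data are ours, not the paper's test problems. Variables are ordered [x_ij (m*n) | x_jk (n*K)].

    import numpy as np
    from scipy.optimize import linprog

    m, n, K = 3, 2, 3                               # sources, exchange points, destinations
    C1 = np.array([[4., 6.], [5., 3.], [2., 7.]])   # C_ij (illustrative)
    C2 = np.array([[3., 5., 4.], [6., 2., 3.]])     # C_jk (illustrative)
    S  = np.array([30., 40., 20.])                  # supplies S_i
    CP = np.array([60., 50.])                       # capacities CP_j
    D  = np.array([25., 35., 30.])                  # demands D_k

    c = np.concatenate([C1.ravel(), C2.ravel()])    # objective coefficients
    A, b = [], []
    for i in range(m):                              # sum_j x_ij <= S_i
        row = np.zeros(m * n + n * K); row[i * n:(i + 1) * n] = 1
        A.append(row); b.append(S[i])
    for j in range(n):                              # sum_i x_ij <= CP_j
        row = np.zeros(m * n + n * K); row[j:m * n:n] = 1
        A.append(row); b.append(CP[j])
    for k in range(K):                              # sum_j x_jk >= D_k  ->  -sum <= -D_k
        row = np.zeros(m * n + n * K); row[m * n + k::K] = -1
        A.append(row); b.append(-D[k])
    for j in range(n):                              # flow: sum_k x_jk <= sum_i x_ij
        row = np.zeros(m * n + n * K)
        row[m * n + j * K:m * n + (j + 1) * K] = 1
        row[j:m * n:n] = -1
        A.append(row); b.append(0.0)

    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=(0, None), method="highs")
    print(res.fun, res.x.round(2))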
Although the quantities transported (the decision variables of the model) must
take integer values, the structure of the LP ensures integer values for the variables,
provided the supply, capacity and demand data are integers, without declaring any
integer variables in the model. In this case study, we intend to solve this model
using a GA. Previous studies found that the quality of GA solutions deteriorates
with the tightness of the constraints.17,20 We have chosen this problem mainly for
two reasons: (i) the problem generates many tight constraints, so it is not an easy
test problem for GA application, and (ii) the mathematical model can be converted
from a simple linear program into a difficult integer programming test problem
with a very minor modification.
7.2. GA application
First we consider a simple real-coded GA, in which the variable values are
generated as real/integer numbers rather than as binary strings. Binary coding is
usually regarded as the most suitable encoding because it maximizes the number
of schemata being searched implicitly,5,12 but there are many examples in the
evolutionary computation literature where alternative representations have
produced algorithms with greater efficiency and optimization effectiveness on
identical problems (see, e.g., the articles by Back and Schwefel46 and Fogel and
Stayton,47 among others). Davis48 and Michalewicz7 note that for many
applications real-valued or other representations may be chosen to advantage over
binary coding. There does not appear to be any general benefit in maximizing
implicit parallelism in evolutionary algorithms, and therefore forcing problems
to fit a binary representation is not recommended. As shown experimentally by
Sarker et al.,17,20 real-coded GAs perform better than binary-coded GAs for
integer programming problems.
The following characteristics were considered in running the Simple GA (SGA):
Algorithm 1: Simple GA
• Heuristic crossover: the operator generates a single offspring x3 from two parents
x1 and x2 according to the rule

x3 = round[r · (x2 − x1) + x2],

where r is a random number between 0 and 1, and the parent x2 is no worse than
x1. When the generated value x3 is out of bounds (the upper and lower bounds),
it is set equal to the violated bound. Note that x1 and x2 are "parent variables",
not complete parents, and x3 is rounded to an integer after crossover. (A sketch
of this operator appears after the list below.)
• Probability of crossover = 1.0, and probability of mutation = up to 0.10.
• Penalty method for constraint handling: dynamic.
• Selection for crossover: first rank all the individuals based on their objective
function values; then select, at random, one individual from the top 10 and two
from the rest. Crossover is performed between the first individual and the better
of the other two.
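The heuristic crossover and the rank-based selection above can be sketched as follows; the variable bounds, the minimization convention and the population-size assumption (more than 12 individuals) are ours.

    import random

    LOWER, UPPER = 0, 100                 # illustrative variable bounds

    def heuristic_crossover(x1, x2):
        # x2 comes from the parent that is no worse than the parent giving x1;
        # the offspring variable is rounded and clipped to its bounds.
        r = random.random()
        x3 = round(r * (x2 - x1) + x2)
        return min(UPPER, max(LOWER, x3))

    def select_parents(population, fitness):
        # Rank by objective value (minimization assumed), pick one individual
        # from the top 10 and two from the rest; cross the first with the
        # better of the other two.
        ranked = sorted(population, key=fitness)
        first = random.choice(ranked[:10])
        a, b = random.sample(ranked[10:], 2)
        second = a if fitness(a) < fitness(b) else b
        return first, second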
The outputs of the algorithm were not satisfactory, even for small problems with 3
sources, 2 exchange points and 3 destinations. We ran the algorithm 30 times, each
time for 300 generations. Most of the time, the algorithm produced too many infea-
sible solutions (sometimes all) and ended up with either infeasible or suboptimal
(very poor feasible) solutions. To overcome this problem, we modified the algorithm
to enforce the feasibility of the offspring in every generation. We refer to the
modified algorithm as the Modified GA (MGA).
Algorithm 2: Modified GA
• Population size 50.
• Initial population was generated randomly ensuring their feasibility.
• Heuristic crossover (as Algorithm 1) and no mutation.
• Probability of crossover = 1.0.
• Constraint handling: ensure feasibility via a repair mechanism.
• Selection for crossover: first rank all the individuals based on their objective
function values; then select, at random, one individual from the top 10 and two
from the rest. Crossover is performed between the first individual and the better
of the other two.
• Crossover: once the parents are selected for crossover, choose a variable at random
(for example, a path or supply amount from a source to an exchange point)
and perform crossover. If the offspring variable is within the current range
(remaining supply and capacity), we accept it and update the available supply
and capacity for the other variables; otherwise, we set it to the violated extreme
value. Once the supply of a source is exhausted, all unassigned paths from that
source are assigned zero shipment without performing the crossover operation. If
any positive amount is left at a source after assigning all the paths originating
from it except one, then the entire amount is allocated to that path, without any
crossover, provided it is less than the remaining exchange point capacity;
otherwise the minimum of the available supply and the remaining exchange point
capacity is allocated.
• Ensuring feasibility: after performing crossover following the above procedure,
the solution could be infeasible in some cases, because of the allocations made
without crossover mentioned above and the zero allocations after crossover (when
the computed allocation is less than or equal to zero). These allocations may
violate the supply constraints, and similar violations may occur on the
destination side. In such cases, we apply a repair mechanism to ensure the
feasibility of the final allocation.
• Repair mechanism: identify the sources that have supplied less than their avail-
able amounts and determine the unassigned amounts available for supply.
Generate a list of exchange points which can receive more supplies. Randomly
select an exchange point from that list and a source with unassigned goods, and
assign as much as possible to that exchange point. Update the lists and repeat
until there is no infeasibility. (A simplified sketch of this step appears after
this list.)
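A simplified sketch of the repair step, for the first stage (source to exchange point) of an allocation matrix, is given below. The data layout is ours, integer data are assumed (as in the test problems), and the destination side would be repaired symmetrically.

    import random

    def repair_first_stage(x, S, CP):
        # x is an m x n allocation matrix (source -> exchange point); S and CP
        # are the supply and capacity vectors. With integer data, every pass
        # makes strictly positive progress, so the loop terminates.
        m, n = len(S), len(CP)
        while True:
            short = [i for i in range(m) if sum(x[i]) < S[i]]
            open_pts = [j for j in range(n)
                        if sum(x[i][j] for i in range(m)) < CP[j]]
            if not short or not open_pts:
                return x                  # nothing left to repair
            i = random.choice(short)      # source with unassigned goods
            j = random.choice(open_pts)   # exchange point with spare capacity
            residual_supply = S[i] - sum(x[i])
            residual_cap = CP[j] - sum(x[r][j] for r in range(m))
            x[i][j] += min(residual_supply, residual_cap)  # assign as much as possible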
The test problem data were generated randomly, fulfilling the assumptions that
(i) the total supply equals the total demand (a balanced problem) and (ii) the
total capacity of the exchange points is greater than the total supply. Unbalanced
problems can be solved using the same GA either (i) by balancing the problem
with a dummy source or destination, as required, or (ii) by making a minor change
to the algorithm. The numbers of variables and constraints involved in the
mathematical models of the test problems are also shown in Table 3.
We solved the corresponding linear programming models using the commercial
optimization package LINDO/LINGO. The LP objective function values and the
outputs from the GA are reported in Table 4. The experiments were run on an
IBM PC with a Pentium 4 (1.8 GHz) and 256 MB RAM. The GA "Overall best"
column reports the best objective function value obtained from 30 individual runs,
and the "30-runs average" is the average of the best objective function values over
the 30 runs.
From columns 5 and 6 of Table 4, it is clear that MGA provides optimal
solutions for the smaller problems (problems 1 and 2), but the quality of solution
deteriorates as the search space grows (larger problems). The solutions of the
larger problems are not acceptable, owing to their high percentage deviation from
the optimal solution. We observed that the quality of solutions is highly dependent
on the quality of the initial population, and a fixed population size is more likely
to yield a good initial population when the search space is small (smaller
problems).
Table 4. LP and GA results for the test problems.

Prob.  LP Objective  GA Overall  GA 30-runs  % Diff. (GA    % Diff. (GA       GA 30-runs Average
No.    (LINDO)       Best        Average     best vs. LP)   average vs. LP)   (Time in seconds)
1      11700         11700       11700       0              0                   0.019
2      67900         67900       67900       0              0                   0.047
3      62500         62500       62500       0              0                   5.82
4      147200        147200      147575      0              0.26                8.39
5      141700        142800      148555      0.7763         4.84               17.26
6      86750         87700       92505       1.0951         6.63               69.77
The feasible initial solutions were generated using the following procedure:

(i) Set up a list of sources with positive supply and destinations with unfulfilled
demand.
(ii) Select a source or destination at random from the list, and find the cheapest
route linked to a transhipment point from that source/destination; ties are
broken arbitrarily. Assign the maximum possible amount to that route, and
update the involved supply/demand and the available capacity of the
transhipment point. If the supply is exhausted (or the demand is fulfilled),
remove the source (or destination) from the list.
(iii) Repeat (ii) until all supplies are exhausted and all requirements are fulfilled.
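A sketch of this initialization heuristic, restricted to the source side of the network (the destination side is handled symmetrically), is given below; the data layout and this one-sided simplification are ours.

    import random

    def greedy_initial_allocation(C1, S, CP):
        # C1[i][j] is the source -> exchange-point cost; returns a feasible
        # m x n first-stage allocation following steps (i)-(iii) above.
        m, n = len(S), len(CP)
        x = [[0] * n for _ in range(m)]
        supply, cap = list(S), list(CP)
        sources = [i for i in range(m) if supply[i] > 0]       # step (i)
        while sources:
            i = random.choice(sources)                         # step (ii)
            open_pts = [j for j in range(n) if cap[j] > 0]
            if not open_pts:
                break              # no capacity left (cannot occur if sum(CP) >= sum(S))
            j = min(open_pts, key=lambda p: C1[i][p])          # cheapest route; ties arbitrary
            amount = min(supply[i], cap[j])                    # maximum possible amount
            x[i][j] += amount
            supply[i] -= amount
            cap[j] -= amount
            if supply[i] == 0:
                sources.remove(i)  # exhausted sources leave the list
        return x                                               # step (iii)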
The IP test problems were generated from the LP models by imposing integer
restrictions randomly in different places for the above seven test problems. The
times required by the conventional IP technique, in seconds, for the first six test
problems are 1, 6, 21, 29, 3,302 and 1,088 respectively. The last problem was
interrupted intentionally, since no feasible solution had been found within 72 hours.
As we can see, the computational time for the GA is much lower than that of
conventional IP techniques for the test problems considered in this paper. This
investigation supports the view that GAs can do better than conventional
optimization techniques for integer programming problems if we are prepared to
sacrifice some solution quality.
8. Conclusions
In this paper, we introduced evolutionary computation and analyzed its
similarities to, and differences from, conventional search algorithms. The reasons
for using EC in solving optimization problems, and the situations in which EC
could be beneficial, were briefly discussed. A brief review of different EAs and
their components was provided, and the use of EAs was demonstrated by solving
a case problem.

We developed a GA (the Modified GA, or MGA) for a two-stage transportation
problem, which is a standard linear programming problem. The developed GA
provides better solutions than the simple GA. The experimental results suggest
that the running time of MGA grows roughly polynomially with problem size;
however, the quality of the solutions deteriorates as the problems grow larger. The
LP problems were then modified to generate IP test problems. Although
LINDO/LINGO solves the IP test problems optimally, it takes much longer
computational time than MGA, which indicates that GAs could be an appropriate
methodology for integer programming problems.
References
1. M. Fischer and Y. Leung, GeoComputational Modelling Techniques and Applications
(Springer-Verlag, Berlin, 2001).
2. W. Barnett, C. Chiarella, S. Keen, R. Marks and H. Schnabl, Complexity and
Evolution (Cambridge University Press, 2000).
3. X. Yao (ed.), Evolutionary Computation: Theory and Application (World Scientific
Publ. Co., Singapore, 1999).
4. M. Patel, V. Honavar and K. Balakrishnan, Advances in the Evolutionary Synthesis
of Intelligent Agents (The MIT Press, Cambridge, MA, 2001).
5. D. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning
(Addison-Wesley, MA, 1989).
6. X. Yao, Evolutionary computation: A gentle introduction, Evolutionary Optimization,
eds. R. Sarker, M. Mohammadian and X. Yao (Kluwer, USA, 2002) 27–56.
7. Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, 3rd
edn. (Springer-Verlag, New York, 1996).
8. H.-P. Schwefel, Numerical Optimization of Computer Models (Wiley, Chichester,
1981).
9. H.-P. Schwefel, Evolution and Optimum Seeking (Wiley, New York, 1995).
16. J. Seydel and D. L. Olson, Bids considering multiple criteria, J. Constr. Eng. Man-
agement 116, 4 (1990) 609–623.
17. R. Sarker, T. Runarsson and C. Newton, Genetic algorithms for solving a class of
constrained nonlinear integer programs, Int. Trans. Operat. Res. 8, 1 (2001) 61–74.
18. D. Whitley, V. S. Gordon and K. Mathias, Lamarckian evolution, the Baldwin effect
and function optimization, Proc. 3rd Conf. Parallel Problem Solving from Nature (1996).
19. Z. Michalewicz and M. Schmidt, Evolutionary algorithms and constrained optimiza-
tion, Chap. 3 Evolutionary Optimization, eds. R. Sarker, M. Mohammadian and
X. Yao (Kluwer, USA, 2001) 57–86.
20. R. Sarker, T. Runarsson and C. Newton, A constrained multiple raw materials man-
ufacturing batch sizing problem, Int. Trans. Operat. Res. 8, 2 (2001) 121–138.
21. F. Hillier and G. Lieberman, Introduction to Operations Research (McGraw-Hill,
Boston, 2001).
22. G. Sierksma, Linear and Integer Programming: Theory and Practice (Marcel Dekker,
Inc., New York, 1996).
23. R. Martin, Large Scale Linear and Integer Optimization (Kluwer Academic Publish-
ers, Boston, 1999).
24. H. Eiselt, G. Pederzoli and C. Sandblom, Continuous Optimization Models
(de Gruyter, Berlin, Germany, 1987).
25. R. Varela, C. Vela, J. Puente and A. Gomez, A knowledge-based evolutionary strategy
for scheduling problems with bottlenecks, Eur. J. Operat. Res. 145 (2003) 57–71.
26. M. Sakawa and K. Kato, Genetic algorithms with double strings for 0-1 programming
problems, Eur. J. Operat. Res. 144 (2003) 581–597.
27. R. Sarker, K. Liang and C. Newton, A new evolutionary algorithm for multiobjective
optimization, Eur. J. Operat. Res. 140 (2002) 12–23.
28. E. Zitzler, K. Deb and L. Thiele, Comparison of multiobjective evolutionary
algorithms: Empirical results, Evo. Comput. 8, 2 (2000) 173–195.
29. E. Zitzler and L. Thiele, Multiobjective evolutionary algorithms: A com-
parative case study and the strength Pareto approach, IEEE Trans. Evol.
Comput. 3, 4 (1999) 257–271.
30. K. Miettinen, Nonlinear Multiobjective Optimization (Kluwer’s International Series
in OR/MS, 1999).
31. K. Deb, Multi-objective genetic algorithms: Problem difficulties and construction of
test problems, Evol. Comput. 7, 3 (1999) 205–230.
32. K.-H. Liang, Evolutionary Optimization with Self-Adaptation and Landscape Approx-
imation, Ph.D Thesis, School of Computer Science, UNSW@ADFA, Canberra, Aus-
tralia, 2000.
33. P. Wide and H. Schellwat, Implementation of a genetic algorithm for routing an
autonomous robot, Robotica 15, 2 (1997) 207–211.
34. A. P. Kosoresow and M. P. Johnson, Finding worst-case instances of, and lower bounds
for, online algorithms using genetic algorithms, AI 2002: Advances in Artificial In-
telligence, eds. B. McKay and J. Slaney, Lecture Notes in Artificial Intelligence 2557
(Springer, Berlin, 2002) 344–355.
35. K. De Jong, Evolving in a changing world, Foundations of Intelligent Systems, eds.
Z. Ras and A. Skowron, Lecture Notes in Artificial Intelligence 1609 (Springer, 1999)
513–519.
36. J. Grefenstette, Evolvability in dynamic fitness landscape: A genetic algorithm
approach, IEEE Congr. Evol. Comput. (1999) 2031–2038.
37. R. Morrison and K. De Jong, A test problem generator for non-stationary environ-
ments, IEEE Congr. Evol. Comput. (1999) 2047–2053.
38. A. Eiben, R. Hinterding and Z. Michalewicz, Parameter control in evolutionary
algorithms, IEEE Trans. Evol. Comput. 3, 2 (1999) 124–141.