March 2, 2004 9:40 WSPC/157-IJCIA 00105

International Journal of Computational Intelligence and Applications
Vol. 3, No. 4 (2003) 311–330
© Imperial College Press

EVOLUTIONARY OPTIMIZATION (EvOpt): A BRIEF REVIEW AND ANALYSIS

RUHUL SARKER
School of ITEE, University of New South Wales, ADFA Campus
Canberra 2600, Australia
ruhul@cs.adfa.edu.au

JOARDER KAMRUZZAMAN
GSCIT, Monash University, Churchill, Victoria, Australia

CHARLES NEWTON
School of ITEE, University of New South Wales, ADFA Campus
Canberra 2600, Australia

Int. J. Comp. Intel. Appl. 2003.03:311-330. Downloaded from www.worldscientific.com
by FUDAN UNIVERSITY on 05/04/15. For personal use only.

Received 8 March 2003
Revised 25 October 2003
Accepted 17 November 2003

Evolutionary Computation (EC) has attracted increasing attention in recent years
as a powerful set of computational techniques for solving many complex real-world
problems. The Operations Research (OR)/Optimization community is divided on the
acceptability of these techniques: one group accepts them as potential heuristics
for solving complex problems, while the other rejects them on the basis of their
weak mathematical foundations. In this paper, we discuss the reasons for using EC
in optimization. A brief review of Evolutionary Algorithms (EAs) and their
applications is provided. We also investigate the use of EAs for solving a
two-stage transportation problem by designing a new algorithm. The computational
results are analyzed and compared with conventional optimization techniques.

Keywords: Evolutionary computation; evolutionary algorithms; optimization;
search; evolutionary optimization; computational complexity.

1. Introduction
Over the last two decades, Evolutionary Computation (EC), which is based on
the principle of evolution (survival of the fittest), has shown tremendous success
in solving complex real-world problems.1,2 Engineers and scientists with quite
different backgrounds have worked together to tackle some of the most difficult
problems using a promising set of stochastic search algorithms, Evolutionary
Algorithms (EAs). Evolutionary computation is one of the current hot topics in the
areas of computer science and optimization. Within the last decade, solving
optimization problems using EAs has become an important application area of the
evolutionary computation field. Due to its parallelism and some intelligent
properties such as
self-organization, adaptation and self-learning, EC has been applied successfully
to many problems where the classical approaches are either unavailable or
generally lead to unsatisfactory results. In recent years, the interest in EC has
been growing dramatically.3,4
The great success of EC was first recognized in the 1980s, when extremely
complex optimization problems from various disciplines were solved, thus
facilitating the undeniable breakthrough of EC as a problem-solving methodology.5
This breakthrough has been reflected through the growing number of publications
in this field and a corresponding increase in specialized conferences and
journals. Despite these conferences and journals, a large amount of
application-specific work is widely scattered throughout the publications of many
diverse disciplines, including Operations Research (OR)/Optimization, and
presented at their related conferences, thus reflecting the general applicability
and success of EC methods.

Solving optimization problems using evolutionary computation is known as
Evolutionary Optimization (EvOpt). The huge popularity of EvOpt in many other
disciplines encourages us to draw it to the OR/optimization research community's
attention. As we have experienced, OR/Optimization researchers and practitioners
are divided on the use of EC methods for solving optimization problems.
OR/Optimization researchers and practitioners very often use heuristics to find
near-optimal solutions for many large and complex problems. EC methods are
basically heuristics: they do not have the strong mathematical basis of
conventional optimization techniques, and it is difficult to guarantee the
convergence of the algorithms (for all problems) in a finite number of steps. In
this paper, we introduce EC in general and analyze its similarities and
differences to conventional search techniques. We discuss the reasons for using
EC in optimization and the situations where EC could be useful. A brief review of
the different EAs and their applications is provided. We also investigate the use
of EAs for solving optimization problems. A new genetic algorithm has been
developed for solving a two-stage transportation problem. The developed GA is
discussed briefly, and the computational results are analyzed and compared with
conventional optimization techniques.
The organization of the paper is as follows. Following this introduction, Sec. 2
briefly introduces evolutionary computation as an optimization problem-solving
tool. In Sec. 3, different EAs are discussed briefly, and the issue of using EC
in optimization is explained in Sec. 4. Section 5 discusses the situations where
EC could be useful in optimization. In Sec. 6, the disadvantages of using EC are
presented. We discuss the use of EAs for solving optimization problems and a new
GA for solving a two-stage transportation problem in Sec. 7. The computational
experiences are also presented in Sec. 7. Finally, conclusions are drawn in
Sec. 8.

2. What is EC?
Evolutionary computation is the study of computational systems which use ideas
from natural evolution and adaptation. Many evolutionary computation techniques
get their ideas and inspirations from molecular evolution, population genetics,
immunology, etc. Some of the terminology used in evolutionary computation has
been borrowed from these fields to reflect these connections, such as genetic
algorithms, crossover, mutation, genotypes, phenotypes, species, etc. Although
research into evolutionary computation could help us understand some biological
phenomena better, its primary aim is not to build biologically plausible models.
The primary aim is to study and develop robust and efficient computational
systems for solving complex real-world problems.
Evolutionary algorithms (EAs) are known as stochastic search procedures. Before
we describe the general framework of EAs, let us present the general framework of
a conventional search algorithm, known as generate-and-test.6

(1) Generate an initial solution and denote it as the current solution;
(2) Generate the next solution from the current one by perturbation;
(3) Test whether the newly generated solution is acceptable:
(a) If yes, accept it as the current solution;
(b) Otherwise, keep the current solution unchanged.
(4) Go to Step (2) if the current solution is not satisfactory; stop otherwise.

Different algorithms use different strategies for the generation of the initial
solution, and for the perturbation and acceptance of the current solution.
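The four steps above can be sketched as a minimal hill-climber. The maximization
objective, the Gaussian perturbation and the stopping rule (a fixed iteration
budget) are illustrative assumptions, not part of the general framework:

```python
import random

def generate_and_test(f, initial, perturb, max_iters=1000):
    """Generic generate-and-test search for maximizing f.

    initial -- starting solution (Step 1)
    perturb -- produces a neighbouring candidate (Step 2)
    """
    current = initial
    for _ in range(max_iters):          # Step 4: fixed iteration budget
        candidate = perturb(current)    # Step 2: perturbation
        if f(candidate) >= f(current):  # Step 3: acceptance test
            current = candidate
    return current

# Illustrative use: maximize f(x) = -(x - 3)^2 starting from x = 0
random.seed(0)
best = generate_and_test(lambda x: -(x - 3) ** 2,
                         initial=0.0,
                         perturb=lambda x: x + random.gauss(0, 0.5))
```

Different choices of the perturbation and acceptance rules recover, for
instance, simulated annealing or the derivative-free local search methods
mentioned later.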
EAs have two prominent features which distinguish them from other search
algorithms. First, they are all population-based (they consider a set of possible
solution points, in contrast to the single solution point of conventional
optimization). Second, there is communication and information exchange among
individuals in a population. A general framework of evolutionary algorithms can
be summarized as follows.

(1) Set i = 0;
(2) Generate the initial population P (i) at random;
(3) REPEAT
(a) Evaluate the fitness of each individual in P (i);
(b) Select parents from P (i) based on their fitness;
(c) Apply search operators to the parents and produce the next generation
P (i + 1);
(d) Set i = i + 1;
(4) UNTIL the population converges or the maximum time is reached.
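The loop above can be sketched in the same style. Truncation selection, averaging
crossover and Gaussian mutation are used here purely as placeholders for the
operators and selection schemes discussed in Sec. 3:

```python
import random

def evolve(fitness, init_individual, crossover, mutate,
           pop_size=30, generations=50):
    """Population-based search following steps (1)-(4) above."""
    population = [init_individual() for _ in range(pop_size)]   # step (2)
    for _ in range(generations):                                # step (3)
        ranked = sorted(population, key=fitness, reverse=True)  # (a) evaluate
        parents = ranked[: pop_size // 2]                       # (b) select
        population = [                                          # (c) reproduce
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size)
        ]
    return max(population, key=fitness)

# Illustrative one-dimensional problem: maximize -(x - 3)^2
random.seed(1)
best = evolve(lambda x: -(x - 3) ** 2,
              init_individual=lambda: random.uniform(-10, 10),
              crossover=lambda a, b: (a + b) / 2,
              mutate=lambda x: x + random.gauss(0, 0.1))
```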

EAs can be regarded as a population-based version of generate-and-test search.
They use search operators like crossover and mutation to generate new solutions,
and use selection to test which solutions are better. The search operators and
the selection procedures will be discussed in a later section. It can be noted
here that there is no strict rule that crossover and mutation must be used to
generate new solutions in evolutionary computation.6 In principle, one may use
any search procedure to generate new solutions that will increase the probability
of finding a global optimum. This is also true for selection. This gives an
opportunity for researchers in Operations Research and Optimization to contribute
to the field of evolutionary computation.
Evolutionary computation can be divided into four major branches, which are
briefly discussed in the following section.

3. Different EAs: A Brief Introduction
Evolutionary computation encompasses, for historical reasons, four major
branches: evolution strategies, evolutionary programming, genetic algorithms and
genetic programming. At the philosophical level, they differ mainly in the level
at which they simulate evolution. At the algorithmic level, they differ mainly in
their representations of potential solutions and the operators used to modify the
solutions. From a computational point of view, representation and search are two
key issues. In this paper, we will briefly look at their differences at the
algorithmic level.
Before we discuss the different evolutionary algorithms, it is appropriate to
indicate that there exist different representations, search operators and
selection procedures. Different EAs use different representations for the
individuals. A good representation will make a problem easier to solve, and a
poor representation will do the opposite. There are three categories of
representations: lists, trees and graphs. For mathematical optimization problems,
we are mainly interested in the list representation. Lists cover a number of
different encodings such as binary strings, real-valued vectors, integer vectors,
symbolic (character or digit) strings, etc. The details of these representations
can be found in Ref. 7. Representation is always best designed in conjunction
with search operators. Search operators are the key players in the reproduction
of offspring. The search operators used in the literature are listed below.
• Crossover/Recombination: k-point crossover, uniform crossover, intermediate
crossover, global discrete crossover, order-based recombinations, matrix-based
recombinations, etc.
• Mutation: bit-flipping, Gaussian mutation, Cauchy mutation, random generation
of subtrees, etc.
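A few of the operators listed above can be sketched for list representations; the
parameter values (mutation rate, mutation strength) are illustrative:

```python
import random

def k_point_crossover(p1, p2, k=1):
    """k-point crossover on two equal-length list chromosomes."""
    cuts = sorted(random.sample(range(1, len(p1)), k))
    child, parents, start = [], [p1, p2], 0
    for i, cut in enumerate(cuts + [len(p1)]):
        child.extend(parents[i % 2][start:cut])  # alternate parent segments
        start = cut
    return child

def bit_flip(chromosome, rate=0.05):
    """Bit-flipping mutation for binary strings."""
    return [1 - g if random.random() < rate else g for g in chromosome]

def gaussian_mutation(vector, sigma=0.1):
    """Gaussian mutation for real-valued vectors (as in ES and EP)."""
    return [x + random.gauss(0, sigma) for x in vector]
```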
A selection scheme determines the probability of an individual being selected to
produce offspring by recombination and/or mutation. There are three major types
of selection schemes: roulette wheel selection (also known as fitness-proportional
selection), rank-based selection (linear and nonlinear) and tournament selection.
Sometimes elitist selection is used. Further details on search operators and
selection can be found in Ref. 6.
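Two of these schemes can be sketched as follows (the roulette wheel assumes
non-negative fitness values; both signatures are illustrative):

```python
import random

def roulette_wheel(population, fitness):
    """Fitness-proportional selection: pick i with probability f_i / sum(f)."""
    total = sum(fitness(ind) for ind in population)
    pick = random.uniform(0, total)
    running = 0.0
    for ind in population:
        running += fitness(ind)
        if running >= pick:
            return ind
    return population[-1]  # guard against floating-point round-off

def tournament(population, fitness, size=2):
    """Tournament selection: the best of `size` random contestants wins."""
    return max(random.sample(population, size), key=fitness)
```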

Now let us discuss the different evolutionary algorithms. Evolution strategies
(ES) were first proposed by Rechenberg and Schwefel in 1965 as a numerical
optimization technique. The original evolution strategy did not use populations;
a population was introduced into evolution strategies later.8,9 In ES,
real-valued vectors are used to represent individuals rather than binary strings.
ES usually use a deterministic selection scheme, Gaussian mutation, and discrete
or intermediate recombination. The term crossover is seldom used in the context
of ES because ES do not simulate evolution at the genetic level. Further details
on ES can be found in Refs. 8 and 9.
Evolutionary programming (EP) was first proposed by Fogel and others in the
mid-1960s as one way to achieve artificial intelligence.10 Several examples of
evolving finite state machines were demonstrated. Since the late 1980s,
evolutionary programming has also been applied to various combinatorial and
numerical optimization problems. EP is very similar to ES in algorithmic terms.
It uses real numbers as individuals, Gaussian mutation and self-adaptation. The
most noticeable differences between EP and ES are recombination and selection.
EP does not use any recombination or crossover, but uses a probabilistic
competition (i.e. a kind of tournament selection) as the selection mechanism.
Further details on EP can be found in Ref. 11. We would like to mention here
that, from the algorithmic point of view, there is no reason why EP cannot have
recombination and why ES cannot have a probabilistic selection scheme.
The current framework of genetic algorithms (GAs) was first proposed by
Holland12 and his student De Jong49 in 1975, although some of the ideas appeared
as early as 1957 in the context of simulating genetic systems.13 Genetic
algorithms were first proposed as adaptive search algorithms, although they have
mostly been used as global optimization algorithms for either combinatorial or
numerical problems. They are probably the most well-known branch of evolutionary
computation. GAs are quite different from ES and EP in terms of individual
representation and search operators. GAs emphasize the genetic encoding of
potential solutions into chromosomes and apply genetic operators to these
chromosomes. This is equivalent to transforming the original problem from one
space to another. The genetic representation is crucial to the success of GAs.
Although a simple GA uses binary representation, one-point crossover,
bit-flipping mutation and roulette-wheel selection, many variations have appeared
in the literature. In GAs, recombination (crossover) plays the major role and
mutation is only used as a background operator. Further details on GAs can be
found in Ref. 7.
A special sub-branch of genetic algorithms is Genetic Programming (GP). GP can
be regarded as an application of genetic algorithms to evolve tree-structured
chromosomes.14 Historically, these trees represent LISP programs. In GP, both
crossover and mutation are used. Since GP is not usually used in mathematical
optimization, we do not consider it further here.
In recent years, the general term evolutionary algorithms has been used by more
and more researchers to include all three major algorithms, i.e. evolution
strategies, evolutionary programming and genetic algorithms, since they use
almost the same computational framework. This is the view taken by this paper.

4. Why Use EC in Optimization?
EC has been recognized as a great tool for solving optimization problems among
Computer Science (CS) and Information Technology (IT) researchers over the last
two decades. However, OR/Optimization people find it difficult to accept EC as a
suitable tool for optimization, mainly due to the lack of a strong mathematical
basis for convergence and optimality. In this section, the suitability of using
EC for optimization will be explored, indicating the advantages and disadvantages
from the optimization point of view.
EC has some advantages over conventional mathematical programming techniques.
From our brief discussion of these advantages below, one can easily see why EC
should be used in solving optimization problems.

Properties of Functions
Consideration of the convexity/concavity and continuity of functions is not
necessary in EC; however, these are a real concern in most mathematical
programming techniques. In EC, the initial population (a set of solutions) is
generated randomly and the subsequent generations (new sets of solutions) are
usually produced numerically (using simple rules) from the previous populations.
It does not matter whether the function is differentiable or not. Although both
domains share the concept of a generate-and-test algorithm, the EC search
procedure is not directly comparable (no clear similarity) to conventional
derivative-free search methods such as the cyclic coordinate method, the method
of Hooke and Jeeves, and Rosenbrock's method (for these methods see Ref. 15).
However, there is no restriction on incorporating a conventional search
procedure into EC (say, hybrid EC) to make the algorithm more efficient.

Single Best Solution
Mathematical programming techniques provide a single best solution, which is not
always what decision makers want. For example, in the bidding problem, the
decision maker is interested in seeing other solution alternatives such as the
second best, third best and so on. This is because the solution of the
optimization problem is usually used as one of many bid evaluation criteria.16
The second best, third best and subsequent solutions can be obtained easily
using EC. The solutions can also be manipulated to fit the decision maker's
desires and qualitative goals.

Infeasibility
Mathematical programming techniques cannot, apparently, help in decision making
when the problem is infeasible. However, the incorporation of penalty functions
allows the identification of the infeasible constraints and hence helps to remove
infeasibility by revising the constraints. EC is helpful in making the problem
feasible by suggesting minimum changes in the problem structure. It is common
practice in EC to use penalty-function-based methods for all types of constrained
optimization problems, including linear programming,17 although other methods
such as repairing infeasible solutions18 and rejecting infeasible solutions19
are sometimes used.

Domain Knowledge
It is not difficult to implement EAs, because they do not require any rich domain
knowledge. However, domain knowledge can be incorporated into EC techniques.6
For example, concepts from conventional search techniques can replace the search
component in EAs. In addition, the final EA solutions can be refined using local
search techniques. The generation of the initial population and the selection
procedure can be redesigned using conventional optimization concepts.

Robustness of EC Algorithms
Theoretically, one common structure of evolutionary algorithm can be implemented
for many single-objective constrained mathematical programming models. A single
penalty-function-based evolutionary algorithm can be used to solve many linear,
integer and different classes of nonlinear programming models.17,20 However, the
algorithm may require different algorithmic parameters to solve different models.
On the other hand, we need to learn a number of different and complex techniques
to solve these models using conventional optimization theory: for example, the
simplex method and interior point method for linear programs,21,22 and the
branch-and-bound method and cutting plane method for integer programs.23 There
are many different methods available for different classes of nonlinear
programs.24

Constraint Handling and Penalty Methods
The implementation of the penalty function method in EC is comparatively
efficient. In conventional optimization, we solve the modified problem (the
unconstrained version with penalty) repeatedly, changing the penalty
parameters.15 In EC, we solve the modified problem only once, and the penalty
parameters are changed, using different rules, as the generations progress.17,20
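The generation-dependent penalty idea can be sketched as follows. The linear
schedule c·(t + 1) is one illustrative update rule, not the specific rules of
Refs. 17 and 20:

```python
def penalized_fitness(x, objective, violation, generation, c=10.0):
    """Fitness with a penalty weight that grows with the generation counter,
    so early generations explore and later generations enforce feasibility.
    The linear schedule c * (generation + 1) is illustrative only."""
    return objective(x) - c * (generation + 1) * violation(x)

# Illustrative constraint x <= 5 for maximizing f(x) = x
objective = lambda x: x
violation = lambda x: max(0.0, x - 5.0)

early = penalized_fitness(6.0, objective, violation, generation=0)  # mild penalty
late = penalized_fitness(6.0, objective, violation, generation=9)   # harsh penalty
```

The same penalized fitness is evaluated inside a single evolutionary run, so the
modified problem is solved only once while the penalty weight changes with the
generations.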

Exploration and Exploitation
EAs are a class of general-purpose (domain-independent) search methods which
strike a remarkable balance between exploration and exploitation of the search
space (see p. 15, Ref. 7). This property helps to improve solutions by escaping
from local optima, and is particularly useful when solving multi-modal problems.

Computational Time
The most favourable point of EAs is that they provide quick approximate
solutions. In most cases, EAs make a significant improvement within the first
few generations.25 For many hard problems, EC-based techniques provide
near-optimal solutions within reasonable computational time.26

Multiobjective Optimization
EAs are more suitable for multiobjective optimization because of their capability
of simultaneously optimizing conflicting objective functions and generating a
number of alternative solutions in a single run.27–29 Some multi-objective
problems can be solved using conventional optimization techniques by developing
a composite objective, where the linear combination of weights (assigned to the
individual objective functions) provides the alternative solutions on the Pareto
frontier.30 These problems usually belong to a class of problems which generate
continuous and monotonically increasing or decreasing Pareto frontiers. However,
if the Pareto frontier is non-continuous (e.g. a set of discretely spaced
continuous sub-regions) or non-uniform (a higher density of solutions in one
region than in others), or the functions are complex, periodic or multimodal,
then EAs are the better option.31
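The composite-objective approach mentioned above can be sketched as follows;
sweeping the weight vector over, e.g., (0, 1), (0.25, 0.75), ..., (1, 0) traces
out alternative Pareto solutions. The two objectives shown are illustrative:

```python
def weighted_sum(objectives, weights):
    """Combine several objective functions into one composite objective."""
    def composite(x):
        return sum(w * f(x) for w, f in zip(weights, objectives))
    return composite

# Two illustrative conflicting objectives to minimize
f1 = lambda x: x ** 2          # pulls the solution toward x = 0
f2 = lambda x: (x - 2) ** 2    # pulls the solution toward x = 2

g = weighted_sum([f1, f2], [0.5, 0.5])
```

Minimizing `g` for each weight vector yields one point of the Pareto frontier,
which is exactly where this scalarization struggles on non-convex or
discontinuous frontiers.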

Starting Solution
Unlike conventional search techniques, EC methods do not require any specialized
methodology to generate the initial solutions (for example, the North-West Corner
Rule for the Transportation Problem). The initial population is usually generated
randomly in EC.

Harder Problems
In conventional optimization domains, integer programming is more complex than
linear programming, and nonlinear programming is harder still. In EC domains,
hard problems are not that much harder than easy problems. When solving
optimization problems using EC, integer requirements, combinatorial structure,
nonlinearity and other properties do not add much complexity (in general terms)
compared to the linear programming problem. However, solving LPs using EC is not
appropriate because of the power of existing conventional optimization
algorithms. In the case of multi-modal problems, there is less chance of being
trapped in local optima32 when using EC.

Optimization Under Changing and Dynamic Environments
For many real-world optimization problems, the environment fluctuates, leading
to dramatic changes in the fitness (objective function values) of individual
solutions. Examples of such problems include: on-line data mining, where the
contents of the data warehouse may change over time; job scheduling, where the
problem is continually changing as jobs are completed and new jobs are added;
investment portfolio evaluation, where the assessment of risks varies over time;
and robot path determination, where a new path must be determined for the next
move.33 On-line taxi-cab assignment problems34 and pizza delivery assignment
problems are also considered problems under changing environments, where the
optimal solutions change with time. Optimization under changing (dynamic,
non-stationary or online) environments can be handled nicely by EC
techniques.35–37 These problems can also be handled, to some extent, by
considering a number of static instances.

5. When Should We Use Evolutionary Algorithms?
The power of existing linear programming algorithms would strongly discourage
any use of EC-based methodologies for solving linear programming problems.
However, EC-based methods could be beneficial for many real-world complex
problems. We have identified the following cases where EC can contribute
significantly to the field of optimization and operations research.

Knowledge of Optimization: As discussed earlier, researchers/practitioners with
little or no mathematical knowledge of optimization can still solve the
problems using an EC-based methodology.
Ranked Solutions: When the decision maker requires the second best solution,
third best solution and so on, an EC-based methodology could be the better
option, since it can produce all these solutions in a single run.
Multi-Modal Problems: When the problem is multi-modal with many peaks, an
EC-based methodology has a lower probability of being trapped in local
optima.
Quick Approximate Solutions: When we need quick approximate solutions to
large-scale difficult problems, an EC-based methodology can do a better job
in many situations.
Multi-Objective Optimization: Multi-objective problems, where simultaneous
optimization of multiple conflicting objectives is required, are a potential
application area where EC-based methodologies can do better.
Optimization Under Changing Environments: Optimization under a changing
environment is another potential application/research area for EC
methodology.
Hybrid Algorithms: When designing hybrid algorithms for solving complex
problems, many features from conventional techniques or other modern
techniques can be incorporated into EC to design more efficient algorithms.
Computationally Cheaper: EC-based methods would be the preferred option if they
are computationally cheaper for solving a given class of problems.
Computational Time: If computational time is not a concern, EC-based methods
can be used to provide near-optimal solutions.
Highly Complex Problems: Problems involving many complex features like
multi-objectivity, multimodality, changing environments, etc. would be
suitable for EC techniques.

6. Disadvantages of Using EC
There are several drawbacks to using an EC-based methodology for solving
optimization problems. They are briefly discussed below.

Heuristics: EC-based techniques are recognized as heuristic search algorithms.
There is no guarantee of optimality when solving optimization problems using
such techniques.17,20
Parameters of EC: The parameters used in EC-based techniques are problem
oriented. It is not an easy task to determine the appropriate parameters for
a given problem.38
Convergence of EAs: The convergence pattern and time complexity, when using
EC-based techniques for solving optimization problems, are also case
oriented. Although some bounds have been derived on the convergence rate of
GAs for specific cases, these cannot be generalized to all problems and all
GAs applied to them.39–41
Mathematical Insight: EAs do not help to examine the mathematical structure of
the optimization problems being solved, which could provide additional
information to the decision makers.
Sensitivity Analysis: Sensitivity analysis may be performed, for all types of
models, on the desired range, though it would not be as efficient as LP
sensitivity analysis.

7. EAs for Solving Optimization Problems
In the Operations Research (Optimization) domain, we judge the performance of
algorithms for solving optimization problems based on (at least) two basic
criteria: (i) the quality of solutions and (ii) the computational time
(complexity). Although a huge number of applications of EAs to optimization
problems have appeared in the literature, almost all researchers attempted to
show (i) how the problem can be solved using EAs and (ii) their algorithms'
superiority (compared to similar classes of algorithms) based on the quality of
solutions only.28,42 In some cases, they compared the average number of fitness
evaluations. It is hard to find papers where the authors compared their
algorithms with existing conventional optimization techniques based on both the
criteria discussed above. This could be due either to simple ignorance or to the
fact that they are not computationally superior compared to other existing
techniques. Whatever the reason, this does not help to motivate OR researchers
to use EAs for solving optimization problems.
Now the question arises: is there any situation where EAs provide good solutions
and are also computationally superior? As we pointed out earlier, solving LPs
using EAs would not be the right choice. Koksalan and Keha43 solved a number of
single-machine bicriteria scheduling problems using GAs. They showed that the GA
outperformed Simulated Annealing (SA) for all the test problems in terms of
quality of solutions. However, SA requires much shorter computational time than
the GA. Varela et al.25 used GAs for solving job shop scheduling problems with
bottlenecks. For a given problem, they compared the solutions produced after
three different numbers of generations (Table 7, Ref. 25). The percentage
deviations of the fitness from the known best solution are 0.8330%, 0.0969% and
0.01259% for 50, 200 and 1,000 generations respectively. The first 50
generations provide a very good approximate solution (within 1%), which is
really a favourable point for using GAs for quick approximate solutions. It is
well known in EAs that most of the improvement is made within the first few
generations and progress is very slow after 50 generations.
Chu and Beasley44 developed a genetic algorithm for solving the generalized
assignment problem, a well-known NP-complete combinatorial optimization problem.
They mainly investigated the quality of solutions produced rather than the
computational expense. Although the GA is superior in terms of solution quality,
the computational times for some heuristics may be significantly shorter than
those given for the GA. Beasley and Chu45 reported similar results with a GA for
the set covering problem, which has no known polynomial-time algorithm.
Integer programming problems are known to be hard in the Operations Research/Optimization literature. Sakawa and Kato26 proposed a GA for solving such problems and compared its performance with the branch-and-bound (B&B) technique. In one set of experimental studies, they considered five instances of a multidimensional 0–1 knapsack problem with three different degrees of strictness of the constraints. The problem instances have: (i) 30 variables and 10 constraints, (ii) 50 variables and 20 constraints, (iii) 100 variables and 30 constraints, (iv) 150 variables and 40 constraints and (v) 200 variables and 50 constraints. We compare the objective function values and the computational times of these problem instances for a degree of constraint strictness equal to 0.75.
The details of parameters used for GA, LP package, computer platform and other
information can be found in Ref. 26. As shown in Table 1, GA finds the optimal solutions for the smaller problems (instances 1 and 2), and the quality of its solutions deteriorates slowly as the size of the problem instance increases. However, the computational time of GA relative to that of B&B becomes increasingly favourable as the instance size grows. This is a very

Table 1. Comparing GA and Branch & Bound technique for integer programming.

                            Objective Values        Computational Time (sec)
Instance   Size (V × C)    Optimal     Best GA      B&B            GA             GA Objective Deviation
1          30 × 10         −12051      −12051       9.00 × 10^−2   6.24           0%
2          50 × 20         −21931      −21931       8.91 × 10^−1   1.47 × 10^1    0%
3          100 × 30        −46198      −46097       1.13 × 10^2    4.77 × 10^1    0.22%
4          150 × 40        −66543      −65989       4.44 × 10^3    9.30 × 10^1    0.83%
5          200 × 50        −86757      −85169       1.10 × 10^4    3.11 × 10^2    1.83%
favourable point for using EAs in solving hard optimization problems. However,
further experimentation is required to generalize the findings.
As we can see in Table 1, the computational times required by GAs for the
problem instances 1, 2, 3, 4 and 5 are 69.33, 16.50, 0.422, 0.021 and 0.028 times,
respectively, of the computational time required by the B&B technique.

7.1. A two-stage transportation problem


Consider a two-stage transportation problem with n sources, m intermediate/exchange points and k destinations. The sources have limited supply, the exchange points have limited capacity to handle the supplied goods, and the destinations have known demands. All supplies from the sources are transported to the destinations via the exchange points. The problem is to minimize the overall transportation cost while satisfying the supply, capacity, demand and flow constraints. This problem can be formulated as a simple linear program (LP) as follows.

Notations:

xij = the quantity transported from source i to exchange point j.
xjk = the quantity transported from exchange point j to destination k.
Cij = unit transportation cost from source i to exchange point j.
Cjk = unit transportation cost from exchange point j to destination k.
Si = total supply from source i.
CPj = capacity of exchange point j.
Dk = demand of destination k.

Mathematical Model:

  Minimize  Z = Σi Σj Cij xij + Σj Σk Cjk xjk

  subject to

  Σj xij ≤ Si        for all i    (supply)
  Σi xij ≤ CPj       for all j    (capacity)
  Σj xjk ≥ Dk        for all k    (demand)
  Σi xij = Σk xjk    for all j    (flow)
  xij, xjk ≥ 0.

The objective of the problem is to minimize the overall transportation cost. The first, second and third constraints enforce the supply limitations, the exchange point capacities and the demand requirements, respectively. The fourth (flow) constraint states that the flow out of each exchange point equals the flow into it, so the total amount shipped in the second stage of the network cannot exceed the total supplied in the first stage.
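Before handing the model to a GA, it is convenient to check it against a standard LP solver. The sketch below is ours, not the authors' code: it builds the model for a hypothetical 2-source, 2-exchange-point, 2-destination instance with made-up data and solves it with SciPy's linprog. Variables are ordered as all xij followed by all xjk.

```python
import numpy as np
from scipy.optimize import linprog

n, m, k = 2, 2, 2                          # sources, exchange points, destinations
C1 = np.array([[4.0, 6.0], [5.0, 3.0]])    # Cij: source -> exchange point
C2 = np.array([[2.0, 5.0], [4.0, 1.0]])    # Cjk: exchange point -> destination
S  = np.array([7.0, 8.0])                  # supplies Si
CP = np.array([10.0, 10.0])                # exchange-point capacities CPj
D  = np.array([6.0, 9.0])                  # demands Dk

c = np.concatenate([C1.ravel(), C2.ravel()])   # cost vector over (xij, xjk)

A_ub, b_ub = [], []
for i in range(n):                         # supply: sum_j xij <= Si
    row = np.zeros(n * m + m * k)
    row[i * m:(i + 1) * m] = 1.0
    A_ub.append(row); b_ub.append(S[i])
for j in range(m):                         # capacity: sum_i xij <= CPj
    row = np.zeros(n * m + m * k)
    row[j:n * m:m] = 1.0
    A_ub.append(row); b_ub.append(CP[j])
for kk in range(k):                        # demand: sum_j xjk >= Dk  ->  -sum <= -Dk
    row = np.zeros(n * m + m * k)
    row[n * m + kk::k] = -1.0
    A_ub.append(row); b_ub.append(-D[kk])

A_eq, b_eq = [], []
for j in range(m):                         # flow: sum_i xij = sum_k xjk at each j
    row = np.zeros(n * m + m * k)
    row[j:n * m:m] = 1.0
    row[n * m + j * k:n * m + (j + 1) * k] = -1.0
    A_eq.append(row); b_eq.append(0.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
print(res.status, res.fun)   # status 0 = optimal; cost 75.0 for this instance
```

For this made-up instance the optimum routes each demand through its cheapest source-exchange-destination path, giving a total cost of 75.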
Although the quantities transported (the decision variables of the model) must take integer values, the structure of this LP guarantees an integer optimal solution, provided the supply, capacity and demand data are integers, without declaring any integer variables in the model. In this case study, we intend to solve this model using a GA. Previous studies found that the quality of GA solutions deteriorates with the tightness of the constraints.17,20 We have chosen this problem mainly for two reasons: (i) it generates many tight constraints and is therefore not an easy test problem for GA application, and (ii) its mathematical model can be converted from a simple linear program into a difficult integer programming test problem with a very minor modification.
7.2. GA application
First we consider a simple real-coded GA, where the variable values are generated as real/integer numbers rather than as binary strings. Binary coding is usually regarded as the most suitable encoding for any problem because it maximizes the number of schemata searched implicitly,5,12 but there are many examples in the evolutionary computation literature where alternative representations have produced algorithms of greater efficiency and optimization effectiveness on identical problems (see, e.g., Back and Schwefel46 and Fogel and Stayton,47 among others). Davis48 and Michalewicz7 note that for many applications real-valued or other representations may be chosen to advantage over binary coding. There does not appear to be any general benefit in maximizing implicit parallelism in evolutionary algorithms, so forcing problems into a binary representation may not be advisable. As shown experimentally by Sarker et al.,17,20 real-coded GAs perform better than binary-coded GAs for integer programming problems.
The following characteristics were considered in running the Simple GA (SGA):

Algorithm 1: Simple GA

• Population size 50.


• Initial population was generated randomly within the given ranges of supply, capacity and requirements.
• Heuristic crossover and non-uniform mutation. The crossover operator generates a single offspring x3 from two parents x1 and x2 according to the rule

  x3 = round[r · (x2 − x1) + x2],

where r is a random number between 0 and 1 and x2 comes from the parent that is no worse than x1. When the generated value x3 falls outside the bounds (upper and lower), it is set to the violated bound. Note that x1 and x2 are "parent variables", not whole parents, and x3 is rounded to an integer after crossover.
• Probability of crossover = 1.0, and probability of mutation = up to 0.10.
• Penalty method for constraint handling: dynamic.
• Selection for crossover: first rank all individuals by objective function value, then randomly select one from the top 10 and two from the rest. Crossover is performed between the first individual and the better of the other two.
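As a concrete illustration, the heuristic crossover and bound handling described above might be sketched as follows; the function name and test values are ours, not from the paper.

```python
import random

def heuristic_crossover(x1, x2, lower, upper, r=None):
    """Generate one offspring variable from two parent variables, where x2
    comes from the parent that is no worse than x1:
    x3 = round(r * (x2 - x1) + x2), clipped to [lower, upper]."""
    if r is None:
        r = random.random()             # r in [0, 1)
    x3 = round(r * (x2 - x1) + x2)      # rounded to an integer after crossover
    return min(max(x3, lower), upper)   # a violated bound is reset to the bound

print(heuristic_crossover(10, 14, 0, 20, r=0.5))  # 0.5*(14-10)+14 = 16
print(heuristic_crossover(10, 18, 0, 20, r=0.9))  # 25.2 -> clipped to 20
```

Because the offspring lies beyond x2 in the direction away from x1, the operator biases the search toward the better parent.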
The outputs of this algorithm were not satisfactory even for a small problem with 3 sources, 2 exchange points and 3 destinations. We ran the algorithm 30 times, each time for 300 generations. Most of the time the algorithm produced too many infeasible solutions (sometimes all) and ended up with either infeasible or suboptimal (very poor feasible) solutions. To overcome this problem, we modified the algorithm to force the feasibility of the offspring in every generation. We refer to the modified algorithm as the Modified GA (MGA).

Algorithm 2: Modified GA
• Population size 50.
• Initial population was generated randomly, ensuring feasibility.
• Heuristic crossover (as Algorithm 1) and no mutation.
• Probability of crossover = 1.0.
• Constraints handling: Ensure feasibility with repairing mechanism.
• Selection for crossover: first rank all individuals by objective function value, then randomly select one from the top 10 and two from the rest. Crossover is performed between the first individual and the better of the other two.
• Crossover: once the parents are selected for crossover, choose a variable at random (for example, a path, i.e. the supply amount from a source to an exchange point) and perform crossover. If the resulting value is within the current ranges (supply and capacity), we accept it and update the available supply and capacity for the other variables; otherwise we set it to the violated extreme value. Once the supply of a source is exhausted, all unassigned paths from that source are assigned zero shipments without performing crossover. If any positive amount is left at a source after all but one of its paths have been assigned, the entire amount is allocated to the remaining path without crossover, provided it does not exceed the remaining exchange point capacity; otherwise the minimum of the available supply and the exchange point capacity is allocated.
• Ensuring feasibility: after performing crossover following the above procedure, the solution can be infeasible in some cases, because of the allocations made without crossover as mentioned above and because of zero allocations after crossover (when the allocated value is less than or equal to zero). Such allocations may violate the supply constraints, and similar violations may occur on the destination side. In these cases we apply a repair mechanism to ensure feasibility of the final allocation.
• Repair mechanism: identify the sources that have supplied less than their available amounts and determine the unassigned amounts available for supply. Generate a list of exchange points that can receive more supply. Randomly select an exchange point from that list and a source with unassigned goods, and assign to the exchange point as much as possible. Update the lists of exchange points and sources and their unassigned supply/capacity. Continue this process until all supplies are exhausted. The same procedure is applied to the second stage (exchange point → destination) of the problem to remove infeasibility.
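Restricted to the first stage (source → exchange point), the repair step can be sketched as follows. This is a simplified, hypothetical implementation of the idea, not the authors' code, and it assumes the total spare exchange-point capacity covers the unshipped supply.

```python
import random

def repair_first_stage(x, supply, capacity, rng=random.Random(1)):
    """x[i][j]: current source -> exchange-point allocation. Push any
    unassigned supply to randomly chosen exchange points with spare
    capacity until every source ships its full supply."""
    n, m = len(supply), len(capacity)
    left = [supply[i] - sum(x[i]) for i in range(n)]                   # unshipped per source
    spare = [capacity[j] - sum(x[i][j] for i in range(n)) for j in range(m)]
    while any(v > 0 for v in left):
        i = next(i for i in range(n) if left[i] > 0)                   # a short source
        j = rng.choice([j for j in range(m) if spare[j] > 0])          # random open point
        q = min(left[i], spare[j])                                     # ship as much as possible
        x[i][j] += q; left[i] -= q; spare[j] -= q
    return x

x = repair_first_stage([[2, 0], [0, 1]], supply=[5, 4], capacity=[6, 6])
print([sum(row) for row in x])   # each source now ships its full supply: [5, 4]
```

The same routine, applied to the exchange-point → destination matrix, would repair the second stage.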

7.3. Computational experience


Three instances of a small problem (with 3 sources, 2 transshipment points and 3 destinations) were solved, and the performance of the two algorithms (SGA and MGA) was compared in terms of solution quality. Each algorithm was run 30 times and allowed to simulate 300 generations in each run. The results are summarized in Table 2.
Table 2 makes it clear that MGA performs better on the two-stage transportation problem. All the constraints in the test problems are tight. As a result, it is very hard for SGA to find a reasonable number of feasible offspring in any generation, which in turn leads to further infeasible offspring in subsequent generations. Although we allow infeasible offspring to act as parents, which in many situations aids convergence, this does not appear effective when dealing with many tight constraints.17,20 MGA, on the other hand, ensures feasibility in every generation.
We then experimented with different sizes of test problems using the MGA algorithm. The test problems were generated by varying the numbers of sources, exchange points and destinations as shown in Table 3, and the capacity and demand

Table 2. Comparing Simple GA with a modified GA.

              Best Objective Function Value Found        Number of Runs that Found the Best Value (out of 30)
Problem     Simple GA   Modified GA   % Difference       Simple GA   Modified GA   % Difference
Instance
1           6450        6450          0                  16          30            46.67
2           11900       11700         1.71               3           23            86.96
3           38200       38200         0                  5           21            76.19
Table 3. The test problem details.

Problem   Number of   Number of         Number of      Number of   Number of
Number    Sources     Exchange Points   Destinations   Variables   Constraints
1         3           2                 3              12          10
2         4           2                 4              16          12
3         5           3                 5              30          16
4         8           4                 8              64          24
5         10          5                 10             100         30
6         12          7                 12             168         38
7         15          10                15             300         50
data were generated randomly, subject to the assumptions that (i) the total supply equals the total demand (balanced problem) and (ii) the total capacity of the exchange points exceeds the total supply. Unbalanced problems can be solved with the same GA either (i) by balancing the problem with a dummy source or destination, as required, or (ii) by making a minor change to the algorithm. The numbers of variables and constraints in the mathematical models of the test problems are also shown in Table 3.
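The dummy-node option for unbalanced problems is a one-line change to the data. The sketch below is illustrative only (the function name and data are ours): it handles the excess-supply case by adding a zero-cost dummy destination.

```python
def add_dummy_destination(supply, demand, C2):
    """Balance a problem with excess supply by adding a dummy destination
    whose demand absorbs the surplus; its routes cost nothing, so flow sent
    there is never shipped in practice. C2[j][k] is the exchange-point ->
    destination cost matrix. (Excess demand would need a dummy source.)"""
    gap = sum(supply) - sum(demand)
    assert gap >= 0, "excess demand: add a dummy source instead"
    if gap > 0:
        demand = demand + [gap]                # dummy demand = surplus supply
        C2 = [row + [0] for row in C2]         # zero cost from every exchange point
    return demand, C2

d, c2 = add_dummy_destination([10, 8], [7, 5], [[2, 4], [3, 1]])
print(d, c2)   # [7, 5, 6] [[2, 4, 0], [3, 1, 0]]
```

After balancing, the standard GA for the balanced problem applies unchanged.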
We solved the corresponding linear programming models using the commercial optimization package LINDO/LINGO. The LP objective function values and the outputs from the GA are reported in Table 4. The experiments were run on an IBM PC with a Pentium 4, 1.8 GHz processor and 256 MB RAM. The GA "Overall best" column gives the best objective function value obtained over 30 individual runs; the "30 runs average" column gives the average of the best objective function values over those 30 runs.
Columns 5 and 6 of Table 4 show that MGA finds the optimal solutions for the smaller problems (problems 1 and 2), but the quality of its solutions deteriorates as the search space grows (larger problems). The solutions of the larger problems are not acceptable because of their high percentage deviation from the optimal solution. We observed that solution quality depends strongly on the quality of the initial population: with a fixed population size for all problems, a better initial population is more likely to be generated for a smaller search space (smaller problems).

Table 4. Comparing GA with LP & ILP techniques.

Prob.   LP Objective   GA             GA                % Deviation        % Deviation
No.     (LINDO)        Overall Best   30 Runs Average   (GA best vs LP)    (GA average vs LP)
1       11700          11700          11700             0                  0
2       67900          67900          67900             0                  0
3       62500          63000          63913             0.8                2.26
4       147200         147200         151700            0                  3.06
5       141700         161800         176443            14.18              24.52
6       86750          112600         125337            29.80              44.48
7       91300          125700         141280            37.68              54.74
Table 5. Comparing M2GA with LP & ILP techniques.

Prob.   LP Objective   GA             GA                % Differ.         % Differ.            GA 30 Runs Average
No.     (LINDO)        Overall Best   30 Runs Average   (GA best vs LP)   (GA average vs LP)   (Time in seconds)
1       11700          11700          11700             0                 0                    0.019
2       67900          67900          67900             0                 0                    0.047
3       62500          62500          62500             0                 0                    5.82
4       147200         147200         147575            0                 0.26                 8.39
5       141700         142800         148555            0.7763            4.84                 17.26
6       86750          87700          92505             1.0951            6.63                 69.77
7       91300          94600          101059            3.6145            10.69                325.63


In order to improve the quality of the solutions, we conducted further experiments in which we changed the method of generating the initial population and varied the population size. The initial population is generated as follows.

(i) Build a list of the sources with positive supply and the destinations with unfulfilled demand.
(ii) Select a source or destination at random from the list, and find the cheapest route linking it to a transshipment point; break ties arbitrarily. Assign the maximum possible amount to that route, and update the supply/demand involved and the available capacity of the transshipment point. If the supply is exhausted (or the demand is fulfilled), remove the source (or destination) from the list.
(iii) Repeat (ii) until all supplies are exhausted and all requirements are fulfilled.
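Restricted to the first stage (source → exchange point), the construction above might look like the following sketch; the function name and data are illustrative assumptions, not the authors' code, and total capacity is assumed to cover total supply.

```python
import random

def greedy_initial(C, supply, capacity, rng=random.Random(0)):
    """Build one feasible first-stage allocation: repeatedly pick a random
    source with remaining supply and send as much as possible along its
    cheapest route to an exchange point with spare capacity."""
    n, m = len(supply), len(capacity)
    x = [[0] * m for _ in range(n)]
    s, cap = list(supply), list(capacity)
    while any(v > 0 for v in s):
        i = rng.choice([i for i in range(n) if s[i] > 0])   # random open source
        open_js = [j for j in range(m) if cap[j] > 0]
        j = min(open_js, key=lambda j: C[i][j])             # cheapest open route; ties arbitrary
        q = min(s[i], cap[j])                               # maximum possible amount
        x[i][j] += q; s[i] -= q; cap[j] -= q
    return x

x = greedy_initial([[1, 9], [2, 3]], supply=[4, 5], capacity=[6, 6])
print(sum(map(sum, x)))   # all 9 units of supply are assigned
```

Running this with different random seeds yields the diverse but low-cost initial individuals the modified generation scheme aims for.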

We refer to the MGA with this change as M2GA. We ran M2GA with population sizes varying from 50 to 2,000, since a larger population is required for better coverage of the search space in the larger problems. As shown in Table 5, the results of M2GA are much improved, although the quality of the solutions still deteriorates as the search space grows (larger problems).
The time taken by the LP approach is about one second for problems 6 and 7; the times for problems 1 to 5 could not be collected because the package reports computational time in increments of one second. It is nevertheless clear that LINDO/LINGO takes significantly less computational time to solve these LPs and produces globally optimal solutions. In other words, the conventional LP techniques are simply far more efficient than the modern heuristic techniques for this class of problem.
We also generated a number of integer programming (IP) test problems by adding fractions (0.3 to 0.8) to the right-hand sides of some of the supply constraints (the supply amounts). Although this modification is not practically meaningful and does not change the numerical results obtained when solving with LINDO/LINGO, it allows us to compare the conventional IP techniques with the GA. The time required to solve such IP test problems varies with the size and placement of the added fractions; we added fractions at random places in the seven test problems above. The times required, in seconds, for the first six test problems are 1, 6, 21, 29, 3,302 and 1,088, respectively. The last problem was interrupted intentionally after no feasible solution had been found within 72 hours. The computational time for the GA is thus much lower than that of the conventional IP techniques on the test problems considered in this paper. This investigation supports the view that GAs can do better than conventional optimization techniques for integer programming problems if some solution quality is sacrificed.
8. Conclusions
In this paper, we have introduced evolutionary computation and analyzed its similarities to and differences from conventional search algorithms. The reasons for using EC in solving optimization problems, and the situations in which EC could be beneficial, were briefly discussed, and a brief review of the different EAs and their components was provided. The use of EAs was then demonstrated by solving a case problem.
We developed a GA (the Modified GA, or MGA) for a two-stage transportation problem, a standard linear programming problem. The developed GA provides better solutions than a simple GA. The experimental results suggest that the time taken by MGA grows roughly polynomially with problem size; however, the quality of its solutions deteriorates as the problems grow. The LP problems were then modified to generate IP test problems. Although LINDO/LINGO solves the IP test problems optimally, it takes much longer computational time than MGA, which indicates that GAs could be an appropriate methodology for integer programming problems.

References
1. M. Fischer and Y. Leung, GeoComputational Modelling Techniques and Applications
(Springer-Verlag, Berlin, 2001).
2. W. Barnett, C. Chiarella, S. Keen, R. Marks and H. Schnabl, Complexity and
Evolution (Cambridge University Press, 2000).
3. X. Yao, Evolutionary Computation: Theory and Application (World Scientific Publ.
Co., Singapore, 1999) (edited).
4. M. Patel, V. Honavar and K. Balakrishnan, Advances in the Evolutionary Synthesis
of Intelligent Agents (The MIT Press, Cambridge, MA, 2001).
5. D. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning
(Addison-Wesley, MA, 1989).
6. X. Yao, Evolutionary computation: A gentle introduction, Evolutionary Optimization,
eds. R. Sarker, M. Mohammadian and X. Yao (Kluwer, USA, 2002) 27–56.
7. Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, 3rd
edn. (Springer-Verlag, New York, 1996).
8. H.-P. Schwefel, Numerical Optimization of Computer Models (Wiley, Chichester,
1981).
9. H.-P. Schwefel, Evolution and Optimum Seeking (Wiley, New York, 1995).
10. L. J. Fogel, A. J. Owens and M. J. Walsh, Artificial Intelligence Through Simulated Evolution (John Wiley & Sons, New York, 1966).
11. D. B. Fogel, Evolutionary Computation: Towards a New Philosophy of Machine
Intelligence (IEEE Press, New York, 1995).
12. J. Holland, Adaptation in Natural and Artificial Systems (University of Michigan
Press, Ann Arbor, MI, 1975).
13. A. S. Fraser, Simulation of genetic systems by automatic digital computers. I. Intro-
duction, Austral. J. Biol. Sci. 10 (1957) 484–491.
14. J. R. Koza, Genetic Programming (The MIT Press, Cambridge, 1992).
15. M. Bazaraa and C. Shetty, Nonlinear Programming: Theory and Algorithm (John
Wiley & Sons, New York, 1979).
16. J. Seydel and D. L. Olson, Bids considering multiple criteria, J. Constr. Eng. Man-
agement 116, 4 (1990) 609–623.
17. R. Sarker, T. Runarsson and C. Newton, Genetic algorithms for solving a class of constrained nonlinear integer programs, Int. Trans. Operat. Res. 8, 1 (2001) 61–74.
18. D. Whitley, V. S. Gordon and K. Mathias, Lamarckian evolution, the Baldwin effect and function optimization, Proc. 3rd Conf. Parallel Problem Solving from Nature (1996).
19. Z. Michalewicz and M. Schmidt, Evolutionary algorithms and constrained optimiza-
tion, Chap. 3 Evolutionary Optimization, eds. R. Sarker, M. Mohammadian and
X. Yao (Kluwer, USA, 2001) 57–86.
20. R. Sarker, T. Runarsson and C. Newton, A constrained multiple raw materials man-
ufacturing batch sizing problem, Int. Trans. Operat. Res. 8, 2 (2001) 121–138.
21. F. Hillier and G. Lieberman, Introduction to Operations Research (McGraw-Hill,
Boston, 2001).
22. G. Sierksma, Linear and Integer Programming: Theory and Practice (Marcel Dekker,
Inc., New York, 1996).
23. R. Martin, Large Scale Linear and Integer Optimization (Kluwer Academic Publish-
ers, Boston, 1999).
24. H. Eiselt, G. Pederzoli and C. Sandblom, Continuous Optimization Models
(de Gruyter, Berlin, Germany, 1987).
25. R. Varela, C. Vela, J. Puente and A. Gomez, A knowledge-based evolutionary strategy
for scheduling problems with bottlenecks, Eur. J. Operat. Res. 145 (2003) 57–71.
26. M. Sakawa and K. Kato, Genetic algorithms with double strings for 0-1 programming
problems, Eur. J. Operat. Res. 144 (2003) 581–597.
27. R. Sarker, K. Liang and C. Newton, A new evolutionary algorithm for multiobjective
optimization, Eur. J. Operat. Res. 140 (2002) 12–23.
28. E. Zitzler, K. Deb and L. Thiele, Comparison of multiobjective evolutionary
algorithms: Empirical results, Evo. Comput. 8, 2 (2000) 173–195.
29. E. Zitzler and L. Thiele, Multiobjective evolutionary algorithms: A com-
parative case study and the strength Pareto approach, IEEE Trans. Evol.
Comput. 3, 4 (1999) 257–271.
30. K. Miettinen, Nonlinear Multiobjective Optimization (Kluwer’s International Series
in OR/MS, 1999).
31. K. Deb, Multi-objective genetic algorithms: Problem difficulties and construction of
test problems, Evol. Comput. 7, 3 (1999) 205–230.
32. K.-H. Liang, Evolutionary Optimization with Self-Adaptation and Landscape Approx-
imation, Ph.D Thesis, School of Computer Science, UNSW@ADFA, Canberra, Aus-
tralia, 2000.
33. P. Wide and H. Schellwat, Implementation of a genetic algorithm for routing an
autonomous robot, Robotica 15, 2 (1997) 207–211.
34. A. P. Kosoresow and M. P. Johnson, Finding worst-case instances of, and lower bounds
for, online algorithms using genetic algorithms, AI 2002 : Advances in Artificial In-
telligence, eds. B. McKay and J. Slaney, Lecture Notes in Artificial Intelligence 2557
(Springer, Berlin, 2002) 344–355.
35. K. De Jong, Evolving in a changing world, eds. Z. Ras and A. Skowron, Found. Intell. Syst., Lecture Notes in Artificial Intelligence 1609 (Springer, 1999) 513–519.
36. J. Grefenstette, Evolvability in dynamic fitness landscape: A genetic algorithm
approach, IEEE Congr. Evol. Comput. (1999) 2031–2038.
37. R. Morrison and K. De Jong, A test problem generator for non-stationary environ-
ments, IEEE Congr. Evol. Comput. (1999) 2047–2053.
38. A. Eiben, R. Hinterding and Z. Michalewicz, Parameter control in evolutionary algorithms, IEEE Trans. Evol. Comput. 3, 2 (1999) 124–141.
39. J. He and L. Kang, On the convergence rates of genetic algorithms, Theoret. Comput. Sci. 229 (1999) 23–39.
40. Y. Takahashi, Convergence of simple genetic algorithms for the two-bit problem, BioSystems 46 (1998) 235–282.
41. T. Hanne, On the convergence of multi-objective evolutionary algorithms, Eur. J.
Operat. Res. 117, 3 (1999) 553–564.
42. S. Thangiah and K. Nygard, MICAH: A genetic algorithm system for multi-
commodity transshipment problems, Proc. 8th Conf. Artif. Intell. Appl. (1992)
240–246.
43. M. Koksalan and A. B. Keha, Using genetic algorithms for single-machine bicriteria
scheduling problems, Eur. J. Operat. Res. 145 (2003) 543–556.
44. P. C. Chu and J. E. Beasley, A genetic algorithm for the generalized assignment
problem, Comput. Operat. Res. 24, 1 (1997) 17–23.
45. J. E. Beasley and P. C. Chu, A genetic algorithm for set covering problem, Eur. J.
Operat. Res. 94 (1996) 392–404.
46. T. Back and H.-P. Schwefel, An overview of evolutionary algorithms for parameter
optimization, Evol. Comput. 1 (1993) 1–24.
47. D. Fogel and L. Stayton, On the effectiveness of crossover in simulated evolutionary
optimization, BioSystems 32 (1994) 171–182.
48. L. Davis, Handbook of Genetic Algorithms (Van Nostrand Reinhold, New York, 1991).
49. K. A. De Jong, An Analysis of the Behaviour of a Class of Genetic Adaptive Systems, PhD Thesis, University of Michigan, Ann Arbor, 1975.
50. Y. Davidor, H.-P. Schwefel and R. Manner (eds.), Lecture Notes in Computer Science 866 (Springer, Berlin, 1994) 6–15.
