
CHAPTER – 6

MEAN GBEST AND GRAVITATIONAL SEARCH ALGORITHM

In this Chapter, another improved variant of Particle Swarm Optimization is developed by combining Mean gbest Particle Swarm Optimization and the Gravitational Search Algorithm. The basic idea is to integrate the exploitation ability of Mean gbest Particle Swarm Optimization with the exploration ability of the Gravitational Search Algorithm so as to combine the strengths of both approaches. As a result, the proposed approach can automatically balance local and global searching abilities. The performance of the hybrid approach is tested on a variety of classical functions, i.e. unimodal, multimodal and fixed-dimension multimodal functions. Further, the Iris dataset and the Economic Dispatch problem are used to compare the hybrid approach with several metaheuristics. The experimental statistics show empirically that the proposed approach significantly outperforms several metaheuristics in terms of solution stability, solution quality, ability to locate local and global optima, and convergence speed.

6.1 Introduction
Numerous natural and biological processes have increasingly influenced methodologies in science and technology over the last few years. Population based techniques have become popular through the development and exploitation of intelligent paradigms in advanced information systems design. When the task is optimization within complex domains of data or information, many of the most popular population based nature inspired algorithms are modelled on successful micro-organism and animal group behaviour, such as swarm or flocking intelligence (PSO, inspired by fish schools and bird flocks, 1995), artificial immune systems (which mimic the biological immune system, 2002), and the optimized foraging of ant colonies or bees (ant foraging behaviour gave rise to ACO, 2002, 2006, 2007). Many population based nature inspired tools have been applied to supply chain management problems and to very diverse operations, such as vehicle routing, organization of production, and scheduling. All these algorithms depend mainly on two characteristics: exploration and exploitation.

Exploitation is the ability to converge to the best solution in the neighbourhood of a good solution, whereas exploration is the ability of an approach to search all parts of the function space. The main goal of any population based nature inspired or heuristic optimization technique is to balance exploration and exploitation efficiently in order to find the global optimum. According to (1998), exploration and exploitation in evolutionary computing are not clearly delineated due to the lack of a generally accepted definition. Moreover, strengthening one ability weakens the other and vice versa.

In general, the existing nature inspired approaches are each capable of solving a number of functions, but it has been proved that no single technique is general enough to solve all types of real life and non-linear problems. Hybridizing optimization techniques is one way to balance the overall exploitation and exploration capability. Particle Swarm Optimization is one of the evolutionary techniques most commonly used in hybrids, owing to its simplicity, its ability to find the global optimum and its convergence speed. Several studies in the literature have therefore combined Particle Swarm Optimization with other metaheuristics.

H. Liu et al. (2010) developed a hybrid algorithm named PSO-DE, which integrates Particle Swarm Optimization (PSO) with Differential Evolution (DE) to solve constrained numerical and engineering optimization problems. Unlike SPSO, it can force PSO to jump out of stagnation because of DE's strong searching ability. The hybrid algorithm speeds up convergence and improves performance. On the basis of numerical results obtained for benchmark test functions and engineering optimization functions, the authors concluded that the proposed approach is superior to existing ones.

T. Niknam and B. Amiri (2010) proposed a hybrid evolutionary variant, FAPSO-ACO-K, to solve the non-linear partitional clustering problem. The variant was developed by hybridizing three approaches: k-means, ant colony optimization and fuzzy adaptive particle swarm optimization. Its efficiency was tested on a set of benchmark functions, and it was concluded that the variant was better than other existing variants for partitional clustering. H.H. Nasab et al. (2013) proposed a hybrid PSO (HPSO) to find a near optimal solution of the dynamic facility layout problem (DFLP). The authors used a coding and decoding technique that permits one-to-one mapping of a solution in the discrete space of the DFLP to a PSO particle position in continuous space. The developed PSO was hybridized with a simple and fast annealing technique for further improvement, and the algorithm can be extended to more general cases. The results demonstrated its efficiency over other variants. S. Mirjalili et al. (2010) presented a hybrid population-based variant called PSOGSA, obtained by combining PSO and GSA. The main idea is to integrate the exploitation ability of Particle Swarm Optimization with the exploration ability of the Gravitational Search Algorithm so as to combine the strengths of both variants. A set of standard functions was used to compare the variant with other metaheuristics in finding the best possible solution in the search space. The numerical results show that the variant has a superior ability to escape from local optima and converges faster than the other metaheuristics.

To improve the performance of SPSO, a hybrid PSO algorithm using a mutation process (HPSOM) was proposed by A.A.A. Esmin and S. Matwin (2013). The idea behind this algorithm was to integrate PSO with the genetic mutation operator, establishing an automatic balance between global and local searching abilities. On the basis of numerical experiments, the authors concluded that the proposed method significantly outperformed SPSO in terms of solution stability, solution quality and convergence speed. K. Deep et al. (2009) proposed a new variant of PSO, namely mean PSO, constructed by replacing the two terms of the velocity update equation of SPSO by two new terms based on linear combinations of the personal best and global best. The performance of the proposed variant was tested on many benchmark functions and the results were compared with those obtained with SPSO. On the basis of the numerical results, the authors observed that the proposed variant outperformed standard PSO in terms of reliability, stability, efficiency and accuracy.
K. Meng et al. (2010) proposed a modified variant of PSO, namely quantum-inspired particle swarm optimization (QPSO). Its quality was tested on five benchmark problems and three system cases and compared with an immune algorithm (IA), a genetic algorithm (GA), evolutionary programming (EP) and another variant of PSO. On the basis of the promising solutions illustrating its accuracy, it was concluded that QPSO can be used as a reliable tool for solving ELD problems. A. Bhattacharya et al. (2010) presented a biogeography-based optimization (BBO) variant to solve both convex and non-convex economic load dispatch (ELD) problems of thermal plants. The proposed methodology can handle economic dispatch problems involving constraints such as prohibited operating zones, transmission losses, multi-fuel options, ramp rate limits and valve-point loading. The performance of the algorithm was tested on four different test systems and compared with results obtained by other existing nature inspired variants. Considering the quality of the solutions obtained, this variant appears to be a promising alternative for solving ELD problems in practical power systems.

K. Deep et al. (2008) solved the economic dispatch problem using the original Particle Swarm Optimization algorithm and two improved variants, namely Quadratic Approximation PSO (qPSO) and Laplace Crossover PSO (LXPSO). The experimental solutions were also compared with recently published results.

J.B. Park et al. (2007) proposed a modified hybrid PSO approach for solving Economic Dispatch problems with valve-point effects, combining conventional PSO with a Genetic Algorithm. The simulation results revealed that the proposed approach outperforms other state-of-the-art algorithms, as well as the conventional PSO method, on ED problems with valve-point effects.

N. Singh and S.B. Singh (2013) proposed a modified version of PSO known as the Modified Standard Particle Swarm Optimization Algorithm (MSPSO), developed by introducing a new update equation for the particles. The approach was tested on a number of benchmark problems and the results were compared with several metaheuristics in terms of minimum objective value, mean function value, standard deviation, clock time and rate of success.

G. Harish (2016) developed a hybrid approach called PSO-GA for solving constrained optimization functions. In this approach, PSO works on improving the solution vectors, while GA modifies the decision vectors using genetic operators. The balance between exploration and exploitation is further improved by incorporating the genetic crossover and mutation operators into the PSO algorithm. The experimental solutions were compared with those obtained by other recent techniques in the literature.

In this study, we present a new hybrid model combining the MGBPSO and GSA algorithms, referred to as MGBPSOGSA. The hybrid algorithm is applied to 23 standard functions and its performance is compared with PSO and PSOGSA.

6.2 Particle Swarm Optimization (PSO)

The PSO algorithm was first introduced by R.C. Eberhart (an electrical engineer) and J. Kennedy (a social psychologist) in 1995, and its fundamental idea was primarily inspired by simulating the social behaviour of animals such as bird flocking and fish schooling. While searching for food, the birds are either scattered or move together before they settle at the position where the food can be found. While the birds move from one position to another in search of food, there is always a bird that can smell the food very well; that is, it perceives the position where the food can be found and holds the correct information about the food resource. Because the birds transmit this information at all times while searching for food, they eventually flock to the position where the food can be found.

This behaviour is abstracted into an approach for solving global optimization functions/problems, in which every member of the swarm is called a particle. In the PSO technique, the position of each member of the swarm in the global search space is updated by the following two equations:
$v_i^{k+1} = v_i^k + c_1 r_1 \left( p_i^k - x_i^k \right) + c_2 r_2 \left( gbest - x_i^k \right)$  (52)

$x_i^{k+1} = x_i^k + v_i^{k+1}$  (53)
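As an illustration, a minimal Python sketch of eqs. (52)-(53) is given below (the chapter's own implementation is in MATLAB, see Section 6.7; the array shapes and the function name are assumptions made only for this example):

```python
import numpy as np

def pso_update(x, v, pbest, gbest, c1=0.5, c2=1.5, rng=np.random.default_rng()):
    """One PSO step following eqs. (52)-(53).
    x, v, pbest are (N, d) arrays of positions, velocities and personal bests;
    gbest is the (d,) best position found by the swarm so far."""
    r1 = rng.random(x.shape)                                    # random coefficients in [0, 1]
    r2 = rng.random(x.shape)
    v_new = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # eq. (52)
    x_new = x + v_new                                           # eq. (53)
    return x_new, v_new
```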

6.3 Gravitational Search Algorithm (GSA)

E. Rashedi et al. (2009) proposed a new optimization variant based on the law of gravity and mass interactions. In this approach, the searcher agents are a collection of masses which interact with each other according to Newtonian gravity and the laws of motion. The Gravitational Search Algorithm (GSA) is mathematically modelled as follows.

Now, consider a system with N agents. The position of the i-th member of the group is defined by:

$X_i = \left( x_i^1, \ldots, x_i^d, \ldots, x_i^n \right)$ for $i = 1, 2, \ldots, N$,  (54)

where $x_i^d$ denotes the position of the i-th member in the d-th dimension.

The gravitational force exerted by member $j$ on member $i$ is calculated as:

$F_{ij}^d(t) = G(t) \, \dfrac{M_{pi}(t) \times M_{aj}(t)}{R_{ij}(t) + \varepsilon} \left( x_j^d(t) - x_i^d(t) \right)$  (55)

where $t$ is the specific time, $G(t)$ is the gravitational constant at time $t$, $M_{aj}$ is the active gravitational mass of member $j$, $M_{pi}$ is the passive gravitational mass of member $i$, $R_{ij}$ is the Euclidean distance between members $i$ and $j$, and $\varepsilon$ is a small constant.

The gravitational constant $G(t)$ at time $t$ is given as:

$G(t) = G_0 \, e^{-g \, t / T}$  (56)

where $T$ is the maximum number of generations, $t$ is the current generation, $G_0$ is the initial value and $g$ is the descending coefficient.

The total force acting on a member $i$ in a $d$-dimensional search area is calculated as:

$F_i^d(t) = \sum_{j=1}^{N} rand_j \, F_{ij}^d(t)$  (57)

where $rand_j \in [0, 1]$.

The acceleration of each member is calculated using eq. (58):

$ac_i^d(t) = \dfrac{F_i^d(t)}{M_i(t)}$  (58)

where $M_i$ is the mass of object $i$.

The velocity and position of the members are calculated by eq. (59) and eq. (60):

$v_i^d(t+1) = rand_i \times v_i^d(t) + ac_i^d(t)$  (59)

$x_i^d(t+1) = x_i^d(t) + v_i^d(t+1)$  (60)

where $ac_i^d$, $v_i^d$ and $x_i^d$ are the acceleration, velocity and position of member $i$ in the $d$-dimensional search area.
The inertial mass $m_i(t)$ and gravitational mass $M_i(t)$ are updated using eqs. (61) and (62):

$m_i(t) = \dfrac{fit_i(t) - worst(t)}{best(t) - worst(t)}$  (61)

$M_i(t) = \dfrac{m_i(t)}{\sum_{j=1}^{N} m_j(t)}$  (62)

6.4 MGBPSO Algorithm

N. Singh et al. (2016) introduced a modified PSO approach called Mean Gbest Particle Swarm Optimization (MGBPSO), constructed by modifying the original velocity update equation of Particle Swarm Optimization using mean-based terms. Its performance was compared with several metaheuristics on a number of classical and real life functions. Numerical and graphical analysis of the results shows that the approach outperforms the other metaheuristics in terms of efficiency, reliability, accuracy and stability. MGBPSO is mathematically modelled as follows:

$v_i^{k+1} = v_i^k + c_1 r_1 \left( \mu - x_i^k \right) + c_2 r_2 \left( \mu \, gbest - x_i^k \right)$  (63)

$x_i^{k+1} = x_i^k + v_i^{k+1}$  (64)

where $v_i^k$ is the old velocity; $c_1, c_2$ are acceleration constants; $r_1, r_2$ are random coefficients; $\mu$ is the mean; $gbest$ is the best position found by the neighbourhood of the particle; $x_i^k$ is the old position of the particle in the search space; and $k$ is the time step.
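A small Python sketch of eqs. (63)-(64) is given below. The chapter defines $\mu$ only as "the mean"; in this sketch it is read as the mean of the particles' personal best positions, which is one plausible interpretation and should be treated as an assumption rather than the authors' exact formulation:

```python
import numpy as np

def mgbpso_update(x, v, pbest, gbest, c1=0.5, c2=1.5, rng=np.random.default_rng()):
    """One MGBPSO step following eqs. (63)-(64), with mu taken as the swarm mean of pbest."""
    mu = pbest.mean(axis=0)                                       # assumed reading of the mean term
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = v + c1 * r1 * (mu - x) + c2 * r2 * (mu * gbest - x)   # eq. (63)
    x_new = x + v_new                                             # eq. (64)
    return x_new, v_new
```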

6.5 The Hybrid MGBPSOGSA Algorithm

E.G. Talbi (2002) proposed a classification of hybridization strategies for heuristic approaches. Borrowing this idea, an effort has been made to develop a technique by hybridizing Mean Gbest Particle Swarm Optimization and the Gravitational Search Algorithm. The hybrid is low-level because the functionality of both approaches is combined. It is co-evolutionary because the two approaches are not applied one after another; in other words, they run in parallel. It is heterogeneous because two distinct approaches are involved in producing the final solutions.

The MGBPSOGSA is mathematically modeled as:

The gravitational constant $G(t)$ at time $t$ is given by:

$G(t) = G_0 \, e^{-g \, t / T}$  (65)

The inertial mass $m_i(t)$ and gravitational mass $M_i(t)$ are updated using eqs. (66) and (67):

$m_i(t) = \dfrac{fit_i(t) - worst(t)}{best(t) - worst(t)}$  (66)

$M_i(t) = \dfrac{m_i(t)}{\sum_{j=1}^{N} m_j(t)}$  (67)

The total force acting on a member $i$ in a $d$-dimensional search area is calculated using:

$F_i^d(t) = \sum_{j=1}^{N} rand_j \, F_{ij}^d(t)$  (68)

The acceleration of each member is calculated using eq. (69):

$ac_i^d(t) = \dfrac{F_i^d(t)}{M_i(t)}$  (69)

The velocity and position of the members are calculated by eqs. (70) and (71):

$v_i^{k+1} = w \cdot v_i^k + c_1 r_1 \, ac_i^d(t) + c_2 r_2 \left( \mu \, gbest - x_i^k \right)$  (70)

$x_i^{k+1} = x_i^k + v_i^{k+1}$  (71)


In MGBPSOGSA, the quality of the solutions is taken into account in the updating procedure. The members of the population near a good solution try to attract the other members that are still exploring the search area. When all the members of the swarm are near a good solution, they move very slowly. In that case the $\mu \, gbest$ term helps them to retain the best solution found so far, so that it is accessible at any time; each member of the population can observe the best solution found so far and is drawn toward it.
------------------------------------------------------------------------------------------------------------
The pseudo code of the MGBPSOGSA algorithm is given below:
------------------------------------------------------------------------------------------------------------
I) Initialize the particles
II) Evaluate the fitness of all members in the search space
III) Update G using equation (65) and update gbest for the population
IV) Calculate the mass, force and acceleration of all members of the swarm using eqs. (66), (67), (68) and (69)
V) Update the velocity and position of all members using eqs. (70) and (71)
VI) If the stopping criterion is satisfied, stop; otherwise return to Step II
VII) END
------------------------------------------------------------------------------------------------------------
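For concreteness, a minimal Python sketch of the above pseudo code is given below. It is a simplified reading, not the authors' MATLAB implementation: active and passive masses are treated as equal, the search range [-100, 100] and the descending coefficient g = 20 are illustrative assumptions, the inertia weight w = 0.7 follows the setting quoted in Section 6.9, and the $\mu$ factor of eq. (70) is dropped so that the social term reduces to the plain gbest attraction.

```python
import numpy as np

def mgbpsogsa(obj, dim, n=30, iters=1000, c1=0.5, c2=1.5, w=0.7,
              G0=1.0, g=20.0, lb=-100.0, ub=100.0, eps=1e-12, seed=0):
    """Sketch of the MGBPSOGSA loop (Steps I-VII); obj maps a (dim,) vector to a scalar."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))                       # I) initialize particles
    V = np.zeros((n, dim))
    gbest, gbest_val = X[0].copy(), np.inf
    for t in range(iters):
        fit = np.apply_along_axis(obj, 1, X)                # II) evaluate fitness
        if fit.min() < gbest_val:                           # III) update gbest ...
            gbest_val, gbest = fit.min(), X[fit.argmin()].copy()
        G = G0 * np.exp(-g * t / iters)                     # ... and G, eq. (65)
        best, worst = fit.min(), fit.max()
        m = (fit - worst) / (best - worst + eps)            # mass, eq. (66)
        M = m / (m.sum() + eps)                             # eq. (67)
        F = np.zeros((n, dim))                              # IV) force, eq. (68)
        for i in range(n):
            for j in range(n):
                if i != j:
                    R = np.linalg.norm(X[i] - X[j])
                    F[i] += rng.random() * G * M[i] * M[j] / (R + eps) * (X[j] - X[i])
        acc = F / (M[:, None] + eps)                        # acceleration, eq. (69)
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        V = w * V + c1 * r1 * acc + c2 * r2 * (gbest - X)   # V) velocity, eq. (70) (mu omitted)
        X = np.clip(X + V, lb, ub)                          # position, eq. (71)
    return gbest, gbest_val                                 # VI)-VII) stop after the iteration budget

# Example usage on the sphere function:
# best_x, best_f = mgbpsogsa(lambda x: float(np.sum(x**2)), dim=30)
```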
6.6 Testing Functions

In this section, twenty-three classical functions are used to test the ability of the proposed approach. These functions can be divided into three groups: unimodal, multimodal, and fixed-dimension multimodal functions.
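For illustration, two typical representatives of the first two groups are sketched below in Python (the sphere function, unimodal, and the Rastrigin function, multimodal, are common members of such 23-function suites; whether they carry these exact indices in the chapter's list is not stated here):

```python
import numpy as np

def sphere(x):
    """Unimodal benchmark: global minimum 0 at x = 0."""
    return float(np.sum(x**2))

def rastrigin(x):
    """Multimodal benchmark: many local optima, global minimum 0 at x = 0."""
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```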

6.7 Results and Discussions


Three variants of Particle Swarm Optimization (PSO, PSOGSA and MGBPSOGSA) have been coded in MATLAB R2013a and implemented on a machine with an Intel Core i5-430M processor, 3 GB memory, 320 GB HDD, Intel HD Graphics and a 15.6″ HD LCD. The parameter settings used to evaluate these metaheuristics are: swarm size 30, maximum number of iterations 1000, $c_1 = 0.5$, $c_2 = 1.5$ and gravitational constant $G_0 = 1$.

The new hybrid variant was run 30 times on each classical function. The statistical results (standard deviation and average) are reported in Tables 6.1-6.3. For verification, the MGBPSOGSA approach is compared with the PSO and PSOGSA algorithms. In addition, the performance of the proposed variant has also been tested on the Iris dataset, a real life problem, and compared with the PSO, GSA and PSOGSA algorithms.

Seven unimodal functions have been solved using PSO, PSOGSA and the proposed algorithm. The results are shown in Table 6.1. It can be observed that the performance of the proposed variant is better than that of the other two algorithms on these unimodal functions. The performance of the proposed algorithm is also illustrated through convergence graphs.

Table 6.1. Statistical results of the algorithms on unimodal functions

Sr. No. | PSO Std. Dev. | PSO Ave. | PSOGSA Std. Dev. | PSOGSA Ave. | MGBPSOGSA Std. Dev. | MGBPSOGSA Ave.
1 | 4.7210e+03 | 1.1685e+03 | 4.8600e+03 | 959.1862 | 2.5809e+03 | 159.2038
2 | 4.6103e+10 | 1.5265e+09 | 7.5604e+10 | 2.3910e+09 | 4.5966e+10 | 1.4536e+09
3 | 7.5511e+03 | 1.2788e+04 | 6.6649e+03 | 7.6008e+03 | 7.8054e+03 | 464.2680
4 | 4.6653 | 37.4336 | 6.7202 | 31.9781 | 4.3642 | 0.4027
5 | 1.3112e+07 | 1.5915e+06 | 1.8221e+07 | 2.2841e+06 | 9.1640e+06 | 3.4607e+05
6 | 3.9006e+03 | 1.0768e+04 | 7.1572e+03 | 1.4164e+03 | 2.3779e+03 | 121.4011
7 | 5.3376 | 1.1071 | 7.7322 | 1.5322 | 5.1667 | 0.4021


Figures 6.1-6.7. Convergence Curves of PSO, PSOGSA and MGBPSOGSA variants on
Unimodal functions

It is worth mentioning that multimodal benchmark functions have multiple local optima, and their number increases exponentially with the dimension of the search space. As a result, such functions are well suited to testing the capability of a variant. The results given in Table 6.2 show the capability of the proposed variant in solving seven multimodal functions as compared to the other two algorithms. It is observed that MGBPSOGSA is able to provide very effective solutions on the multimodal standard functions as well; the proposed approach outperforms PSO and PSOGSA on the majority of the multimodal functions.

Clearly the proposed variant can explore in a better way. The comparison of all these variants
has also been shown through graphical representations.

Table 6.2. Statistical results obtained using the algorithms on multimodal functions

Sr. No. | PSO Std. Dev. | PSO Ave. | PSOGSA Std. Dev. | PSOGSA Ave. | MGBPSOGSA Std. Dev. | MGBPSOGSA Ave.
8 | 380.4655 | -6.8179e+03 | 504.3099 | -6.9573e+03 | 13.7551 | -2.6092e+03
9 | 39.5315 | 147.6625 | 41.8546 | 132.0376 | 41.5262 | 10.4930
10 | 0.2381 | 17.5991 | 1.1378 | 11.3216 | 1.4094 | 0.2447
11 | 39.1060 | 7.9719 | 43.5016 | 7.1978 | 17.1456 | 1.1171
12 | 1.9751e+07 | 1.9100e+06 | 3.1977e+07 | 3.1989e+06 | 1.7936e+07 | 5.7299e+05
13 | 9.0449e+07 | 1.0902e+07 | 4.1270e+07 | 4.3090e+06 | 3.0891e+07 | 1.1660e+06
14 | 12.4953 | 14.0299 | 12.1195 | 3.4198 | 2.0187 | 3.6135

Figures 6.8-6.13. Convergence Curve of PSO, PSOGSA and MGBPSOGSA variants on
Multimodal functions

The performance of the proposed variant has also been tested on ten fixed-dimension multimodal functions. The statistical results are reported in Table 6.3. These results also show the superiority of the proposed variant over the other variants in solving multimodal functions with fixed dimension. The comparison of all the variants on these functions is also shown through graphs.

Table 6.3. Statistical results obtained using the three algorithms on fixed-dimension multimodal functions

Sr. No. | PSO Std. Dev. | PSO Ave. | PSOGSA Std. Dev. | PSOGSA Ave. | MGBPSOGSA Std. Dev. | MGBPSOGSA Ave.
14 | 0.0049 | 0.0206 | 0.0088 | 0.0017 | 0.0052 | 9.3266e-04
15 | 0.0013 | -1.0310 | 0.0349 | -1.0293 | 0.0403 | -1.0281
16 | 0.0467 | -1.0295 | 0.0548 | -1.0286 | 0.0570 | -1.0289
17 | 0.0409 | -1.0286 | 0.0609 | -1.0286 | 0.0221 | -1.0278
18 | 6.7191 | 3.3293 | 2.6215 | 3.1678 | 0.4642 | 3.0512
19 | 0.0171 | -3.8601 | 0.0589 | -3.8564 | 0.0726 | -3.8508
20 | 0.1291 | -3.2967 | 0.0815 | -3.1813 | 0.0807 | -2.7978
21 | 0.6529 | -10.0629 | 0.3260 | -5.0557 | 0.1550 | -3.3399
22 | 0.7422 | -10.2999 | 0.1496 | -2.7346 | 0.0599 | -2.4347
23 | 0.0468 | -1.8549 | 0.1592 | -3.8155 | 0.2465 | -4.1910


Figures 6.14-6.23. Convergence Curves of PSO, PSOGSA and MGBPSOGSA variants on
Fixed-dimension multimodal functions

6.8 Iris Dataset

This dataset is another well known test dataset in the literature. It consists of 4 attributes, 150 training samples, 150 test samples and 3 classes (Mirjalili, S., 2015). This dataset has been applied to the PSO, GSA, PSOGSA and MGBPSOGSA variants; their convergence is plotted in Figures 6.24-6.27.

The classification rates obtained are MGBPSOGSA (97.7767%), PSOGSA (98%), GSA (96.6667%) and PSO (95.3333%). The new hybrid approach thus gives a competitive classification rate compared to the other metaheuristics. Moreover, the solutions indicate that the MGBPSOGSA approach achieves superior accuracy and local optima avoidance simultaneously.

Table 6.4. Experimental results for the Iris dataset

Algorithm | Std. Dev. | Ave. | Classification Rate | Min Value | Max Value
MGBPSOGSA | 0.0442 | 0.1204 | 97.7767% | 0.0217 | 1.8229
PSOGSA | 0.0479 | 0.1053 | 98% | 0.0278 | 1.8157
GSA | 0.0657 | 0.1159 | 96.6667% | 0.0425 | 1.8853
PSO | 0.0789 | 0.1022 | 95.3333% | 0.0604 | 1.8602

The convergence patterns of these variants on the Iris dataset are also shown through graphs. One can observe that the proposed variant is better than the other variants from the point of view of convergence.
Figures 6.24-6.27. Convergence Curves of PSO, GSA, PSOGSA and MGBPSOGSA
variants on Iris dataset problem
6.9 Economic Dispatch Problem

The test system consists of 40 generating units; the corresponding power system input data, together with several other parameters, are taken from the literature. The total demand is 10,500 MW and the input parameters are as follows.

Population size = 200, dimension = 40, confidence constants $c_1 = c_2 = 1.2$, inertia factor = 0.7, maximum number of evaluations per run = 500000, maximum number of runs = 200, acceptable error = 0.0, and random numbers $r_{1j}^k, r_{2j}^k \sim U(0, 1)$. Earlier studies on this test system were also taken into account before applying the proposed improved approach to the Economic Dispatch problem.
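The chapter does not restate the cost model here; as background, a sketch of the standard fuel cost objective with valve-point effects commonly used for such 40-unit benchmarks (as in Park et al., 2007) is given below. The coefficient arrays a, b, c, e, f and Pmin are per-unit data from the test system, which is not reproduced in this section, so this is an assumed formulation for illustration only.

```python
import numpy as np

def generation_cost(P, a, b, c, e, f, Pmin):
    """Total fuel cost of all units with valve-point loading:
    sum_i a_i*P_i^2 + b_i*P_i + c_i + |e_i * sin(f_i * (Pmin_i - P_i))|.
    P and all coefficient arrays have one entry per generating unit."""
    return float(np.sum(a * P**2 + b * P + c + np.abs(e * np.sin(f * (Pmin - P)))))
```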

Efforts have been made to solve the EDP using various approaches, namely Mean PSO, HGPSO, HGAPSO, HPSOM, PSO, qPSO, GSA, BBO, HPSO, QPSO, MSPSO, PSOGSA and MGBPSOGSA. The efficiency of these algorithms has been compared on the basis of generation cost, average cost and standard deviation. From Table 6.5 it is clear that the MGBPSOGSA approach provides a superior and competitive solution, which signifies its higher efficiency in solving the economic dispatch problem as compared to the other metaheuristics.

Table 6.5: Comparison of experimental results obtained by thirteen different modified variants of nature inspired algorithms

Method | Unit | Total Power (MW) | Generation Cost | Mean | Standard Deviation
Mean PSO | 40 | 10,500 | 153562.45 | 160177.5514 | 3762.512976
HGPSO | 40 | 10,500 | 124797.13 | 126,855.70 | 1160.91
HGAPSO | 40 | 10,500 | 122780.00 | 124,575.70 | 906.04
HPSOM | 40 | 10,500 | 122112.40 | 124,350.87 | 977.75
PSO | 40 | 10,500 | 121504.29 | 121632.3979 | 97.617794
qPSO | 40 | 10,500 | 121500.93 | 121565.906 | 39.777128
GSA | 40 | 10,500 | 121499.10 | 121590.899 | 47.888745
BBO | 40 | 10,500 | 121479.50 | 121,512.06 | --
HPSO (Park 2007) | 40 | 10,500 | 121452.67 | 121537.1906 | --
QPSO | 40 | 10,500 | 121447.21 | -- | --
MSPSO | 40 | 10,500 | 121433.73 | 121587.6508 | 109.929025
PSOGSA | 40 | 10,500 | 121430.61 | 121593.3507 | 97.7563321
MGBPSOGSA | 40 | 10,500 | 121427.22 | 121597.2207 | 107.605218


Figure 6.28: Comparison of generation output of each generator for thirteen different
Metaheuristics

6.10 Conclusion
In this chapter, a new hybrid variant has been presented that utilizes the strengths of Mean Gbest Particle Swarm Optimization and the Gravitational Search Algorithm. The main idea is to integrate the exploration ability of the Gravitational Search Algorithm with the exploitation ability of Mean Gbest PSO. The hybrid approach has been tested on 23 classical functions, the Iris dataset and the Economic Dispatch problem, and its performance has been compared with that of several metaheuristics.
The hybrid strategy avoids premature convergence of the search process to a local optimum and provides better exploration of the search space. As a result, the proposed variant performs better than several other metaheuristics.
