
DELHI TECHNOLOGICAL UNIVERSITY

Department of Mechanical Engineering

Advanced Operations Research – IEM 6405

Assignment

Submitted by:
Kumar Amit (2k22/IEM/07)
Subadeep Das (2k22/IEM/12)
Samir Kumar (2k22/IEM/09)

Submitted to:
Dr. Girish Kumar
Professor, Dept. of Mechanical Engineering
Table of Contents

1. Title
2. Motivation
3. Problem Statement 1
4. Methodology
5. Verification
6. Code Implementation
7. Problem Statement 2
8. Methodology
9. Verification
10. Code Implementation
11. Project Outcome
➢ Title:

Utilizing Particle Swarm Optimization for Constrained Multi-Variable Nonlinear Optimization

➢ Motivation:

The field of optimization is integral to a wide range of scientific, engineering, and practical applications. In many real-world scenarios, optimization problems involve multiple variables and are subject to complex constraints. The need to efficiently and effectively solve such problems has spurred the development of various optimization techniques, and one of the promising methods for addressing constrained multi-variable nonlinear optimization problems is Particle Swarm Optimization (PSO). This research proposal aims to investigate the motivation for using PSO in this context and to explore its potential advantages and applications.

The main motivations are as follows:
1. Convergence Speed:
Particle Swarm Optimization (PSO) is renowned for its ability to converge to
the optimal solution relatively quickly, even in high-dimensional search
spaces. This characteristic is particularly valuable for real-time
decision-making and resource allocation in engineering, finance, and
logistics, where time is often of the essence.

2. Scalability:
Constrained multi-variable nonlinear optimization problems often involve a
significant number of decision variables. PSO is highly scalable, making it a
suitable choice for high-dimensional optimization tasks. The swarm-based
nature of PSO allows it to handle large solution spaces efficiently.

3. Simplicity and Ease of Implementation:
PSO is relatively simple to implement, understand, and apply. This simplicity
is advantageous for researchers and practitioners who may not possess
deep expertise in optimization algorithms. This ease of use can lead to
quicker deployment of optimization solutions in practical applications.

4. Robustness:
PSO is known for its robustness in handling complex, non-convex, and
multi-modal optimization landscapes. The exploration and exploitation abilities of
the particle swarm enable it to search for the global optimum while
avoiding convergence to local optima. This robustness is particularly
valuable when dealing with nonlinear and multi-variable problems, where
the solution space can be challenging to navigate.

5. Adaptability:
PSO can be easily extended and customized to address specific problem
constraints. Various adaptation techniques, such as constraint handling
mechanisms and parameter tuning, can be incorporated to suit the unique
requirements of the optimization problem at hand.

6. Applications in Diverse Fields:


PSO has been successfully applied in a wide array of disciplines, including
engineering, biology, economics, and machine learning. Its versatility makes
it a suitable candidate for addressing constrained multi-variable nonlinear
optimization problems across these domains.

In conclusion, the motivation for using Particle Swarm Optimization in the
context of constrained multi-variable nonlinear optimization is clear. Its
convergence speed, scalability, simplicity, robustness, adaptability, and
successful application in various fields make it an attractive
choice for solving real-world problems. This research proposal seeks to
delve deeper into the application of PSO in this context, exploring novel
approaches and techniques to further enhance its performance and
applicability.

➢ Problem Statement 1:

1. Objective function:

Maximize $f(x_1, x_2) = x_1^3 + 3x_2^3 + 3x_1^2 + 3x_2^2 + 24$

2. Subject to:

$-3 \le x_1, x_2 \le 3$

➢ Methodology:

Particle Swarm Optimization (PSO) is an optimization algorithm inspired by the
social behaviour of birds and fish. In PSO, a population of particles moves
through a search space to find the optimal solution.

Algorithm Overview:

1. Initialize a population of particles with random positions and velocities within the search space.
2. Evaluate the fitness of each particle based on the problem's objective function.
3. Update each particle's best-known position (pbest) and the global best-known position (gbest) based on their current fitness.
4. Update the velocity and position of each particle using the following equations (a single-particle sketch in Python follows this list):

a. Velocity update:

$V_i(t+1) = w \cdot V_i(t) + c_1 r_1 \left( p_{best,i} - X_i(t) \right) + c_2 r_2 \left( g_{best} - X_i(t) \right)$

b. Position update:

$X_i(t+1) = X_i(t) + V_i(t+1)$

Where:
- $V_i(t+1)$ is the velocity of particle $i$ at time $t+1$.
- $X_i(t+1)$ is the position of particle $i$ at time $t+1$.
- $w$ is the inertia weight, controlling the trade-off between exploration and exploitation.
- $c_1$ and $c_2$ are acceleration coefficients that control the influence of the particle's best-known position and the global best-known position, respectively.
- $r_1$ and $r_2$ are random values in the range [0, 1].
- $p_{best,i}$ is the best-known position of particle $i$.
- $g_{best}$ is the global best-known position among all particles.
5. Repeat steps 2-4 for a predefined number of iterations or until a termination condition is met (e.g., a satisfactory solution is found).
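
To make steps 3 and 4 concrete, here is a minimal single-particle sketch in Python. The starting values are particle 1's position, velocity, pbest, and the swarm's gbest from Iteration 1 of the worked example below, together with the fixed values $r_1 = 0.8675$ and $r_2 = 0.0065$ used in the hand calculation; the update reproduces particle 1's row in Iteration 2:

w, c1, c2 = 0.8, 2.0, 2.0          # inertia weight and acceleration coefficients
r1, r2 = 0.8675, 0.0065            # fixed random values from the worked example
x = [2.2288, 1.6543]               # particle 1's position X_i(t)
v = [0.2345, 0.5876]               # particle 1's velocity V_i(t)
pbest = [2.2288, 1.6543]           # particle 1's best-known position
gbest = [-0.2368, 2.9888]          # swarm's best-known position

for d in range(2):                 # one velocity and position update per dimension
    v[d] = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
    x[d] = x[d] + v[d]             # X_i(t+1) = X_i(t) + V_i(t+1)

print(v)   # approx. [0.1555, 0.4874], matching Iteration 2 below
print(x)   # approx. [2.3843, 2.1417], matching Iteration 2 below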

➢ Steps:

1. PSO parameter settings (randomly chosen):
Population size = 5; dimension of the problem = 2; $c_1 = c_2 = 2$; inertia weight $w = 0.8$; $r_1 = 0.8675$, $r_2 = 0.0065$.

2. Iteration 1:
a. Randomly choose each velocity component between 0 and 1.
b. Randomly choose each position component between -3 and 3.
c. Calculate the fitness value f(X).
d. Since there is no previous iteration, pbest = X.

particle x1 x2 f(X) v1 v2 pbest (x1) pbest(x2)

1 2.2288 1.6543 71.7665 0.2345 0.5876 2.2288 1.6543

2 -2.7863 -2.6234 -7.8588 0.3456 0.7823 -2.7863 -2.6234

3 1.9854 0.8734 47.9388 0.4567 0.6234 1.9854 0.8734

4 -0.2368 2.9888 131.0499 0.7654 0.8976 -0.2368 2.9888

5 2.9865 -1.8765 68.1356 0.9965 0.6643 2.9865 -1.8765

gbest = (-0.2368, 2.9888)
Maximum fitness value = 131.0499
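
The f(X) column can be reproduced directly from the objective function. A quick sketch that recomputes the five Iteration-1 fitness values from the positions in the table:

def f(x1, x2):
    return x1**3 + 3*x2**3 + 3*x1**2 + 3*x2**2 + 24

positions = [(2.2288, 1.6543), (-2.7863, -2.6234), (1.9854, 0.8734),
             (-0.2368, 2.9888), (2.9865, -1.8765)]
for x1, x2 in positions:
    print(round(f(x1, x2), 4))   # reproduces the f(X) column above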

3. Iteration 2:

particle x1 x2 f(X) v1 v2 pbest (x1) pbest(x2)

1 2.3843 2.1417 97.8439 0.1555 0.4874 2.3843 2.1417

2 -2.4767 -1.9246 16.9356 0.3096 0.6988 -2.4767 -1.9246

3 2.3219 1.3996 66.7928 0.3365 0.5262 2.3219 1.3996

4 0.3755 3.0000 132.4760 0.6123 0.7181 0.3755 3.0000

5 3.0000 -1.2818 76.6109 0.7553 0.5947 3.0000 -1.2818

Now:
gbest = (0.3755, 3.0000)
Maximum fitness value = 132.4760

4. Iteration 3:

particle x1 x2 f(X) v1 v2 pbest (x1) pbest(x2)

1 2.4827 2.5428 126.5168 0.0983 0.4011 2.4827 2.5428

2 -2.1919 -1.3015 26.3500 0.2848 0.6231 -2.1919 -1.3015

3 2.5657 1.8414 89.5431 0.2439 0.4418 2.5657 1.8414

4 0.8654 3.0000 134.8947 0.4899 0.5745 0.8654 3.0000

5 3.0000 -0.7504 78.4217 0.5701 0.5314 3.0000 -0.7504

gbest = (0.8654, 3.0000)
Maximum fitness value = 134.8947

5. Iteration 4:

particle x1 x2 f(X) v1 v2 pbest (x1) pbest(x2)

1 2.5403 2.8697 155.3510 0.0576 0.3268 2.5403 2.8697

2 -1.9243 -0.7472 28.4067 0.2676 0.5544 -1.9243 -0.7472

3 2.7387 2.2099 114.0720 0.1730 0.3685 2.7387 2.2099

4 1.2573 3.0000 138.7295 0.3919 0.4596 1.2573 3.0000

5 3.0000 -0.2765 78.1659 0.4283 0.4739 3.0000 -0.7504

gbest = (2.5403, 2.8697)
Maximum fitness value = 155.3510

6. Iteration 5:

particle x1 x2 f(X) v1 v2 pbest (x1) pbest(x2)

1 2.5697 3.0000 168.7798 0.0294 0.2632 2.5697 3.0000

2 -1.6689 -0.2550 27.8527 0.2554 0.4922 -1.9243 -0.7472

3 2.8579 2.5149 138.5398 0.1191 0.3051 2.8579 2.5149

4 1.5708 3.0000 143.2775 0.3135 0.3677 1.5708 3.0000

5 3.0000 -0.6770 78.4441 0.3200 -0.4005 3.0000 -0.6770

gbest = (2.5697, 3.0000)
Maximum fitness value = 168.7798

7. Iteration 6:

particle x1 x2 f(X) v1 v2 pbest (x1) pbest(x2)

1 2.6231 3.0000 170.6892 0.0367 0.2092 2.6231 3.0000

2 -1.8518 -0.6730 28.3817 -0.1995 -0.4163 -1.8518 -0.6730

3 2.9792 2.7623 163.1909 0.1047 0.2490 2.9792 2.7623

4 1.8644 3.0000 148.9075 0.2769 0.2928 1.8644 3.0000

5 3.0000 -0.9526 78.1290 0.2637 -0.2739 3.0000 -0.6787

gbest = (2.6231, 3.0000)
Maximum fitness value = 170.6892

8. Iteration 7:

particle x1 x2 f(X) v1 v2 pbest (x1) pbest(x2)

1 2.6524 3.0000 171.7653 0.0293 0.1673 2.6524 3.0000

2 -1.9532 -0.9582 28.1086 -0.1015 -0.2853 -1.8518 -0.6730

3 3.0000 2.9646 182.5313 0.0791 0.2023 3.0000 2.9646

4 2.0957 3.0000 154.3810 0.2314 0.2342 2.0957 3.0000

5 3.0000 -0.6451 78.4431 0.2061 0.3075 3.0000 -0.6451

gbest = (3.0000, 2.9646)
Maximum fitness value = 182.5313

9. Iteration 8:

particle x1 x2 f(X) v1 v2 pbest (x1) pbest(x2)

1 2.6804 3.0000 172.8096 0.0280 0.1334 2.6804 3.0000

2 -1.7940 -0.6405 28.3239 0.1593 0.3177 -1.7940 -0.6405

3 3.0000 3.0000 186.0000 0.0633 0.1618 3.0000 3.0000

4 2.2926 3.0000 159.8180 0.1969 0.1869 2.2926 3.0000

5 3.0000 -0.3522 78.2411 0.1649 0.2929 3.0000 -0.6451

gbest = (3.0000, 3.0000)
Maximum fitness value = 186.0000

10. Iteration 9:

particle x1 x2 f(X) v1 v2 pbest (x1) pbest(x2)

1 2.7069 3.0000 173.8161 0.0265 0.1067 2.7069 3.0000

2 -1.6043 -0.3390 27.8200 0.1897 0.3015 -1.7940 -0.6405

3 3.0000 3.0000 186.0000 0.0506 0.1295 3.0000 3.0000

4 2.4593 3.0000 165.0183 0.1667 0.1495 2.4593 3.0000

5 3.0000 -0.5825 78.4250 0.1319 -0.2303 3.0000 -0.5825

gbest = (3.0000, 3.0000)
Maximum fitness value = 186.0000
There is no further improvement, hence the final fitness value has been attained.

SOLUTION:
gbest = (3, 3)
Maximum fitness value = 186

➢ Verification:
From the classical method using the Karush-Kuhn-Tucker (KKT) conditions:

Given: Maximize $f(x_1, x_2) = x_1^3 + 3x_2^3 + 3x_1^2 + 3x_2^2 + 24$

Subject to: $x_1 \ge -3$, $x_1 \le 3$, $x_2 \ge -3$, $x_2 \le 3$

• Solution:
1. Convert the objective function into minimization form:
Minimize $-f(x_1, x_2) = -x_1^3 - 3x_2^3 - 3x_1^2 - 3x_2^2 - 24$

2. Convert the constraints into $\le$ form:
$-x_1 \le 3$, $x_1 \le 3$, $-x_2 \le 3$, $x_2 \le 3$

3. KKT conditions:
1. Optimality condition:
$\frac{\partial f}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j(x)}{\partial x_i} = 0, \quad i = 1, 2, \dots, n$

2. Feasibility condition:
$g_j(x) \le b_j, \quad j = 1, 2, \dots, m$

3. Complementary slackness condition:
$\lambda_j \left( g_j(x) - b_j \right) = 0, \quad j = 1, 2, \dots, m$

4. Non-negativity of the Lagrange multipliers:
$\lambda_j \ge 0$

Optimality condition:
Here $n = 2$ and $m = 4$ (multipliers $\lambda_1, \lambda_2, \lambda_3, \lambda_4$):
$-3x_1^2 - 6x_1 - \lambda_1 + \lambda_2 = 0$ ... (1)
$-9x_2^2 - 6x_2 - \lambda_3 + \lambda_4 = 0$ ... (2)

Feasibility condition:
$-x_1 \le 3$ ... (3)
$x_1 \le 3$ ... (4)
$-x_2 \le 3$ ... (5)
$x_2 \le 3$ ... (6)

Complementary slackness:
$\lambda_1 (-x_1 - 3) = 0$ ... (7)
$\lambda_2 (x_1 - 3) = 0$ ... (8)
$\lambda_3 (-x_2 - 3) = 0$ ... (9)
$\lambda_4 (x_2 - 3) = 0$ ... (10)

Non-negativity condition:
$\lambda_1, \lambda_2, \lambda_3, \lambda_4 \ge 0$

• One of the cases is:
$\lambda_2, \lambda_4 \ne 0$ and $\lambda_1, \lambda_3 = 0$.
Substituting into eqs. (8) and (10) gives $x_1 = 3$ and $x_2 = 3$.
Both values of $x_1$ and $x_2$ satisfy the feasibility conditions (3)-(6).
From eq. (1) we get $\lambda_2 = 3(3^2) + 6(3) = 45 \ge 0$.
From eq. (2) we get $\lambda_4 = 9(3^2) + 6(3) = 99 \ge 0$.
Thus (3, 3) is a KKT point with non-negative multipliers, and comparing the objective values at the feasible KKT candidates shows that (3, 3) attains the largest one. Therefore $f(3, 3) = 186$ is the maximum over the given range.
Hence our result for PSO is verified.
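
These multiplier and objective values are easy to confirm numerically. A quick sketch in plain Python, with the gradient expressions entered by hand from eqs. (1) and (2):

x1 = x2 = 3.0
lam2 = 3 * x1**2 + 6 * x1          # from eq. (1) with lambda_1 = 0
lam4 = 9 * x2**2 + 6 * x2          # from eq. (2) with lambda_3 = 0
f_val = x1**3 + 3*x2**3 + 3*x1**2 + 3*x2**2 + 24
print(lam2, lam4, f_val)           # 45.0 99.0 186.0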
➢ Python Code Implementation:
import random

# Objective to be maximized. It is returned negated, so the PSO loop below
# can treat the problem as a minimization (smaller values are better).
def objective_function(x1, x2):
    return -(x1**3 + 3*x2**3 + 3*x1**2 + 3*x2**2 + 24)

# PSO parameters
num_particles = 30
max_iter = 100
w = 0.8
c1 = 2.0
c2 = 2.0

# Define the search space
x1_min, x1_max = -3, 3
x2_min, x2_max = -3, 3

# Initialize the particles
particles = []
for _ in range(num_particles):
    x1 = random.uniform(x1_min, x1_max)
    x2 = random.uniform(x2_min, x2_max)
    particles.append({
        'position': [x1, x2],
        'velocity': [0, 0],
        'best_position': [x1, x2],
        'best_value': objective_function(x1, x2),
    })

# Initialize the global best as a copy, so that updating it later does not
# overwrite the personal best of the particle it was taken from
global_best = dict(min(particles, key=lambda p: p['best_value']))

# Main PSO loop
for iteration in range(max_iter):
    for particle in particles:
        r1, r2 = random.random(), random.random()
        new_velocity = [
            w * particle['velocity'][0]
            + c1 * r1 * (particle['best_position'][0] - particle['position'][0])
            + c2 * r2 * (global_best['best_position'][0] - particle['position'][0]),
            w * particle['velocity'][1]
            + c1 * r1 * (particle['best_position'][1] - particle['position'][1])
            + c2 * r2 * (global_best['best_position'][1] - particle['position'][1]),
        ]

        new_position = [
            particle['position'][0] + new_velocity[0],
            particle['position'][1] + new_velocity[1],
        ]

        # Clamp the new position so it stays within the search space
        new_position[0] = max(x1_min, min(x1_max, new_position[0]))
        new_position[1] = max(x2_min, min(x2_max, new_position[1]))

        # Update the particle's personal best
        new_value = objective_function(new_position[0], new_position[1])
        if new_value < particle['best_value']:
            particle['best_position'] = new_position
            particle['best_value'] = new_value

        # Update the global best
        if new_value < global_best['best_value']:
            global_best['best_position'] = new_position
            global_best['best_value'] = new_value

        particle['position'] = new_position
        particle['velocity'] = new_velocity

    print(f"Iteration {iteration + 1}: Best Value = {global_best['best_value']}")

print(f"Optimal solution found at x1 = {global_best['best_position'][0]}, "
      f"x2 = {global_best['best_position'][1]}")
print(f"Optimal value = {global_best['best_value']}")

➢ Output:
Optimal solution found at x1 = 3, x2 = 3
Optimal value = -186

Since the code minimizes the negated objective, the reported value of -186 corresponds to a maximum of f(3, 3) = 186, in agreement with the hand calculation and the KKT verification.

➢ Problem Statement 2:

1. Objective function:

Minimize $f(x_1, x_2) = x_1^3 + 3x_2^3 + 3x_1^2 + 3x_2^2 + 24e^{x_1}$

2. Subject to:

$-3 \le x_1, x_2 \le 3$

3. PSO parameter settings (randomly chosen):
Population size = 5; dimension of the problem = 2; $c_1 = c_2 = 2$; inertia weight $w = 0.8$; $r_1 = 0.8675$, $r_2 = 0.0065$.

4. Iteration 1:
a. Randomly choose each velocity component between 0 and 1.
b. Randomly choose each position component between -3 and 3.
c. Calculate the fitness value f(X).
d. Since there is no previous iteration, pbest = X.

particle x1 x2 f(X) v1 v2 pbest(x1) pbest(x2)
1 2.2288 1.6543 270.6956 0.2345 0.5876 2.2288 1.6543
2 -2.7863 -2.6234 -30.3792 0.3456 0.7823 -2.7863 -2.6234
3 1.9854 0.8734 198.7058 0.4567 0.6234 1.9854 0.8734
4 -0.2368 2.9888 125.9895 0.7654 0.8976 -0.2368 2.9888
5 2.9865 -1.8765 519.7245 0.9965 0.6643 2.9865 -1.8765

Since this is a minimization problem, gbest is the position with the lowest fitness value:
gbest = (-2.7863, -2.6234)
Minimum fitness value = -30.3792
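
The fitness values now include the exponential term. Particle 1's entry, for instance, can be checked with a one-off computation:

import math

def f(x1, x2):
    return x1**3 + 3*x2**3 + 3*x1**2 + 3*x2**2 + 24*math.exp(x1)

print(round(f(2.2288, 1.6543), 4))   # approx. 270.6956, matching particle 1 above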

5. Iteration 2:

particle x1 x2 f(X) v1 v2 pbest(x1) pbest(x2)
1 2.3512 2.0688 320.9403 0.1224 0.4145 2.2288 1.6543
2 -2.5098 -1.9976 -6.9030 0.2765 0.6258 -2.7863 -2.6234
3 2.2887 1.3267 276.6860 0.3033 0.4533 1.9854 0.8734
4 0.3424 3.0000 142.1908 0.5792 0.6451 -0.2368 2.9888
5 3.0000 -1.3548 534.0995 0.7222 0.5217 2.9865 -1.8765

No improvement this iteration, hence:

gbest = (-2.7863, -2.6234)
Minimum fitness value = -30.3792

6. Iteration 3:

particle x1 x2 f(X) v1 v2 pbest(x1) pbest(x2)
1 2.1700 1.6202 255.1723 -0.1812 -0.4485 2.1700 1.6202
2 -2.7719 -2.5909 -28.7826 -0.2621 -0.5933 -2.7863 -2.6234
3 1.9391 0.8515 189.4665 -0.3496 -0.4752 1.9391 0.8515
4 -0.2398 3.0000 127.0411 -0.5822 0.4236 -0.2368 2.9888
5 3.0000 -1.8591 527.1455 0.4791 -0.5043 2.9865 -1.8765

Here also, no improvement, hence:

gbest = (-2.7863, -2.6234)
Minimum fitness value = -30.3792

7. Iteration 4:

particle x1 x2 f(X) v1 v2 pbest(x1) pbest(x2)
1 1.9605 1.2063 199.1752 -0.2094 -0.4140 1.9605 1.2063
2 -3.0000 -3.0000 -52.8051 -0.2348 -0.5315 -3.0000 -3.0000
3 1.5980 0.4262 131.1595 -0.3411 -0.4253 1.5980 0.4262
4 -0.7334 3.0000 120.7453 -0.4936 0.2463 -0.7304 3.0000
5 3.0000 -2.3027 515.3309 0.2846 -0.4436 3.0000 -2.3201

Improvement:

gbest = (-3, -3)
Minimum fitness value = -52.8051

8. Iteration 5:

particle x1 x2 f(X) v1 v2 pbest(x1) pbest(x2)
1 1.7285 3.0000 257.3043 -0.2320 -0.3859 1.9605 1.2063
2 -3.0000 -3.0000 -52.8051 -0.1879 -0.4252 -3.0000 -3.0000
3 1.2653 0.0414 91.8953 -0.3327 -0.3848 1.2653 0.0414
4 -1.1525 3.0000 118.0340 -0.4191 0.1191 -1.1495 3.0000
5 3.0000 -2.6969 499.0286 0.1497 -0.3942 3.0000 -2.7143

Now there is no improvement in either the fitness value or the position of gbest:

gbest = (-3, -3)
Minimum fitness value = -52.8051

Hence the final fitness value has been attained.

SOLUTION:
gbest = (-3, -3)
Minimum fitness value = -52.8051

➢ Verification:
From the classical method using the KKT conditions:

Given: Minimize $f(x_1, x_2) = x_1^3 + 3x_2^3 + 3x_1^2 + 3x_2^2 + 24e^{x_1}$

Subject to: $x_1 \ge -3$, $x_1 \le 3$, $x_2 \ge -3$, $x_2 \le 3$

Optimality condition:
Here $n = 2$ and $m = 4$ (multipliers $\lambda_1, \lambda_2, \lambda_3, \lambda_4$):
$3x_1^2 + 6x_1 + 24e^{x_1} - \lambda_1 + \lambda_2 = 0$ ... (1)
$9x_2^2 + 6x_2 - \lambda_3 + \lambda_4 = 0$ ... (2)

Feasibility condition:
$-x_1 \le 3$ ... (3)
$x_1 \le 3$ ... (4)
$-x_2 \le 3$ ... (5)
$x_2 \le 3$ ... (6)

Complementary slackness:
$\lambda_1 (-x_1 - 3) = 0$ ... (7)
$\lambda_2 (x_1 - 3) = 0$ ... (8)
$\lambda_3 (-x_2 - 3) = 0$ ... (9)
$\lambda_4 (x_2 - 3) = 0$ ... (10)

Non-negativity condition:
$\lambda_1, \lambda_2, \lambda_3, \lambda_4 \ge 0$

• One of the cases is:
$\lambda_2, \lambda_4 = 0$ and $\lambda_1, \lambda_3 \ne 0$.
Substituting into eqs. (7) and (9) gives $x_1 = -3$ and $x_2 = -3$.
Both values of $x_1$ and $x_2$ satisfy the feasibility conditions (3)-(6).
From eq. (1) we get $\lambda_1 = 3(9) + 6(-3) + 24e^{-3} = 10.19 \ge 0$.
From eq. (2) we get $\lambda_3 = 9(9) + 6(-3) = 63 \ge 0$.
Thus (-3, -3) is a KKT point with non-negative multipliers, and comparing the objective values at the feasible KKT candidates shows that (-3, -3) attains the smallest one. Therefore $f(-3, -3) = -52.8051$ is the minimum over the given range.
Hence our result for PSO is verified.
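
As before, the multipliers and the optimal value can be confirmed numerically with a few lines of plain Python:

import math

x1 = x2 = -3.0
lam1 = 3 * x1**2 + 6 * x1 + 24 * math.exp(x1)   # from eq. (1) with lambda_2 = 0
lam3 = 9 * x2**2 + 6 * x2                        # from eq. (2) with lambda_4 = 0
f_val = x1**3 + 3*x2**3 + 3*x1**2 + 3*x2**2 + 24*math.exp(x1)
print(lam1, lam3, f_val)                         # approx. 10.19, 63.0, -52.8051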

➢ Python Code Implementation:

import random
import math

# Define the function to be minimized
def objective_function(x1, x2):
    return x1**3 + 3*x2**3 + 3*x1**2 + 3*x2**2 + 24*math.exp(x1)

# Define the PSO algorithm
def particle_swarm_optimization(objective_function, num_particles,
                                num_iterations, range_min, range_max,
                                inertia_weight, cognitive_weight, social_weight):
    # Initialize the particles, their velocities, and their personal bests
    # within the specified range
    particles = []
    velocities = []
    personal_best_positions = []
    personal_best_values = []
    global_best_position = None
    global_best_value = float('inf')

    for _ in range(num_particles):
        x1 = random.uniform(range_min, range_max)
        x2 = random.uniform(range_min, range_max)
        particles.append((x1, x2))
        velocities.append((0, 0))
        personal_best_positions.append((x1, x2))
        personal_best_values.append(float('inf'))

    for _ in range(num_iterations):
        for i in range(num_particles):
            x1, x2 = particles[i]

            # Evaluate the objective function
            current_value = objective_function(x1, x2)

            # Update the personal best
            if current_value < personal_best_values[i]:
                personal_best_values[i] = current_value
                personal_best_positions[i] = (x1, x2)

            # Update the global best
            if current_value < global_best_value:
                global_best_value = current_value
                global_best_position = (x1, x2)

            # Update particle velocity and position: the cognitive term pulls
            # towards the personal best, the social term towards the global best
            v1, v2 = velocities[i]
            r1, r2 = random.random(), random.random()
            p1, p2 = personal_best_positions[i]
            v1 = (inertia_weight * v1
                  + cognitive_weight * r1 * (p1 - x1)
                  + social_weight * r2 * (global_best_position[0] - x1))
            v2 = (inertia_weight * v2
                  + cognitive_weight * r1 * (p2 - x2)
                  + social_weight * r2 * (global_best_position[1] - x2))
            x1 += v1
            x2 += v2

            # Ensure particles stay within the specified range
            x1 = max(range_min, min(x1, range_max))
            x2 = max(range_min, min(x2, range_max))

            particles[i] = (x1, x2)
            velocities[i] = (v1, v2)

    return global_best_position, global_best_value

# Parameters for the PSO algorithm
num_particles = 30
num_iterations = 100
range_min = -3
range_max = 3
inertia_weight = 0.8
cognitive_weight = 2
social_weight = 2

# Run the PSO algorithm
best_position, best_value = particle_swarm_optimization(
    objective_function, num_particles, num_iterations, range_min, range_max,
    inertia_weight, cognitive_weight, social_weight)

print("Best position:", best_position)
print("Best value:", best_value)

➢ Output:
Best position: (-3, -3)
Best value: -52.805110359171266

➢ Project Outcome:

From the above two illustrations, we can see that the PSO algorithm yields optimum points that are very close to (here, exactly equal to) the true optima obtained analytically from the KKT conditions.
