
OPTIMIZATION TECHNIQUES IN AI & ML
EAEPE21
PRACTICAL FILE
2024-25
Submitted by:
Name: Parth Sahu
Roll No.: 2022UEA6524
Branch: ECAM
Section: 01
Semester: 6

Submitted to:
Dr. Richa Bhatia

Netaji Subhas University of Technology


Geeta Colony, New Delhi
Delhi – 110031
INDEX

S. No.  Aim                                                                                     Date    Sign

1. To implement and analyze the Genetic Algorithm (GA) for solving an optimization problem using MATLAB.
2. To implement and analyze the Simulated Annealing (SA) algorithm for solving an optimization problem using MATLAB.
3. To implement and analyze the Ant Colony Optimization (ACO) algorithm for solving the Travelling Salesman Problem (TSP) using MATLAB.
4. To implement and analyze the Particle Swarm Optimization (PSO) algorithm for solving an optimization problem using MATLAB.
5. To implement and analyze the Cuckoo Search algorithm for solving an optimization problem using MATLAB.
EXPERIMENT 1
Aim: To implement and analyze the Genetic Algorithm (GA) for solving an optimization problem using
MATLAB.

Problem Statement: Genetic Algorithms (GA) are population-based optimization algorithms

inspired by natural selection. They optimize a function by iteratively evolving a population of candidate
solutions using selection, crossover, and mutation.
In this experiment, we aim to optimize the OneMax problem, where the objective is to maximize the number of ones in a binary chromosome. The optimization function is defined as:

f(x) = Σ xᵢ, i = 1, …, n,

where x is a binary vector of length n and xᵢ ∈ {0, 1}. The goal is to maximize f(x), which is the total number of ones in the chromosome; for example, f([1 0 1 1 0]) = 3.

Software Used: MATLAB


Theory:
GA consists of a population of candidate solutions (chromosomes), where each individual undergoes selection, crossover, and mutation to evolve towards an optimal solution.
Steps of GA:
1. Initialize a population of individuals with random binary chromosomes.
2. Evaluate the fitness of each individual based on the objective function.
3. Select individuals using roulette wheel selection (sketched after this list).
4. Apply crossover to generate offspring.
5. Mutate offspring to maintain genetic diversity.
6. Repeat until a stopping criterion (such as a maximum number of generations) is met.
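As a standalone illustration of the roulette-wheel selection in step 3, the short fragment below uses illustrative fitness values (the variable names mirror those in the full listing further down) and selects one index with probability proportional to fitness:

% Roulette-wheel selection sketch (illustrative fitness values)
fitnessValues = [3; 7; 5; 1];                        % example fitness of 4 individuals
probabilities = fitnessValues / sum(fitnessValues);  % selection probabilities
cumProb = cumsum(probabilities);                     % cumulative distribution
r = rand;                                            % uniform random number in [0, 1)
selectedIndex = find(cumProb >= r, 1, 'first');      % fitter individuals are chosen more often

Repeating this draw populationSize times produces the mating pool used for crossover.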
Hyperparameters used:
• Population Size: 20
• Chromosome Length: 10 bits
• Maximum Generations: 50
• Mutation Rate: 0.05
• Crossover Probability: 0.8

Code:

% Parameters
populationSize = 20; % Number of individuals
chromosomeLength = 10; % Number of bits per individual
maxGenerations = 50; % Number of generations
mutationRate = 0.05; % Mutation probability
population = randi([0, 1], populationSize, chromosomeLength);

% Display Initial Population (First 5)


disp('Initial Population (First 5):');
disp(population(1:5, :));

% Fitness Function (OneMax - Maximizing number of 1s)


fitnessFunction = @(x) sum(x, 2);

% Track fitness values for visualization


fitnessHistory = zeros(maxGenerations, 1);
bestIndividual = zeros(1, chromosomeLength);
bestFitness = -inf;

for generation = 1:maxGenerations


% Evaluate Fitness
fitnessValues = fitnessFunction(population);
[maxFitness, maxIndex] = max(fitnessValues);
fitnessHistory(generation) = maxFitness; % Store best fitness

% Track best individual


if maxFitness > bestFitness
bestFitness = maxFitness;
bestIndividual = population(maxIndex, :);
end

% Selection (Roulette Wheel Selection)


probabilities = fitnessValues / sum(fitnessValues);
cumProb = cumsum(probabilities);
selectedIndices = zeros(populationSize, 1);
for i = 1:populationSize
r = rand;
selectedIndices(i) = find(cumProb >= r, 1, 'first');
end
selectedPopulation = population(selectedIndices, :);

% Crossover (Single Point)


for i = 1:2:populationSize-1
if rand < 0.8 % Crossover probability
crossoverPoint = randi([1, chromosomeLength-1]);
temp = selectedPopulation(i, crossoverPoint+1:end);
selectedPopulation(i, crossoverPoint+1:end) = ...
selectedPopulation(i+1, crossoverPoint+1:end);
selectedPopulation(i+1, crossoverPoint+1:end) = temp;
end
end

% Mutation
mutationMask = rand(size(selectedPopulation)) < mutationRate;
selectedPopulation = xor(selectedPopulation, mutationMask);

% Update Population
population = selectedPopulation;
end

% Display Final Population (First 5)


disp('Final Population (First 5):');
disp(population(1:5, :));
% Display Best Individual

disp('Best Individual Found:');


disp(bestIndividual);
disp(['Best Fitness: ', num2str(bestFitness)]);
Output:

Conclusion

The implementation of Genetic Algorithm for this problem demonstrated:

• The effectiveness of evolutionary strategies in optimization.


• The role of selection, crossover, and mutation in improving solutions.
• The ability of GA to evolve towards an optimal solution efficiently.

This experiment highlights how Genetic Algorithms can be applied to a wide range of optimization problems.
EXPERIMENT 2
Aim: To implement and analyze the Simulated Annealing (SA) algorithm for solving an optimization
problem using MATLAB.

Problem Statement: Simulated Annealing is a probabilistic technique used to approximate the global optimum of a given function. In this experiment, we optimize a binary string by maximizing a weighted sum function. The optimization function is defined as:

f(x) = Σ wᵢ xᵢ, i = 1, …, n,

where x is a binary vector of length n and w is a vector of random positive weights. The goal is to maximize f(x), the weighted sum of the bits set in the chromosome.

Software Used: MATLAB


Theory:
Simulated Annealing (SA) is inspired by the physical annealing process in metallurgy, where materials are
heated and then slowly cooled to reach a stable state.
Steps of SA:
1. Initialize with a random solution and set an initial high temperature.
2. Evaluate the fitness of the current solution.
3. Generate a new candidate solution by making small modifications to the current one.
4. Accept or reject the new solution based on probability determined by the temperature.
5. Gradually decrease the temperature and repeat the process until a stopping criterion is met.
Acceptance Probability:
A new solution is always accepted if it improves the fitness (Δf > 0); otherwise it is accepted with probability

P = exp(Δf / T),

where Δf is the change in fitness and T is the current temperature. Because Δf ≤ 0 for a worse solution, this probability shrinks as the temperature falls, and the occasional acceptance of worse solutions allows the search to escape local optima.
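For example, a worse move with Δf = −5 is accepted with probability exp(−5/100) ≈ 0.95 at the initial temperature T = 100, but with probability exp(−5/0.01), essentially zero, near the minimum temperature, so the search becomes increasingly greedy as it cools. A minimal sketch of this acceptance test with illustrative values (it mirrors the test in the listing below):

% Metropolis-style acceptance test (illustrative values)
deltaFitness = -5; temperature = 100;                      % a worse candidate at high temperature
acceptanceProbability = exp(deltaFitness / temperature);   % about 0.95 here
accept = deltaFitness > 0 || rand < acceptanceProbability; % improving moves always accepted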
Hyperparameters used:
• Chromosome Length: 10 bits
• Initial Temperature: 100
• Cooling Rate: 0.95
• Minimum Temperature: 0.01
• Maximum Iterations per Temperature Level: 1000

Code:

% Parameters
chromosomeLength = 10; % Number of bits in the solution
initialTemperature = 100; % Starting temperature
coolingRate = 0.95; % Cooling factor
minTemperature = 0.01; % Stopping condition
maxIterations = 1000; % Maximum number of iterations per temperature level

% Define weight vector for optimization (random positive weights)


weights = randi([1, 10], 1, chromosomeLength);

% Fitness function (maximize weighted sum)


fitnessFunction = @(x) sum(x .* weights);

% Generate Initial Solution (Random 0s and 1s)


currentSolution = randi([0, 1], 1, chromosomeLength);
currentFitness = fitnessFunction(currentSolution);

bestSolution = currentSolution;
bestFitness = currentFitness;

temperature = initialTemperature;

% Track fitness values for visualization


fitnessHistory = [];
temperatureHistory = [];

% Simulated Annealing Loop


while temperature > minTemperature
for iteration = 1:maxIterations
% Generate a new candidate solution by flipping a random bit
newSolution = currentSolution;
flipIndex = randi([1, chromosomeLength]);
newSolution(flipIndex) = ~newSolution(flipIndex);

% Evaluate new solution


newFitness = fitnessFunction(newSolution);

% Determine acceptance probability


deltaFitness = newFitness - currentFitness;
if deltaFitness > 0 || rand < exp(deltaFitness / temperature)
currentSolution = newSolution;
currentFitness = newFitness;
end

% Track the best solution


if currentFitness > bestFitness
bestSolution = currentSolution;
bestFitness = currentFitness;
end

% Store fitness history


fitnessHistory = [fitnessHistory, bestFitness];
temperatureHistory = [temperatureHistory, temperature];
end

% Cool down the temperature


temperature = temperature * coolingRate;
end

% Display Results
disp('Best Solution Found:');
disp(bestSolution);
disp(['Best Fitness: ', num2str(bestFitness)]);

disp('Weight Vector:');
disp(weights);

% Visualization
figure;
subplot(2,1,1);
plot(fitnessHistory, 'b-', 'LineWidth', 1.5);
xlabel('Iterations');
ylabel('Best Fitness');
title('Fitness Convergence Over Iterations');
grid on;

subplot(2,1,2);
plot(temperatureHistory, 'r-', 'LineWidth', 1.5);
xlabel('Iterations');
ylabel('Temperature');
title('Temperature Cooling Schedule');
grid on;
Output:

Conclusion:

The implementation of Simulated Annealing for this problem demonstrated:

• The ability to find an optimal or near-optimal solution through probabilistic acceptance.


• The trade-off between exploration and exploitation in optimization.
• The effectiveness of temperature-controlled probabilistic selection in avoiding local minima.

This experiment highlights how SA can be applied to combinatorial optimization problems efficiently.
EXPERIMENT 3
Aim: To implement and analyze the Ant Colony Optimization (ACO) algorithm for solving the
Travelling Salesman Problem (TSP) using MATLAB.

Problem Statement: The Travelling Salesman Problem (TSP) aims to find the shortest possible route that visits each city exactly once and returns to the starting city. Formally, given a set of cities C = {c₁, c₂, . . . , cₙ} and a distance function d(cᵢ, cⱼ) between cities cᵢ and cⱼ, the objective is to find an ordering (tour) π of the cities that minimizes the total tour length

f(π) = Σ d(c_π(i), c_π(i+1)) + d(c_π(n), c_π(1)), i = 1, …, n−1,

subject to the constraint that each city is visited exactly once.

Software Used: MATLAB


Theory:
ACO is inspired by the foraging behavior of ants, which deposit pheromones to mark paths to food sources,
allowing other ants to follow and refine these paths over time.
Steps of ACO:
1. Initialize pheromone levels and generate a random distance matrix.
2. Each ant constructs a solution using probabilistic transitions based on pheromone and heuristic information (the transition and pheromone-update rules are sketched after this list).
3. Evaluate all solutions and update pheromone levels based on the best solutions found.
4. Repeat the process until convergence or maximum iterations are reached.
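The transition rule in step 2 weights each unvisited city j by pheromone^α times (1/distance)^β, and the pheromone update in step 3 first evaporates a fraction of all pheromone and then deposits an amount proportional to 1/tourLength on the edges each ant used. A minimal sketch with an illustrative 4-city distance matrix (variable names mirror the full listing below):

% ACO transition and pheromone-update sketch (illustrative 4-city instance)
numCities = 4; alpha = 1; beta = 2; evaporationRate = 0.5;
distanceMatrix = [0 2 9 10; 2 0 6 4; 9 6 0 8; 10 4 8 0];   % example symmetric distances
pheromoneMatrix = ones(numCities);                          % uniform initial pheromone
visited = [true false false false];                         % the ant starts at city 1
currentCity = 1;
eta = 1 ./ distanceMatrix(currentCity, :);                  % heuristic desirability (1/distance)
p = (pheromoneMatrix(currentCity, :) .^ alpha) .* (eta .^ beta);
p(visited) = 0;                                             % never revisit a city
p = p / sum(p);                                             % transition probabilities for the next city
pheromoneMatrix = (1 - evaporationRate) * pheromoneMatrix;  % evaporation
% Each ant then deposits 1 / tourLength of pheromone on every edge of its tour (see the full listing below).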
Hyperparameters used:
• Number of Cities (numCities): 10
• Number of Ants (numAnts): 20
• Influence of Pheromone (𝛼): 1
• Influence of Heuristic Information (𝛽): 2
• Pheromone Evaporation Rate (evaporationRate): 0.5
• Number of Iterations (numIterations): 100
• Initial Pheromone Level (initialPheromone): 1
Code:

% MATLAB implementation of ACO for TSP


clc; clear; close all;

% Parameters
numCities = 10;
numAnts = 20;
alpha = 1; beta = 2;
evaporationRate = 0.5;
numIterations = 100;
initialPheromone = 1;

% Generate Distance Matrix


distanceMatrix = randi([10, 100], numCities, numCities);
distanceMatrix = triu(distanceMatrix, 1);
distanceMatrix = distanceMatrix + distanceMatrix'; % Symmetric matrix

% Initialize Pheromone Matrix


pheromoneMatrix = initialPheromone * ones(numCities, numCities);

% ACO Main Loop


bestTour = [];
bestTourLength = inf;

for iteration = 1:numIterations


allTours = zeros(numAnts, numCities);
allTourLengths = zeros(numAnts, 1);

% Construct Solutions
for ant = 1:numAnts
visited = false(1, numCities);
tour = zeros(1, numCities);
tour(1) = randi(numCities);
visited(tour(1)) = true;

for step = 2:numCities


currentCity = tour(step-1);
probabilities = (pheromoneMatrix(currentCity, :) .^ alpha) ...
.* ((1 ./ distanceMatrix(currentCity, :)) .^ beta);
probabilities(visited) = 0;
probabilities = probabilities / sum(probabilities);
nextCity = find(cumsum(probabilities) >= rand, 1);
tour(step) = nextCity;
visited(nextCity) = true;
end
allTours(ant, :) = tour;
allTourLengths(ant) = sum(arrayfun(@(i) distanceMatrix(tour(i), tour(i+1)), 1:numCities-1)) ...
+ distanceMatrix(tour(end), tour(1));
end

% Update Best Solution


[minTourLength, minIndex] = min(allTourLengths);
if minTourLength < bestTourLength
bestTourLength = minTourLength;
bestTour = allTours(minIndex, :);
end

% Update Pheromone Levels


pheromoneMatrix = (1 - evaporationRate) * pheromoneMatrix;
for ant = 1:numAnts
for step = 1:numCities-1
fromCity = allTours(ant, step);
toCity = allTours(ant, step+1);
pheromoneMatrix(fromCity, toCity) = pheromoneMatrix(fromCity, toCity) ...
+ 1 / allTourLengths(ant);
pheromoneMatrix(toCity, fromCity) = pheromoneMatrix(toCity, fromCity) ...
+ 1 / allTourLengths(ant); % symmetric
end
% Add pheromone for returning to start
pheromoneMatrix(allTours(ant, end), allTours(ant, 1)) = ...
pheromoneMatrix(allTours(ant, end), allTours(ant, 1)) + 1 / allTourLengths(ant);
pheromoneMatrix(allTours(ant, 1), allTours(ant, end)) = ...
pheromoneMatrix(allTours(ant, 1), allTours(ant, end)) + 1 / allTourLengths(ant);
end
end

% Display Best Solution


disp('Best Tour Found:');
disp(bestTour);
disp(['Best Tour Length: ', num2str(bestTourLength)]);

% Visualization
figure;
graphObj = graph(distanceMatrix);
plot(graphObj, 'Layout', 'force');
title('Ant Colony Optimization - TSP Solution');
Output:

Conclusion:

The implementation of ACO for the TSP demonstrated:

• The effectiveness of pheromone-based probabilistic decision-making in finding optimized solutions.


• The balance between exploration (new paths) and exploitation (reinforcing good paths).
• The capability of ACO to solve combinatorial optimization problems efficiently.

This experiment highlights the potential of nature-inspired algorithms for solving real-world optimization problems.
EXPERIMENT 4
Aim: To implement and analyze the Particle Swarm Optimization (PSO) algorithm for solving an

optimization problem using MATLAB.

Problem Statement: The optimization problem used in this experiment is the 0/1 knapsack problem. Given n items with values vᵢ and weights wᵢ and a knapsack of capacity W, the objective is to maximize the total value of the selected items,

f(x) = Σ vᵢ xᵢ, i = 1, …, n, subject to Σ wᵢ xᵢ ≤ W, xᵢ ∈ {0, 1},

where x is a binary selection vector (xᵢ = 1 means item i is packed). The goal is to find x that maximizes f(x) without exceeding the capacity; a binary variant of PSO is used for this discrete search space.

Software Used: MATLAB


Theory:
PSO consists of a swarm of particles, each representing a candidate solution. The particles move in the
search space, influenced by their own best-known position and the global best-known position of the swarm.
Steps of PSO:
1. Initialize a swarm of particles with random positions and velocities.
2. Evaluate the fitness of each particle based on the objective function.
3. Update each particle’s velocity based on its own best position and the global best position (the update rule is sketched after this list).
4. Update each particle’s position by adding the velocity to its current position.
5. Repeat until a stopping criterion (such as a maximum number of iterations) is met.
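In the continuous form described above, steps 3 and 4 use the standard update rules v ← w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x) and x ← x + v, where r1 and r2 are uniform random numbers. A minimal, self-contained sketch with illustrative dimensions:

% One iteration of the standard (continuous) PSO update (illustrative sizes)
numParticles = 5; numDimensions = 3;
w = 0.7; c1 = 1.5; c2 = 1.5;
positions  = rand(numParticles, numDimensions);   % current particle positions
velocities = zeros(numParticles, numDimensions);  % current velocities
personalBest = positions;                         % personal best positions so far
globalBest   = positions(1, :);                   % global best position (row vector)
velocities = w * velocities ...
    + c1 * rand(numParticles, numDimensions) .* (personalBest - positions) ...
    + c2 * rand(numParticles, numDimensions) .* (globalBest - positions);
positions = positions + velocities;               % move each particle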
Hyperparameters used:
• Number of Items (numItems): 20
• Knapsack Capacity (capacity): 50
• Number of Particles (numParticles): 30
• Maximum Iterations (maxIterations): 100
• Inertia Weight (inertiaWeight): 0.7
• Cognitive Coefficient (c1): 1.5
• Social Coefficient (c2): 1.5
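Because the knapsack decision variables are binary, the listing below uses a binary PSO variant: each velocity component is squashed through a sigmoid and interpreted as the probability that the corresponding bit is 1. A minimal sketch of that mapping (illustrative values):

% Sigmoid mapping from velocities to bits, as used in the listing below
v = [-2 0 2];                   % example velocity components
p = 1 ./ (1 + exp(-v));         % sigmoid: approximately [0.12 0.50 0.88]
bits = rand(size(v)) < p;       % each bit is 1 with probability p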
Code:

% Problem Parameters
numItems = 20; % Number of items
capacity = 50; % Maximum weight the knapsack can hold
rng(1); % For reproducibility
values = randi([10, 100], 1, numItems); % Random values between 10 and 100
weights = randi([1, 20], 1, numItems); % Random weights between 1 and 20

% PSO Parameters
numParticles = 30; % Number of particles
maxIterations = 100; % Maximum iterations
inertiaWeight = 0.7; % Inertia weight
c1 = 1.5; % Cognitive coefficient
c2 = 1.5; % Social coefficient

% Initialize Particles
positions = rand(numParticles, numItems); % Continuous positions
velocities = rand(numParticles, numItems) * 0.1; % Small initial velocities

% Random initial binary positions (each bit set to 1 with probability 0.5)
binaryPositions = rand(numParticles, numItems) > 0.5;

% Fitness Function: Maximize total value while keeping weight <= capacity
fitnessFunction = @(x) computeKnapsackFitness(x, values, weights, capacity);

% Evaluate initial fitness


fitnessValues = arrayfun(@(i) fitnessFunction(binaryPositions(i, :)), 1:numParticles)';

% Initialize personal and global bests


personalBestPositions = binaryPositions;
personalBestFitness = fitnessValues;
[globalBestFitness, bestIndex] = max(fitnessValues);
globalBestPosition = binaryPositions(bestIndex, :);

% Store fitness history


fitnessHistory = zeros(maxIterations, 1);

for iter = 1:maxIterations


% Update velocities
velocities = inertiaWeight * velocities + ...
c1 * rand(numParticles, numItems) .* (personalBestPositions - binaryPositions) + ...
c2 * rand(numParticles, numItems) .* (globalBestPosition - binaryPositions);

% Update positions using sigmoid function


probabilities = 1 ./ (1 + exp(-velocities)); % Sigmoid function
binaryPositions = rand(numParticles, numItems) < probabilities; % Convert to binary
% Evaluate fitness
fitnessValues = arrayfun(@(i) fitnessFunction(binaryPositions(i, :)), 1:numParticles)';

% Update personal bests


updateMask = fitnessValues > personalBestFitness;
personalBestFitness(updateMask) = fitnessValues(updateMask);
personalBestPositions(updateMask, :) = binaryPositions(updateMask, :);

% Update global best


[currentBestFitness, bestIndex] = max(fitnessValues);
if currentBestFitness > globalBestFitness
globalBestFitness = currentBestFitness;
globalBestPosition = binaryPositions(bestIndex, :);
end

% Store best fitness for visualization


fitnessHistory(iter) = globalBestFitness;
end

% Display results
disp('Best Items Selected (1 = Selected, 0 = Not Selected):');
disp(globalBestPosition);
disp(['Best Total Value: ', num2str(globalBestFitness)]);
disp(['Total Weight: ', num2str(sum(weights .* globalBestPosition))]);

% Plot Fitness Convergence


figure;
plot(1:maxIterations, fitnessHistory, 'r-', 'LineWidth', 1.5);
xlabel('Iterations');
ylabel('Best Fitness');
title('PSO for Knapsack Problem - Fitness Convergence');
grid on;

% Fitness Function
function fitness = computeKnapsackFitness(solution, values, weights, capacity)
totalValue = sum(solution .* values);
totalWeight = sum(solution .* weights);

% Penalize solutions that exceed capacity


if totalWeight > capacity
fitness = 0; % Invalid solution
else
fitness = totalValue; % Valid solution
end
end
Output:

Conclusion:

The implementation of Particle Swarm Optimization for this problem demonstrated:

• The effectiveness of swarm intelligence in optimization.


• How personal and global best positions influence the movement of particles.
• The ability of PSO to converge towards an optimal or near-optimal solution efficiently.

This experiment highlights how PSO can be applied to a wide range of optimization problems, including discrete problems such as the 0/1 knapsack problem.
EXPERIMENT 5
Aim: To implement and analyze the Cuckoo Search algorithm for solving an optimization problem using
MATLAB.

Problem Statement: The optimization function used in this experiment is the Manhattan function (L1 norm), defined as:

f(x) = Σ |xᵢ|, i = 1, …, n.

The goal is to find a vector x that minimizes f(x).

Software Used: MATLAB


Theory:
The Cuckoo Search (CS) algorithm is a metaheuristic inspired by the parasitic behavior of cuckoo birds. It
uses Lévy flights for exploration and replaces worse solutions with new ones. It's efficient for global
optimization problems, even with simple objective functions.
Steps of Cuckoo Search:
1. Initialize a population of nests with random solutions.
2. Evaluate the fitness of each solution using the Manhattan function.
3. Generate new solutions using Lévy flights (sketched after this list).
4. Replace a randomly selected nest if the new solution is better.
5. Abandon a fraction of the worst nests and replace them.
6. Repeat until convergence or a maximum number of iterations is reached.
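The Lévy steps in step 3 are generated in the listing below with a Mantegna-style recipe: draw two Gaussian vectors u and v and form step = u / |v|^(1/β) with β = 1.5, which produces mostly small moves with occasional long jumps that drive global exploration. A minimal sketch mirroring the levyFlight helper in the listing below:

% Mantegna-style Levy step generation (mirrors the levyFlight helper below)
beta = 1.5;
sigma = (gamma(1+beta) * sin(pi*beta/2) / ...
    (gamma((1+beta)/2) * beta * 2^((beta-1)/2)))^(1/beta);
u = randn(1, 5) * sigma;                 % Gaussian numerator, scaled by sigma
v = randn(1, 5);                         % Gaussian denominator
step = u ./ abs(v).^(1/beta);            % heavy-tailed Levy-distributed steps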
Hyperparameters used:
• Number of Nests: 25
• Dimensionality of Problem: 10
• Maximum Iterations: 100
• Discovery Rate (pa): 0.25
• Lévy Step Size (alpha): 0.01
• Search Space Bounds 𝑥𝑖 ∈ [−10,10]
Code:

% Parameters
numNests = 25;
numDimensions = 10;
maxIterations = 100;
pa = 0.25;
alpha = 0.01;
lowerBound = -10;
upperBound = 10;

% Objective Function: Manhattan (L1 norm)


fitnessFunction = @(x) sum(abs(x));

% Initialize nests
nests = lowerBound + (upperBound - lowerBound) * rand(numNests, numDimensions);
fitness = arrayfun(@(i) fitnessFunction(nests(i, :)), 1:numNests)';
[bestFitness, bestIndex] = min(fitness);
bestNest = nests(bestIndex, :);

% History
fitnessHistory = zeros(maxIterations, 1);

for iter = 1:maxIterations


for i = 1:numNests
% Generate a new solution using Levy flight
step = alpha * levyFlight(numDimensions);
newNest = nests(i, :) + step;

% Apply bounds
newNest = max(min(newNest, upperBound), lowerBound);

% Evaluate
newFitness = fitnessFunction(newNest);

% Replace if better
if newFitness < fitness(i)
nests(i, :) = newNest;
fitness(i) = newFitness;
end
end

% Abandon a fraction of the worst nests


abandon = rand(numNests, 1) < pa;
nests(abandon, :) = lowerBound + (upperBound - lowerBound) * rand(sum(abandon), numDimensions);
fitness(abandon) = arrayfun(@(i) fitnessFunction(nests(i, :)), find(abandon))';
% Update best
[currentBestFitness, bestIndex] = min(fitness);
if currentBestFitness < bestFitness
bestFitness = currentBestFitness;
bestNest = nests(bestIndex, :);
end

fitnessHistory(iter) = bestFitness;
end

% Output
disp('Best solution found:');
disp(bestNest);
disp(['Best Manhattan Fitness: ', num2str(bestFitness)]);

% Plot convergence
figure;
plot(1:maxIterations, fitnessHistory, 'm-', 'LineWidth', 1.5);
xlabel('Iterations');
ylabel('Best Fitness');
title('Cuckoo Search - Manhattan Function Optimization');
grid on;

% --- Levy Flight Function ---


function step = levyFlight(d)
beta = 1.5;
sigma = (gamma(1+beta) * sin(pi*beta/2) / ...
(gamma((1+beta)/2) * beta * 2^((beta-1)/2)))^(1/beta);
u = randn(1, d) * sigma;
v = randn(1, d);
step = u ./ abs(v).^(1/beta);
end
Output:

Conclusion:

The implementation of the Cuckoo Search algorithm for the Manhattan function demonstrated:

• Fast convergence to the global minimum 𝑥 = 0


• Effectiveness of Lévy flight-based exploration in a simple convex space
• Simplicity and efficiency of Cuckoo Search in solving basic continuous optimization problems

This experiment validates how even simple functions like the Manhattan function can be used to observe
the core behavior of bio-inspired algorithms.
