R19 IT III YEAR 1 SEMESTER DESIGN AND ANALYSIS OF ALGORITHMS UNIT II
DESIGN AND ANALYSIS OF ALGORITHMS
UNIT II Divide and Conquer: General Method, Defective chessboard, Binary Search, finding the maximum
and minimum, Merge sort, Quick sort.
The Greedy Method: The general Method, container loading, knapsack problem, Job sequencing with
deadlines, minimum-cost spanning Trees.
DIVIDE AND CONQUER:
GENERAL METHOD:
1. Given a function to compute on n inputs, the divide-and-conquer strategy suggests splitting the inputs into k
distinct subsets, 1 < k ≤ n, yielding k subproblems.
2. These subproblems must be solved, and then a method must be found to combine the sub-solutions into a
solution of the whole.
3. If the subproblems are still relatively large, then the divide-and-conquer strategy can possibly be
reapplied. Often the subproblems resulting from a divide-and-conquer design are of the same type as the
original problem.
4. For those cases the reapplication of the divide-and-conquer principle is naturally expressed by a recursive
algorithm.
5. The control abstraction is DAndC(P), where P is the problem to be solved.
6. Small(P) is a Boolean-valued function that determines whether the input size is small enough that the answer
can be computed without splitting. If this is so, the function S is invoked. Otherwise the problem P is divided
into smaller subproblems. These subproblems P1, P2, ..., Pk are solved by recursive applications of
DAndC. Combine is a function that determines the solution to P using the solutions to the k subproblems.
Y.Jyothi , Assistant Professor, IT Department, Dhanekula Institute of Engineering & Technology 1
7. The function f(n) is the time for dividing P and combining the solutions to the subproblems.
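The DAndC control abstraction described in points 5 and 6 can be sketched in Python. Small, S, Divide and Combine are placeholders; here they are instantiated for summing a list, purely as an illustration:

```python
def d_and_c(p):
    """Generic divide-and-conquer control abstraction.

    p is the problem instance; here p is a list of numbers and the
    (illustrative) task is to sum them.
    """
    if small(p):                              # Small(P): small enough?
        return s(p)                           # S: solve without splitting
    k = len(p) // 2
    p1, p2 = p[:k], p[k:]                     # divide into subproblems
    return combine(d_and_c(p1), d_and_c(p2))  # combine the sub-solutions

def small(p):
    return len(p) <= 1

def s(p):
    return p[0] if p else 0

def combine(x, y):
    return x + y
```

Calling `d_and_c([1, 2, 3, 4, 5])` splits the list recursively down to single elements and combines the partial sums back up.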
Advantages of Divide and Conquer
1. Divide and conquer successfully solves some of the hardest classical problems, such as the Tower of Hanoi, a
mathematical puzzle. It is challenging to solve complicated problems for which you have no basic idea, but the
divide and conquer approach lessens the effort, as it divides the main problem into halves and then solves them
recursively. The resulting algorithms are often much faster than naive alternatives.
2. It efficiently uses cache memory without occupying much space, because it solves simple subproblems within
the cache memory instead of accessing the slower main memory.
3. It is more proficient than its counterpart, the brute-force technique.
4. Since these algorithms exhibit parallelism, they can be handled by systems incorporating parallel processing
with little or no modification.
Disadvantages of Divide and Conquer
1. Since most of its algorithms are designed by incorporating recursion, it necessitates careful memory
management.
2. An explicit stack may overuse the space.
3. It may even crash the system if the recursion depth exceeds the stack capacity of the machine.
DEFECTIVE CHESSBOARD
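The defective chessboard problem asks us to tile a 2^k x 2^k board that has one defective (missing) cell using L-shaped triominoes. The standard divide-and-conquer solution splits the board into four quadrants, places one triomino at the centre so that it covers one cell in each of the three quadrants that do not contain the defect, and then recurses on each quadrant. A minimal Python sketch (function and variable names are illustrative):

```python
def tile_board(n, dr, dc):
    """Tile an n x n board (n a power of 2) whose defective cell is at
    (dr, dc) with L-shaped triominoes.  board[r][c] gets the number of
    the triomino covering that cell; 0 marks the defective cell."""
    board = [[None] * n for _ in range(n)]
    board[dr][dc] = 0
    counter = [0]                          # triomino numbering

    def tile(top, left, size, dr, dc):
        if size == 1:                      # single cell: nothing to tile
            return
        counter[0] += 1
        t = counter[0]                     # this triomino's number
        half = size // 2
        quads = [(top, left), (top, left + half),
                 (top + half, left), (top + half, left + half)]
        corners = [(top + half - 1, left + half - 1),
                   (top + half - 1, left + half),
                   (top + half, left + half - 1),
                   (top + half, left + half)]
        for (qr, qc), (cr, cc) in zip(quads, corners):
            if qr <= dr < qr + half and qc <= dc < qc + half:
                tile(qr, qc, half, dr, dc)   # defect is in this quadrant
            else:
                board[cr][cc] = t            # centre triomino covers corner
                tile(qr, qc, half, cr, cc)   # recurse with pseudo-defect

    tile(0, 0, n, dr, dc)
    return board

board = tile_board(4, 1, 2)   # 4x4 board, defective cell at row 1, col 2
```

Each recursive call reduces the board size by half, so a 2^k x 2^k board is tiled with (4^k - 1)/3 triominoes.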
BINARY SEARCH
1. Searching is a process of finding a particular element among several given elements.
2. The search is successful if the required element is found.
3. Otherwise, the search is unsuccessful.
4. Binary Search is one of the fastest searching algorithms.
5. It is used for finding the location of an element in a linear array.
6. It works on the principle of divide and conquer technique.
7. Binary Search Algorithm can be applied only on Sorted arrays.
8. So, the elements must be arranged in either ascending order (if the elements are numbers) or dictionary
order (if the elements are strings).
9. To apply binary search on an unsorted array, first sort the array using some sorting technique, and then
use the binary search algorithm.
Binary Search: Search a sorted array by repeatedly dividing the search interval in half. Begin with an
interval covering the whole array. If the value of the search key is less than the item in the middle of the
interval, narrow the interval to the lower half. Otherwise, narrow it to the upper half. Repeatedly check
until the value is found or the interval is empty.
Binary Search Example-
Consider-
We are given the following sorted linear array.
Element 15 has to be searched in it using Binary Search Algorithm.
Binary Search Algorithm works in the following steps-
Step-01:
To begin with, we take beg=0 and end=6.
We compute location of the middle element as-
mid = (beg + end) / 2 = (0 + 6) / 2 = 3
Here, a[mid] = a[3] = 20 ≠ 15 and beg < end.
So, we start next iteration.
Step-02:
Since a[mid] = 20 > 15, so we take end = mid – 1 = 3 – 1 = 2 whereas beg remains unchanged.
We compute location of the middle element as-
mid = (beg + end) / 2 = (0 + 2) / 2 = 1
Here, a[mid] = a[1] = 10 ≠ 15 and beg < end.
So, we start next iteration.
Step-03:
Since a[mid] = 10 < 15, so we take beg = mid + 1 = 1 + 1 = 2 whereas end remains unchanged.
We compute location of the middle element as-
mid = (beg + end) / 2 = (2 + 2) / 2 = 2
Here, a[mid] = a[2] = 15, which matches the element being searched.
So, our search terminates in success and index 2 is returned.
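The steps above can be sketched in Python. The concrete array below is an assumed one, chosen only to be consistent with the walkthrough (a[1] = 10, a[2] = 15, a[3] = 20), since the original array figure is not shown:

```python
def binary_search(a, key):
    """Return the index of key in sorted array a, or -1 if absent."""
    beg, end = 0, len(a) - 1
    while beg <= end:                  # interval not yet empty
        mid = (beg + end) // 2         # middle of the interval
        if a[mid] == key:
            return mid                 # search succeeds
        elif a[mid] > key:
            end = mid - 1              # narrow to the lower half
        else:
            beg = mid + 1              # narrow to the upper half
    return -1                          # interval empty: search fails

a = [5, 10, 15, 20, 25, 30, 35]        # assumed sorted array
index = binary_search(a, 15)           # index is 2, as in the example
```

Tracing the call reproduces the three iterations above: mid = 3 (20 > 15), mid = 1 (10 < 15), mid = 2 (found).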
Example 2:
Let us select the 14 entries: -15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151.
Time Complexity Analysis-
Binary Search time complexity analysis is done below-
In each iteration or in each recursive call, the search gets reduced to half of the array.
So for n elements in the array, there are log2n iterations or recursive calls.
Time Complexity of Binary Search Algorithm is O(log2n).
Here, n is the number of elements in the sorted linear array.
Binary Search Algorithm Advantages-
The advantages of binary search algorithm are-
1. It eliminates half of the list from further searching by using the result of each comparison.
2. It indicates whether the element being searched is before or after the current position in the list.
3. This information is used to narrow the search.
4. For large lists of data, it works significantly better than linear search.
Binary Search Algorithm Disadvantages-
The disadvantages of binary search algorithm are-
1. It employs a recursive approach which requires more stack space.
2. Programming the binary search algorithm is error prone and difficult.
3. The interaction of binary search with the memory hierarchy, i.e. caching, is poor.
FINDING THE MAXIMUM AND MINIMUM:
The problem is to find the maximum and minimum items in a set of n elements. It can be solved by the divide-
and-conquer technique.
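A minimal Python sketch of the recursive divide-and-conquer MaxMin procedure, which splits the array in half and combines the two (max, min) pairs:

```python
def max_min(a, i, j):
    """Return (maximum, minimum) of a[i..j] by divide and conquer."""
    if i == j:                         # one element: no comparison
        return a[i], a[i]
    if j == i + 1:                     # two elements: one comparison
        return (a[i], a[j]) if a[i] > a[j] else (a[j], a[i])
    mid = (i + j) // 2                 # divide
    max1, min1 = max_min(a, i, mid)
    max2, min2 = max_min(a, mid + 1, j)
    return max(max1, max2), min(min1, min2)   # combine the two pairs

# e.g. max_min([22, 13, -5, -8, 15, 60, 17, 31, 47], 0, 8) gives (60, -8)
```

With this scheme the number of comparisons is 3n/2 - 2 when n is a power of 2, better than the 2(n - 1) comparisons of the straightforward scan.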
MERGE SORT
Merge Sort follows the rule of Divide and Conquer to sort a given set of numbers/elements recursively, hence
consuming less time.
In Merge Sort, the given unsorted array with n elements is divided into n subarrays, each
having one element, because a single element is always sorted in itself. Then, it repeatedly merges these
subarrays to produce new sorted subarrays, and in the end, one complete sorted array is produced.
How Does Merge Sort Work?
As we have already discussed, merge sort utilizes the divide-and-conquer rule to break the problem into sub-
problems, the problem in this case being sorting a given array. In merge sort, we break the given array
midway; for example, if the original array had 6 elements, then merge sort will break it down into two sub-
arrays with 3 elements each. But breaking the original array into 2 smaller subarrays does not by itself sort
the array, so we break these subarrays into even smaller subarrays, until we have multiple subarrays
with a single element in them. Now, the idea here is that an array with a single element is already sorted, so once
we break the original array into subarrays which have only a single element, we have successfully broken down
our problem into base problems.
And then we have to merge all these sorted subarrays, step by step, to form one single sorted array. Let's
consider an array with values {14, 7, 3, 12, 9, 11, 6, 12}.
Below, we have a pictorial representation of how merge sort will sort the given array.
In merge sort we follow the following steps:
1. We take a variable p and store the starting index of our array in this. And we take another variable r and
store the last index of array in it.
2. Then we find the middle of the array using the formula (p + r)/2 and mark the middle index as q, and
break the array into two subarrays, from p to q and from q + 1 to r index.
3. Then we divide these 2 subarrays again, just like we divided our main array and this continues.
4. Once we have divided the main array into subarrays with single elements, then we start merging the
subarrays.
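The steps above can be sketched in Python, using the same p, q, r index names; the merge step copies each half into a temporary list before writing the merged result back:

```python
def merge_sort(a, p, r):
    """Sort a[p..r] in place using merge sort."""
    if p < r:
        q = (p + r) // 2               # middle index
        merge_sort(a, p, q)            # sort the left half
        merge_sort(a, q + 1, r)        # sort the right half
        merge(a, p, q, r)              # merge the two sorted halves

def merge(a, p, q, r):
    """Merge the sorted runs a[p..q] and a[q+1..r]."""
    left, right = a[p:q + 1], a[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        # take from left while right is exhausted or left is <= (stable)
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            a[k] = left[i]; i += 1
        else:
            a[k] = right[j]; j += 1

data = [14, 7, 3, 12, 9, 11, 6, 12]    # the example array from above
merge_sort(data, 0, len(data) - 1)     # data becomes [3, 6, 7, 9, 11, 12, 12, 14]
```

Because the comparison uses `<=`, equal elements (the two 12s here) keep their original relative order, which is the stability property discussed below.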
COMPLEXITY ANALYSIS OF MERGE SORT
Merge Sort is quite fast, and has a time complexity of O(n*log n). It is also a stable sort, which means the
"equal" elements are ordered in the same order in the sorted list.
In this section we will understand why the running time for merge sort is O(n*log n).
As we have already learned with binary search, whenever we divide a number in half at every step it can
be represented using a logarithmic function, which is log n, and the number of steps can be represented by log n
+ 1 (at most).
Also, we perform a single step operation to find out the middle of any subarray, i.e. O(1).
And to merge the sub arrays, made by dividing the original array of n elements, a running time of O(n) will be
required.
Hence the total time for merge Sort function will become n(log n + 1), which gives us a time complexity
of O(n*log n).
Worst Case Time Complexity [ Big-O ]: O(n*log n)
Best Case Time Complexity [Big-omega]: O(n*log n)
Average Time Complexity [Big-theta]: O(n*log n)
Space Complexity: O(n)
Time complexity of Merge Sort is O(n*Log n) in all the 3 cases (worst, average and best) as merge sort
always divides the array in two halves and takes linear time to merge two halves.
It requires an additional amount of space equal to the unsorted array. Hence it is not recommended
for sorting very large arrays when memory is limited.
QUICK SORT
1. Quick Sort is a famous sorting algorithm.
2. It sorts the given data items in ascending order.
3. It uses the idea of divide and conquer approach.
4. It follows a recursive algorithm.
How Does Quick Sort Work?
· Quick Sort follows a recursive algorithm.
· It divides the given array into two sections using a partitioning element called as pivot.
The division performed is such that-
· All the elements to the left side of pivot are smaller than pivot.
· All the elements to the right side of pivot are greater than pivot.
After dividing the array into two sections, the pivot is set at its correct position.
Then, the subarrays are sorted separately by applying the quick sort algorithm recursively.
Quick Sort Example-
Consider the following array has to be sorted in ascending order using quick sort algorithm-
Step-01:
Initially-
· Left and Loc (pivot) points to the first element of the array.
· Right points to the last element of the array.
So to begin with, we set loc = 0, left = 0 and right = 5 as-
Step-02:
Since loc points at left, the algorithm starts from right and moves towards left.
As a[loc] < a[right], the algorithm moves right one position towards left as-
Now, loc = 0, left = 0 and right = 4.
Step-03:
Since loc points at left, the algorithm starts from right and moves towards left.
As a[loc] > a[right], the algorithm swaps a[loc] and a[right], and loc now points at right as-
Now, loc = 4, left = 0 and right = 4.
Step-04:
Since loc points at right, the algorithm starts from left and moves towards right.
As a[loc] > a[left], the algorithm moves left one position towards right as-
Now, loc = 4, left = 1 and right = 4.
Step-05:
Since loc points at right, the algorithm starts from left and moves towards right.
As a[loc] > a[left], the algorithm moves left one position towards right as-
Now, loc = 4, left = 2 and right = 4
Step-06:
Since loc points at right, the algorithm starts from left and moves towards right.
As a[loc] < a[left], the algorithm swaps a[loc] and a[left], and loc now points at left as-
Now, loc = 2, left = 2 and right = 4.
Step-07:
Since loc points at left, the algorithm starts from right and moves towards left.
As a[loc] < a[right], the algorithm moves right one position towards left as-
Now, loc = 2, left = 2 and right = 3.
Step-08:
Since loc points at left, the algorithm starts from right and moves towards left.
As a[loc] > a[right], the algorithm swaps a[loc] and a[right], and loc now points at right as-
Now, loc = 3, left = 2 and right = 3.
Step-09:
Since loc points at right, the algorithm starts from left and moves towards right.
As a[loc] > a[left], the algorithm moves left one position towards right as-
Now, loc = 3, left = 3 and right = 3.
Now,
· loc, left and right point at the same element.
· This indicates the termination of procedure.
· The pivot element 25 is placed in its final position.
· All elements to the right side of element 25 are greater than it.
· All elements to the left side of element 25 are smaller than it.
Now, the quick sort algorithm is applied on the left and right subarrays separately in a similar manner.
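The loc/left/right pointer scheme traced above can be sketched in Python. The concrete array below is an assumed example chosen to be consistent with the walkthrough (the pivot 25 ends up at index 3), since the original array figure is not shown:

```python
def partition(a, low, high):
    """Place the pivot a[low] at its final position using the
    loc/left/right pointer scheme traced above; return that position."""
    loc, left, right = low, low, high
    while left < right:
        if loc == left:                        # scan from the right side
            if a[loc] > a[right]:
                a[loc], a[right] = a[right], a[loc]
                loc = right                    # loc now points at right
            else:
                right -= 1                     # move right towards left
        else:                                  # loc == right: scan from left
            if a[loc] < a[left]:
                a[loc], a[left] = a[left], a[loc]
                loc = left                     # loc now points at left
            else:
                left += 1                      # move left towards right
    return loc

def quick_sort(a, low, high):
    if low < high:
        loc = partition(a, low, high)          # pivot fixed at loc
        quick_sort(a, low, loc - 1)            # sort the left section
        quick_sort(a, loc + 1, high)           # sort the right section

data = [25, 10, 30, 15, 20, 40]                # assumed example array
quick_sort(data, 0, len(data) - 1)
```

On this array the first `partition` call performs exactly the moves of Steps 01-09: right moves to 4, 25 swaps with 20, left advances twice, 25 swaps with 30, and so on, until loc = left = right and 25 sits at index 3.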
Quick Sort Analysis-
· To find the location of an element that splits the array into two parts, O(n) operations are required.
· This is because every element in the array is compared to the partitioning element.
· After the division, each section is examined separately.
· If the array is split approximately in half (which is not usually the case), then there will be log2n splits.
· Therefore, total comparisons required are f(n) = n x log2n = O(nlog2n).
Order of Quick Sort = O(nlog2n)
Worst Case-
· Quick Sort is sensitive to the order of input data.
· It gives the worst performance when elements are already in the ascending order.
· It then divides the array into sections of 1 and (n-1) elements in each call.
· Then, there are (n-1) divisions in all.
· Therefore, here total comparisons required are f(n) = n x (n-1) = O(n²).
Order of Quick Sort in worst case = O(n²)
Advantages of Quick Sort-
The advantages of quick sort algorithm are-
· Quick Sort is an in-place sort, so apart from a small recursion stack it requires no temporary memory.
· Quick Sort is typically faster than other algorithms.
(because its inner loop can be efficiently implemented on most architectures)
· Quick Sort tends to make excellent usage of the memory hierarchy like virtual memory or caches.
· Quick Sort can be easily parallelized due to its divide and conquer nature.
Disadvantages of Quick Sort-
The disadvantages of quick sort algorithm are-
· The worst case complexity of quick sort is O(n²).
· This complexity is worse than the O(nlogn) worst case complexity of algorithms like merge sort, heap sort etc.
· It is not a stable sort i.e. the order of equal elements may not be preserved.
THE GREEDY METHOD
1. The greedy method is the most straightforward design technique, and it can be applied to a wide variety of
problems. All of these problems have n inputs and require us to obtain a subset that satisfies some
constraints.
2. Any sub set that satisfies those constraints is called a feasible solution.
3. We need to find a feasible solution that either maximizes or minimizes a given objective function.
4. A feasible solution that does this is called an optimal solution.
5. The greedy method works in stages, considering one input at a time.
6. At each stage, a decision is made regarding whether a particular input is in an optimal solution.
7. This is done by considering the inputs in an order determined by some selection procedure. If the inclusion
of the next input into the partially constructed optimal solution will result in an infeasible solution, then this
input is not added to the partial solution.
8. They are easy to invent, easy to implement and most of the time quite efficient.
Many problems cannot be solved correctly by greedy approach. Greedy algorithms are used to solve
optimization problems.
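The staged selection described above can be sketched generically in Python. Select, Feasible and Union are placeholders, instantiated here for picking numbers whose total stays within a budget, purely as an illustration:

```python
def greedy(inputs, budget):
    """Generic greedy control abstraction.

    At each stage, select the 'best' remaining input and add it to the
    partial solution only if the result stays feasible.  Here 'best'
    means largest value, and feasibility means the total stays <= budget.
    """
    solution, total = [], 0
    for x in sorted(inputs, reverse=True):   # Select: best input first
        if total + x <= budget:              # Feasible: check constraint
            solution.append(x)               # Union: extend the solution
            total += x
        # infeasible inputs are simply skipped, never reconsidered
    return solution

picks = greedy([5, 8, 3, 9, 2], 15)
```

Note that the greedy choice is never undone; that is exactly why the method is fast, and also why it fails on problems where an early locally-best choice rules out the global optimum.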
There are broadly 3 ways to solve optimization problems:
1. The greedy method
2. Dynamic programming
3. Branch and bound technique
Applications of Greedy Algorithm:
. Knapsack problem
. Job sequencing with Deadlines
. Minimum cost spanning trees
. Container Loading
KNAPSACK PROBLEM:
1. We are given a knapsack (a kind of shoulder bag) with limited weight capacity.
2. We are also given a few items, each having some weight and profit.
3. The problem states:
Which items should be placed into the knapsack such that-
The value or profit obtained by putting the items into the knapsack is maximum,
And the weight limit of the knapsack is not exceeded.
Knapsack Problem Variants
Knapsack problem has the following two variants
1. Fractional Knapsack Problem
2. 0/1 Knapsack Problem
1. Fractional Knapsack Problem
1. In Fractional Knapsack Problem, as the name suggests, items are divisible here.
2. We can even put the fraction of any item into the knapsack if taking the complete item is not possible.
Fractional knapsack problem is solved using greedy method in the following steps
Step-01:
For each item, compute its profit / weight ratio.
Step-02:
Arrange all the items in decreasing order of their profit / weight ratio.
Step-03:
Start putting the items into the knapsack beginning from the item with the highest ratio.
Put as many items as you can into the knapsack.
Let’s implement the algorithm with the following example
m=15 , n=7
Objects : 1 2 3 4 5 6 7
Profits : 10 5 15 7 6 18 3
Weights : 2 3 5 7 1 4 1
Here object 5 has the maximum profit/weight ratio (p/w), which is 6.
Step 1: Add object 5 to the knapsack and remove its weight from the total weight of the knapsack, m = 15 - 1 = 14,
and increase the profit, so P = 0 + 6 = 6.
Step 2: Add object 1, whose p/w = 5; then m = 14 - 2 = 12. Now the total profit will be P = 6 + 10 = 16.
Step 3: Similarly, add the remaining objects in decreasing order of p/w till the knapsack size becomes zero, i.e. m = 0.
Step 4: When m = 0 our profit will be P = 6 + 10 + 18 + 15 + 3 + 5*(2/3) = 55.3 (total profit).
Note: The greedy technique is feasible only for the fractional knapsack, where we can divide an item into
fractions. For the 0/1 knapsack we have to use dynamic programming, since in the 0/1 knapsack we either
take the whole item or do not take it at all.
Time complexity:
The main time-consuming step is the sorting of all items in decreasing order of their value / weight ratio.
If the items are already arranged in the required order, then the selection loop takes O(n) time.
The average time complexity of Quick Sort is O(nlogn).
Therefore, the total time taken including the sort is O(nlogn).
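The worked example can be reproduced with a short Python sketch of the greedy fractional-knapsack procedure:

```python
def fractional_knapsack(profits, weights, capacity):
    """Greedy fractional knapsack: take items in decreasing p/w order,
    splitting the last item if it does not fit whole."""
    items = sorted(zip(profits, weights),
                   key=lambda pw: pw[0] / pw[1], reverse=True)
    total = 0.0
    for p, w in items:
        if capacity >= w:                  # take the whole item
            total += p
            capacity -= w
        else:                              # take a fraction and stop
            total += p * (capacity / w)
            break
    return total

# Example from above: m = 15, profits and weights of the 7 objects
profit = fractional_knapsack([10, 5, 15, 7, 6, 18, 3],
                             [2, 3, 5, 7, 1, 4, 1], 15)
# profit is 55.33..., i.e. P = 6 + 10 + 18 + 15 + 3 + 5*(2/3)
```

The sort dominates the running time, matching the O(nlogn) bound above.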
JOB SEQUENCING WITH DEADLINES
1. The sequencing of jobs on a single processor with deadline constraints is called Job Sequencing with
Deadlines.
Here-
.You are given a set of jobs.
.Each job has a defined deadline and some profit associated with it.
.The profit of a job is given only when that job is completed within its deadline.
.Only one processor is available for processing all the jobs.
.Processor takes one unit of time to complete a job.
The problem states-
“How can the total profit be maximized if only one job can be completed at a time?”
Approach to solution:
A feasible solution would be a subset of jobs where each job of the subset completes within its deadline.
Value of the feasible solution would be the sum of profit of all the jobs contained in the subset.
An optimal solution of the problem would be a feasible solution which gives the maximum profit.
Step-01:
.Sort all the given jobs in decreasing order of their profit.
Step-02:
.Check the value of maximum deadline.
.Draw a Gantt chart where maximum time on Gantt chart is the value of maximum deadline.
Step-03:
.Pick up the jobs one by one.
.Put the job on Gantt chart as far as possible from 0 ensuring that the job gets completed before its deadline.
Solution:-
Step-1:-
Sort all the given jobs in the decreasing order of their profits
Step-2:-
Value of maximum deadline = 3
So, draw a Gantt chart with maximum time on Gantt chart = 3 units as shown
Step-3:-
Now, we will take each job one by one in the order they appear in step-1 and place them on Gantt chart as far
as possible from 0 (starting point)
First, we take job J1.
Since, its deadline is 2, so we place it in the first empty cell before deadline 2 as
Now, we take job J2.
Since, its deadline is 2, so we place it in the first empty cell before deadline 2.
Since, the second cell is already filled, so we place job J2 in the first cell as
Now, we take job J5.
Since, its deadline is 3, so we place it in the first empty cell before deadline 3 as
.The jobs left are jobs J4 and J3, whose deadlines are 3 and 1 respectively.
.All the slots before deadlines 3 and 1 are already occupied.
.Thus, jobs J4 and J3 cannot be completed.
.The optimal schedule is- J2 , J1 , J5
.This is the required order in which the jobs must be completed in order to obtain the maximum profit.
Maximum earned profit
= Sum of profits of all jobs in optimal schedule
= Profit of job J2 + Profit of job J1 + Profit of job J5
= 15 + 20 + 10
= 45 units
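The schedule above can be computed with a short Python sketch. The text gives the profits of J1 (20), J2 (15) and J5 (10); the profits chosen below for J3 and J4 are assumed placeholders (any profit below 10 yields the same schedule):

```python
def job_sequencing(jobs):
    """jobs: list of (name, deadline, profit).

    Greedy rule: consider jobs in decreasing profit order and place each
    one in the latest free slot on the Gantt chart before its deadline.
    Returns (schedule, total_profit)."""
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * (max_deadline + 1)        # slots[1..max_deadline]
    total = 0
    for name, deadline, profit in sorted(jobs, key=lambda j: -j[2]):
        for t in range(deadline, 0, -1):       # latest free slot first
            if slots[t] is None:
                slots[t] = name
                total += profit
                break                          # job placed; next job
    return slots[1:], total

# Deadlines from the example; J3 and J4 profits are assumed values
jobs = [("J1", 2, 20), ("J2", 2, 15), ("J3", 1, 5),
        ("J4", 3, 8), ("J5", 3, 10)]
schedule, profit = job_sequencing(jobs)
# schedule is ['J2', 'J1', 'J5'] and profit is 45, as computed above
```

Placing each job as late as possible keeps the earlier slots free for jobs with tighter deadlines, which is exactly the rule used on the Gantt chart above.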
MINIMUM COST SPANNING TREES (MST):
A spanning tree is a subgraph of an undirected graph that has all the vertices (n) connected by the minimum
number of edges (n-1).
If all the vertices of a graph are connected, then there exists at least one spanning tree. In a graph, there may
exist more than one spanning tree.
Properties:
A spanning tree does not have any cycle.
Any vertex can be reached from any other vertex.
Example
In this context, if each edge of the graph is associated with a weight/cost and there exists more than one
spanning tree, different spanning trees will in general have different total costs.
A Minimum cost Spanning Tree is a spanning tree which has the minimum total cost.
The cost of a spanning tree is the sum of the costs of its edges.
To derive an MST, Prim’s algorithm or Kruskal’s algorithm can be used.
PRIM’S ALGORITHM
The implementation of Prim’s Algorithm
Step-01:
Randomly choose any vertex.
The vertex connected to the edge having the least weight is usually selected.
Step-02:
Find all the edges that connect the tree to new vertices.
Find the least weight edge among those edges and include it in the existing tree.
If including that edge creates a cycle, then reject that edge and look for the next least weight edge.
Step-03:
Keep repeating step-02 until all the vertices are included and Minimum Spanning Tree (MST) is obtained.
Example:
Since all the vertices have been included in the MST, we stop.
Now, Cost of Minimum Spanning Tree = Sum of all edge weights
= 10 + 25 + 22 + 12 + 16 + 14
= 99 units
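A minimal Python sketch of Prim's algorithm, using a priority queue of candidate edges; the small graph below is an assumed example, not the graph from the figure:

```python
import heapq

def prims_mst(graph, start):
    """graph: {vertex: [(weight, neighbour), ...]} for an undirected,
    connected, weighted graph.  Returns (mst_edges, total_cost)."""
    visited = {start}
    heap = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(heap)                # candidate edges out of the tree
    mst, total = [], 0
    while heap:
        w, u, v = heapq.heappop(heap)  # least-weight candidate edge
        if v in visited:               # would create a cycle: reject it
            continue
        visited.add(v)                 # include the edge in the tree
        mst.append((u, v, w))
        total += w
        for w2, v2 in graph[v]:        # new candidate edges from v
            heapq.heappush(heap, (w2, v, v2))
    return mst, total

# Assumed example graph
graph = {
    "A": [(10, "B"), (6, "C")],
    "B": [(10, "A"), (4, "C"), (15, "D")],
    "C": [(6, "A"), (4, "B"), (8, "D")],
    "D": [(15, "B"), (8, "C")],
}
edges, cost = prims_mst(graph, "A")    # cost is 6 + 4 + 8 = 18
```

The heap plays the role of "find the least weight edge among those edges" in Step-02, and the `v in visited` check implements the cycle rejection.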
KRUSKAL’S ALGORITHM
The implementation of Kruskal’s Algorithm is explained in the following steps-
Step-01:
Sort all the edges from low weight to high weight.
Step-02:
Take the edge with the lowest weight and use it to connect the vertices of graph.
If adding an edge creates a cycle, then reject that edge and go for the next least weight edge.
Step-03:
Keep adding edges until all the vertices are connected and a Minimum Spanning Tree (MST) is obtained.
Example:
Solution
To construct MST using Kruskal’s Algorithm,
Simply draw all the vertices on the paper.
Connect these vertices using edges with minimum weights such that no cycle gets formed.
Since all the vertices have been connected / included in the MST, we stop.
Weight of the MST = Sum of all edge weights
= 10 + 25 + 22 + 12 + 16 + 14
= 99 units
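The steps above can be sketched in Python with a minimal union-find structure to detect cycles; the edge list below is an assumed example graph, not the one from the figure:

```python
def kruskal_mst(vertices, edges):
    """vertices: iterable of vertex names; edges: [(weight, u, v), ...].
    Returns (mst_edges, total_cost)."""
    parent = {v: v for v in vertices}  # union-find forest

    def find(x):                       # root of x, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # Step-01: low weight to high
        ru, rv = find(u), find(v)
        if ru == rv:                   # same tree: edge makes a cycle
            continue                   # Step-02: reject it
        parent[ru] = rv                # union the two trees
        mst.append((u, v, w))
        total += w
    return mst, total

# Assumed example graph
edge_list = [(10, "A", "B"), (6, "A", "C"), (4, "B", "C"),
             (15, "B", "D"), (8, "C", "D")]
mst, cost = kruskal_mst("ABCD", edge_list)   # cost is 4 + 6 + 8 = 18
```

Unlike Prim's algorithm, Kruskal's grows a forest of partial trees that the union-find structure gradually merges, so no starting vertex is needed.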