Greedy Method and Algorithms Guide

UNIT-IV

GREEDY METHOD

The general method of Greedy


Greedy Method:
Following are a few points about the greedy method.
 The first point is that we have to find the best option out of the many available at each step.
 In this method, we decide the output by focusing only on the current stage; we do not
consider future consequences.
 The greedy method may or may not give the globally best output.
A greedy algorithm solves problems by making the choice that seems to be the best
at that particular moment. Many optimization problems can be solved using a greedy
algorithm. For some problems that have no efficient exact solution, a greedy
algorithm may still provide a solution that is close to optimal. A
greedy algorithm works if a problem has the following two properties:
1. Greedy Choice Property: By making locally optimal choices we can reach a globally
optimal solution. In other words, by making "greedy" choices at each step we can obtain an
optimal solution.
2. Optimal substructure: Optimal solutions will always contain optimal subsolutions. In
other words, we say that the answers to subproblems of an optimal solution are optimal.

Examples:
Following are a few examples of Greedy algorithms
 Machine scheduling
 Fractional Knapsack Problem
 Minimum Spanning Tree
 Huffman Code
 Job Sequencing
 Activity Selection Problem

Components of Greedy Algorithm


Greedy algorithms consist of the following five components −
 A candidate set − the set of items from which a solution is created.
 A selection function − used to choose the best candidate to be added to the
solution.
 A feasibility function − used to determine whether a candidate can contribute to the
solution or not.
 An objective function − used to assign a value to a solution or a partial solution.
 A solution function − used to indicate whether a complete solution has been
reached or not.
Areas of Application
The greedy approach is used to solve many problems. A few of them are as
follows:
 One of the applications could be finding the shortest path between two vertices
using Dijkstra’s algorithm.
 Another is finding the minimal spanning tree in a graph using Prim’s /Kruskal’s
algorithm

Greedy Algorithm:
getOptimal(Item, array[], int num)
    Initialize empty result as, result = {}
    While (all items are not considered)
        // In order to select an item, we make a greedy choice here
        j = SelectAnItem()
        // If j is feasible, add j to the result
        if (feasible(j))
            result = result U {j}
    return result
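The template above can be sketched in Python. The selection priority and the feasibility check are placeholders that each concrete problem must supply; as an assumed illustration, we greedily pick coin values without exceeding a budget.

```python
def get_optimal(candidates, feasible, priority):
    """Generic greedy template: consider candidates in greedy order and
    keep each one only if it is feasible with the result so far."""
    result = []
    # Consider candidates in order of the greedy priority (best first).
    for item in sorted(candidates, key=priority):
        if feasible(result, item):
            result.append(item)
    return result

# Illustrative use (an assumption, not from the notes): pick coin
# values greedily, largest first, without exceeding a budget of 13.
picked = get_optimal(
    [1, 2, 5, 10],
    feasible=lambda chosen, c: sum(chosen) + c <= 13,
    priority=lambda c: -c,   # largest coin first
)
print(picked)  # [10, 2, 1]
```

Note that the greedy choice here is encoded entirely by `priority`; swapping it changes which solution the template finds.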

Why should we choose a greedy algorithm?

The greedy approach has a few characteristics that make it suitable for
optimization. One reason to choose a greedy algorithm is that it produces a
feasible solution immediately.
Take the activity selection problem as a reference: we can observe that if more
activities can be completed before the current activity finishes, then those activities
can be performed within the same time.
Another reason to choose this algorithm is that it divides a problem based on
a condition, so there is no need to combine all the subsolutions.
In the activity selection problem, the "recursive division" step is achieved by
scanning the list of items only once and selecting certain activities.

Advantages

 It is easy to implement.
 It usually has a lower time complexity than exact methods.
 It can be used for optimization, or for finding near-optimal solutions in the case
of NP-Hard problems.
Disadvantages

 One disadvantage of this approach is that the locally optimal solution may not always be
globally optimal.
Job Sequencing with Deadlines
The Job Sequencing with Deadlines problem uses the greedy approach: out of the many
available options, we choose the best one at each stage. In this approach, we focus on
the current stage, decide the output, and do not think about the future.

Many optimization problems can be solved using the greedy algorithm. Some problems
have no efficient exact solution, but the greedy algorithm may provide a solution close
to optimal. Here, we will briefly see the definition of job sequencing with deadlines,
its algorithm, and an example.

What is Job Sequencing with Deadlines?


In the job sequencing with deadlines problem, the objective is to find a sequence of jobs
that can be completed within their deadlines and gives maximum profit. Consider a set of n
given jobs, each associated with a deadline; profit is earned only if a job is completed by its
deadline. These jobs need to be ordered so that the maximum profit is earned.

It may happen that not all of the given jobs can be completed within their deadlines.
Assume that the deadline of the ith job Ji is di and the profit received from job Ji is pi. Hence,
the optimal solution of the job sequencing with deadlines algorithm is a feasible subset of
jobs with maximum total profit.

Points to Remember for Job Sequencing with Deadlines

 Each job Ji has a deadline di, and the job must be processed within its deadline; only one job
can be processed at a time.
 Only one CPU is available for processing all jobs.
 The CPU takes one unit of time to process any job.
 All jobs arrive at the same time.

Job Sequencing with Deadlines Algorithm


Suppose there are n jobs J(i) with deadlines D(i) and profits P(i), where 1 ≤ i ≤ n. These
jobs are ordered according to profit, p1 ⩾ p2 ⩾ p3 ⩾ ... ⩾ pn.

Job-Sequencing-With-Deadline (D, J, n, k)

D(0) := J(0) := 0        // sentinel entries
k := 1
J(1) := 1                // the first (most profitable) job is selected
for i = 2 … n do
    r := k
    while D(J(r)) > D(i) and D(J(r)) ≠ r do
        r := r - 1
    if D(J(r)) ≤ D(i) and D(i) > r then
        // shift jobs to make room, then insert job i at position r + 1
        for l = k … r + 1 by -1 do
            J(l + 1) := J(l)
        J(r + 1) := i
        k := k + 1

Job Sequencing with Deadlines Example


Let us consider a given job sequencing problem, as shown in the table below. We have
to find the sequence of jobs completed within their deadlines and give the maximum
profit. Each job is associated with the deadline and profit as given below:

Job J1 J2 J3 J4 J5
Deadline 2 1 1 2 3
Profit 40 100 20 60 20

The given jobs are sorted as per their profit in descending order to solve this problem.
Hence, the jobs are ordered after sorting, as shown in the following table.

Job J2 J4 J1 J5 J3
Deadline 1 2 2 3 1
Profit 100 60 40 20 20

From the given set of jobs, we first select J2, as it can be completed within its
deadline and contributes the maximum profit.

 Next, J4 is selected as it gives more profit than J1.
 J1 cannot be selected in the next clock as its deadline is over. Hence J5 is
selected, as it executes within its deadline.
 Job J3 is discarded as it cannot be executed within its deadline.

Therefore, the sequence of jobs (J2, J4, J5) is executed within their deadline and gives
the maximum profit.

The total profit of the sequence is 100 + 60 + 20 = 180.
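The walkthrough above can be reproduced with a short Python sketch of the greedy slot-filling idea: consider jobs in decreasing order of profit and place each one in the latest free time slot at or before its deadline. (This is a compact equivalent of the pseudocode given earlier, not a line-by-line translation.)

```python
def job_sequencing(jobs):
    """jobs: list of (name, deadline, profit).
    Returns (scheduled job names in slot order, total profit)."""
    # Greedy choice: consider the most profitable jobs first.
    jobs = sorted(jobs, key=lambda j: -j[2])
    max_deadline = max(d for _, d, _ in jobs)
    slot = [None] * (max_deadline + 1)  # slot[t] = job run in time unit t
    profit = 0
    for name, deadline, p in jobs:
        # Place the job in the latest free slot not after its deadline.
        for t in range(deadline, 0, -1):
            if slot[t] is None:
                slot[t] = name
                profit += p
                break
    scheduled = [s for s in slot[1:] if s is not None]
    return scheduled, profit

jobs = [("J1", 2, 40), ("J2", 1, 100), ("J3", 1, 20),
        ("J4", 2, 60), ("J5", 3, 20)]
print(job_sequencing(jobs))  # (['J2', 'J4', 'J5'], 180)
```

Running this on the example table selects J2, J4, and J5 for a total profit of 180, matching the hand-worked result.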


Minimum Spanning Tree
Before knowing about the minimum spanning tree, we should know about the spanning tree.

To understand the concept of spanning tree, consider the below graph:

The above graph can be represented as G(V, E), where 'V' is the set of vertices
and 'E' is the set of edges. The spanning tree of the above graph would be
represented as G`(V`, E`). In this case, V` = V means that the set of vertices in the
spanning tree is the same as the set of vertices in the graph, but the
set of edges would be different. The edge set of the spanning tree is a
subset of the edge set of the original graph. Therefore, it can be written as:

E` ⊆ E

The number of edges in the spanning tree is:

|E`| = |V| - 1

A spanning tree satisfies the following conditions:

o The number of vertices in the spanning tree would be the same as the number of
vertices in the original graph.
V` = V
o The number of edges in the spanning tree is equal to the number of
vertices minus 1.
|E`| = |V| - 1
o The spanning tree should not contain any cycle.
o The spanning tree should not be disconnected.
Consider the below graph:

The above graph contains 5 vertices. As we know, the vertices in the spanning tree
are the same as in the graph; therefore, |V`| equals 5. The number of edges in the
spanning tree would be equal to (5 - 1), i.e., 4. The following are the possible spanning
trees:
What is a minimum spanning tree?

The minimum spanning tree is a spanning tree whose sum of the edges is minimum.
Consider the below graph that contains the edge weight:
The following are the spanning trees that we can make from the above graph.

o The first spanning tree is a tree in which we have removed the edge between
vertices 1 and 5, shown below:
The sum of the edges of the above tree is (1 + 4 + 5 + 2) = 12.
o The second spanning tree is a tree in which we have removed the edge between
vertices 1 and 2, shown below:
The sum of the edges of the above tree is (3 + 2 + 5 + 4) = 14.
o The third spanning tree is a tree in which we have removed the edge between
vertices 2 and 3, shown below:
The sum of the edges of the above tree is (1 + 3 + 2 + 5) = 11.
o The fourth spanning tree is a tree in which we have removed the edge between
vertices 3 and 4, shown below:
The sum of the edges of the above tree is (1 + 3 + 2 + 4) = 10. The total edge cost 10 is
the minimum, so this is the minimum spanning tree.

General properties of minimum spanning tree :

o If we remove any edge from the spanning tree, then it becomes disconnected.
Therefore, we cannot remove any edge from the spanning tree.
o If we add an edge to the spanning tree then it creates a loop. Therefore, we
cannot add any edge to the spanning tree.
o If each edge of a graph has a distinct weight, then there exists only a single,
unique minimum spanning tree. If the edge weights are not distinct, then there can
be more than one minimum spanning tree.
o A complete undirected graph on n vertices has n^(n-2) spanning trees (Cayley's formula).
o Every connected and undirected graph contains at least one spanning tree.
o A disconnected graph does not have any spanning tree.
o In a connected graph with e edges and n vertices, we can remove at most (e - n + 1)
edges to construct a spanning tree.

Let's understand the last property through an example.

Consider the complete graph which is given below:


The number of spanning trees that can be made from the above complete graph equals n^(n-2) =
4^(4-2) = 16.

Therefore, 16 spanning trees can be created from the above graph.

The maximum number of edges that can be removed to construct a spanning tree equals e -
n + 1 = 6 - 4 + 1 = 3.
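The two calculations above can be checked directly:

```python
# Complete graph K4: n vertices, e edges.
n, e = 4, 6
trees = n ** (n - 2)       # Cayley's formula: number of spanning trees
removable = e - n + 1      # edges removable while a spanning tree remains
print(trees, removable)  # 16 3
```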

MST using Kruskal’s algorithm

Below are the steps for finding MST using Kruskal’s algorithm:
1. Sort all the edges in non-decreasing order of their weight.
2. Pick the smallest edge. Check if it forms a cycle with the spanning tree formed so far. If the
cycle is not formed, include this edge. Else, discard it.
3. Repeat step#2 until there are (V-1) edges in the spanning tree.
Step #2 uses the Union-Find (disjoint-set) data structure to detect cycles.
Kruskal’s algorithm to find the minimum cost spanning tree uses the greedy approach. The
Greedy Choice is to pick the smallest weight edge that does not cause a cycle in the MST
constructed so far. Let us understand it with an example:
Below is the illustration of the above approach:

Input Graph:
The graph contains 9 vertices and 14 edges. So, the minimum spanning tree formed will have (9 –
1) = 8 edges.
After sorting:
Weight Src Dest
1 7 6
2 8 2
2 6 5
4 0 1
4 2 5
6 8 6
7 2 3
7 7 8
8 0 7
8 1 2
9 3 4
10 5 4
11 1 7
14 3 5
Now pick all edges one by one from the sorted list of edges
Step 1: Pick edge 7-6: No cycle is formed, include it.

Step 2: Pick edge 8-2: No cycle is formed, include it.


Step 3: Pick edge 6-5: No cycle is formed, include it.

Step 4: Pick edge 0-1: No cycle is formed, include it.

Step 5: Pick edge 2-5: No cycle is formed, include it.


Step 6: Pick edge 8-6: Since including this edge results in a cycle, discard it.
Step 7: Pick edge 2-3: No cycle is formed, include it.

Step 8: Pick edge 7-8: Since including this edge results in a cycle, discard it.
Step 9: Pick edge 0-7: No cycle is formed, include it.

Step 10: Pick edge 1-2: Since including this edge results in a cycle, discard it.
Step 11: Pick edge 3-4: No cycle is formed, include it.

Note: Since the number of edges included in the MST equals (V – 1), the
algorithm stops here.
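The steps above can be reproduced with a compact Kruskal's implementation that uses Union-Find for cycle detection. The edge list is taken from the sorted table of this example.

```python
def kruskal(num_vertices, edges):
    """edges: list of (weight, u, v). Returns (MST edges, total weight)."""
    parent = list(range(num_vertices))

    def find(x):
        # Find the set representative, with path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):       # non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                    # no cycle: include this edge
            parent[ru] = rv             # union the two components
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, 7, 6), (2, 8, 2), (2, 6, 5), (4, 0, 1), (4, 2, 5),
         (6, 8, 6), (7, 2, 3), (7, 7, 8), (8, 0, 7), (8, 1, 2),
         (9, 3, 4), (10, 5, 4), (11, 1, 7), (14, 3, 5)]
mst, total = kruskal(9, edges)
print(len(mst), total)  # 8 37
```

As the walkthrough predicts, exactly 8 of the 14 edges are included; their total weight is 1 + 2 + 2 + 4 + 4 + 7 + 8 + 9 = 37.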

PRIM’S ALGORITHM
Prim's algorithm to find minimum cost spanning tree (as Kruskal's algorithm) uses the
greedy approach. Prim's algorithm shares a similarity with the shortest path
first algorithms.
Prim's algorithm, in contrast with Kruskal's algorithm, grows a single tree
and keeps adding new vertices to the spanning tree from the given graph.
To contrast with Kruskal's algorithm and to understand Prim's algorithm better, we shall
use the same example −

Step 1 - Remove all loops and parallel edges

Remove all loops and parallel edges from the given graph. In case of parallel edges,
keep the one which has the least cost associated and remove all others.
Step 2 - Choose any arbitrary node as root node

In this case, we choose node S as the root node of Prim's spanning tree. This node is
arbitrarily chosen, so any node can be the root node. One may wonder why any node
can be a root node. The answer is that a spanning tree includes all the nodes of the
graph, and because the graph is connected, there must be at least one edge that joins
the chosen node to the rest of the tree.

Step 3 - Check outgoing edges and select the one with the least cost

After choosing the root node S, we see that S,A and S,C are two edges with weights 7
and 8, respectively. We choose the edge S,A as its weight is less than the other.

Now, the tree S-7-A is treated as one node and we check for all edges going out from it.
We select the one which has the lowest cost and include it in the tree.

After this step, S-7-A-3-C tree is formed. Now we'll again treat it as a node and will
check all the edges again. However, we will choose only the least cost edge. In this
case, C-3-D is the new edge, which is less than other edges' cost 8, 6, 4, etc.

After adding node D to the spanning tree, we now have two edges going out of it having
the same cost, i.e. D-2-T and D-2-B. Thus, we can add either one. But the next step will
again yield edge 2 as the least cost. Hence, we are showing a spanning tree with both
edges included.
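Since the figures for this example are not included, here is a heap-based Prim's sketch on a hypothetical graph consistent with the walkthrough (S-A:7, S-C:8, A-C:3, C-D:3, D-T:2, D-B:2; the extra edges A-B:6 and B-T:5 are pure assumptions added to make the example non-trivial).

```python
import heapq

def prim(graph, root):
    """graph: {node: [(weight, neighbour), ...]}. Returns total MST weight."""
    visited = {root}
    heap = list(graph[root])            # candidate edges leaving the tree
    heapq.heapify(heap)
    total = 0
    while heap:
        w, u = heapq.heappop(heap)      # cheapest edge out of the tree
        if u in visited:
            continue                    # would form a cycle, discard
        visited.add(u)
        total += w
        for edge in graph[u]:
            heapq.heappush(heap, edge)
    return total

# Hypothetical weights reconstructed from the walkthrough (an assumption).
graph = {
    "S": [(7, "A"), (8, "C")],
    "A": [(7, "S"), (3, "C"), (6, "B")],
    "C": [(8, "S"), (3, "A"), (3, "D")],
    "D": [(3, "C"), (2, "T"), (2, "B")],
    "B": [(6, "A"), (2, "D"), (5, "T")],
    "T": [(2, "D"), (5, "B")],
}
print(prim(graph, "S"))  # 17
```

The tree grows exactly as described: S-7-A, then A-3-C, then C-3-D, and finally both cost-2 edges D-B and D-T.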
Knapsack Problem:

We are given a knapsack of maximum capacity m kg and n items with their
weights and profits. Fill the knapsack with a subset of items such that the selected
total weight is less than or equal to the capacity of the knapsack and the total profit
of the selected items is maximum.

Algorithm for solving the Knapsack Problem using the Greedy Method:
 Assume a knapsack of capacity M and n items having profit pi and weight wi
 Sort items by profit/weight ratio: pi/wi
 Consider items in order of decreasing ratio
 Take as much of each item as possible.
 Traverse this sorted list as:
if (wi ≤ M)
{
    Xi = 1;
    M = M - wi;
}
else if (wi > M && M > 0)
{
    Xi = M / wi;
    M = 0;
}
else
{
    Xi = 0;
}

Example:-
Let us consider that the capacity of the knapsack is M = 60 and the list of provided
items is shown in the following table:

Item A B C
Weight 40 10 20
Profit 280 100 120

However, the provided items are not sorted based on (pi/wi). After sorting by this ratio,
the order of the items is B (100/10 = 10), A (280/40 = 7), C (120/20 = 6).

Solution:-
After sorting all the items according to (pi/wi), item B is chosen first, as the weight
of B is less than the capacity of the knapsack. Next, item A is chosen, as the available
capacity of the knapsack is greater than the weight of A. Then, C is chosen as the next
item.
However, the whole of item C cannot be chosen, as the remaining capacity of the knapsack
is less than the weight of C.
Hence, a fraction of C (i.e., (60 - 50)/20 = 1/2) is chosen.
Now the total weight of the selected items equals the capacity of the knapsack, so no more
items can be selected.
So, the total weight of the selected items is 10+40+20*(10/20)=60
Also, the total profit is 100+280+120*(10/20)=380+60=440
Hence, this is the optimal solution. Moreover, we cannot gain more profit by selecting
any different combination of items.
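The calculation above can be reproduced with a short fractional-knapsack sketch, using the item data from the worked example (weights 40, 10, 20 and profits 280, 100, 120 for A, B, C):

```python
def fractional_knapsack(capacity, items):
    """items: list of (name, weight, profit).
    Returns (total weight taken, total profit)."""
    # Greedy choice: highest profit/weight ratio first.
    items = sorted(items, key=lambda it: it[2] / it[1], reverse=True)
    total_w = total_p = 0.0
    for name, w, p in items:
        if capacity <= 0:
            break
        take = min(w, capacity)        # whole item, or the fitting fraction
        total_w += take
        total_p += p * (take / w)      # profit scales with the fraction taken
        capacity -= take
    return total_w, total_p

items = [("A", 40, 280), ("B", 10, 100), ("C", 20, 120)]
print(fractional_knapsack(60, items))  # (60.0, 440.0)
```

The greedy order B, A, then half of C fills the knapsack exactly and yields the profit of 440 computed above.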

For the example graph considered earlier, we may find that the output spanning tree is
the same whether Kruskal's or Prim's algorithm is used.

Dijkstra’s Shortest Path Algorithm


Given a graph and a source vertex in the graph, find the shortest paths from the
source to all vertices in the given graph.
Examples:
Input: src = 0, the graph is shown below.

Output: 0 4 12 19 21 11 9 8 14
Explanation: The distance from 0 to 1 = 4.
The minimum distance from 0 to 2 = 12. 0->1->2
The minimum distance from 0 to 3 = 19. 0->1->2->3
The minimum distance from 0 to 4 = 21. 0->7->6->5->4
The minimum distance from 0 to 5 = 11. 0->7->6->5
The minimum distance from 0 to 6 = 9. 0->7->6
The minimum distance from 0 to 7 = 8. 0->7
The minimum distance from 0 to 8 = 14. 0->1->2->8
Implementing Dijkstra Algorithm

Dijkstra's shortest path algorithm, structured like Prim's algorithm, in O(V^2):

Dijkstra's algorithm is very similar to Prim's algorithm for minimum spanning trees.
Like Prim's MST, it generates an SPT (shortest path tree) with a given source as the root.
It maintains two sets: one contains vertices already included in the shortest-path tree,
and the other contains vertices not yet included. At every step of the
algorithm, it finds a vertex that is in the not-yet-included set and has a
minimum distance from the source.
Follow the steps below to solve the problem:
 Create a set sptSet (shortest path tree set) that keeps track of vertices included in
the shortest-path tree, i.e., whose minimum distance from the source is calculated
and finalized. Initially, this set is empty.
 Assign a distance value to all vertices in the input graph. Initialize all distance
values as INFINITE. Assign the distance value as 0 for the source vertex so that it
is picked first.
 While sptSet doesn’t include all vertices
 Pick a vertex u which is not there in sptSet and has a minimum distance
value.
 Include u to sptSet.
 Then update distance value of all adjacent vertices of u.
 To update the distance values, iterate through all adjacent
vertices.
 For every adjacent vertex v, if the sum of the distance value of u
(from source) and weight of edge u-v, is less than the distance
value of v, then update the distance value of v.
Note: We use a boolean array sptSet[] to represent the set of vertices included in
SPT. If a value sptSet[v] is true, then vertex v is included in SPT, otherwise not. Array
dist[] is used to store the shortest distance values of all vertices.
Below is the illustration of the above approach:

Illustration:

To understand Dijkstra's algorithm, let's take a graph and find the shortest path
from the source to all nodes.
Consider the below graph and src = 0
Step 1:
 The set sptSet is initially empty and the distances assigned to the vertices are {0, INF,
INF, INF, INF, INF, INF, INF, INF}, where INF indicates infinity.
 Now pick the vertex with a minimum distance value. The vertex 0 is picked, include
it in sptSet. So sptSet becomes {0}. After including 0 to sptSet, update distance
values of its adjacent vertices.
 Adjacent vertices of 0 are 1 and 7. The distance values of 1 and 7 are updated as 4
and 8.
The following subgraph shows vertices and their distance values, only the vertices
with finite distance values are shown. The vertices included in SPT are shown
in green colour.

Step 2:
 Pick the vertex with minimum distance value and not already included in SPT (not
in sptSET). The vertex 1 is picked and added to sptSet.
 So sptSet now becomes {0, 1}. Update the distance values of adjacent vertices of
1.
 The distance value of vertex 2 becomes 12.

Step 3:
 Pick the vertex with minimum distance value and not already included in SPT (not
in sptSET). Vertex 7 is picked. So sptSet now becomes {0, 1, 7}.
 Update the distance values of the adjacent vertices of 7. The distance values of vertices 6
and 8 become finite (9 and 15, respectively).

Step 4:
 Pick the vertex with minimum distance value and not already included in SPT (not
in sptSET). Vertex 6 is picked. So sptSet now becomes {0, 1, 7, 6}.

 Update the distance values of adjacent vertices of 6. The distance value of vertex 5
and 8 are updated.

We repeat the above steps until sptSet includes all vertices of the given graph.
Finally, we get the following Shortest Path Tree (SPT).
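The full run can be reproduced with a heap-based Dijkstra's implementation. The edge weights below are reconstructed from the shortest paths listed in the example output (e.g. 0->1->2 = 12 implies weight 8 on edge 1-2), so treat the exact edge list as an assumption consistent with that output.

```python
import heapq

def dijkstra(n, edges, src):
    """edges: list of (u, v, w) for an undirected graph.
    Returns shortest distances from src to every vertex."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                 # stale entry: u was already finalized
        for v, w in adj[u]:
            if d + w < dist[v]:      # relax edge u-v
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

edges = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (2, 3, 7),
         (2, 8, 2), (2, 5, 4), (3, 4, 9), (3, 5, 14), (4, 5, 10),
         (5, 6, 2), (6, 7, 1), (6, 8, 6), (7, 8, 7)]
print(dijkstra(9, edges, 0))  # [0, 4, 12, 19, 21, 11, 9, 8, 14]
```

The heap replaces the linear scan for the minimum-distance vertex, improving the O(V^2) approach described above to O(E log V).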
