
Unit-II Study of Greedy Strategies

Subject Code:- AL-402
Subject:- Analysis & Design of Algorithms
By:- Adarsh Raushan
Asst. Professor
CSE Dept., LNCT, Bhopal
Introduction
► Greedy strategies are algorithms or problem-solving
methods that make the locally optimal choice at each
stage in the hope of finding a globally optimal solution. In
other words, they make decisions based solely on the
information available at the current step, without
considering future consequences.
► While greedy algorithms are easy to implement and
computationally efficient, they do not guarantee the best
solution in every case. Sometimes, a globally optimal
solution requires considering future steps and making
sacrifices in the short term. However, for problems that
exhibit the greedy choice property, making a locally
optimal choice at each step does lead to a globally
optimal solution.
Introduction
► Definition:- A greedy algorithm is an approach for
solving a problem by selecting the best option available
at the moment. It does not worry about whether the
current best choice will lead to the overall optimal result.
► The algorithm never reverses an earlier decision, even if
that choice turns out to be wrong. It works in a top-down
fashion.
► The algorithm may not produce the best result for every
problem, because it always makes the locally best
choice in pursuit of the overall best result.
Introduction
► However, we can determine whether the greedy approach can
be applied to a problem by checking whether the problem has
the following properties:-
1. Greedy Choice Property:- If an optimal solution to the
problem can be found by choosing the best option at each step,
without reconsidering previous steps once chosen, the problem
can be solved using a greedy approach. This property is called
the Greedy Choice Property.
2. Optimal Substructure:- If the optimal solution to the problem
contains optimal solutions to its sub-problems, then
the problem can be solved using a greedy approach. This
property is called Optimal Substructure.
ALGORITHMS FOR GREEDY APPROACH
► Optimal Merge Pattern
► Huffman Coding Algorithm
► Prim’s Algorithm
► Kruskal’s Algorithm
► Fractional Knapsack Problem
► Job Sequencing Problem
► Dijkstra’s Algorithm
► Bellman-Ford Algorithm
Optimal Merge Pattern
► Definition:- Optimal Merge Pattern is a classic problem in
computer science, specifically in the realm of algorithm design.
It is often used in the context of external sorting or file
merging.
► Problem Statement:- Given a set of files with varying sizes,
the goal is to merge them into a single file with minimum total
access time. Access time here typically refers to the time required
to access the data from storage, which can include factors such
as seek time and transfer time.
► Solution Approach:- The optimal merge pattern problem is
typically solved using a greedy approach. The basic idea is to
repeatedly merge the two smallest files into a larger file until
only one file remains.
Optimal Merge Pattern
► Step-by-step approach:
1. Sort the files by size:- Arrange all the files in ascending order
based on their sizes.
2. Merge the two smallest files:- Take the two smallest files and
merge them into a single file.
3. Update the list of files:- Replace the two merged files with the
resulting merged file in the list of files.
4. Repeat steps 2 and 3 until only one file remains.
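The steps above can be sketched in Python with a min-heap, which keeps the two smallest files available at each merge (the function name is illustrative):

```python
import heapq

def optimal_merge_cost(sizes):
    """Minimum total cost of merging all files into one.

    Repeatedly merges the two smallest files; each merge of
    files of sizes a and b costs a + b record movements.
    """
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)  # smallest file
        b = heapq.heappop(heap)  # second smallest file
        merged = a + b
        total += merged          # cost of this merge step
        heapq.heappush(heap, merged)
    return total

print(optimal_merge_cost([20, 30, 10, 5, 30]))  # 205
```

The heap replaces the explicit "sort, merge, re-insert" loop of the slide: pushing the merged file back keeps the list ordered for the next step.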
Optimal Merge Pattern:- Example
Let us consider the given files f1, f2, f3, f4 and f5 with 20, 30, 10, 5 and
30 elements respectively.
Initial Set:- Sorted by size, the files are 5, 10, 20, 30, 30.
Step-1:- Merge 5 and 10 into a file of size 15 (files: 15, 20, 30, 30).
Step-2:- Merge 15 and 20 into a file of size 35 (files: 30, 30, 35).
Step-3:- Merge 30 and 30 into a file of size 60 (files: 35, 60).
Step-4:- Merge 35 and 60 into a file of size 95 (one file remains).
Hence, the solution takes 15 + 35 + 60 + 95 = 205
comparisons.
Huffman Coding Algorithm
► Definition:- The Huffman Coding Algorithm is a popular method
for lossless data compression. It creates a variable-length prefix
code based on the frequency of each character in the input text.
Huffman Coding is widely used in applications like data
compression (zip files), telecommunication (fax machines),
and even image compression (JPEG).
Working of Huffman Coding Algorithm:-
1. Frequency Calculation:- The first step is to calculate the
frequency of each character in the input text. This step is
essential because Huffman Coding relies on character
frequencies to construct the encoding tree.
2. Build Huffman Tree:- Once the frequency of each character is
known, the algorithm constructs a binary tree called the
Huffman Tree. The tree is built by starting with each
character as a leaf node and then merging the two nodes with
the lowest frequencies at each step until all nodes are merged
into a single root.
Huffman Coding Algorithm
3. Assign Codes:- After constructing the Huffman tree, the
algorithm traverses the tree to assign binary codes to each
character. It does this by assigning a ‘0’ for each left branch and
a ‘1’ for each right branch while traversing from the root to each
leaf node. As a result, each character gets a unique binary code,
which forms its Huffman Code.
4. Encoding:- With the Huffman tree and assigned codes, the
algorithm can encode the input text by replacing each character
with its corresponding Huffman Code. The encoded text is
typically more compact than the original text, since frequently
occurring characters are assigned shorter codes.
5. Decoding:- To decode the encoded text, the algorithm uses the
Huffman tree. Starting from the root, it traverses the tree
following the encoded bits until it reaches a leaf node, which
represents a character in the original text. This process
continues until the entire encoded message is decoded back
into its original form.
Huffman Coding Algorithm:- Example
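As a small illustration of steps 1-4, the sketch below builds Huffman codes with a min-heap keyed on frequency; instead of an explicit tree, each heap entry carries the partial code table for its subtree (the representation is a simplification chosen for brevity):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table for the characters of `text`."""
    freq = Counter(text)                      # step 1: frequencies
    if len(freq) == 1:                        # degenerate case
        return {ch: "0" for ch in freq}
    # Heap entries: (frequency, tie-breaker, {char: code-so-far}).
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:                      # step 2: merge two lowest
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {ch: "0" + c for ch, c in left.items()}   # step 3:
        merged.update({ch: "1" + c for ch, c in right.items()})  # 0/1
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
encoded = "".join(codes[ch] for ch in "aaaabbc")  # step 4: encoding
```

Here 'a' (frequency 4) receives a 1-bit code while 'b' and 'c' receive 2-bit codes, so the 7-character text encodes into 10 bits.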
MST-Prim’s Algorithm
► Definition:- Prim's Algorithm is a greedy algorithm used to
find the minimum spanning tree (MST) of a connected,
undirected graph. The minimum spanning tree of a graph is a
subset of its edges that forms a tree and connects all vertices
together with the minimum possible total edge weight.
► Here's a conceptual overview of how Prim's Algorithm
works:
1. Initialization: Choose an arbitrary starting vertex as the root of
the MST. Initialize a priority queue (min-heap) to store edges
with their weights.
2. Selecting Edges: Repeat the following steps until all vertices
are included in the MST: at each step, consider all the edges
that connect the vertices in the MST to the vertices outside the
MST, and choose the edge with the minimum weight among
them.
MST-Prim’s Algorithm
3. Including Vertices: Add the selected edge and the corresponding
vertex to the MST. Update the priority queue with the newly
included vertex's edges.
4. Termination: Continue the process until all vertices are included in
the MST.
5. Output: The MST is formed by the selected edges and vertices.
Key points about Prim's Algorithm:
► It always starts with a single vertex and grows the MST one vertex
at a time.
► It guarantees that the final tree is connected and contains n−1
edges for a graph with n vertices.
► The MST may not be unique; there can be multiple MSTs for a
given graph if there are ties in edge weights.
► It has a time complexity of O(E log V) using a binary heap or
priority queue, where E is the number of edges and V is the
number of vertices.
MST-Prim’s Algorithm
Prim’s Algorithm: Like Kruskal’s, Prim’s algorithm works on a
greedy approach.
For each vertex U in G.V
    U.Key = ∞
    U.Parent = NULL
Pick any vertex R
R.Key = 0
Min-Priority Queue Q = G.V (keyed on each vertex’s Key)
While Q != Empty
    U = Q.Extract-Min()
    For each V in G.Adj[U]
        If V in Q And Weight(U, V) < V.Key
            V.Parent = U
            V.Key = Weight(U, V)   // decrease-key of V in Q
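A minimal Python sketch of the same idea, using a heap of candidate edges instead of decrease-key (stale heap entries are simply skipped; the graph format is an assumption for the example):

```python
import heapq

def prim_mst_weight(graph, start):
    """Total weight of the MST of a connected undirected graph.

    `graph` maps each vertex to a list of (neighbour, weight) pairs.
    """
    visited = {start}
    # Candidate edges (weight, vertex) leaving the current tree.
    heap = [(w, v) for v, w in graph[start]]
    heapq.heapify(heap)
    total = 0
    while heap and len(visited) < len(graph):
        w, u = heapq.heappop(heap)
        if u in visited:
            continue  # stale entry: u joined via a lighter edge
        visited.add(u)
        total += w
        for v, wv in graph[u]:
            if v not in visited:
                heapq.heappush(heap, (wv, v))
    return total

g = {
    'A': [('B', 2), ('C', 3)],
    'B': [('A', 2), ('C', 1), ('D', 4)],
    'C': [('A', 3), ('B', 1), ('D', 5)],
    'D': [('B', 4), ('C', 5)],
}
print(prim_mst_weight(g, 'A'))  # 2 + 1 + 4 = 7
```

Skipping stale entries is a common alternative to an explicit decrease-key operation and keeps the O(E log V) bound.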
MST-Prim’s Algorithm
[Figure: worked example showing the MST growing one vertex at a time, Steps 1-7.]
MST-Kruskal’s Algorithm
► Definition:-Kruskal's algorithm is a method used to find the
minimum spanning tree (MST) of a connected, undirected
graph. A spanning tree of a graph is a sub-graph that is a tree
and connects all the vertices together with the minimum
possible total edge weight. The key idea behind Kruskal's
algorithm is to start with an empty graph and iteratively add the
lightest edges that do not form cycles until all vertices are
connected.
► Here's a step-by-step explanation of Kruskal's algorithm:
1. Sort Edges: First, sort all the edges of the graph by their
weights in non-decreasing order.
2. Initialize: Create an empty graph to hold the MST.
3. Iterate through Edges: Starting with the smallest edge,
iterate through the sorted list of edges.
MST-Kruskal’s Algorithm
4. Check for Cycles: For each edge, check if adding it to the
MST would create a cycle. This can be done efficiently using
the Union-Find (disjoint-set) data structure.
5. Add Edge: If adding the edge does not create a cycle, add it
to the MST.
6. Repeat: Repeat steps 4 and 5 until all vertices are connected
or until the MST has n-1 edges, where n is the number of
vertices in the graph.
7. Output: The resulting graph is the minimum spanning tree.
Kruskal's algorithm is efficient and typically runs in O(E log E)
time, where E is the number of edges in the graph, primarily due
to the sorting step. It's often preferred for sparse graphs, where
the number of edges is much less than the maximum possible
number of edges.
MST-Kruskal’s Algorithm
► Kruskal’s Algorithm: Kruskal’s algorithm works on a greedy
approach; it takes the edges with the smallest weights first.
Define an empty list A = [ ]
For each vertex V
    Make-Set(V)
Sort the edges of the graph in order of weight
For each edge E(u, v) in sorted order
    If Find-Set(u) != Find-Set(v)
        Append E(u, v) to A
        Union(u, v)
Return A
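The pseudocode above translates directly into Python; the `find` helper below implements the Find-Set operation with path halving (a simple Union-Find variant chosen for brevity):

```python
def kruskal_mst(n, edges):
    """MST edges of a connected graph with vertices 0..n-1.

    `edges` is a list of (weight, u, v) tuples.
    """
    parent = list(range(n))          # Make-Set for every vertex

    def find(x):
        # Find the set representative, halving paths as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):    # lightest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                 # no cycle: keep the edge
            parent[ru] = rv          # Union
            mst.append((w, u, v))
    return mst

edges = [(2, 0, 1), (1, 1, 2), (3, 0, 2), (4, 1, 3), (5, 2, 3)]
tree = kruskal_mst(4, edges)
print(sum(w for w, _, _ in tree))  # 1 + 2 + 4 = 7
```

Sorting dominates the running time, which is why the overall bound is O(E log E).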
MST-Kruskal’s Algorithm
The algorithm picks each edge starting from the lowest
weight.
[Figure: worked example adding edges in weight order, Steps 1-7.]
Fractional Knapsack Problem
► Definition:-The Fractional Knapsack Problem is a classic
optimization problem in computer science and mathematics. It
involves selecting a combination of items with given weights
and values to maximize the total value while not exceeding a
given weight limit (the capacity of the knapsack).
Here's a brief overview of the problem:
1. Input: Given a set of items, each with a weight wi and a
value vi, and a knapsack with a maximum weight capacity W.
2. Objective: Maximize the total value of items that can be
put into the knapsack.
3. Constraints: The total weight of items selected cannot
exceed the capacity of the knapsack. Items can be taken
partially (fractions of an item are allowed).
4. Approach: Greedy algorithm — take items in decreasing
order of value-to-weight ratio, taking a fraction of the last
item if it does not fit whole.
Fractional Knapsack Problem

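A short sketch of the greedy solution, assuming items are given as (value, weight) pairs:

```python
def fractional_knapsack(items, capacity):
    """Maximum value achievable; items are (value, weight) pairs."""
    total = 0.0
    # Consider items in decreasing value-to-weight ratio.
    for value, weight in sorted(items, key=lambda it: it[0] / it[1],
                                reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)      # whole item, or what fits
        total += value * (take / weight)  # proportional value
        capacity -= take
    return total

items = [(60, 10), (100, 20), (120, 30)]
print(fractional_knapsack(items, 50))  # 240.0
```

With capacity 50, the first two items fit whole (values 60 and 100) and two thirds of the last item contributes 80, for a total of 240.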
Job Sequencing Problem
► Definition:- The Job Sequencing Problem (JSP) is
another classical problem in the field of optimization
and scheduling. It involves a set of jobs, each having a
specific deadline and profit associated with it. The
objective is to schedule the jobs in such a way that the
total profit is maximized, while also meeting the given
deadlines.
► Here's a detailed explanation of the problem:
1. Input:
► n jobs, each characterized by three parameters:
► A unique identifier (job ID).
► A deadline by which the job needs to be completed.
► A profit that can be earned if the job is completed on time.
Job Sequencing Problem
2. Objective:- Maximize the total profit earned by
scheduling the jobs, considering their deadlines.
3. Constraints:- Each job takes one unit of time and only
one job can run at a time. A job earns its profit only if it
is completed by its deadline, and each job can be
scheduled at most once.
4. Approach: Greedy algorithm.
Job Sequencing Problem
The greedy approach to solving the Job Sequencing Problem
involves the following steps:
► Sort the jobs based on their profits in non-increasing
order: This ensures that we consider the jobs with higher
profits first, as they contribute more to the total profit.
► For each job, assign it to the latest possible slot before its
deadline, without conflicting with other jobs already
assigned to slots: This step ensures that we prioritize jobs
with higher profits while also meeting their deadlines.
► Repeat this process for all jobs: Assign jobs to available
slots based on their deadlines and the slots' availability.
► Calculate the total profit earned from the assigned jobs:
Sum up the profits of all the jobs that have been successfully
scheduled.
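The steps above can be sketched as follows, assuming each job is a (id, deadline, profit) tuple and every job takes one unit of time:

```python
def job_sequencing(jobs):
    """Greedy job sequencing; jobs are (id, deadline, profit) tuples.

    Returns the scheduled job ids (in slot order) and total profit.
    """
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * (max_deadline + 1)  # slots[t]: job run in (t-1, t]
    # Consider the most profitable jobs first.
    for job_id, deadline, profit in sorted(jobs, key=lambda j: -j[2]):
        # Latest free slot at or before the job's deadline.
        for t in range(min(deadline, max_deadline), 0, -1):
            if slots[t] is None:
                slots[t] = (job_id, profit)
                break                    # job placed; next job
    scheduled = [s for s in slots if s is not None]
    return [j for j, _ in scheduled], sum(p for _, p in scheduled)

jobs = [('J1', 2, 100), ('J2', 1, 19), ('J3', 2, 27),
        ('J4', 1, 25), ('J5', 3, 15)]
order, profit = job_sequencing(jobs)
print(order, profit)  # ['J3', 'J1', 'J5'] 142
```

Placing each job in the latest free slot before its deadline leaves the earlier slots open for jobs with tighter deadlines.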
Job Sequencing Problem
Problem:- A problem is given with 7 jobs, each with an
associated profit and deadline. Find the maximum profit
using the Job Sequencing Method.
[Figure: table of the 7 jobs and the worked schedule.]
Dijkstra’s Algorithm
Definition:-Dijkstra's algorithm is a fundamental algorithm used for
finding the shortest path from a single source vertex to all other
vertices in a weighted graph with non-negative edge weights. It is
commonly used in various applications such as network routing
protocols, GPS navigation systems, and traffic management systems.
Key Components:
1. Graph Representation: The input to Dijkstra's algorithm is a
weighted graph, which can be represented using adjacency lists,
adjacency matrices, or any other suitable data structure.
2. Distance Array: Dijkstra's algorithm maintains an array to
store the shortest distance from the source vertex to each vertex
in the graph. Initially, all distances are set to infinity except for
the distance from the source vertex to itself, which is set to zero.
3. Priority Queue: Dijkstra's algorithm uses a priority queue to
efficiently select the vertex with the smallest distance from the
source vertex among the vertices that have not yet been visited.
Dijkstra’s Algorithm
Algorithm Steps:
1. Initialization: Initialize the distance array. Set the
distance of the source vertex to zero and the distance
of all other vertices to infinity.
2. Main Loop: Repeat the following steps until all
vertices have been visited:
► Select the vertex with the smallest distance from the
source vertex. This vertex becomes the current vertex.
► Mark the current vertex as visited.
► Update the distances of all adjacent vertices of the
current vertex if a shorter path is found through the
current vertex.
Dijkstra’s Algorithm
3. Relaxation Step: For each adjacent vertex v of the
current vertex u, if the distance from the source to u
plus the weight of the edge (u, v) is less than the
current distance to v, update the distance to v with this
shorter distance.
4. Termination: After all vertices have been visited, the
distance array contains the shortest distances from the
source vertex to all other vertices in the graph.
Time Complexity:
The time complexity of Dijkstra's algorithm depends on
the data structure used to implement the priority queue.
Using a binary heap, Dijkstra's algorithm has a time
complexity of O((V + E) log V), where V is the number of
vertices and E is the number of edges in the graph; a
Fibonacci heap improves this to O(E + V log V).
Dijkstra’s Algorithm
Pseudo Code:-
Dijkstra(Graph, Source)
    Create vertex set Q
    For each vertex v in Graph
        Dist[v] = ∞
        Add v to Q
    Dist[Source] = 0
    While Q is not empty
        u = Extract-Min(Q)
        For each neighbour v of u
            Relax(u, v)
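A compact Python version of the pseudo code, using a heap of (distance, vertex) pairs; the relaxation step from the previous slide appears in the inner loop (the graph format is an assumption for the example):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from `source`; edge weights must be
    non-negative. `graph` maps a vertex to (neighbour, weight) pairs.
    """
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)       # Extract-Min
        if d > dist[u]:
            continue                     # stale heap entry
        for v, w in graph[u]:
            if d + w < dist[v]:          # Relax(u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {
    'A': [('B', 1), ('C', 4)],
    'B': [('C', 2), ('D', 6)],
    'C': [('D', 3)],
    'D': [],
}
print(dijkstra(g, 'A'))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

Note the path A→B→C (cost 3) beats the direct edge A→C (cost 4), which is exactly what the relaxation step discovers.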
Dijkstra’s Algorithm:- Example
Question:-1 Find the shortest path from vertex C to all the other vertices
using Dijkstra’s Algorithm.
[Figure: worked example relaxing edges step by step, Steps 1-11.]
Bellman-Ford Algorithm
Definition:-The Bellman-Ford algorithm is a fundamental
algorithm in graph theory used for finding the shortest
paths from a single source vertex to all other vertices in a
weighted graph, even if the graph contains negative weight
edges (as long as there are no negative weight cycles
reachable from the source).
Here's a basic outline of how the Bellman-Ford
algorithm works:
1. Initialize distances: Assign a tentative distance value
to every vertex. Set the distance to the source vertex as
0 and all other vertices' distances as infinity.
2. Relax edges repeatedly: Iterate through all edges and
update the distance of adjacent vertices if a shorter
path is found.
Bellman-Ford Algorithm
3. Repeat step 2 for a total of V-1 times, where V is the
number of vertices in the graph. This guarantees finding
the shortest path for graphs without negative cycles.
4. Check for negative cycles: After V-1 iterations, if there
are still updates happening, it means there is a negative
weight cycle in the graph reachable from the source vertex.
The time complexity of the Bellman-Ford algorithm is O(V*E),
where V is the number of vertices and E is the number of edges
in the graph.
While the Bellman-Ford algorithm is less efficient than
Dijkstra's algorithm for finding shortest paths in graphs without
negative weights, it has the advantage of handling negative
weight edges and detecting negative weight cycles.
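The V−1 relaxation passes plus the final detection pass can be sketched as follows, with edges given as (u, v, weight) tuples over vertices 0..V−1 (an assumed input format):

```python
def bellman_ford(num_vertices, edges, source):
    """Shortest distances from `source`; edges are (u, v, weight).

    Handles negative edge weights; raises ValueError if a negative
    weight cycle is reachable from the source.
    """
    INF = float('inf')
    dist = [INF] * num_vertices
    dist[source] = 0
    for _ in range(num_vertices - 1):      # V-1 relaxation passes
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # extra pass: detect cycles
        if dist[u] != INF and dist[u] + w < dist[v]:
            raise ValueError("negative weight cycle reachable "
                             "from source")
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 3, 3), (2, 1, -2), (3, 2, 2)]
print(bellman_ford(4, edges, 0))  # [0, 3, 5, 6]
```

The negative edge (2, 1, −2) makes the path 0→2→1 (cost 3) cheaper than the direct edge 0→1 (cost 4), something Dijkstra's algorithm would get wrong.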
Bellman-Ford Algorithm:- Example
[Figure: worked example relaxing all edges over successive passes,
Steps 1-4, with the final shortest-path table.]