DAA Unit IV
Shumama Ansa
• Greedy Method: General Method
• Job sequencing with deadlines
• Knapsack Problem
• Minimum Cost Spanning Tree
• Single Source Shortest Path Problem
• Basic Traversal and Search Techniques: techniques
for binary trees and graphs
• Connected Components
• Biconnected Components
GREEDY METHOD
• The greedy method is the most straightforward design
technique. As the name suggests, greedy algorithms are
short-sighted in their approach, taking decisions on the
basis of the information immediately at hand without
worrying about the effect these decisions may have in the future.
DEFINITION:
• A problem with n inputs will have some constraints. Any
subset of the inputs that satisfies these constraints is called
a feasible solution.
• A feasible solution that either maximizes or minimizes a
given objective function is called an optimal solution.
• Greedy method suggests that one can devise an
algorithm that works in stages, considering one input
at a time.
• At each stage, a decision is made regarding whether a
particular input is in an optimal solution.
• This is done by considering the inputs in an order
determined by some selection procedure.
• If the inclusion of the next input into the partially
constructed optimal solution would result in an infeasible
solution, this input is not added to the partial
solution; otherwise, it is added.
• The selection procedure itself is based on
some optimization measure.
• This measure may be the objective function.
• This version of the greedy technique is called
the subset paradigm.
Algorithm Greedy(a, n)  // a[1:n] contains the n inputs
{
    solution := Ø;      // initialise the solution
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}
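The same subset paradigm can be written as a short, runnable Python sketch. Here greedy, select and feasible are illustrative names (not from the text); a concrete problem must supply the problem-specific select and feasible functions.

# Generic subset-paradigm greedy skeleton (illustrative sketch only).
# 'select' picks the most promising remaining input; 'feasible' checks the constraints.
def greedy(inputs, select, feasible):
    solution = []                  # initialise the solution (empty subset)
    remaining = list(inputs)
    while remaining:
        x = select(remaining)      # choose the next input by the optimization measure
        remaining.remove(x)
        if feasible(solution, x):  # add x only if the partial solution stays feasible
            solution.append(x)
    return solution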
Job Sequencing With Deadlines
• We are given a set of n jobs. Associated with each job i is a
deadline di > 0 and a profit pi > 0. For any job i the
profit pi is earned iff the job is completed by its
deadline.
• Only one machine is available for processing jobs.
• A feasible solution here is a subset of jobs
such that each job in this subset is completed
by its deadline
• An optimal solution is the feasible solution with
maximum profit.
• The jobs in J are kept sorted by their deadlines. The
array d[1:n] stores the deadlines of the jobs in the
order of their p-values. The set of selected jobs is kept in an
array J[1:k] such that J[r], 1 ≤ r ≤ k, are the jobs in J and
d(J[1]) ≤ d(J[2]) ≤ … ≤ d(J[k]).
• To test whether J ∪ {i} is feasible, we just
insert i into J preserving the deadline ordering
and then verify that d(J[r]) ≤ r, 1 ≤ r ≤ k+1.
Example:
• n = 4, (p1, p2, p3, p4) = (100, 10, 15, 27) and
(d1, d2, d3, d4) = (2, 1, 2, 1). The feasible solutions
and their values are:
  {1}: 100    {2}: 10    {3}: 15    {4}: 27
  {1, 2}: 110    {1, 3}: 115    {1, 4}: 127    {2, 3}: 25    {3, 4}: 42
• To formulate a greedy algorithm to obtain an optimal solution,
we must formulate an optimization measure to determine how
the next job is chosen.
• As a first attempt we can choose the objective function ∑i∈J pi
as our optimization measure.
• Using this measure, the next job to include is the one that
increases ∑i∈J pi the most, subject to the constraint that the
resulting J is a feasible solution. This requires us to consider
the jobs in non-increasing order of the pi's.
• Job 1 has the highest profit, and J = {1} is feasible.
• Next, job 4 is considered; J = {1, 4} is feasible.
• Next, job 3 is considered and discarded, since J = {1, 3, 4}
is not feasible.
• Finally, job 2 is considered and discarded, since J = {1, 2, 4}
is not feasible.
• Hence the solution J = {1, 4} with profit 127 is the
optimal solution.
• In J, the jobs are kept in non-decreasing order of their
deadlines.
The algorithm constructs an optimal set J of jobs that can be processed by their
deadlines.
Algorithm GreedyJob(d, J, n)
// J is a set of jobs that can be completed by their deadlines.
{
    J := {1};
    for i := 2 to n do
    {
        if (all jobs in J ∪ {i} can be completed by their deadlines) then
            J := J ∪ {i};
    }
}
The greedy algorithm is used to obtain an optimal solution.
Algorithm JS(d, j, n)
// d[i] ≥ 1 are the deadlines; the jobs are ordered such that p[1] ≥ p[2] ≥ … ≥ p[n].
// j[i] is the ith job in the optimal solution, 1 ≤ i ≤ k; k is returned.
{
    d[0] := j[0] := 0;   // sentinel
    j[1] := 1;           // include job 1
    k := 1;
    for i := 2 to n do
    {
        // Consider jobs in non-increasing order of p[i];
        // find the position for job i and check the feasibility of inserting it.
        r := k;
        while ((d[j[r]] > d[i]) and (d[j[r]] ≠ r)) do r := r - 1;
        if ((d[j[r]] ≤ d[i]) and (d[i] > r)) then
        {
            // insert i into j[] after position r
            for q := k to (r + 1) step -1 do j[q + 1] := j[q];
            j[r + 1] := i;
            k := k + 1;
        }
    }
    return k;
}
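The same procedure can also be sketched in Python. This is an illustrative implementation rather than the textbook code: it takes jobs in non-increasing profit order and accepts a job only if every job in the resulting set can still be scheduled by its deadline (the d(J[r]) ≤ r test above).

# Greedy job sequencing with deadlines (sketch); jobs are indexed from 0.
def job_sequencing(profits, deadlines):
    order = sorted(range(len(profits)), key=lambda i: -profits[i])   # non-increasing profit
    chosen = []
    for i in order:
        trial = sorted(chosen + [i], key=lambda j: deadlines[j])     # keep deadline order
        # feasible iff the r-th job (1-based) in deadline order meets its deadline
        if all(deadlines[j] >= r + 1 for r, j in enumerate(trial)):
            chosen = trial
    return chosen, sum(profits[j] for j in chosen)

# The example above: profits (100, 10, 15, 27), deadlines (2, 1, 2, 1)
print(job_sequencing([100, 10, 15, 27], [2, 1, 2, 1]))   # ([3, 0], 127): jobs 4 and 1, profit 127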
Knapsack Problem
• We are given n objects and a knapsack (bag) of
capacity m.
• Object i has a weight wi and a profit pi, where i varies from 1 to n.
• The problem is to fill the bag with (fractions of) the
n objects so that the resulting profit is maximum.
• If a fraction xi, 0 ≤ xi ≤ 1, of object i is placed into the
knapsack, then a profit of pi·xi is earned. The objective
is to fill the knapsack so as to maximize the total profit
earned.
• Since the knapsack capacity is ‘m’, we require the total
weight of all chosen objects to be at most ‘m’.
• Formally the problem can be stated as:
  maximize   ∑ pi·xi                    … (1)
  subject to ∑ wi·xi ≤ m                … (2)
  and        0 ≤ xi ≤ 1, 1 ≤ i ≤ n      … (3)
• where xi is the fraction of object i placed in the knapsack and lies
between 0 and 1.
• A feasible solution is any vector (x1, x2, …, xn)
satisfying (2) and (3) above.
• An optimal solution is a feasible solution for
which (1) is maximized.
• There are many ways to solve this problem; each gives a
feasible solution, and among these we have to
find an optimal one.
• The greedy algorithm, however, generates just one
solution, which is both feasible and
optimal.
• First, we find the profit & weight rates of each and
every object and sort it according to the descending
order of the ratios.
• Select an object with highest p/w ratio and check
whether it is lesser than the capacity of the bag.
• If so place 1 unit of the first object and decrement the
capacity of the bag by the weight of the object you
have placed.
• Repeat the above steps until the capacity of the bag
becomes less than the weight of the object you have
selected .in this case place a fraction of the object and
come out of the loop.
Algorithm GreedyKnapsack(m, n)
// p[1:n] and w[1:n] contain the profits and weights of the n objects,
// ordered such that p[i]/w[i] ≥ p[i+1]/w[i+1]. m is the knapsack capacity
// and x[1:n] is the solution vector.
{
    for i := 1 to n do x[i] := 0.0;    // initialise x
    U := m;                            // U is the remaining capacity
    for i := 1 to n do
    {
        if (w[i] > U) then break;
        x[i] := 1.0;
        U := U - w[i];
    }
    if (i ≤ n) then x[i] := U / w[i];  // place a fraction of the next object
}
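A runnable Python version of the same strategy is sketched below. It is an illustration, not the course's official code, and unlike GreedyKnapsack it sorts the objects by p/w ratio itself instead of assuming pre-sorted input.

# Greedy fractional knapsack (sketch): returns the solution vector x and the profit earned.
def greedy_knapsack(p, w, m):
    n = len(p)
    order = sorted(range(n), key=lambda i: p[i] / w[i], reverse=True)   # non-increasing p/w
    x = [0.0] * n
    remaining = m
    for i in order:
        if w[i] <= remaining:          # the whole object fits
            x[i] = 1.0
            remaining -= w[i]
        else:                          # place only the fraction that fits, then stop
            x[i] = remaining / w[i]
            break
    return x, sum(p[i] * x[i] for i in range(n))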
• Consider the following instance of the knapsack
problem where n = 7, m = 15,
(p1, p2, p3, p4, p5, p6, p7) = (10, 5, 15, 7, 6, 18, 3)
and
(w1, w2, w3, w4, w5, w6, w7) = (2, 3, 5, 7, 1, 4, 1)

  Object (O):  1   2   3   4   5   6   7
  Profit (P):  10  5   15  7   6   18  3
  Weight (W):  2   3   5   7   1   4   1
Considering profits
• If we consider adding objects of higher profit:
Object 6 has the highest profit.
If object 6 is included, the remaining capacity =
15 - 4 = 11 and the profit = 18.
The next higher profit is that of object 3, so the remaining
capacity = 11 - 5 = 6 and the profit = 33.
The next higher profit is that of object 1, so the remaining
capacity = 6 - 2 = 4 and the profit = 43.
The next higher profit is that of object 4, but the remaining
capacity is 4 while the weight of object 4 is 7, so only
4/7 of it is included. The remaining capacity is now
zero and the profit = 43 + (4/7)·7 = 47.
Considering weights
• If we consider adding objects of higher weight:
Object 4 has the highest weight.
If object 4 is included, the remaining capacity =
15 - 7 = 8 and the profit = 7.
The next higher weight is that of object 3, so the remaining
capacity = 8 - 5 = 3 and the profit = 22.
The next higher weight is that of object 6, whose weight
is 4 but the left-over capacity is 3, so 3/4 of it is
included in the bag. The remaining capacity
becomes 0 and the profit = 22 + (3/4)·18 = 35.5.
Considering profit/weight ratios
• If we consider adding objects based on their p/w ratio:
The highest p/w ratio is that of object 5, so object 5 is
included; the remaining capacity = 15 - 1 = 14 and the
profit = 6.
Next come object 1 (ratio 5), object 6 (ratio 4.5), object 3
(ratio 3) and object 7 (ratio 3); including them leaves a
remaining capacity of 14 - 2 - 4 - 5 - 1 = 2 and a profit of
6 + 10 + 18 + 15 + 3 = 52.
The next ratio is that of object 2 (5/3), whose weight is 3,
so only 2/3 of it fits. The remaining capacity becomes 0 and
the profit = 52 + (2/3)·5 ≈ 55.33, which is the optimal value.
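Running the greedy_knapsack sketch given earlier on this instance reproduces the same result (the function name comes from that sketch, not from the text):

p = [10, 5, 15, 7, 6, 18, 3]
w = [2, 3, 5, 7, 1, 4, 1]
x, profit = greedy_knapsack(p, w, 15)
print(x, round(profit, 2))   # objects 1, 3, 5, 6, 7 in full, 2/3 of object 2; profit ≈ 55.33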
MINIMUM COST SPANNING TREE
PRIM'S ALGORITHM:
• Prim's algorithm builds a minimum-cost spanning tree edge by edge:
it starts from a least-cost edge and, at each step, adds the cheapest
edge that connects a vertex not yet in the tree to the partially built tree.
• Example graph (vertices 1–7) with edge costs:
(1,2) = 28, (1,6) = 10, (2,3) = 16, (2,7) = 14, (3,4) = 12,
(4,5) = 22, (4,7) = 18, (5,6) = 25, (5,7) = 24
Algorithm Prim(E, cost, n, t)
// E is the set of edges in G; cost[1:n, 1:n] is the cost adjacency matrix of an
// n-vertex graph. t[1:n-1, 1:2] holds the edges of the minimum-cost spanning tree;
// the final cost is returned.
{
    Let (k, l) be an edge of minimum cost in E;
    mincost := cost[k, l];
    t[1, 1] := k; t[1, 2] := l;
    for i := 1 to n do            // initialise near[]
        if (cost[i, l] < cost[i, k]) then near[i] := l;
        else near[i] := k;
    near[k] := near[l] := 0;
    for i := 2 to n - 1 do        // find n - 2 additional edges for t
    {
        Let j be an index such that near[j] ≠ 0 and cost[j, near[j]] is minimum;
        t[i, 1] := j; t[i, 2] := near[j];
        mincost := mincost + cost[j, near[j]];
        near[j] := 0;
        for k := 1 to n do        // update near[]
            if ((near[k] ≠ 0) and (cost[k, near[k]] > cost[k, j])) then
                near[k] := j;
    }
    return mincost;
}
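A compact Python sketch of the same strategy is shown below. It is illustrative only: instead of the near[] array it keeps, for every vertex not yet in the tree, the cheapest edge connecting it to the tree, and it assumes a connected graph given as an n x n cost matrix with float('inf') for missing edges.

INF = float('inf')

# Prim's minimum-cost spanning tree (sketch); vertices are 0 .. n-1.
def prim(cost):
    n = len(cost)
    in_tree = [False] * n
    best = [INF] * n            # cheapest edge cost connecting each vertex to the tree
    parent = [-1] * n
    best[0] = 0                 # start the tree at vertex 0
    mincost, tree_edges = 0, []
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        mincost += best[u]
        if parent[u] != -1:
            tree_edges.append((parent[u], u))
        for v in range(n):      # update the cheapest connections through u
            if not in_tree[v] and cost[u][v] < best[v]:
                best[v] = cost[u][v]
                parent[v] = u
    return tree_edges, mincost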
KRUSKAL’S ALGORITHM:
• In Kruskal's algorithm the selection function chooses edges in
increasing order of length, without worrying about their
connection to previously chosen edges, except that an edge is never
allowed to form a cycle.
• The result is a forest of trees that grows until all the trees in the forest
(all the components) merge into a single tree.
• In this algorithm, a minimum-cost spanning tree T is built edge by
edge.
• Edges are considered for inclusion in T in increasing order of their
cost.
• An edge is included in T if it doesn't form a cycle with the edges already
in T.
• To find the minimum-cost spanning tree, the edges are inserted into the
tree in increasing order of their cost.
Algorithm Kruskal(E, cost, n, t)
// E is the set of edges in G; G has n vertices. cost[u, v] is the cost of edge (u, v).
// t is the set of edges in the minimum-cost spanning tree; the final cost is returned.
{
    Construct a heap out of the edge costs;
    for i := 1 to n do parent[i] := -1;   // each vertex is in a different set
    i := 0; mincost := 0.0;
    while ((i < n - 1) and (heap not empty)) do
    {
        Delete a minimum-cost edge (u, v) from the heap;
        j := Find(u);
        k := Find(v);
        if (j ≠ k) then       // (u, v) does not create a cycle
        {
            i := i + 1;
            t[i, 1] := u; t[i, 2] := v;
            mincost := mincost + cost[u, v];
            Union(j, k);
        }
    } // while
    if (i ≠ n - 1) then write("No spanning tree");
    else return mincost;
}
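A Python sketch of Kruskal's method is given below. It is illustrative: it sorts the edge list instead of using a heap, and uses a simple union-find structure (the Find/Union of the pseudocode) to detect cycles.

# Kruskal's minimum-cost spanning tree (sketch).
# edges is a list of (cost, u, v) tuples; the n vertices are numbered 0 .. n-1.
def kruskal(n, edges):
    parent = list(range(n))
    def find(x):                        # set representative, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree, mincost = [], 0
    for c, u, v in sorted(edges):       # consider edges in increasing order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                    # accept the edge only if it does not form a cycle
            parent[ru] = rv
            tree.append((u, v))
            mincost += c
    if len(tree) != n - 1:
        return None, None               # no spanning tree exists
    return tree, mincost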
Single-source shortest path:
• Graphs can be used to represent the highway structure
of a state or country with vertices representing cities
and edges representing sections of highway.
• The edges can then be assigned weights which may be
either the distance between the two cities connected
by the edge or the average time to drive along that
section of highway.
• A motorist wishing to drive from city A to B would be
interested in answers to the following questions:
– Is there a path from A to B?
– If there is more than one path from A to B, which one is the
shortest?
• In the single source, all destinations, shortest path
problem, we must find a shortest path from a
given source vertex to each of the vertices (called
destinations) in the graph to which there is a path.
• Dijkstra’s algorithm takes a labeled graph and a
pair of vertices P and Q, and finds the shortest
path between them (or one of the shortest paths)
if there is more than one.
• The principle of optimality is the basis for Dijkstra's
algorithm. Dijkstra's algorithm does not work with
negative edge weights at all.
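A short Python sketch of Dijkstra's single-source algorithm is shown below. It is illustrative (it uses a binary heap and an adjacency list rather than a cost matrix); graph[u] must exist, possibly as an empty list, for every vertex u, and all edge weights must be non-negative.

import heapq

# Dijkstra's single-source shortest paths (sketch).
# graph[u] is a list of (v, weight) pairs.
def dijkstra(graph, source):
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:                  # stale heap entry, skip it
            continue
        for v, w in graph[u]:            # relax every edge leaving u
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist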
• Example: cost adjacency matrix of a directed graph (∞ indicates no edge):
      V1  V2  V3  V4  V5  V6
  V1   -  50  10   ∞  45   ∞
  V2   ∞   -  15   ∞  10   ∞
  V3  20   ∞   -  15   ∞   ∞
  V4   ∞  20   ∞   -  35   ∞
  V5   ∞   ∞   ∞  30   -   ∞
  V6   ∞   ∞   ∞   3   ∞   -
[Figure: worked example — a directed graph on vertices 1–6 with edge weights, used to trace Dijkstra's algorithm]
• Starting vertex 1
[Figure: distance labels after the edges out of vertex 1 are relaxed; the tentative distances are 2, 4 and ∞]
• Select the smallest tentative distance from {2, 4, ∞} and perform
relaxation; the smallest is at v2.
[Figure: distance labels after relaxing the edges out of v2]
• V3 and v4 are relaxed. Select next smallest
v3
[Figure: distance labels after relaxing the edges out of v3]
• V5 is relaxed. Next smallest is v5.
[Figure: distance labels after relaxing the edges out of v5]
• V4 and v6 are relaxed. Next smallest is v4.
[Figure: final distance labels after relaxing the edges out of v4]
Final distances from source vertex 1:
  Vertex (v):  2  3  4  5  6
  dist[v]:     2  3  8  6  9
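For reference, one adjacency list that is consistent with the worked example (the edge weights below are a reconstruction and an assumption, since the original figure is not reproduced here) gives exactly this distance table when fed to the dijkstra sketch above:

# Assumed edge weights for the example graph (reconstruction, not from the original figure).
example = {
    1: [(2, 2), (3, 4)],
    2: [(3, 1), (4, 7)],
    3: [(5, 3)],
    4: [(6, 1)],
    5: [(4, 2), (6, 5)],
    6: [],
}
print(dijkstra(example, 1))   # {1: 0, 2: 2, 3: 3, 4: 8, 5: 6, 6: 9}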
Graphs
Graph G is a pair (V, E), where V is a finite set (set of vertices)
and E is a finite set of pairs from V (set of edges). We will often
denote n := |V|, m:=|E|.
• The notation V(G) and E(G) represents the sets of vertices and
edges of G, respectively.
– G(V,E) represents a graph
• In an undirected graph, the pair of vertices representing any edge
is unordered. (u,v) = (v,u)
• In a directed graph each edge is represented by a directed pair
(u,v); u is tail and v is head.
Examples for Graph
[Figure: G1 (a complete undirected graph), G2 (an incomplete undirected graph, a tree) and G3 (a directed graph)]
V(G1)={0,1,2,3} E(G1)={(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)}
V(G2)={0,1,2,3,4,5,6} E(G2)={(0,1),(0,2),(1,3),(1,4),(2,5),(2,6)}
V(G3)={0,1,2} E(G3)={<0,1>,<1,0>,<1,2>}
complete undirected graph: n(n-1)/2 edges
complete directed graph: n(n-1) edges
Complete Graph
A complete graph is a graph that has the
maximum number of edges
– for undirected graph with n vertices, the maximum
number of edges is n(n-1)/2
– for directed graph with n vertices, the maximum
number of edges is n(n-1)
– example: G1 is a complete graph
Adjacent and Incident
If (v0, v1) is an edge in an undirected graph,
– v0 and v1 are adjacent
– The edge (v0, v1) is incident on vertices v0 and v1
If <v0, v1> is an edge in a directed graph
– v0 is adjacent to v1, and v1 is adjacent from v0
– The edge <v0, v1> is incident on v0 and v1
[Figure 6.3: (a) a graph with a self edge (feedback loop); (b) a multigraph with multiple occurrences of the same edge]
Subgraph and Path
A subgraph of G is a graph G’ such that V(G’)
is a subset of V(G) and E(G’) is a subset of E(G)
A path from vertex vp to vertex vq in a graph G is
a sequence of vertices vp, vi1, vi2, ..., vin, vq
such that (vp, vi1), (vi1, vi2), ..., (vin, vq) are edges
in an undirected graph G
The length of a path is the number of edges on
it
[Figure: some of the subgraphs of G1]
Simple Path and cycle
A simple path is a path in which all vertices,
except possibly the first and the last, are distinct
A cycle is a simple path in which the first and
the last vertices are the same
In an undirected graph G, two vertices, v0 and v1,
are connected if there is a path in G from v0 to v1
An undirected graph is connected if, for every
pair of distinct vertices vi, vj, there is a path
from vi to vj
[Figure: G1 and G2 are connected graphs; G2 is a tree (acyclic graph)]
Connected Component
A connected component of an undirected graph
is a maximal connected subgraph.
A tree is a graph that is connected and acyclic.
A directed graph is strongly connected if there
is a directed path from vi to vj and also
from vj to vi.
A strongly connected component is a maximal
subgraph that is strongly connected.
[Figure 6.5: G4, a graph that is not connected, with two connected components H1 and H2 (maximal connected subgraphs)]
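A basic traversal finds the connected components of an undirected graph: start a depth-first search from any unvisited vertex, and everything it reaches forms one component. A small illustrative Python sketch (the names are my own, not from the text):

# Connected components of an undirected graph by repeated depth-first search (sketch).
# adj[v] is the list of neighbours of v; the vertices are the keys of adj.
def connected_components(adj):
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = [], [start]          # begin a new traversal at an unvisited vertex
        seen.add(start)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        components.append(comp)
    return components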
[Figure 6.6: the strongly connected components (maximal strongly connected subgraphs) of G3; G3 itself is not strongly connected]
Degree
The degree of a vertex is the number of edges
incident to that vertex
For directed graph,
– the in-degree of a vertex v is the number of edges
that have v as the head
– the out-degree of a vertex v is the number of edges
that have v as the tail
– if di is the degree of vertex i in a graph G with n
vertices and e edges, then the number of edges is
  e = ( ∑ i = 0 .. n-1 of di ) / 2
[Figure: degrees of the vertices of the undirected graphs G1 and G2, and in-/out-degrees of the vertices of the directed graph G3. In G3: vertex 0 has in-degree 1 and out-degree 1; vertex 1 has in-degree 1 and out-degree 2; vertex 2 has in-degree 1 and out-degree 0]
Graph Representations
Adjacency Matrix
Adjacency Lists
Adjacency multilists
Adjacency Matrix
Let G=(V,E) be a graph with n vertices.
The adjacency matrix of G is a two-dimensional
n by n array, say adj_mat
If the edge (vi, vj) is in E(G), adj_mat[i][j]=1
If there is no such edge in E(G), adj_mat[i][j]=0
The adjacency matrix for an undirected graph is
symmetric; the adjacency matrix for a digraph
need not be symmetric
Examples for Adjacency Matrix
[Figure: graphs G1, G3 and G4 with their adjacency matrices]
Adjacency matrix of G1:        Adjacency matrix of G3:
  0 1 1 1                        0 1 0
  1 0 1 1                        1 0 1
  1 1 0 1                        0 0 0
  1 1 1 0
Since the matrix of an undirected graph is symmetric, only about n²/2 of its
entries need to be stored; the matrix of a directed graph may need all n² entries.
Merits of Adjacency Matrix
From the adjacency matrix it is easy to determine whether two
vertices are connected by an edge: simply examine adj_mat[i][j].
The degree of vertex i is the sum of the entries in row i:
  ∑ j = 0 .. n-1 of adj_mat[i][j]
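A small illustrative Python example builds the adjacency matrix of G1 from its edge set and reads off adjacency and degree (the variable names are assumptions, not from the text):

# Adjacency matrix of the undirected graph G1 = (V, E) with V = {0, 1, 2, 3}.
n = 4
edges_G1 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
adj_mat = [[0] * n for _ in range(n)]
for u, v in edges_G1:
    adj_mat[u][v] = adj_mat[v][u] = 1       # undirected: store both directions

print(adj_mat[1][3] == 1)                   # vertices 1 and 3 are adjacent -> True
print([sum(row) for row in adj_mat])        # degree of each vertex -> [3, 3, 3, 3]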