Daa 2

The document discusses dynamic programming and traversal techniques, including multi-stage graphs, shortest path problems, and optimal binary search trees. It explains the principles of dynamic programming, its comparison with greedy methods, and the optimality principle. Additionally, it provides algorithms and recurrence relations for various problems, emphasizing the importance of dynamic programming in optimization tasks.


UNIT – III

Dynamic Programming and Traversal Techniques:-

Multi-Stage Graphs, All-Pairs Shortest Paths, Optimal Binary Search Trees, 0/1 Knapsack Problem, Reliability Design, Traveling Salesman Problem, Depth-First Search, Breadth-First Search, Biconnected Components and DFS.

Dynamic Programming:-
Q) What is Dynamic Programming? Explain.

Ans:-
Dynamic programming is an algorithm design technique that can be used when the solution to a problem can be viewed as the result of a sequence of decisions. Dynamic programming often reduces the amount of enumeration by avoiding decision sequences that cannot possibly be optimal, which would otherwise have to be examined in an ad hoc approach. An optimal sequence of decisions is arrived at by making explicit appeal to the principle of optimality.
Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. A divide-and-conquer algorithm partitions the problem into independent subproblems, solves the subproblems recursively, and then combines their solutions to solve the original problem. In contrast, dynamic programming is applicable when the subproblems are not independent, i.e., when subproblems share sub-subproblems. A dynamic programming algorithm solves every subproblem just once and saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subproblem is encountered.
Dynamic Programming is typically applied to optimization problems.
In such problems there can be many possible solutions. Each solution has
a value, and we wish to find a solution with the optimal (maximum or
minimum) value. The development of a dynamic programming algorithm can be broken into a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from the computed information.
Steps 1-3 form the basis for the dynamic programming solution for
a problem. Step 4 can be omitted if only the value of an optimal
solution is required.
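The table-filling idea behind steps 1–3 can be sketched with a small example (an illustrative sketch, not from these notes): computing Fibonacci numbers bottom-up, so that each subproblem is solved exactly once and its answer saved in a table.

```python
def fib(n):
    # table[i] stores the answer to subproblem i, computed once and reused
    table = [0, 1] + [0] * (n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]  # combine saved sub-answers
    return table[n]

print(fib(10))  # 55
```

Without the table, the naive recursion would recompute the same shared subproblems exponentially many times; saving each answer reduces this to linear work.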

Greedy method Vs Dynamic Programming:-


Q) Differentiate between greedy method and dynamic
programming.
Q) What is the difference between greedy and dynamic
programming methods?
Q) Distinguish between greedy approach and dynamic
programming with examples.
Ans:-
1. In the greedy method only one decision sequence is ever generated, whereas in dynamic programming many decision sequences may be generated.
2. The greedy method is a straightforward method for choosing an optimum solution: it makes each choice without revising previous decisions. Dynamic programming, in contrast, considers all possible decision sequences in order to obtain the optimum solution.
3. Both methods are used for obtaining optimum solutions.
4. In dynamic programming the optimal solution is guaranteed by the principle of optimality, but in the case of the greedy method optimality is not guaranteed in general.

Optimality Principle:-

Q) What is the principle of optimality? Explain with the help of an example.
Q) Explain the principle of optimality with the help of an example.
Ans:-
In dynamic programming an optimum sequence of decisions is obtained by making explicit use of the principle of optimality. The principle of optimality states that an optimal sequence of decisions has the property that, whatever the initial state and decision are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.
Example:- in a shortest path problem, if a shortest path from i to j passes through a vertex k, then the portions of the path from i to k and from k to j must themselves be shortest paths; otherwise, replacing a portion by a shorter one would shorten the whole path, a contradiction.

Multi Stage Graphs:-


Q) Describe about multistage graphs.
Q) What is multistage graph? Give an example problem which can
be viewed as multistage graph problem, and formulate dynamic
programming recurrence relation.
Q) What is a multistage graph? Write dynamic programming
expression for forward approach and backward approach and
explain the same.
Q) Write the algorithm for the multistage graph problem using the forward approach. What is its time complexity?
Q) Derive the recurrence of the multistage graph using the backward approach. Write the algorithm for the same.
Q) With a suitable graph, explain how the multistage graph problem is solved. Derive its recurrence relation and also its time complexity.

All Pairs Shortest Paths:-


Q) Discuss in detail about all pairs shortest path problem.
Q) Write an algorithm for the solution of all pairs shortest path
problem. Determine its time complexity.
Q) Write an algorithm for the all pairs shortest path problem.
Q) Write an algorithm to solve the all pairs shortest paths problem.
Ans:-
Problem Definition:-
Let G = (V, E) be a directed graph with n vertices. cost is an n×n matrix where
cost[i, i] = 0,
cost[i, j] = length of the edge <i, j> if <i, j> ∈ E,
cost[i, j] = ∞ if i ≠ j and <i, j> ∉ E.
The problem is to determine a matrix A such that A[i, j] is the length of a shortest path from i to j.

Solution:-
A shortest path originates at vertex i, goes through some intermediate vertices (possibly none), and terminates at vertex j. This path contains no cycles, for if there were a cycle it could be deleted without increasing the path length. If k is the highest-indexed intermediate vertex on this shortest path, then the subpaths from i to k and from k to j must themselves be shortest paths from i to k and from k to j, respectively, using no intermediate vertex of index greater than k−1.

Let A^k[i, j] denote the length of a shortest path from i to j going through no intermediate vertex of index greater than k. We get

A[i, j] = min{ min over 1≤k≤n { A^(k-1)[i, k] + A^(k-1)[k, j] }, cost[i, j] }
A^0[i, j] = cost[i, j], 1 ≤ i ≤ n, 1 ≤ j ≤ n.

A shortest path from i to j going through no vertex higher than k either goes through vertex k or it does not. If it does,
A^k[i, j] = A^(k-1)[i, k] + A^(k-1)[k, j].
If it does not, then no intermediate vertex has index greater than k−1, so
A^k[i, j] = A^(k-1)[i, j].
Combining the two cases we get
A^k[i, j] = min{ A^(k-1)[i, j], A^(k-1)[i, k] + A^(k-1)[k, j] }, k ≥ 1.

Procedure:-

1) From the given graph obtain the cost matrix and use it as A^0.
2) Perform n iterations, one for each k from 1 to n, until A^n is calculated.
3) A^1 to A^n are calculated using
   A^k[i, j] = min{ A^(k-1)[i, j], A^(k-1)[i, k] + A^(k-1)[k, j] }, k ≥ 1,
   with A^0[i, j] = cost[i, j], 1 ≤ i ≤ n, 1 ≤ j ≤ n.

Example:- Solve the following instance of all pairs shortest path problem.
[Fig.: a directed graph on vertices 1, 2, 3 with edges 1→2 (cost 4), 1→3 (cost 11), 2→1 (cost 6), 2→3 (cost 2), 3→1 (cost 3)]
Solution:-
Step 1:-
Construct the cost matrix and represent it as A0:

A0    1    2    3
1     0    4   11
2     6    0    2
3     3    ∞    0
Step 2:-

Calculate for k=1.

A1(1,1) = min{A0(1,1),A0(1,1)+A0(1,1)}
= min{0,0+0}=0

A1(1,2) = min{A0(1,2),A0(1,1)+A0(1,2)}
= min{4,0+4}= 4

A1(1,3) = min{A0(1,3),A0(1,1)+A0(1,3)}
= min{11,0+11}= 11

A1(2,1) = min{A0(2,1),A0(2,1)+A0(1,1)}
= min{6,6+0}= 6

A1(2,2) = min{A0(2,2),A0(2,1)+A0(1,2)}
= min{0,6+4}= 0

A1(2,3) = min{A0(2,3),A0(2,1)+A0(1,3)}
= min{2,6+11}= 2
A1(3,1) = min{A0(3,1),A0(3,1)+A0(1,1)}
= min{3,3+0}= 3

A1(3,2) = min{A0(3,2),A0(3,1)+A0(1,2)}
= min{,3+4}= 7

A1(3,3) = min{A0(3,3),A0(3,1)+A0(1,3)}
= min{0,3+11}= 0

From the above calculated values we obtain A1 as

A1    1    2    3
1     0    4   11
2     6    0    2
3     3    7    0

Step 3:-
Calculate for k=2.

A2(1,1) = min{A1(1,1),A1(1,2)+A1(2,1)}
= min{0,4+6}= 0

A2(1,2) = min{A1(1,2),A1(1,2)+A1(2,2)}
= min{4,4+0}= 4

A2(1,3) = min{A1(1,3),A1(1,2)+A1(2,3)}
= min{11,4+2}= 6

A2(2,1) = min{A1(2,1),A1(2,2)+A1(2,1)}
= min{6,0+6}= 6

A2(2,2) = min{A1(2,2),A1(2,2)+A1(2,2)}
= min{0,0+0}= 0

A2(2,3) = min{A1(2,3),A1(2,2)+A1(2,3)}
= min{2,0+2}= 2

A2(3,1) = min{A1(3,1),A1(3,2)+A1(2,1)}
= min{3,+ }= 3

A2(3,2) = min{A1(3,2),A1(3,2)+A1(2,2)}
= min{7,7+0}= 7
A2(3,3) = min{A1(3,3),A1(3,2)+A1(2,3)}
= min{0,7+2}= 0

From the above calculated values we obtain A2 as

A2    1    2    3
1     0    4    6
2     6    0    2
3     3    7    0

Step 4:-
Calculate for k=3.

A3(1,1) = min{A2(1,1),A2(1,3)+A2(3,1)}
= min{0,6+3}= 0
A3(1,2) = min{A2(1,2),A2(1,3)+A2(3,2)}
= min{4,6+7}= 4

A3(1,3) = min{A2(1,3),A2(1,3)+A2(3,3)}
= min{6,6+0}= 6

A3(2,1) = min{A2(2,1),A2(2,3)+A2(3,1)}
= min{6,2+3}= 5

A3(2,2) = min{A2(2,2),A2(2,3)+A2(3,2)}
= min{0,2+7}= 0

A3(2,3) = min{A2(2,3),A2(2,3)+A2(3,3)}
= min{2,2+0}= 2

A3(3,1) = min{A2(3,1),A2(3,3)+A2(3,1)}
= min{3,0+3}= 3

A3(3,2) = min{A2(3,2),A2(3,3)+A2(3,2)}
= min{7,0+7}= 7

A3(3,3) = min{A2(3,3),A2(3,3)+A2(3,3)}
= min{0,0+0}= 0

From the above calculated values we obtain A3 as

A3    1    2    3
1     0    4    6
2     5    0    2
3     3    7    0

Algorithm AllPaths(cost, A, n)
// cost[1:n, 1:n] is the cost adjacency matrix of a graph with n vertices;
// A[i, j] is the cost of a shortest path from vertex i to vertex j.
// cost[i, i] = 0.0 for 1 ≤ i ≤ n.
{
    for i := 1 to n do
        for j := 1 to n do
            A[i, j] := cost[i, j]; // copy cost into A
    for k := 1 to n do
        for i := 1 to n do
            for j := 1 to n do
                A[i, j] := min(A[i, j], A[i, k] + A[k, j]);
}
Analysis:-
The time needed by procedure AllPaths is easy to compute, since the looping is independent of the values in the matrix A. The overall time complexity is Θ(n³).
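The pseudocode above translates almost line-for-line into a runnable sketch (Python used here purely for illustration, with 0-based indexing). Run on the cost matrix of the worked example, it reproduces the final matrix A3:

```python
INF = float('inf')

def all_paths(cost):
    """Floyd's all-pairs shortest-path algorithm.
    cost[i][i] = 0; INF marks a missing edge."""
    n = len(cost)
    A = [row[:] for row in cost]              # A starts as the cost matrix (A^0)
    for k in range(n):                        # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

cost = [[0, 4, 11],
        [6, 0, 2],
        [3, INF, 0]]
print(all_paths(cost))  # [[0, 4, 6], [5, 0, 2], [3, 7, 0]]
```

The three nested loops make the Θ(n³) bound immediate.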

Q) Write the minimum-cost binary search tree algorithm and explain the same.
Q) Write an algorithm to find a minimum cost optimal binary
search tree?
Q) Describe about Optimal binary search trees.
Q) Write an algorithm that finds a minimum cost binary search
tree.
Q) Obtain the dynamic programming recurrence relations for
optimal binary search tree problem and hence explain how the
solution can be obtained.
Q) Define the optimal binary search tree problem and explain
how dynamic programming method can be used to solve it.
Q) Show how to compute the cost of binary search tree?
Q) Obtain the computing time of the algorithm for optimal BST.
Ans:-
Optimal Binary Search Trees(*):-
In general, different identifiers are searched for with different frequencies (probabilities). Assume the given set of identifiers is {a1, a2, …, an} with a1 < a2 < … < an.
• Let p(i) be the probability with which we search for ai.
• The cost of a successful search for ai is p(i) * level(ai).
• Let q(i) be the probability that an identifier x being searched for does not exist and satisfies ai < x < ai+1 (assume a0 = −∞ and an+1 = +∞).
• Then Σ(0 ≤ i ≤ n) q(i) is the probability of an unsuccessful search.
• A circle represents an internal node.
• A rectangle represents an external node.
• The figure below shows an example binary search tree.
[Fig.: an example BST — root 10 with children 8 and 15; 8 has children 6 and 9; 15 has external node c4 as left child and 20 as right child; the external nodes c0 … c6 are shown as rectangles]

Computing Cost:-
• Every successful search ends at one of the internal nodes.
• The number of comparisons needed to locate an identifier at level i is i.
• The total cost of successful searches over all the keys is

    Σ(1 ≤ i ≤ n) p(i) * level(ai)    (1)

• An unsuccessful search always terminates at a leaf (external) node.
• Every internal node in a BST must have two children; any missing child is filled in with an external node, represented by a rectangle.
• If there are n internal nodes then there are n+1 external nodes, E0 to En.
• The cost of an unsuccessful search terminating at an external node Ei at level level(Ei), with probability q(i), is q(i) * (level(Ei) − 1).
• The total cost of unsuccessful searches over all the classes is

    Σ(0 ≤ i ≤ n) q(i) * (level(Ei) − 1)    (2)

• From (1) and (2), the total cost of searching in a BST is

    Σ(1 ≤ i ≤ n) p(i) * level(ai) + Σ(0 ≤ i ≤ n) q(i) * (level(Ei) − 1)    (3)

Example:- Calculate the cost of the binary search trees shown below, defined on the set of identifiers (a1, a2, a3) = (do, if, while) with p(i) = q(i) = 1/7 for all i.

[Fig.: three BSTs on {do, if, while}: (a) a skewed tree of depth 3, (b) the balanced tree with root if and children do and while, (c) another skewed tree of depth 3]

Solution:-
Tree (a):
Cost = Σ p(i)*level(ai) + Σ q(i)*(level(Ei) − 1)
     = (1/7)(1 + 2 + 3) + (1/7)(1 + 2 + 3 + 3)
     = 15/7 ≈ 2.14
Tree (b):
Cost = (1/7)(1 + 2 + 2) + (1/7)(2 + 2 + 2 + 2)
     = 13/7 ≈ 1.86
Tree (c):
Cost = (1/7)(1 + 2 + 3) + (1/7)(1 + 2 + 3 + 3)
     = 15/7 ≈ 2.14

Problem:- Calculate the cost of the binary search trees defined on the set of identifiers (a1, a2, a3) = (do, if, while) with (p1, p2, p3) = (0.5, 0.1, 0.05), (q0, q1, q2, q3) = (0.15, 0.1, 0.05, 0.05), and a1 < a2 < a3.

Application of Dynamic Programming for obtaining an Optimal BST:-

• Unsuccessful searches always terminate at external nodes.
• The identifiers not present in the BST can be partitioned into (n+1) equivalence classes E0, E1, …, En:
  • Class E0 contains all identifiers x such that x < a1.
  • Class Ei contains all identifiers x such that ai < x < ai+1.
  • Class En contains all identifiers x such that x > an.
To apply dynamic programming to the problem of obtaining an optimal BST, we need to view the construction of such a tree as the result of a sequence of decisions. A possible approach is to make a decision as to which of the ai's should be assigned to the root node of the tree.
If we choose ak as the root, then it is clear that the internal nodes for a1, …, ak−1 and the external node classes E0, …, Ek−1 will lie in the left subtree l of the root, and the remaining nodes will lie in the right subtree r.

Fig.

From Eq. (3),

Cost(l) = Σ(1 ≤ i < k) p(i)*level(ai) + Σ(0 ≤ i < k) q(i)*(level(Ei) − 1)    (4)

Cost(r) = Σ(k < i ≤ n) p(i)*level(ai) + Σ(k ≤ i ≤ n) q(i)*(level(Ei) − 1)    (5)

• The cost of the total tree would be p(k) + cost(l) + cost(r).
• Using w(i, j) to represent the sum q(i) + Σ(i < l ≤ j) (q(l) + p(l)), we obtain the expected cost of the search tree to be

    p(k) + cost(l) + cost(r) + w(0, k−1) + w(k, n)    (6)

The tree is said to be optimal if Eq. (6) is minimum. Hence cost(l) must be minimum over all BSTs containing a1, a2, …, ak−1 and E0, E1, …, Ek−1; similarly cost(r) must be minimum.
• Let c(i, j) represent the cost of an optimal BST tij containing ai+1, …, aj and Ei, …, Ej. For the tree to be optimal,
• cost(l) = c(0, k−1) and cost(r) = c(k, n), and k must be chosen such that p(k) + c(0, k−1) + c(k, n) + w(0, k−1) + w(k, n) is minimum.
• Therefore,
    c(0, n) = min(1 ≤ k ≤ n) { c(0, k−1) + c(k, n) + p(k) + w(0, k−1) + w(k, n) }.
• Generalizing the above equation,
    c(i, j) = min(i < k ≤ j) { c(i, k−1) + c(k, j) + p(k) + w(i, k−1) + w(k, j) }
i.e.,
    c(i, j) = min(i < k ≤ j) { c(i, k−1) + c(k, j) } + w(i, j)    (7)

    w(i, j) = p(j) + q(j) + w(i, j−1)    (8)

Procedure to solve the problem:-

1. c(i, i) = 0 and w(i, i) = q(i), 0 ≤ i ≤ n.
2. Eq. (7) is used to compute c(0, n) by first computing all c(i, j) such that j − i = 1.
3. Next we compute all c(i, j) such that j − i = 2, and so on.
4. During the computation, record the root r(i, j) of each tree tij.
5. r(i, j) is the value of k that minimizes Eq. (7).

Note:- The matrix template of (i, j) cells for n = 4, holding the values of w, c, r, is

(0,0) (1,1) (2,2) (3,3) (4,4)
(0,1) (1,2) (2,3) (3,4)
(0,2) (1,3) (2,4)
(0,3) (1,4)
(0,4)

Problem:-
Let n=4 and (a1 a2 a3 a4) = (do, if, int, while) and p(1:4)=(3,3,1,1)
and q(0:4) = (2,3,1,1,1)
Solution:-
Step 1:- Initially w(i,i) = q(i), c(i,i) = 0, r(i,i) = 0, for 0 ≤ i ≤ 4.
The output after executing the first step would be

j−i=0:  w00=2, c00=0, r00=0 | w11=3, c11=0, r11=0 | w22=1, c22=0, r22=0 | w33=1, c33=0, r33=0 | w44=1, c44=0, r44=0
Step 2:-
w(0,1) = p(1) + q(1) + w(0,0) = 3+3+2 = 8
c(0,1) = min{ c(0,0)+c(1,1) } + w(0,1)   (0 < k ≤ 1)
       = min{0+0} + 8 = 8
r(0,1) = 1   // since k = 1

w(1,2) = p(2) + q(2) + w(1,1) = 3+1+3 = 7
c(1,2) = min{ c(1,1)+c(2,2) } + w(1,2)   (1 < k ≤ 2)
       = min{0+0} + 7 = 7
r(1,2) = 2   // since k = 2

w(2,3) = p(3) + q(3) + w(2,2) = 1+1+1 = 3
c(2,3) = min{ c(2,2)+c(3,3) } + w(2,3)   (2 < k ≤ 3)
       = min{0+0} + 3 = 3
r(2,3) = 3   // since k = 3

w(3,4) = p(4) + q(4) + w(3,3) = 1+1+1 = 3
c(3,4) = min{ c(3,3)+c(4,4) } + w(3,4)   (3 < k ≤ 4)
       = min{0+0} + 3 = 3
r(3,4) = 4   // since k = 4

The revised matrix would be

j−i=0:  w00=2, c00=0, r00=0 | w11=3, c11=0, r11=0 | w22=1, c22=0, r22=0 | w33=1, c33=0, r33=0 | w44=1, c44=0, r44=0
j−i=1:  w01=8, c01=8, r01=1 | w12=7, c12=7, r12=2 | w23=3, c23=3, r23=3 | w34=3, c34=3, r34=4

Step 3:-
w(0,2) = p(2) + q(2) + w(0,1) = 3+1+8 = 12
c(0,2) = min{ c(0,0)+c(1,2), c(0,1)+c(2,2) } + w(0,2)   (0 < k ≤ 2)
       = min{0+7, 8+0} + 12 = 7+12 = 19
r(0,2) = 1   // since the minimum value 7 results from k = 1

w(1,3) = p(3) + q(3) + w(1,2) = 1+1+7 = 9
c(1,3) = min{ c(1,1)+c(2,3), c(1,2)+c(3,3) } + w(1,3)   (1 < k ≤ 3)
       = min{0+3, 7+0} + 9 = 3+9 = 12
r(1,3) = 2   // since the minimum value 3 results from k = 2

w(2,4) = p(4) + q(4) + w(2,3) = 1+1+3 = 5
c(2,4) = min{ c(2,2)+c(3,4), c(2,3)+c(4,4) } + w(2,4)   (2 < k ≤ 4)
       = min{0+3, 3+0} + 5 = 3+5 = 8
r(2,4) = 3   // since the minimum value 3 results from k = 3

The revised matrix would be

j−i=0:  w00=2, c00=0, r00=0 | w11=3, c11=0, r11=0 | w22=1, c22=0, r22=0 | w33=1, c33=0, r33=0 | w44=1, c44=0, r44=0
j−i=1:  w01=8, c01=8, r01=1 | w12=7, c12=7, r12=2 | w23=3, c23=3, r23=3 | w34=3, c34=3, r34=4
j−i=2:  w02=12, c02=19, r02=1 | w13=9, c13=12, r13=2 | w24=5, c24=8, r24=3

Step 4:-

w(0,3) = p(3) + q(3) + w(0,2) = 1+1+12 = 14
c(0,3) = min{ c(0,0)+c(1,3), c(0,1)+c(2,3), c(0,2)+c(3,3) } + w(0,3)   (0 < k ≤ 3)
       = min{0+12, 8+3, 19+0} + 14 = min{12, 11, 19} + 14 = 11+14 = 25
r(0,3) = 2   // since the minimum value 11 results from k = 2

w(1,4) = p(4) + q(4) + w(1,3) = 1+1+9 = 11
c(1,4) = min{ c(1,1)+c(2,4), c(1,2)+c(3,4), c(1,3)+c(4,4) } + w(1,4)   (1 < k ≤ 4)
       = min{0+8, 7+3, 12+0} + 11 = min{8, 10, 12} + 11 = 8+11 = 19
r(1,4) = 2   // since the minimum value 8 results from k = 2

The revised matrix would be

j−i=0:  w00=2, c00=0, r00=0 | w11=3, c11=0, r11=0 | w22=1, c22=0, r22=0 | w33=1, c33=0, r33=0 | w44=1, c44=0, r44=0
j−i=1:  w01=8, c01=8, r01=1 | w12=7, c12=7, r12=2 | w23=3, c23=3, r23=3 | w34=3, c34=3, r34=4
j−i=2:  w02=12, c02=19, r02=1 | w13=9, c13=12, r13=2 | w24=5, c24=8, r24=3
j−i=3:  w03=14, c03=25, r03=2 | w14=11, c14=19, r14=2

Step 5:-

w(0,4) = p(4) + q(4) + w(0,3) = 1+1+14 = 16
c(0,4) = min{ c(0,0)+c(1,4), c(0,1)+c(2,4), c(0,2)+c(3,4), c(0,3)+c(4,4) } + w(0,4)   (0 < k ≤ 4)
       = min{0+19, 8+8, 19+3, 25+0} + 16 = min{19, 16, 22, 25} + 16 = 16+16 = 32
r(0,4) = 2   // since the minimum value 16 results from k = 2

The final revised matrix would be

j−i=0:  w00=2, c00=0, r00=0 | w11=3, c11=0, r11=0 | w22=1, c22=0, r22=0 | w33=1, c33=0, r33=0 | w44=1, c44=0, r44=0
j−i=1:  w01=8, c01=8, r01=1 | w12=7, c12=7, r12=2 | w23=3, c23=3, r23=3 | w34=3, c34=3, r34=4
j−i=2:  w02=12, c02=19, r02=1 | w13=9, c13=12, r13=2 | w24=5, c24=8, r24=3
j−i=3:  w03=14, c03=25, r03=2 | w14=11, c14=19, r14=2
j−i=4:  w04=16, c04=32, r04=2

From the table we can see that c(0,4) = 32 is the minimum cost of a binary search tree for (a1, a2, a3, a4). The root of tree t04 is a2. Hence the left subtree is t01 and the right subtree is t24. Tree t01 has root a1 and subtrees t00 and t11. The root of tree t24 is identified from the value r24 = 3, hence its root is a3, its left subtree is t22 and its right subtree is t34.
The resultant tree is as shown below.

Fig.
Exercise:-
Q) Solve the following instance of the optimal BST problem and
determine the minimum cost of the tree. n=4;
a(1:4)=(end,goto,print,stop) ;
p(1:4)= ;

q(1:4)= . Also construct the tree.


Q) Construct the optimal BST with the identifier set
a(1:4)=(end,goto,print,stop) with p(1)=0.05, p(2)=0.2, p(3)=0.1,
p(4)=0.05, q(0)=0.2,q(1)=0.1,q(2)=0.2,q(3)=0.05,q(4)=0.05. Also
compute the cost of the tree.

Procedure/Algorithm to find the minimum cost BST:-


Algorithm OBST(p, q, n)
// Given n distinct identifiers a1 < a2 < … < an and probabilities
// p[i], 1 ≤ i ≤ n, and q[i], 0 ≤ i ≤ n, this algorithm computes
// the cost c[i,j] of optimal binary search trees tij for identifiers
// ai+1, …, aj. It also computes r[i,j], the root of tij.
// w[i,j] is the weight of tij.
{
    for i := 0 to n-1 do
    {
        // Initialize.
        w[i,i] := q[i]; r[i,i] := 0; c[i,i] := 0.0;
        // Optimal trees with one node
        w[i,i+1] := q[i] + q[i+1] + p[i+1];
        r[i,i+1] := i+1;
        c[i,i+1] := q[i] + q[i+1] + p[i+1];
    }
    w[n,n] := q[n]; r[n,n] := 0; c[n,n] := 0.0;
    for m := 2 to n do // Find optimal trees with m nodes.
        for i := 0 to n-m do
        {
            j := i + m;
            w[i,j] := w[i,j-1] + p[j] + q[j];
            k := find(c, r, i, j);
            c[i,j] := w[i,j] + c[i,k-1] + c[k,j];
            r[i,j] := k;
        }
    write(c[0,n], w[0,n], r[0,n]);
}

Algorithm find(c, r, i, j)
{
    min := ∞;
    for m := r[i,j-1] to r[i+1,j] do
        if (c[i,m-1] + c[m,j]) < min then
        {
            min := c[i,m-1] + c[m,j];
            l := m;
        }
    return l;
}

Analysis:-
The algorithm described above computes c(i,j) for j − i = 1, 2, …, n, in that order. When j − i = m there are n − m + 1 values c(i,j) to compute, each in O(m) time. The total time for all c(i,j) with j − i = m is therefore O(nm − m²), and summing over all m gives a total time of O(n³) to calculate all the c's and r's. (Using the restricted range in find, due to Knuth, the total time can be shown to be O(n²).)
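The algorithm can be written as a runnable sketch (Python for illustration; the inner loop tries every k rather than using the restricted range of find). On the worked instance p(1:4) = (3,3,1,1), q(0:4) = (2,3,1,1,1) it reproduces c(0,4) = 32 with root r(0,4) = 2:

```python
def obst(p, q, n):
    # p[1..n], q[0..n]; p[0] is unused. Returns tables c, w, r,
    # indexed c[i][j] for 0 <= i <= j <= n.
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                         # w(i,i) = q(i), c(i,i) = 0
    for m in range(1, n + 1):                  # optimal trees with m nodes
        for i in range(n - m + 1):
            j = i + m
            w[i][j] = w[i][j - 1] + p[j] + q[j]          # Eq. (8)
            best, root = float('inf'), 0
            for k in range(i + 1, j + 1):                # Eq. (7): try each root k
                v = c[i][k - 1] + c[k][j]
                if v < best:
                    best, root = v, k
            c[i][j] = best + w[i][j]
            r[i][j] = root
    return c, w, r

c, w, r = obst([0, 3, 3, 1, 1], [2, 3, 1, 1, 1], 4)
print(c[0][4], w[0][4], r[0][4])  # 32 16 2
```

The recovered roots r[i][j] can then be followed recursively, exactly as in the worked example, to construct the optimal tree.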

0/1 Knapsack Problem:-

Q) Briefly explain the 0/1 knapsack problem.


Q) Discuss the features of 0/1 knapsack problem.
Q) Explain the method of solving 0/1 knapsack problem by the
dynamic programming method. Illustrate the method with an
example.
Q) With the help of a suitable example illustrate the dynamic
programming formulation of 0/1 knapsack problem and explain
the method which employs the merging of tuples, along with
purging of tuples as per the dominance rules.

Ans:-
Problem:-
It is an extension of the greedy knapsack problem with the additional constraint that an object has to be either completely placed into the knapsack or not included at all (xi ∈ {0, 1}). The objective is to maximize the profit Σ pi·xi subject to Σ wi·xi ≤ m.

Solution:-

Dominance Rule:-
If S^(i+1) contains two pairs (pj, wj) and (pk, wk) with the property that pj ≤ pk and wj ≥ wk, then the pair (pj, wj) can be discarded.
Example:- of the pairs (3,5) and (5,4), the pair (3,5) can be discarded (purged), since it gives no more profit at no less weight.
A solution to the knapsack problem can be obtained by making a sequence of decisions on the variables x1, x2, …, xn.
• A decision on variable xi involves determining which of the values 0 or 1 is to be assigned to it.
• Decisions on the xi are made in the order xn, xn−1, …, x1.
• Following a decision on xn we can be in one of two possible states.
  Case 1:- xn = 0. The remaining capacity of the knapsack and the profit accrued do not change, i.e., capacity = m and profit = 0.
  Case 2:- xn = 1. The remaining capacity becomes m − wn and the profit accrued is pn.
Let fj(y) be the value of an optimal solution to KNAP(1, j, y). From the principle of optimality,
    fn(m) = max{ fn−1(m), fn−1(m − wn) + pn }    (1)
For arbitrary fi(y), i > 0, Eq. (1) can be generalized to
    fi(y) = max{ fi−1(y), fi−1(y − wi) + pi }    (2)
Eq. (2) can be solved for fn(m) by beginning with the knowledge f0(y) = 0 for all y ≥ 0 and fi(y) = −∞ for y < 0. Then f1, f2, …, fn can be successively computed using Eq. (2).
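Eq. (2) can be evaluated directly as a table over the capacities 0..m (a sketch for illustration; the notes instead develop the ordered-pair representation that follows):

```python
def knapsack(p, w, m):
    # f[y] holds f_i(y) for the items considered so far; f_0(y) = 0 for y >= 0
    f = [0] * (m + 1)
    for pi, wi in zip(p, w):
        for y in range(m, wi - 1, -1):        # descending y keeps f_{i-1} values intact
            f[y] = max(f[y], f[y - wi] + pi)  # Eq. (2)
    return f[m]

print(knapsack([1, 2, 5], [2, 3, 4], 6))  # f3(6) = 6
```

The ordered-pair (tuple) method below computes the same function while storing only the capacities at which f_i(y) actually changes value.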

• fi(y) is an ascending step function, i.e., there are a finite number of y's, 0 = y1 < y2 < … < yk, such that fi(y1) < fi(y2) < … < fi(yk);
• fi(y) = −∞ for y < y1;
• fi(y) = fi(yk) for y ≥ yk; and
• fi(y) = fi(yj) for yj ≤ y < yj+1. So we need only compute fi(yj), 1 ≤ j ≤ k.
Thus fi(y) is completely specified by the set
    S^i = { (fi(yj), yj) | 1 ≤ j ≤ k }    (3)
Each member of S^i is an ordered pair (P, W) where P = fi(yj) and W = yj.
S^0 = { (0, 0) }. We compute S^(i+1) from S^i by first computing
    S^i_1 = { (P, W) | (P − p(i+1), W − w(i+1)) ∈ S^i }
and then merging S^i and S^i_1, purging dominated pairs.

Procedure:-
1. Initially S^0 = { (0, 0) }.
2. Compute S^0_1.
3. Merge S^i and S^i_1 to obtain S^(i+1), and compute S^(i+1)_1.
4. Repeat step 3 until S^n is obtained.


Problem:- Consider the knapsack instance n = 3, (w1, w2, w3) = (2, 3, 4), (p1, p2, p3) = (1, 2, 5) and m = 6.

Sol:-
S^0   = { (0,0) }
S^0_1 = { (1,2) }

S^1   = { (0,0), (1,2) }
S^1_1 = { (2,3), (3,5) }

S^2   = { (0,0), (1,2), (2,3), (3,5) }
S^2_1 = { (5,4), (6,6), (7,7), (8,9) }

S^3   = { (0,0), (1,2), (2,3), (5,4), (6,6), (7,7), (8,9) }

(the pair (3,5) has been purged by the dominance rule, since (5,4) dominates it).

With m = 6, the value of f3(6) is given by the tuple (6,6) in S^3.
Since (6,6) ∈ S^3 and (6,6) ∉ S^2, set x3 = 1.
The pair (6,6) came from the pair (6 − p3, 6 − w3) = (1,2). Since (1,2) ∈ S^2 and (1,2) ∈ S^1, set x2 = 0.
Since (1,2) ∉ S^0, set x1 = 1.
The optimal solution for the above problem is (x1, x2, x3) = (1, 0, 1).
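The merge-and-purge procedure can be sketched as runnable code (Python for illustration; pairs are kept sorted by weight, and a pair is purged when an earlier pair offers at least its profit at no more weight). On the instance above it reproduces S^3 and f3(6) = 6:

```python
def knapsack_sets(p, w):
    # Builds S^0 .. S^n for the 0/1 knapsack tuple method.
    S = [[(0, 0)]]                                   # S^0 = {(0,0)}
    for pi, wi in zip(p, w):
        S1 = [(P + pi, W + wi) for (P, W) in S[-1]]  # S^i_1: include item i+1
        merged = sorted(set(S[-1] + S1), key=lambda t: (t[1], -t[0]))
        purged = []                                  # apply the dominance rule
        for P, W in merged:
            if not purged or P > purged[-1][0]:      # keep only strictly better profit
                purged.append((P, W))
        S.append(purged)
    return S

S = knapsack_sets([1, 2, 5], [2, 3, 4])
print(S[3])                               # [(0, 0), (1, 2), (2, 3), (5, 4), (6, 6), (7, 7), (8, 9)]
print(max(P for P, W in S[3] if W <= 6))  # f3(6) = 6
```

Tracing a chosen pair back through the sets, as done above for (6,6), recovers the assignment (x1, x2, x3).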

Exercises:-
1. Generate the sets S^i of jump points, 0 ≤ i ≤ 4, pertaining to the 0/1 knapsack problem where w(1:4) = (10, 15, 6, 9) and p(1:4) = (2, 5, 8, 1).
2. Explain the method of applying dynamic programming to solve the 0/1 knapsack problem and illustrate the method for the example instance n = 4, w(1:4) = (10, 15, 6, 9), p(1:4) = (2, 5, 8, 1) and m = 25.
3. Generate the sets S^i of jump points, 0 ≤ i ≤ 4, pertaining to the 0/1 knapsack problem where n = 4, w(1:4) = (6, 9, 10, 1) and p(1:4) = (8, 1, 2, 5).
