Analysis and Design of Algorithms
B.Tech 6th Semester Solved Papers, May 2018
Paper Code: CSE-306-F
Note: Attempt five questions in all, selecting one question from each Section. Question No. 1 is compulsory. All questions carry equal marks.
constant c so that F(n) ≤ c·g(n). As F(n) = 2n + 2 and g(n) = n², we check for n = 1:
F(n) = 2(1) + 2 = 4
and g(n) = (1)² = 1
i.e. F(n) > g(n)
If n = 2, then
F(n) = 2(2) + 2 = 6
g(n) = (2)² = 4
i.e. F(n) > c·g(n)
If n = 3, then
F(n) = 2(3) + 2 = 8
g(n) = (3)² = 9
i.e. F(n) < g(n) is true
Hence we can conclude that for n > 2, we obtain F(n) < g(n).
Thus the upper bound of execution time is always obtained by big-Oh notation.
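The crossover worked out above can be checked numerically. This is a small illustrative sketch (the helper names F and g are mine, mirroring the functions in the example):

```python
def F(n):
    # The example's function F(n) = 2n + 2.
    return 2 * n + 2

def g(n):
    # The example's comparison function g(n) = n^2.
    return n * n

# Compare F and g for small n, mirroring the n = 1, 2, 3 cases above.
comparisons = {n: (F(n), g(n), F(n) <= g(n)) for n in range(1, 6)}
print(comparisons)
# For every n >= 3, F(n) <= g(n), so F(n) is O(n^2) with c = 1 and n0 = 3.
```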
Omega Notation: Omega notation is used to represent the lower bound of an algorithm's running time, i.e. the shortest amount of time an algorithm can take. This notation is denoted by Ω. A function F(n) is Ω(g(n)) if its running time is bounded below by some constant multiple c·g(n) for all sufficiently large n.
Consider F(n) = 2n² + 5 and g(n) = 7n.
Then if n = 0,
F(n) = 2(0)² + 5 = 5
g(n) = 7(0) = 0
i.e. F(n) > g(n)
But if n = 1,
F(n) = 2(1)² + 5 = 7
g(n) = 7(1) = 7
i.e. F(n) = g(n)
If n = 3,
F(n) = 2(3)² + 5 = 23
g(n) = 7(3) = 21
i.e. F(n) > g(n)
Thus for n ≥ 3 we get F(n) ≥ c·g(n) with c = 1; hence F(n) is Ω(g(n)).
Similarly, the pair F(n) = 2n² + 8 and g(n) = 7n can be analysed in the same way.
approach, decisions are made from the given solution domain. As being greedy, the closest solution that seems to provide an optimum solution is chosen.
Greedy algorithms try to find a localized optimum solution, which may eventually lead to globally optimized solutions. However, generally greedy algorithms do not provide globally optimized solutions.
Counting Coins: This problem is to count to a desired value by choosing the least possible coins, and the greedy approach forces the algorithm to pick the largest possible coin. If we are provided coins of 1, 2, 5 and 10 and we are asked to count 18, then the greedy procedure will be:
(1) Select one 10 coin; the remaining count is 8
(2) Then select one 5 coin; the remaining count is 3
(3) Then select one 2 coin; the remaining count is 1
(4) And finally, the selection of one 1 coin solves the problem
Though it seems to be working fine, for this count we need to pick only 4 coins. But if we slightly change the problem, then the same approach may not be able to produce the same optimum result.
For a currency system where we have coins of value 1, 7 and 10, counting coins for value 18 will be absolutely optimum, but for a count like 15 it may use more coins than necessary. For example, the greedy approach will use 10 + 1 + 1 + 1 + 1 + 1, total 6 coins, whereas the same problem could be solved by using only 3 coins (7 + 7 + 1).
Hence, we may conclude that the greedy approach picks an immediate optimized solution and may fail where global optimization is a major concern.
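The counting-coins procedure above can be sketched as follows. This is a minimal illustration (the function name is mine); it reproduces both the optimal result for {1, 2, 5, 10} and the failure for {1, 7, 10}:

```python
def greedy_count(coins, target):
    """Repeatedly pick the largest coin that still fits (the greedy choice)."""
    picked = []
    for coin in sorted(coins, reverse=True):
        while target >= coin:
            picked.append(coin)
            target -= coin
    return picked

# With coins {1, 2, 5, 10}, greedy counts 18 optimally with 4 coins.
print(greedy_count([1, 2, 5, 10], 18))   # [10, 5, 2, 1]

# With coins {1, 7, 10}, greedy uses 6 coins for 15, although 7+7+1 needs only 3.
print(greedy_count([1, 7, 10], 15))      # [10, 1, 1, 1, 1, 1]
```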
Examples: Most networking algorithms use the greedy approach. Here is a list of a few of them:
(1) Travelling Salesman Problem
(2) Prim's Minimal Spanning Tree Algorithm
(3) Kruskal's Minimal Spanning Tree Algorithm
(4) Dijkstra's Shortest Path Algorithm
(5) Graph - Map Coloring
(6) Graph - Vertex Cover
Ans.(c) Dominance rule or Purging rule: If S^(i+1) contains two pairs (Pj, Wj) and (Pk, Wk) such that Pj ≤ Pk and Wj ≥ Wk, then (Pj, Wj) can be eliminated. This purging rule is also called the dominance rule. In the purging rule, basically the dominated tuples get purged. In short, remove the pair with less profit and more weight.
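The purging rule can be sketched directly from its definition. This is an illustrative helper (the name and list representation are mine, not from the text): a (profit, weight) pair is dropped when some other pair has at least as much profit with no more weight:

```python
def purge(pairs):
    """Apply the dominance rule: drop any (P, W) pair for which another pair
    has profit >= P and weight <= W (i.e. dominates it)."""
    kept = []
    for p, w in pairs:
        dominated = any(p <= p2 and w >= w2 and (p, w) != (p2, w2)
                        for p2, w2 in pairs)
        if not dominated:
            kept.append((p, w))
    return kept

# (3, 5) has less profit and more weight than (4, 4), so it is purged.
print(purge([(3, 5), (4, 4), (6, 7)]))   # [(4, 4), (6, 7)]
```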
Section A
Q.2.(a) What are disjoint sets? Also write Union and Find algorithms for disjoint sets. (10)
Ans. Disjoint sets: Two or more sets are said to be disjoint sets if they have no element in common. For example, {1, 2, 3} and {4, 5, 6} are disjoint sets, while {1, 2, 3} and {2, 4, 5, 6} are not. The elements of a set are the letters, numbers, or any other objects it contains. For example, the elements of the set {a, b, c} are the letters a, b, and c.
Formally, two sets A and B are disjoint if their intersection is the empty set, that is A ∩ B = ∅. This definition extends to any collection of sets. A collection of sets is pairwise disjoint or mutually disjoint if any two distinct sets in the collection are disjoint. Formally, let I be an index set, and for each i in I, let A_i be a set. Then the family of sets {A_i : i ∈ I} is pairwise disjoint if for any i and j in I with i ≠ j, A_i ∩ A_j = ∅. For example, the collection of sets {1}, {2}, {3}, ... is pairwise disjoint. If {A_i} is a pairwise disjoint collection (containing at least two sets), then clearly its intersection is empty.
A union-find algorithm is an algorithm that performs two useful operations on such a data structure:
Find: Determine which subset a particular element is in. This can be used for determining if two elements are in the same subset.
Union: Join two subsets into a single subset.
procedure MAKE-SET(x)
1. size[x] ← 1
2. parent[x] ← x
end

procedure UNION(a, b) // with weight balancing
// a and b are roots of two distinct trees in the forest.
// Makes the root of the smaller tree a child of the root of the larger tree.
1. if size[a] < size[b] then interchange a and b
2. parent[b] ← a
3. size[a] ← size[a] + size[b]
end
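The MAKE-SET / FIND / UNION operations above can be sketched in Python. This is a minimal illustration of weighted union (global dictionaries stand in for the parent and size arrays; no path compression is applied, to stay close to the pseudocode):

```python
parent = {}
size = {}

def make_set(x):
    # Each element starts as the root of its own singleton tree.
    parent[x] = x
    size[x] = 1

def find(x):
    # Walk up parent pointers until the root (where parent[root] == root).
    while parent[x] != x:
        x = parent[x]
    return x

def union(a, b):
    # a and b must be roots; hang the smaller tree under the larger root.
    if size[a] < size[b]:
        a, b = b, a
    parent[b] = a
    size[a] += size[b]

for v in range(1, 6):
    make_set(v)
union(find(1), find(2))
union(find(3), find(4))
union(find(1), find(3))
print(find(4) == find(2))   # True: 1..4 now share one root
print(find(5) == find(1))   # False: 5 is still in its own subset
```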
Fig.:(1)
(1) Divide: Partition the array into two subarrays about a pivot element.
(2) Conquer: Recursively sort the two subarrays.
(3) Combine: Combine all the sorted elements in a group to form a list of sorted elements.
In merge sort the division of the array is based on the positions of the array elements, but in quick sort this division is based on the actual value of the element. Consider an array A[i] where i ranges from 0 to n - 1; then we can formulate the division of the array elements as
A[0] ... A[m-1], A[m], A[m+1] ... A[n-1]
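The division scheme above can be sketched as a short quick sort. This is an illustrative version (it copies sublists rather than partitioning in place): elements ≤ pivot go to the left part, elements > pivot to the right part:

```python
def quicksort(a):
    """Sort a list by partitioning around the first element as pivot."""
    if len(a) <= 1:
        return a
    pivot = a[0]
    left = [x for x in a[1:] if x <= pivot]    # ends up in A[0..m-1]
    right = [x for x in a[1:] if x > pivot]    # ends up in A[m+1..n-1]
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([9, 3, 7, 1, 8, 2]))   # [1, 2, 3, 7, 8, 9]
```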
C(n) = 2C(n/2) + n (best case, when each partition is balanced)
In the worst case the partitioning is maximally unbalanced: the recursion works on n items, then n - 1 items, then n - 2 items, and so on down to 1.
Fig.: (2) [recursion tree for the worst case, with n, n-1, n-2, n-3, ..., 3, 1 items at successive levels, and the time required at each level shown alongside]
We can write it as
C(n) = C(n - 1) + n = n + (n - 1) + (n - 2) + ... + 2 + 1
But as we know
1 + 2 + 3 + ... + n = n(n + 1)/2 ≈ n²/2
so C(n) ∈ Θ(n²).
The time complexity of the worst case of quick sort is Θ(n²).
n·C_avg(n) - (n - 1)·C_avg(n - 1) = 2C_avg(n - 1) + (2n - 1)
n·C_avg(n) = (n + 1)·C_avg(n - 1) + (2n - 1) ≤ (n + 1)·C_avg(n - 1) + 2n
If we assume A(n) = C_avg(n)/(n + 1), then
A(n) ≤ A(n - 1) + 2/(n + 1)
A(n - 1) ≤ A(n - 2) + 2/n
...
A(1) ≤ A(0) + 2/2
Adding and cancelling these equations we get
A(n) ≤ 2(1/2 + 1/3 + ... + 1/(n + 1)) ≈ 2 ln n
Hence the average case time complexity of quick sort is Θ(n log n).
Average case (random array): Let C_avg(n) denote the average time of quicksort(A[1..n]). The recurrence relation for a random input array is
C(n) = C(0) + C(n - 1) + n
or C(n) = C(1) + C(n - 2) + n
or C(n) = C(2) + C(n - 3) + n
or C(n) = C(3) + C(n - 4) + n
Time complexity of quick sort:
Best case        Average case       Worst case
Θ(n log₂ n)      Θ(n log₂ n)        Θ(n²)
Q.3.(b) State the Matrix chain multiplication problem. How to solve this problem with Dynamic programming? Explain. (10)
Ans. Matrix chain multiplication is an optimization problem that can be solved using dynamic programming. Given a sequence of matrices, we want to find the most efficient way to multiply these matrices together. The problem is not actually to perform the multiplications, but merely to decide in which order to perform the multiplications.
Dynamic Programming Algorithm: To begin, let us assume that all we really want to know is the minimum cost, or minimum number of arithmetic operations, needed to multiply out the matrices. If we are only multiplying two matrices, there is only one way to multiply them, so the minimum cost is the cost of doing this. In general, we can find the minimum cost using the following recursive algorithm:
- Take the sequence of matrices and separate it into two subsequences.
- Find the minimum cost of multiplying out each subsequence.
- Add these costs together, and add in the cost of multiplying the two result matrices.
- Do this for each possible position at which the sequence of matrices can be split, and take the minimum over all of them.
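The recursive algorithm above can be sketched with memoization added. This is an illustrative version (the `dims` input format is my assumption: matrix i has shape dims[i] × dims[i+1]):

```python
from functools import lru_cache

def matrix_chain(dims):
    @lru_cache(maxsize=None)
    def cost(i, j):
        # Minimum scalar multiplications needed to multiply matrices i..j.
        if i == j:
            return 0
        # Try every split point k, exactly as in the recursive algorithm above.
        return min(cost(i, k) + cost(k + 1, j)
                   + dims[i] * dims[k + 1] * dims[j + 1]
                   for k in range(i, j))
    return cost(0, len(dims) - 2)

# Matrices of shapes 10x30, 30x5, 5x60: (AB)C costs 1500 + 3000 = 4500,
# while A(BC) costs 9000 + 18000 = 27000, so the minimum is 4500.
print(matrix_chain((10, 30, 5, 60)))   # 4500
```

The `lru_cache` decorator is what turns the exponential recursion into the O(n³) memoized version discussed below.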
For example, if we have four matrices ABCD, we compute the cost required to find each of (A)(BCD), (AB)(CD), and (ABC)(D), making recursive calls to find the minimum cost to compute ABC, AB, CD, and BCD. We then choose the best one. Better still, this yields not only the minimum cost, but also demonstrates the best way of doing the multiplication: group it the way that yields the lowest total cost, and do the same for each factor.
Unfortunately, if we implement this algorithm we discover that it is just as slow as the naive way of trying all permutations! What went wrong? The answer is that we are doing a lot of redundant work. For example, above we made a recursive call to find the best cost for computing both ABC and AB. But finding the best cost for computing ABC also requires finding the best cost for AB. As the recursion grows deeper, more and more unnecessary repetition occurs. One simple solution is memoization: each time we compute the minimum cost of a subsequence, we save it, and if we are ever asked to compute it again, we simply return the saved answer.
It can be shown that this simple trick brings the runtime down to O(n³) from O(2ⁿ), which is more than efficient enough for real applications. This is top-down dynamic programming.
Section B
Q.4. Explain the concept of minimum spanning trees. Solve the following graph using Prim's algorithm. Also analyse its complexity. (20)
[Figure: weighted undirected graph with edge weights 10, 12, 14, 16, 18, 22, 24, 25 and 28]
Ans. Minimum spanning trees: A minimum spanning tree (MST) or minimum weight spanning tree is a spanning tree with weight less than or equal to the weight of every other spanning tree. More generally, any undirected graph has a minimum spanning forest, which is a union of minimum spanning trees for its connected components.
Practical applications based on minimal spanning trees include:
- Taxonomy, one of the earliest motivating applications.
- Cluster analysis: clustering points in the plane, single-linkage clustering, and clustering gene expression data.
- Constructing trees for broadcasting in computer networks.
[Figure: (a) the given weighted graph and (b) its spanning tree built edge by edge]
Fig.(1): Graph and its minimum cost spanning tree
Let us obtain a pseudocode algorithm to find a minimum-cost spanning tree using Prim's method. The algorithm will start with a tree that includes only a minimum-cost edge of G. Then, edges are added to this tree one by one. The next edge (i, j) to be added is such that i is a vertex already included in the tree, j is a vertex not yet included, and the cost of (i, j), cost[i, j], is minimum among all edges (k, l) such that vertex k is in the tree and vertex l is not in the tree. To determine this edge (i, j) efficiently, we associate with each vertex j not yet included in the tree a value near[j]. The value near[j] is a vertex in the tree such that cost[j, near[j]] is minimum among all choices for near[j]. We define near[j] = 0 for all vertices j that are already in the tree. The next edge to include is defined by the vertex j such that near[j] ≠ 0 (j not already in the tree) and cost[j, near[j]] is minimum.
In function Prim, line 9 selects a minimum-cost edge. Lines 10 to 15 initialize the variables so as to represent a tree comprising only the edge (k, l). In the for loop of line 16 the remainder of the spanning tree is built up edge by edge. Lines 18 and 19 select (j, near[j]) as the next edge to include. Lines 23 to 25 update near[ ].
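The near[ ] idea described above can be sketched in Python. This is an illustrative version (adjacency-matrix input with INF for missing edges, 0-indexed vertices, and -1 playing the role of near[j] = 0 are my conventions, not the book's):

```python
INF = float("inf")

def prim(cost):
    n = len(cost)
    # Start with a minimum-cost edge (k, l) of the graph.
    k, l = min(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda e: cost[e[0]][e[1]])
    mincost = cost[k][l]
    tree = [(k, l)]
    # near[j] is the tree vertex closest to j; -1 marks vertices already in the tree.
    near = [k if cost[j][k] <= cost[j][l] else l for j in range(n)]
    near[k] = near[l] = -1
    for _ in range(n - 2):
        # Select the vertex j not in the tree minimizing cost[j][near[j]].
        j = min((v for v in range(n) if near[v] != -1),
                key=lambda v: cost[v][near[v]])
        tree.append((j, near[j]))
        mincost += cost[j][near[j]]
        near[j] = -1
        for v in range(n):
            # Update near[] now that j has joined the tree.
            if near[v] != -1 and cost[v][j] < cost[v][near[v]]:
                near[v] = j
    return mincost, tree

# Small 4-vertex example (assumed weights, not the exam graph):
g = [[INF, 1, 4, 3],
     [1, INF, 2, INF],
     [4, 2, INF, 5],
     [3, INF, 5, INF]]
print(prim(g))   # minimum cost 6
```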
The time required by algorithm Prim is O(n²), where n is the number of vertices in the graph G. To see this, note that line 9 takes O(|E|) time and line 10 takes Θ(1) time. The for loop of line 12 takes Θ(n) time. Lines 18 and 19 and the for loop of line 23 require O(n) time. So each iteration of the for loop of line 16 takes O(n) time. The total time for the for loop of line 16 is therefore O(n²). Hence, Prim runs in O(n²) time.
If we store the nodes not yet included in the tree as a red-black tree, lines 18 and 19 take O(log n) time. Note that a red-black tree supports the following operations in O(log n) time: insert, delete (an arbitrary element), find-min, and search (for an arbitrary element). The overall frequency of these operations is O(|E|). Updating in lines 24 and 25 also takes O(log n) time (since an update can be done using a delete and an insertion into the red-black tree). Thus the overall run time is O((n + |E|) log n).
The algorithm can be sped up a bit by making the observation that a minimum-cost spanning tree includes for each vertex v a minimum-cost edge incident to v. To see this, suppose t is a minimum-cost spanning tree for G = (V, E). Let v be any vertex in t, and let (v, w) be an edge of minimum cost incident to v.
11 // Vertex l is initially in t.
12 near[l] := 0;
13-16 for i := 1 to n - 1 do
17 { // Find n - 1 edges for t.
1), we get
w(0, 1) = p(1) + q(1) + w(0, 0) = 8
c(0, 1) = w(0, 1) + min{c(0, 0) + c(1, 1)} = 8
r(0, 1) = 1
...
c(2, 3) = w(2, 3) + min{c(2, 2) + c(3, 3)} = 3
r(2, 3) = 3
w(3, 4) = p(4) + q(4) + w(3, 3) = 3
c(3, 4) = w(3, 4) + min{c(3, 3) + c(4, 4)} = 3
r(3, 4) = 4
Knowing w(i, i+1) and c(i, i+1) for 0 ≤ i < 4, we can again use the equations to compute w(i, i+2), c(i, i+2), and r(i, i+2) for 0 ≤ i < 3. This process can be repeated until w(0, 4), c(0, 4), and r(0, 4) are obtained. The table of Fig.(1) shows the results of this computation. The box in row i and column j shows the values of w(j, j+i), c(j, j+i), and r(j, j+i) respectively.
Row 0: w(0,0)=2  w(1,1)=3  w(2,2)=1  w(3,3)=1  w(4,4)=1; c(i,i)=0 and r(i,i)=0 for all i
Row 1: w(0,1)=8 c(0,1)=8 r(0,1)=1 | w(1,2)=7 c(1,2)=7 r(1,2)=2 | w(2,3)=3 c(2,3)=3 r(2,3)=3 | w(3,4)=3 c(3,4)=3 r(3,4)=4
Row 2: w(0,2)=12 c(0,2)=19 r(0,2)=1 | w(1,3)=9 c(1,3)=12 r(1,3)=2 | w(2,4)=5 c(2,4)=8 r(2,4)=3
Row 3: w(0,3)=14 c(0,3)=25 r(0,3)=2 | w(1,4)=11 c(1,4)=19 r(1,4)=2
Row 4: w(0,4)=16 c(0,4)=32 r(0,4)=2
Fig.(1): Computation of c(0,4), w(0,4), and r(0,4)
The computation is carried out row by row from row 0 to row 4. From the table we see that c(0, 4) = 32 is the minimum cost of a binary search tree for (a1, a2, a3, a4). The root of tree t(0, 4) is a2. Hence, the left subtree is t(0, 1) and the right subtree is t(2, 4). Tree t(0, 1) has root a1 and subtrees t(0, 0) and t(1, 1). Tree t(2, 4) has root a3; its left subtree is t(2, 2) and its right subtree is t(3, 4). Thus, with the data in the table it is possible to reconstruct t(0, 4). Fig.(2) shows t(0, 4).
[Figure: the optimal search tree over the identifiers; the root's left child is do, its right child is int, and while is the right child of int]
Fig.(2): Optimal search tree
Complexity: Let us examine the complexity of this procedure to evaluate the c's and r's. The evaluation procedure described in the above problem requires us to compute c(i, j) for (j - i) = 1, 2, ..., n in that order. When j - i = m, there are n - m + 1 c(i, j)'s to compute.
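The w/c/r computation above can be sketched as a short dynamic program. This is an illustrative version; the probability values p(1..4) = (3, 3, 1, 1) and q(0..4) = (2, 3, 1, 1, 1) are my assumption for the instance, chosen so that the totals match the table (c(0,4) = 32, w(0,4) = 16, r(0,4) = 2):

```python
def obst(p, q):
    """p[1..n]: key weights, q[0..n]: gap weights (p[0] is unused)."""
    n = len(p) - 1
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]          # c(i,i) = 0 and r(i,i) = 0 already hold
    for m in range(1, n + 1):   # compute c(i, j) for j - i = m, as in the text
        for i in range(n - m + 1):
            j = i + m
            w[i][j] = p[j] + q[j] + w[i][j - 1]
            # Choose the root k in (i, j] minimizing c(i, k-1) + c(k, j).
            k = min(range(i + 1, j + 1), key=lambda k: c[i][k - 1] + c[k][j])
            c[i][j] = w[i][j] + c[i][k - 1] + c[k][j]
            r[i][j] = k
    return w, c, r

p = [0, 3, 3, 1, 1]
q = [2, 3, 1, 1, 1]
w, c, r = obst(p, q)
print(c[0][4], w[0][4], r[0][4])   # 32 16 2, matching the table
```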
we will not get a valid coloring for all of the vertices, so we prune (abandon) this path and consider an alternative path near where we are. We try the next choice for the coloring of vertex b, which is to color it with color 2. The coloring a = 1 and b = 2 is a valid partial coloring, so we continue to the next level of the search.
The vertex from which the sequence starts must also be the vertex at which it ends. This check on the sequence S must be made in polynomial time n. Now if there is a Hamiltonian cycle in the graph, the algorithm will output "yes". Similarly, if we get the output of the algorithm as "yes", then we could guess the cycle in G with every vertex appearing exactly once and the first visited vertex getting visited at the last. That means A non-deterministically accepts the language HAMILTONIAN CYCLE. It is therefore proved that HAMILTONIAN CYCLE is in NP.
Q.7. Solve the following problem by using the least cost Branch & Bound method:
Knapsack instance n = 4, p(1:4) = (10, 10, 12, 18), weight w(1:4) = (2, 4, 6, 9) and max. capacity m = 15. (20)
Ans. The search begins with the root as the E-node. For this node, node 1 of Fig.(1), we have ĉ(1) = -38 and u(1) = -32.
[Fig.(1): least-cost branch-and-bound state space tree; the root node 1 carries ĉ = -38 and u = -32, and each child node is labelled with its own ĉ and u values]
Upper number = ĉ
Lower number = u
The computation of u(1) and ĉ(1) is done as follows. The bound u(1) has the value UBound(0, 0, 0, 15). UBound scans through the objects from left to right starting from j; it adds these objects into the knapsack until the first object that doesn't fit is encountered. At this time, the negation of the total profit of all the objects in the knapsack plus cw is returned.
In function UBound, cp and cw start with a value of zero. For j = 1, 2, and 3, cw gets incremented by 2, 4, and 6 respectively; object 4 does not fit, so u(1) = -32. The solution obtained is x1 = 1, x2 = 1, x3 = 0, and x4 = 1.
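The two bounds for the root node can be sketched from the instance data of Q.7. The function names and this decomposition are mine; the text only describes UBound's behaviour, and the fractional refinement used for ĉ is the standard relaxation:

```python
p = [10, 10, 12, 18]   # profits p(1:4)
w = [2, 4, 6, 9]       # weights w(1:4)
m = 15                 # knapsack capacity

def ubound(cp, cw, k):
    """u-bound: add whole objects from k onward until one doesn't fit,
    then return the negated accumulated profit."""
    profit, weight = cp, cw
    for j in range(k, len(p)):
        if weight + w[j] <= m:
            weight += w[j]
            profit += p[j]
        else:
            break
    return -profit

def cbound(cp, cw, k):
    """c-hat bound: like ubound, but also take a fraction of the first
    object that doesn't fit (the LP relaxation of the knapsack)."""
    profit, weight = cp, cw
    for j in range(k, len(p)):
        if weight + w[j] <= m:
            weight += w[j]
            profit += p[j]
        else:
            profit += p[j] * (m - weight) / w[j]
            break
    return -profit

print(ubound(0, 0, 0))   # -32: objects 1, 2, 3 fit (profit 32); object 4 doesn't
print(cbound(0, 0, 0))   # -38.0: plus 3/9 of object 4's profit (6 more)
```

These reproduce the root-node values ĉ(1) = -38 and u(1) = -32 quoted in the answer.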
Section - D
(c) Reducibility.
Ans. (a) Satisfiability problem:
CNF-SAT problem: This problem is based on Boolean formulae. A Boolean formula has various Boolean operations such as OR (+), AND (·) and NOT. There are also some notations such as → (means implies) and ↔ (means if and only if).
A Boolean formula is in Conjunctive Normal Form (CNF) if it is formed as a collection of subexpressions, called clauses.
For example: (a + b + d + g)(c + e)(b + d + f + h)(a + c + e + h)
Theorem: CNF-SAT is NP-complete.
Proof: Let S be a Boolean formula for which we can construct a simple non-deterministic algorithm that guesses an assignment of Boolean values to the variables of S and evaluates S under that assignment in polynomial time. Hence the formula S is in NP-complete.
(2) A 3SAT problem: A 3SAT problem is a problem which takes a Boolean formula S in CNF form with each clause having exactly three literals and checks whether S is satisfiable or not. (Note that in CNF, the literals are ORed to form a clause, and the clauses are ANDed to form the formula.) We can construct a simple non-deterministic algorithm which can guess an assignment of Boolean values to S. If S evaluates to 1 then S is satisfied. Thus we can prove that 3SAT is in NP-complete.
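The guess-and-verify step can be illustrated with a deterministic brute-force check over all assignments (this replaces the non-deterministic guess; the clause representation as lists of signed integers is my assumption):

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """clauses: each clause is a list of ints; +i means x_i, -i means NOT x_i.
    A CNF formula is satisfied when every clause has at least one true literal."""
    for assignment in product([False, True], repeat=n_vars):
        def lit(l):
            v = assignment[abs(l) - 1]
            return v if l > 0 else not v
        if all(any(lit(l) for l in clause) for clause in clauses):
            return True
    return False

# (x1 + x2)(~x1 + x3)(~x2 + ~x3) is satisfiable, e.g. x1=1, x2=0, x3=1.
print(satisfiable([[1, 2], [-1, 3], [-2, -3]], 3))   # True
# (x1)(~x1) is unsatisfiable.
print(satisfiable([[1], [-1]], 1))                   # False
```

Note that the verification of one assignment takes polynomial time; it is only the search over all 2ⁿ assignments that is expensive, which is exactly why SAT is in NP.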
Various types of reductions are shown in Fig.(1):
CIRCUIT-SAT → CNF-SAT → 3SAT → VERTEX COVER → KNAPSACK → TSP
[Figure: by Cook's Theorem, any problem X in NP has a polynomial reduction from X to the SAT problem; feeding the reduced instance to a SAT solver answers X, so a polynomial algorithm for SAT would mean P = NP]