Ada 2018

The document is an examination paper on the analysis and design of algorithms, containing questions on asymptotic notation, greedy algorithms, dominance rule, and classes P and NP. It provides definitions, examples, and explanations of key concepts in algorithm analysis, including time and space complexity, as well as specific algorithms like Quick Sort. The paper emphasizes the importance of understanding these concepts for solving computational problems efficiently.


ANALYSIS AND DESIGN OF ALGORITHMS
B.Tech 6th Semester, Solved Papers

May 2018
Paper Code: CSE-306-F

Note: Attempt five questions in all, selecting one question from each Section.
Question No. 1 is compulsory. All questions carry equal marks.

Q.1. Write short notes on the following: (4×5=20)


(a) Asymptotic notation
(b) Greedy algorithms
(c) Dominance rule
(d) P & NP Class
Ans. (a) Asymptotic Notation: Asymptotic notation is a shorthand way to represent the time complexity of an algorithm. Notations such as Ω, Θ, and O are called asymptotic notations.

Big Oh Notation: The Big Oh notation is denoted by 'O'. It is a method of representing the upper bound of an algorithm's running time. Using Big Oh notation we can give the longest amount of time taken by the algorithm to complete.

Definition: Let F(n) and g(n) be two non-negative functions. Let n0 denote some value of the input and let c > 0 be some constant. If

    F(n) <= c * g(n)  for all n > n0,

then F(n) is Big Oh of g(n). It is also denoted as F(n) ∈ O(g(n)). In other words, F(n) is bounded above by some constant multiple of g(n).
Example: Consider the functions F(n) = 2n + 2 and g(n) = n². Then we have to find some constant c so that F(n) <= c * g(n); take c = 1.

If n = 1, then
    F(n) = 2(1) + 2 = 4
    g(n) = 1² = 1
i.e. F(n) > g(n).

If n = 2, then
    F(n) = 2(2) + 2 = 6
    g(n) = 2² = 4
i.e. F(n) > c * g(n).

If n = 3, then
    F(n) = 2(3) + 2 = 8
    g(n) = 3² = 9
i.e. F(n) < g(n) is true.

Hence we can conclude that for n > 2 we obtain F(n) <= g(n), i.e. 2n + 2 ∈ O(n²). Thus the upper bound on the running time is always obtained by Big Oh notation.
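The crossover worked out above can be checked numerically. The following minimal Python sketch (with c = 1, as in the example) verifies that F(n) = 2n + 2 exceeds n² for n = 1, 2 but stays at or below it from n = 3 onward:

```python
# Numerical check of the Big Oh example: F(n) = 2n + 2, g(n) = n^2, c = 1.
def F(n):
    return 2 * n + 2

def g(n):
    return n * n

c = 1
# For n = 1 and n = 2 the bound fails, exactly as worked out above.
assert F(1) > c * g(1) and F(2) > c * g(2)
# From n = 3 onward F(n) <= c * g(n) holds.
assert all(F(n) <= c * g(n) for n in range(3, 1000))
```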

Omega Notation: Omega notation is denoted by 'Ω'. This notation is used to represent the lower bound of an algorithm's running time. Using Omega notation we can denote the shortest amount of time taken by the algorithm.

Definition: A function F(n) is said to be in Ω(g(n)) if F(n) is bounded below by some positive constant multiple of g(n), such that

    F(n) >= c * g(n)  for all n >= n0.

It is denoted as F(n) ∈ Ω(g(n)).
Example: Consider F(n) = 2n² + 5 and g(n) = 7n, with c = 1.

If n = 0, then
    F(n) = 2(0)² + 5 = 5
    g(n) = 7(0) = 0
i.e. F(n) > g(n).

If n = 1, then
    F(n) = 2(1)² + 5 = 7
    g(n) = 7(1) = 7
i.e. F(n) = g(n).

If n = 3, then
    F(n) = 2(3)² + 5 = 23
    g(n) = 7(3) = 21
i.e. F(n) > g(n).

Thus for n >= 3 we get F(n) >= c * g(n). It can be represented as 2n² + 5 ∈ Ω(n). Similarly, any n² ∈ Ω(n).
Θ Notation: The theta notation is denoted by Θ. By this method the running time is bounded between an upper bound and a lower bound.

Definition: Let F(n) and g(n) be two non-negative functions. If there are two positive constants c1 and c2 such that

    c1 * g(n) <= F(n) <= c2 * g(n),

then we can say that F(n) ∈ Θ(g(n)).

Example: If F(n) = 2n + 8 and g(n) = n, then for n >= 2

    2n <= 2n + 8 <= 6n.

Here c1 = 2 and c2 = 6 with n0 = 2, so 2n + 8 ∈ Θ(n). The theta notation is more precise than both Big Oh and Omega notation.
Ans. (b) Greedy algorithms: An algorithm is designed to achieve the optimum solution for a given problem. In the greedy algorithm approach, decisions are made from the given solution domain. Being greedy, the closest solution that seems to provide an optimum solution is chosen. Greedy algorithms try to find a localized optimum solution, which may eventually lead to a globally optimized solution. In general, however, greedy algorithms do not guarantee globally optimized solutions.
Counting Coins: This problem is to count up to a desired value by choosing the least possible number of coins, and the greedy approach forces the algorithm to pick the largest possible coin. If we are provided coins of 1, 2, 5 and 10 and we are asked to count 18, then the greedy procedure will be:

(1) Select one '10' coin; the remaining count is 8.
(2) Then select one '5' coin; the remaining count is 3.
(3) Then select one '2' coin; the remaining count is 1.
(4) And finally, the selection of one '1' coin solves the problem.

Though it seems to be working fine, for this count we need to pick only 4 coins. But if we slightly change the problem, then the same approach may not be able to produce the same optimum result.

For a currency system where we have coins of value 1, 7 and 10, counting coins for value 18 will be absolutely optimum, but for a count like 15 it may use more coins than necessary. For example, the greedy approach will use 10 + 1 + 1 + 1 + 1 + 1, a total of 6 coins, whereas the same problem could be solved by using only 3 coins (7 + 7 + 1).

Hence, we may conclude that the greedy approach picks an immediate optimized solution and may fail where global optimization is a major concern.
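The coin-counting procedure above can be sketched in a few lines of Python. This is a minimal illustration of the greedy choice, not a general coin-change solver:

```python
def greedy_count(coins, target):
    """Repeatedly pick the largest coin that still fits (the greedy choice)."""
    picked = []
    for coin in sorted(coins, reverse=True):
        while target >= coin:
            picked.append(coin)
            target -= coin
    return picked

# With denominations 1, 2, 5, 10 the greedy choice is optimal for 18:
assert greedy_count([1, 2, 5, 10], 18) == [10, 5, 2, 1]     # 4 coins
# With denominations 1, 7, 10 greedy needs 6 coins for 15 ...
assert greedy_count([1, 7, 10], 15) == [10, 1, 1, 1, 1, 1]  # 6 coins
# ... although 7 + 7 + 1 would use only 3.
```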
Examples: Most networking algorithms use the greedy approach. Here is a list of a few of them:

(1) Travelling Salesman Problem
(2) Prim's Minimal Spanning Tree Algorithm
(3) Kruskal's Minimal Spanning Tree Algorithm
(4) Dijkstra's Shortest Path Algorithm
(5) Graph - Map Coloring
(6) Graph - Vertex Cover
(7) Knapsack Problem
(8) Job Scheduling Problem

There are lots of similar problems that use the greedy approach to find an optimum solution.

Ans. (c) Dominance rule or Purging rule: If S^(i+1) contains two pairs (P_j, W_j) and (P_k, W_k) such that P_j <= P_k and W_j >= W_k, then (P_j, W_j) can be eliminated. This purging rule is also called the dominance rule. Under the purging rule the dominated tuples get purged. In short, remove the pair with less profit and more weight.
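The purging step can be sketched as follows (a minimal Python sketch; the helper name and the pair list are illustrative, and ties between identical pairs are not handled):

```python
def purge(pairs):
    """Remove dominated (profit, weight) pairs: drop (Pj, Wj) when some
    other pair offers profit >= Pj at weight <= Wj (the dominance rule)."""
    kept = []
    for p, w in pairs:
        dominated = any(p2 >= p and w2 <= w and (p2, w2) != (p, w)
                        for p2, w2 in pairs)
        if not dominated:
            kept.append((p, w))
    return kept

# (2, 4) is purged: (3, 3) gives more profit at less weight.
assert purge([(1, 1), (3, 3), (2, 4)]) == [(1, 1), (3, 3)]
```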

Ans. (d) P Class: The class P consists of those problems that are solvable in polynomial time, i.e. these problems can be solved in time O(n^k) in the worst case, where k is a constant. These problems are called tractable, while the others are called intractable or superpolynomial.

Formally, an algorithm is a polynomial-time algorithm if there exists a polynomial p(n) such that the algorithm can solve any instance of size n in time O(p(n)).

Problems requiring Ω(2^n) time to solve are essentially intractable for large n. Most known polynomial-time algorithms run in time O(n^k) for fairly low values of k.

An advantage of considering the class of polynomial-time algorithms is that all reasonable deterministic single-processor models of computation can be simulated on each other with at most a polynomial slow-down.

NP Class: The class NP consists of those problems that are verifiable in polynomial time. NP is the class of decision problems for which it is easy to check the correctness of a claimed answer, with the aid of a little extra information. Hence, we aren't asking for a way to find a solution, but only to verify that an alleged solution really is correct. Every problem in this class can be solved in exponential time using exhaustive search.

Section A

Q.2.(a) What are disjoint sets? Also write Union and Find algorithms for disjoint sets. (10)

Ans. Disjoint sets: Two or more sets are said to be disjoint sets if they have no element in common. For example, {1,2,3} and {4,5,6} are disjoint sets, while {1,2,3} and {2,4,5,6} are not. The elements of a set are the letters, numbers, or any other objects it contains. For example, the elements of the set {a, b, c} are the letters a, b, and c.

Formally, two sets A and B are disjoint if their intersection is the empty set, that is, A ∩ B = ∅. This definition extends to any collection of sets. A collection of sets is pairwise disjoint or mutually disjoint if any two distinct sets in the collection are disjoint. Formally, let I be an index set, and for each i in I, let A_i be a set. Then the family of sets {A_i : i ∈ I} is pairwise disjoint if for any i and j in I with i ≠ j, A_i ∩ A_j = ∅. For example, the collection of sets {1}, {2}, {3}, ... is pairwise disjoint. If {A_i} is a pairwise disjoint collection (containing at least two sets), then clearly its intersection is empty.

A union-find algorithm is an algorithm that performs two useful operations on such a data structure:

Find: Determine which subset a particular element is in. This can be used for determining if two elements are in the same subset.

Union: Join two subsets into a single subset.
procedure MAKE-SET(x)
1. size[x] := 1
2. parent[x] := x
end

procedure UNION(a, b) {with weight balancing}
{a and b are roots of two distinct trees in the forest.
Makes the root of the smaller tree a child of the root of the larger tree.}
1. if size[a] < size[b] then interchange a and b
2. parent[b] := a
3. size[a] := size[a] + size[b]
end

function FIND(x) {with path compression}
{Returns the root of the tree that contains node x.}
1. if parent[x] ≠ x then
2.     parent[x] := FIND(parent[x])
3. return parent[x]
end
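The three routines above translate directly into Python. The sketch below keeps the same weight-balancing and path-compression logic; the class name and dictionary representation are choices of this example:

```python
class DisjointSet:
    """Union-find with weight balancing and path compression,
    mirroring the MAKE-SET / UNION / FIND pseudocode above."""
    def __init__(self):
        self.parent = {}
        self.size = {}

    def make_set(self, x):
        self.parent[x] = x
        self.size[x] = 1

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a == b:
            return
        if self.size[a] < self.size[b]:   # weight balancing
            a, b = b, a
        self.parent[b] = a                # smaller tree hangs under larger
        self.size[a] += self.size[b]

ds = DisjointSet()
for x in range(6):
    ds.make_set(x)
ds.union(0, 1); ds.union(2, 3); ds.union(1, 2)
assert ds.find(0) == ds.find(3)   # {0,1,2,3} is now one subset
assert ds.find(4) != ds.find(5)   # 4 and 5 remain disjoint
```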

Q.2.(b) Explain time & space complexity. (10)


Ans. Time complexity is a function describing the amount of time an algorithm takes in terms of the amount of input to the algorithm. "Time" can mean the number of memory accesses performed, the number of comparisons between integers, the number of times some inner loop is executed, or some other natural unit related to the amount of real time the algorithm will take. We try to keep this idea of time separate from "wall clock" time, since many factors unrelated to the algorithm itself can affect the real time (like the language used, type of computing hardware, proficiency of the programmer, optimization in the compiler, etc.). It turns out that, if we choose the units wisely, all of the other stuff doesn't matter and we can get an independent measure of the efficiency of the algorithm.

Space complexity is a function describing the amount of memory (space) an algorithm takes in terms of the amount of input to the algorithm. We often speak of "extra" memory needed, not counting the memory needed to store the input itself. Again, we use natural (but fixed-length) units to measure this. We can use bytes, but it's easier to use, say, the number of integers used, the number of fixed-sized structures, etc. In the end, the function we come up with will be independent of the actual number of bytes needed to represent the unit. Space complexity is sometimes ignored because the space used is minimal and/or obvious, but sometimes it becomes as important an issue as time.

For example, we might say "this algorithm takes n² time," where n is the number of items in the input. Or we might say "this algorithm takes constant extra space," because the amount of extra memory needed doesn't vary with the number of items processed.
Q.3.(a) Explain Quick sort algorithm in detail. Analyse its complexity also. (10)

Ans. Quick Sort: Quick sort is a sorting algorithm that uses the divide and conquer strategy. In this method the division is carried out dynamically. The three steps of quick sort are as follows:

(1) Divide: Split the array into two sub-arrays so that each element in the left sub-array is less than or equal to the middle element and each element in the right sub-array is greater than the middle element. The splitting of the array into two sub-arrays is based on the pivot element: all the elements that are less than the pivot should be in the left sub-array and all the elements that are more than the pivot should be in the right sub-array.

Fig.(1): [elements less than pivot] [pivot element] [elements greater than pivot]
(2) Conquer: Recursively sort the two sub-arrays.

(3) Combine: Combine all the sorted elements in a group to form a list of sorted elements.

In merge sort the division of the array is based on the positions of the array elements, but in quick sort this division is based on the actual value of the element. Consider an array A[i] where i ranges from 0 to n-1; then we can formalize the division of the array elements as

    A[0] ... A[m-1],  A[m],  A[m+1] ... A[n-1]

where the elements A[0] ... A[m-1] are less than the mid element A[m], and the elements A[m+1] ... A[n-1] are greater than A[m].
Algorithm: The quick sort algorithm is performed using the following two important functions, Quick and Partition. Let us see them:

Algorithm Quick (A[0...n-1], low, high)
// Problem Description: This algorithm performs sorting of
// the elements given in array A[0...n-1].
// Input: An array A[0...n-1] in which unsorted elements are
// given. low indicates the leftmost element in the list
// and high indicates the rightmost element in the list.
// Output: Creates a sub-array which is sorted in ascending
// order.
if (low < high) then
    // split the array into two sub-arrays
    m := Partition(A[low...high])   // m is the mid of the array
    Quick(A[low...m-1])
    Quick(A[m+1...high])

In the above algorithm a call to the Partition algorithm is made. The Partition routine performs the arrangement of the elements in ascending order. The recursive Quick routine is for dividing the list into two sub-lists.
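A runnable Python sketch of the two routines above. The Lomuto-style partition used here is one common variant; the paper's own Partition routine may differ in detail:

```python
def partition(A, low, high):
    """Place the pivot A[high] at its final position m and return m.
    Smaller elements end up left of m, larger ones right of m."""
    pivot = A[high]
    i = low - 1
    for j in range(low, high):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[high] = A[high], A[i + 1]
    return i + 1

def quick(A, low, high):
    if low < high:
        m = partition(A, low, high)
        quick(A, low, m - 1)    # sort left sub-array
        quick(A, m + 1, high)   # sort right sub-array

data = [5, 3, 8, 1, 9, 2]
quick(data, 0, len(data) - 1)
assert data == [1, 2, 3, 5, 8, 9]
```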
Analysis:

Best case (split in the middle): If the array is always partitioned at the mid, then it brings the best-case efficiency of the algorithm.

The recurrence relation of quick sort for obtaining the best-case time complexity is

    C(n) = 2 C(n/2) + n        ...(1)

where the term 2 C(n/2) is the time required to sort the left and right sub-arrays, n is the time required for partitioning the sub-array, and C(1) = 0.

Method 1: Using the master theorem: We will solve equation (1) using the master theorem. The master theorem states that for T(n) = a T(n/b) + f(n), if f(n) ∈ Θ(n^d) then

    (1) T(n) = Θ(n^d)          if a < b^d
    (2) T(n) = Θ(n^d log n)    if a = b^d
    (3) T(n) = Θ(n^(log_b a))  if a > b^d

We get C(n) = 2 C(n/2) + n. Here f(n) ∈ n¹, so d = 1. Now, a = 2 and b = 2. As from case 2 we get a = b^d, i.e. 2 = 2¹, we get

    T(n), i.e. C(n) = Θ(n¹ log n)

Thus C(n) = Θ(n log n). The best-case time complexity of quick sort is Θ(n log₂ n).
Worst case (sorted array): The worst case for quick sort occurs when the pivot is a minimum or maximum of all the elements in the list. The partitioning then reduces an n-item list to an (n-1)-item list, then to an (n-2)-item list, and so on down to 1 (Fig.(2): the time required is n for n items, n-1 for n-1 items, ..., down to 1).

We can write it as

    C(n) = C(n-1) + n

or

    C(n) = n + (n-1) + (n-2) + ... + 2 + 1.

But as we know

    1 + 2 + 3 + ... + n = n(n+1)/2 ≈ n²

∴ C(n) ∈ Θ(n²).

The time complexity of the worst case of quick sort is Θ(n²).
Average case (random array): Let C_avg(n) denote the average time of quick sort on A[1..n]. The recurrence relation for a random input array is

    C(n) = C(0) + C(n-1) + n
or  C(n) = C(1) + C(n-2) + n
or  C(n) = C(2) + C(n-3) + n
or  C(n) = C(3) + C(n-4) + n
    ...
or  C(n) = C(n-1) + C(0) + n

The average value of C(n) is the sum of all the possible values divided by n:

    C_avg(n) = (2/n) [C(0) + C(1) + ... + C(n-1)] + n

Multiplying both sides by n we get

    n C_avg(n) = 2 [C(0) + C(1) + ... + C(n-1)] + n²

Now we put n = n - 1; then

    (n-1) C_avg(n-1) = 2 [C(0) + C(1) + ... + C(n-2)] + (n-1)²

Subtracting the second equation from the first,

    n C_avg(n) - (n-1) C_avg(n-1) = 2 C_avg(n-1) + (2n - 1)

i.e.

    n C_avg(n) = (n+1) C_avg(n-1) + (2n - 1) <= (n+1) C_avg(n-1) + 2n

If we assume

    (n+1) C_avg(n-1) + 2n <= (n+1) C_avg(n-1) + C'n

then dividing by n(n+1) gives

    C_avg(n)/(n+1) <= C_avg(n-1)/n + C'/(n+1)

If we let

    A(n) = C_avg(n)/(n+1)

then

    A(n)   <= A(n-1) + C'/(n+1)
    A(n-1) <= A(n-2) + C'/n
    A(n-2) <= A(n-3) + C'/(n-1)
    ...
    A(1)   <= A(0) + C'/2

Adding and cancelling these inequalities we get

    A(n) <= C' (1/2 + 1/3 + ... + 1/(n+1)) ≈ C' log n

Since C_avg(n) = (n+1) A(n), i.e. (n+1) A(n) <= C'(n+1) log n,

∴ C_avg(n) = O(n log n)

Hence the average-case time complexity of quick sort is Θ(n log n).

Time complexity of quick sort:

    Best case: Θ(n log₂ n)    Average case: Θ(n log₂ n)    Worst case: Θ(n²)
Q.3.(b) State the Matrix chain multiplication problem. How to solve this problem with Dynamic programming? Explain. (10)

Ans. Matrix chain multiplication is an optimization problem that can be solved using dynamic programming. Given a sequence of matrices, we want to find the most efficient way to multiply these matrices together. The problem is not actually to perform the multiplications, but merely to decide in which order to perform the multiplications.

Dynamic Programming Algorithm: To begin, let us assume that all we really want to know is the minimum cost, or minimum number of arithmetic operations, needed to multiply out the matrices. If we are only multiplying two matrices, there is only one way to multiply them, so the minimum cost is the cost of doing this. In general, we can find the minimum cost using the following recursive algorithm:

- Take the sequence of matrices and separate it into two subsequences.
- Find the minimum cost of multiplying out each subsequence.
- Add these costs together, and add in the cost of multiplying the two result matrices.
- Do this for each possible position at which the sequence of matrices can be split, and take the minimum over all of them.

For example, if we have four matrices ABCD, we compute the cost required to find each of (A)(BCD), (AB)(CD), and (ABC)(D), making recursive calls to find the minimum cost to compute ABC, AB, CD, and BCD. We then choose the best one. Better still, this yields not only the minimum cost, but also demonstrates the best way of doing the multiplication: group it the way that yields the lowest total cost and do the same for each factor.

Unfortunately, if we implement this algorithm we discover that it is just as slow as the naive way of trying all permutations! What went wrong? The answer is that we're doing a lot of redundant work. For example, above we made a recursive call to find the best cost for computing both ABC and AB. But finding the best cost for computing ABC also requires finding the best cost for AB. As the recursion grows deeper, more and more unnecessary repetition occurs.

One simple solution is called memoization: each time we compute the minimum cost needed to multiply out a specific subsequence, we save it. If we are ever asked to compute it again, we simply give the saved answer and do not recompute it. Since there are about n²/2 different subsequences, where n is the number of matrices, the space required to do this is reasonable. It can be shown that this simple trick brings the runtime down to O(n³) from O(2ⁿ), which is more than efficient enough for real applications. This is top-down dynamic programming.
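The memoized recursion can be sketched as follows in Python. The matrix sizes in the example are assumed for illustration (they are not from the question):

```python
from functools import lru_cache

def matrix_chain_cost(dims):
    """Minimum scalar multiplications to multiply a chain of matrices whose
    sizes are dims[0] x dims[1], dims[1] x dims[2], ...
    Top-down dynamic programming: lru_cache does the memoization."""
    @lru_cache(maxsize=None)
    def cost(i, j):          # cheapest way to multiply matrices i..j
        if i == j:
            return 0
        return min(cost(i, k) + cost(k + 1, j)
                   + dims[i] * dims[k + 1] * dims[j + 1]
                   for k in range(i, j))  # try every split position
    return cost(0, len(dims) - 2)

# Assumed example: A is 10x30, B is 30x5, C is 5x60.
# (AB)C costs 10*30*5 + 10*5*60 = 4500; A(BC) would cost 27000.
assert matrix_chain_cost((10, 30, 5, 60)) == 4500
```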

Section B

Q.4. Explain the concept of minimum spanning trees. Solve the following graph using Prim's algorithm. Also analyse its complexity. (20)

[Figure: a weighted undirected graph; the scan is not reproducible, but the edge weights include 10, 12, 14, 16, 18, 22, 24, 25 and 28.]
Ans. Minimum spanning trees: A minimum spanning tree (MST) or minimum weight spanning tree is a spanning tree with weight less than or equal to the weight of every other spanning tree. More generally, any undirected graph has a minimum spanning forest, which is a union of minimum spanning trees for its connected components.

Practical applications based on minimal spanning trees include:

- Taxonomy, one of the earliest motivating applications.
- Cluster analysis: clustering points in the plane, single-linkage clustering, graph-theoretic clustering, and clustering gene expression data.
- Constructing trees for broadcasting in computer networks.
- Image registration and segmentation, e.g. minimum spanning tree-based segmentation.
- Curvilinear feature extraction in computer vision.
- Handwriting recognition of mathematical expressions.
- Circuit design: implementing efficient multiple constant multiplications, as used in finite impulse response filters.
- Regionalisation of socio-geographic areas, the grouping of areas into homogeneous contiguous regions.
- Comparing ecotoxicology data.
- Topological observability in power systems.
- Measuring homogeneity of two-dimensional materials.
- Minimax process control.

Fig.(1): Graph and its minimum cost spanning tree [the scanned figure is not reproducible; it shows (a) the given graph and (b) its minimum cost spanning tree].
Let us obtain a pseudocode algorithm to find a minimum-cost spanning tree using Prim's method. The algorithm will start with a tree that includes only a minimum-cost edge of G. Then, edges are added to this tree one by one. The next edge (i, j) to be added is such that i is a vertex already included in the tree, j is a vertex not yet included, and the cost of (i, j), cost[i, j], is minimum among all edges (k, l) such that vertex k is in the tree and vertex l is not in the tree. To determine this edge (i, j) efficiently, we associate with each vertex j not yet included in the tree a value near[j]. The value near[j] is a vertex in the tree such that cost[j, near[j]] is minimum among all choices for near[j]. We define near[j] = 0 for all vertices j that are already in the tree. The next edge to include is defined by the vertex j such that near[j] ≠ 0 (j not already in the tree) and cost[j, near[j]] is minimum.

In function Prim, line 9 selects a minimum-cost edge. Lines 10 to 15 initialize the variables so as to represent a tree comprising only the edge (k, l). In the for loop of line 16 the remainder of the spanning tree is built up edge by edge. Lines 18 and 19 select (j, near[j]) as the next edge to include. Lines 23 to 25 update near[ ].

The time required by algorithm Prim is O(n²), where n is the number of vertices in the graph G. To see this, note that line 9 takes O(|E|) time and line 10 takes Θ(1) time. The for loop of line 12 takes Θ(n) time. Lines 18 and 19 and the for loop of line 23 require O(n) time. So, each iteration of the for loop of line 16 takes O(n) time. The total time for the for loop of line 16 is therefore O(n²). Hence, Prim runs in O(n²) time.

If we store the nodes not yet included in the tree as a red-black tree, lines 18 and 19 take O(log n) time. Note that a red-black tree supports the following operations in O(log n) time: insert, delete (an arbitrary element), find-min, and search (for an arbitrary element). The overall frequency is O(|E|). Updating in lines 24 and 25 also takes O(log n) time (since an update can be done using a delete and an insertion into the red-black tree). Thus the overall run time is O((n + |E|) log n).
The algorithm can be speeded up a bit by making the observation that a minimum-cost spanning tree includes for each vertex v a minimum-cost edge incident to v. To see this, suppose t is a minimum-cost spanning tree for G = (V, E). Let v be any vertex in t. Let (v, w) be an edge with minimum cost among all edges incident to v. Assume that (v, w) ∉ E(t) and cost[v, w] < cost[v, x] for all edges (v, x) ∈ E(t). The inclusion of (v, w) into t creates a unique cycle. This cycle must include an edge (v, x), x ≠ w. Removing (v, x) from E(t) ∪ {(v, w)} breaks this cycle without disconnecting the graph (V, E(t) ∪ {(v, w)}). Hence, (V, E(t) ∪ {(v, w)} − {(v, x)}) is also a spanning tree. Since cost[v, w] < cost[v, x], this spanning tree has lower cost than t. This contradicts the assumption that t is a minimum-cost spanning tree of G. So, t includes minimum-cost edges as stated above.
as

From this observation it follows that we can start the algorithm with a tree consisting of any arbitrary vertex and no edge. Then edges can be added one by one. The changes needed are to lines 9 to 17. These lines can be replaced by the lines

9     mincost := 0;
10    for i := 2 to n do
11        near[i] := 1;    // Vertex 1 is initially in t.
12    near[1] := 0;
13-16 for i := 1 to n-1 do
17    { // Find n-1 edges for t.
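The method above can be sketched in Python. A binary heap stands in for the near[] array (giving the O((n + |E|) log n) behaviour discussed), and the small graph below is a hypothetical example, not the exam's figure:

```python
import heapq

def prim_mst(n, edges):
    """Prim's algorithm starting from an arbitrary vertex (vertex 0).
    edges is a list of undirected weighted edges (u, v, cost);
    returns the total weight of a minimum spanning tree."""
    adj = {u: [] for u in range(n)}
    for u, v, cost in edges:
        adj[u].append((cost, v))
        adj[v].append((cost, u))
    in_tree = [False] * n
    total = 0
    heap = [(0, 0)]                    # (cost to connect, vertex)
    while heap:
        cost, u = heapq.heappop(heap)
        if in_tree[u]:
            continue                   # stale entry, vertex already in tree
        in_tree[u] = True
        total += cost
        for c, v in adj[u]:
            if not in_tree[v]:
                heapq.heappush(heap, (c, v))
    return total

# Hypothetical 4-vertex graph (the exam's figure is not reproduced here):
edges = [(0, 1, 10), (0, 2, 6), (0, 3, 5), (1, 3, 15), (2, 3, 4)]
assert prim_mst(4, edges) == 19        # MST edges: 0-3, 2-3, 0-1
```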

Q.5. Explain the Optimal Binary Search Tree problem, which includes the following instance: n = 4 and (a1, a2, a3, a4) = (do, if, int, while) with profits p(1:4) = (3, 3, 1, 1) in case of successful search, and losses q(0:4) = (2, 3, 1, 1, 1) in case of unsuccessful search. Initially w(i, i) = q(i), c(i, i) = 0 and r(i, i) = 0, where 0 <= i <= 4. Also write its algorithm and analyse its complexity. (20)

Ans. Using the observation w(i, j) = p(j) + q(j) + w(i, j-1), we get

    w(0,1) = p(1) + q(1) + w(0,0) = 8
    c(0,1) = w(0,1) + min{c(0,0) + c(1,1)} = 8
    r(0,1) = 1

    w(1,2) = p(2) + q(2) + w(1,1) = 7
    c(1,2) = w(1,2) + min{c(1,1) + c(2,2)} = 7
    r(1,2) = 2

    w(2,3) = p(3) + q(3) + w(2,2) = 3
    c(2,3) = w(2,3) + min{c(2,2) + c(3,3)} = 3
    r(2,3) = 3

    w(3,4) = p(4) + q(4) + w(3,3) = 3
    c(3,4) = w(3,4) + min{c(3,3) + c(4,4)} = 3
    r(3,4) = 4
Knowing w(i, i+1) and c(i, i+1) for 0 <= i < 4, we can again use the equations to compute w(i, i+2), c(i, i+2), and r(i, i+2) for 0 <= i < 3. This process can be repeated until w(0,4), c(0,4), and r(0,4) are obtained. The table of Fig.(1) shows the results of this computation. The box in row i and column j shows the values of w(j, j+i), c(j, j+i), and r(j, j+i) respectively.
Row 0:  w(0,0)=2   w(1,1)=3   w(2,2)=1   w(3,3)=1   w(4,4)=1
        c(0,0)=0   c(1,1)=0   c(2,2)=0   c(3,3)=0   c(4,4)=0
        r(0,0)=0   r(1,1)=0   r(2,2)=0   r(3,3)=0   r(4,4)=0

Row 1:  w(0,1)=8   w(1,2)=7   w(2,3)=3   w(3,4)=3
        c(0,1)=8   c(1,2)=7   c(2,3)=3   c(3,4)=3
        r(0,1)=1   r(1,2)=2   r(2,3)=3   r(3,4)=4

Row 2:  w(0,2)=12  w(1,3)=9   w(2,4)=5
        c(0,2)=19  c(1,3)=12  c(2,4)=8
        r(0,2)=1   r(1,3)=2   r(2,4)=3

Row 3:  w(0,3)=14  w(1,4)=11
        c(0,3)=25  c(1,4)=19
        r(0,3)=2   r(1,4)=2

Row 4:  w(0,4)=16
        c(0,4)=32
        r(0,4)=2

Fig.(1): Computation of c(0,4), w(0,4), and r(0,4)
The computation is carried out row by row from row 0 to row 4. From the table we see that c(0,4) = 32 is the minimum cost of a binary search tree for (a1, a2, a3, a4). The root of tree t(0,4) is a2. Hence, the left subtree is t(0,1) and the right subtree is t(2,4). Tree t(0,1) has root a1 and subtrees t(0,0) and t(1,1). Tree t(2,4) has root a3; its left subtree is t(2,2) and its right subtree is t(3,4). Thus, with the data in the table it is possible to reconstruct t(0,4). Fig.(2) shows it: 'if' is the root, with left child 'do', right child 'int', and 'while' as the right child of 'int'.

Fig.(2): Optimal search tree
Complexity: Let us examine the complexity of this procedure to evaluate the c's and r's. The evaluation procedure described above requires us to compute c(i, j) for (j - i) = 1, 2, ..., n in that order. When j - i = m, there are n - m + 1 c(i, j)'s to compute. The computation of each of these c(i, j)'s requires us to find the minimum of m quantities. Hence, each such c(i, j) can be computed in time O(m). The total time for all c(i, j)'s with j - i = m is therefore O(nm - m²). The total time to evaluate all the c(i, j)'s and r(i, j)'s is therefore

    Σ (nm - m²) for 1 <= m <= n  =  O(n³)
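The tables above can be recomputed mechanically. The sketch below (plain Python; the dictionary names w, c, r are chosen to match the text) applies the recurrence to the data of Q.5 and reproduces c(0,4) = 32:

```python
def obst_cost(p, q):
    """Compute the w, c, r tables of the OBST recurrence used above:
    w(i,j) = w(i,j-1) + p[j] + q[j],
    c(i,j) = w(i,j) + min over i < k <= j of (c(i,k-1) + c(k,j)).
    p is 1-indexed via p[1..n]; q is 0-indexed via q[0..n]."""
    n = len(p) - 1
    w, c, r = {}, {}, {}
    for i in range(n + 1):
        w[i, i] = q[i]; c[i, i] = 0; r[i, i] = 0
    for m in range(1, n + 1):          # interval length j - i
        for i in range(n - m + 1):
            j = i + m
            w[i, j] = w[i, j - 1] + p[j] + q[j]
            best_k = min(range(i + 1, j + 1),
                         key=lambda k: c[i, k - 1] + c[k, j])
            c[i, j] = w[i, j] + c[i, best_k - 1] + c[best_k, j]
            r[i, j] = best_k           # root of the optimal subtree
    return w, c, r

# Data from Q.5: p(1:4) = (3,3,1,1), q(0:4) = (2,3,1,1,1).
w, c, r = obst_cost([0, 3, 3, 1, 1], [2, 3, 1, 1, 1])
assert w[0, 4] == 16 and c[0, 4] == 32 and r[0, 4] == 2
```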
Section C

Q.6. Explain 8-queens method, graph coloring and Hamiltonian cycle. (20)

Ans. 8-Queens problem: Consider a chessboard of order 8 × 8. The problem is to place 8 queens on this board such that no two queens can attack each other. That means no two queens can be placed on the same row, column or diagonal.

The solution to the 8-queens problem can be obtained using the backtracking method. One solution can be given as below:

[Board figure not reproducible from the scan; only the column labels 3 4 6 7 8 survive.]
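The backtracking method can be sketched for general n. Representing the board as one column index per row is an assumption of this example (it automatically rules out two queens on the same row):

```python
def queens(n):
    """Backtracking search: board[i] is the column of the queen in row i.
    Returns the first complete placement found, or None."""
    board = []

    def safe(col):
        row = len(board)
        # No shared column, and no shared diagonal (|dc| == |dr|).
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(board))

    def place():
        if len(board) == n:
            return True
        for col in range(n):
            if safe(col):
                board.append(col)
                if place():
                    return True
                board.pop()        # backtrack: undo and try next column
        return False

    return board if place() else None

solution = queens(8)
assert solution is not None and len(solution) == 8
assert len(set(solution)) == 8     # all columns distinct
```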

Graph coloring: The backtracking search tree starts at a non-labeled root. The immediate children of the root represent the available choices regarding the coloring of the first vertex of the graph (vertex a in this case). However, the nodes of the tree are generated only if necessary.

The first (leftmost) child of the root represents the first choice regarding the coloring of vertex a, which is to color the vertex with color 1; this is indicated by the label assigned to that node. So far this partial coloring does not violate the constraint that adjacent vertices be assigned different colors. So we grow the tree one more level, where we try to assign a color to the second vertex (vertex b) of the graph. The leftmost node would represent the coloring of vertex b with color 1. However, such a partial coloring, where a = 1 and b = 1, is invalid since vertices a and b are adjacent. Also, there is no point in growing the tree further from this node since, no matter what colors we assign to the remaining vertices, we will not get a valid coloring for all of the vertices. So we prune (abandon) this path and consider an alternative path near where we are. We try the next choice for the coloring of vertex b, which is to color it with color 2. The coloring a = 1 and b = 2 is a valid partial coloring, so we continue to the next level of the search tree. When we reach the lowest level and try to color vertex e, we find that it is not possible to color it with any of the three colors; therefore, we backtrack to the previous level and try the next choice for the coloring of vertex d. We see that the coloring of vertex d with color 2 leads to a violation (d and b are adjacent and would have the same color), so we continue to d = 3 and then down to e = 1, which gives the solution {a = 1, b = 2, c = 3, d = 3, e = 1}.
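The pruning strategy described above can be sketched as follows. The 5-vertex adjacency list is a hypothetical example (the exam's figure is not reproduced here):

```python
def color_graph(adj, n_colors):
    """Backtracking m-coloring: assign colors to vertices in order,
    pruning any partial coloring that gives two adjacent vertices
    the same color."""
    vertices = sorted(adj)
    coloring = {}

    def extend(i):
        if i == len(vertices):
            return True
        v = vertices[i]
        for color in range(1, n_colors + 1):
            if all(coloring.get(u) != color for u in adj[v]):
                coloring[v] = color
                if extend(i + 1):
                    return True
                del coloring[v]    # backtrack: prune this path
        return False

    return coloring if extend(0) else None

# Hypothetical 5-vertex graph (edges: a-b, a-c, b-c, b-d, c-e, d-e):
adj = {'a': 'bc', 'b': 'acd', 'c': 'abe', 'd': 'be', 'e': 'cd'}
result = color_graph(adj, 3)
assert result is not None
# Every edge joins differently colored endpoints:
assert all(result[u] != result[v] for u in adj for v in adj[u])
```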

Hamiltonian cycle: This is a problem in which a graph G is accepted as input and one is asked to find a simple cycle in G that visits each vertex of G exactly once and returns to its starting vertex. Such a cycle is called a Hamiltonian cycle.

Theorem: HAMILTONIAN CYCLE is in NP.

Proof: Let A be some non-deterministic algorithm to which graph G is given as input. The vertices of the graph are numbered from 1 to N. We have to call the algorithm recursively in order to get the sequence S. This sequence will have all the vertices without any getting repeated. The vertex from which the sequence starts must also be the one at which it ends. This check on the sequence S must be made in polynomial time n. Now if there is a Hamiltonian cycle in the graph, the algorithm will output "yes". Similarly, if we get the output of the algorithm as "yes", then we could guess the cycle in G with every vertex appearing exactly once and the first visited vertex getting visited at the last. That means A non-deterministically accepts the language HAMILTONIAN CYCLE. It is therefore proved that HAMILTONIAN CYCLE is in NP.

Q.7. Solve the following problem by using the least cost Branch & Bound method: Knapsack instance n = 4, p(1:4) = (10, 10, 12, 18), weights w(1:4) = (2, 4, 6, 9) and max. capacity m = 15. (20)

Ans. The search begins with the root as the E-node. For this node, node 1 of Fig.(1), we have ĉ(1) = -38 and u(1) = -32.

[Fig.(1): LC branch-and-bound tree. Upper number = ĉ, lower number = u; the root (node 1) has ĉ = -38, u = -32. The tree drawing is not reproducible from the scan.]

The computation of u(1) and ĉ(1) is done as follows. The bound u(1) has the value
UBound(0, 0, 0, 15). UBound scans through the objects from left to right starting from j; it adds
these objects into the knapsack until the first object that doesn't fit is encountered. At this time,
the negation of the total profit of all the objects in the knapsack plus cw is returned. In function
UBound, c and b start with a value of zero. For i = 1, 2, and 3, c gets incremented by 2, 4, and 6,
respectively, while b gets decremented by 10, 10, and 12, respectively. When i = 4, the test
(c + w[i] <= m) fails and hence the value returned is -32. Function Bound is similar to
UBound, except that it also considers a fraction of the first object that doesn't fit the knapsack.
For example, in computing ĉ(1), the first object that doesn't fit is object 4, whose weight is 9. The
total weight of objects 1, 2, and 3 is 12. So Bound considers the fraction 3/9 of object 4 and
hence returns -32 - (3/9)*18 = -38.
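The two bounding functions can be sketched in Python (a sketch, not the textbook's exact code; the function names follow the text, but the capacity m is kept as a module-level constant instead of a fourth argument):

```python
# Instance from the question: maximization stated as minimizing negated profit.
p = [10, 10, 12, 18]   # profits
w = [2, 4, 6, 9]       # weights
m = 15                 # knapsack capacity

def ubound(cp, cw, k):
    # u-bound: pack whole objects greedily from level k onward; cp and cw
    # are the negated profit and the weight accumulated so far.
    b, c = cp, cw
    for i in range(k, len(p)):
        if c + w[i] <= m:
            c += w[i]
            b -= p[i]
    return b

def bound(cp, cw, k):
    # c-bound: like ubound, but also take a fraction of the first object
    # that does not fit, filling the remaining capacity exactly.
    b, c = cp, cw
    for i in range(k, len(p)):
        if c + w[i] <= m:
            c += w[i]
            b -= p[i]
        else:
            return b - (m - c) * p[i] / w[i]
    return b

print(ubound(0, 0, 0))  # -32: objects 1, 2, 3 fit (weight 12), object 4 does not
print(bound(0, 0, 0))   # -38.0: additionally 3/9 of object 4's profit 18
```

These reproduce the root values u(1) = -32 and ĉ(1) = -38 derived above.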


Since node 1 is not a solution node, LCBB sets ans = 0 and upper = -32 (ans being a
variable to store intermediate answer nodes). The E-node is expanded and its two children,
nodes 2 and 3, are generated. The costs are ĉ(2) = -38 and ĉ(3) = -32, while u(2) = -32 and
u(3) = -27. Both nodes are put onto the list of live nodes. Node 2 is the next E-node. It is expanded
and nodes 4 and 5 are generated. Both nodes get added to the list of live nodes. Node 4 is the live
node with least ĉ value and becomes the next E-node. Nodes 6 and 7 are generated. Assuming node 6 is
generated first, it is added to the list of live nodes. Next, node 7 joins this list and upper is updated
to -38. The next E-node will be one of nodes 6 and 7. Let us assume it is node 7. Its two children
are nodes 8 and 9. Node 8 is a solution node; upper is updated to -38 and node 8 is put onto
the live nodes list. Node 9 has ĉ(9) > upper and is killed immediately. Nodes 6 and 8 are the two live
nodes with least ĉ. Regardless of which becomes the next E-node, ĉ(E) >= upper and the search
terminates with node 8 as the answer node. At this time, the value -38 together with the path 8, 7,
4, 2, 1 is printed out and the algorithm terminates. From the path alone one cannot figure out the
assignment of values to the xi's such that the profit sum pi*xi equals 38. Hence, a proper implementation of
LCBB has to keep additional information from which the values of the xi's can be extracted.
One way is to associate with each node a one-bit field, tag. The sequence of tag bits from the
answer node to the root gives the xi values. Thus, we have tag(2) = tag(4) = tag(6) = tag(8) = 1
and tag(3) = tag(5) = tag(7) = tag(9) = 0. The tag sequence for the path 8, 7, 4, 2, 1 is 1 0 1 1, and
so x1 = 1, x2 = 1, x3 = 0, and x4 = 1.
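As a quick sanity check, the assignment read off from the tag bits can be verified directly (a small sketch using the instance data from the question):

```python
# Verify that x = (1, 1, 0, 1) is feasible and matches the terminating
# bound value of -38, i.e. total profit 38 at exactly the capacity limit.
p = [10, 10, 12, 18]
w = [2, 4, 6, 9]
m = 15
x = [1, 1, 0, 1]

weight = sum(wi * xi for wi, xi in zip(w, x))
profit = sum(pi * xi for pi, xi in zip(p, x))
print(weight, profit)  # 15 38
assert weight <= m     # the solution respects the capacity
```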

Section - D

Q.8. Write short notes on: (20)
(a) Satisfiability problem
(b) Clique decision problem

(c) Reducibility.
Ans. (a) Satisfiability problem
1) CNF-SAT problem: This problem is based on Boolean formulae. A Boolean
formula uses the Boolean operations OR (+), AND (.) and NOT. There are also
notations such as -> (meaning "implies") and <-> (meaning "if and only if").
A Boolean formula is in Conjunctive Normal Form (CNF) if it is formed as a collection of
subexpressions, called clauses, each of which is an OR of literals, with the clauses joined by AND.
For example: (a + b + d + g)(c + e)(b + d + f + h)(a + c + e + h)
B.Tech 6th Semester Solved Papers, May 2018

This formula evaluates to 1 if b, c, and d are 1.
The CNF-SAT problem takes a Boolean formula in CNF form and checks whether there is
any assignment of Boolean values to the variables such that the formula evaluates to 1.
Theorem: CNF-SAT is NP-complete.
Proof: Let S be the Boolean formula, for which we can construct a simple non-
deterministic algorithm which can guess the values of the variables in the Boolean formula and then
evaluate each clause of S. If all the clauses evaluate to 1 then S is satisfied. Thus CNF-SAT
is NP-complete.
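The "guess an assignment, then evaluate every clause" argument can be illustrated with a brute-force checker (exponential and for illustration only; the encoding of a clause as (variable, polarity) pairs is a choice made here, not from the text):

```python
from itertools import product

# S is satisfiable iff some assignment makes every clause contain at
# least one literal whose value matches its polarity.
def is_satisfiable(num_vars, clauses):
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[v] == pol for v, pol in clause)
               for clause in clauses):
            return True
    return False

# (x0 + x1)(~x0 + x2)(~x1 + ~x2): satisfiable, e.g. x0=1, x1=0, x2=1.
clauses = [[(0, True), (1, True)],
           [(0, False), (2, True)],
           [(1, False), (2, False)]]
print(is_satisfiable(3, clauses))  # True
# (x0)(~x0): unsatisfiable.
print(is_satisfiable(1, [[(0, True)], [(0, False)]]))  # False
```

A non-deterministic machine replaces the outer loop over all 2^n assignments with a single guess, and only the polynomial-time inner evaluation remains, which is why CNF-SAT is in NP.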
2) 3SAT problem: A 3SAT problem takes a Boolean formula S in CNF form with each
clause having exactly three literals and checks whether S is satisfiable or not. [Note that CNF
means each literal is ORed to form a clause, and each clause is ANDed to form the Boolean
formula S.]


The following formula is an instance of the 3SAT problem:
(a + b + g)(c + e + f)(b + d + f)(a + e + h)
Theorem: 3SAT is NP-complete.
Proof: Let S be a Boolean formula having 3 literals in each clause, for which we can
construct a simple non-deterministic algorithm which can guess an assignment of Boolean values
to S. If S evaluates to 1 then S is satisfied. Thus we can prove that 3SAT is NP-complete.

Ans. (b) Clique Decision Problem (CDP): A clique in an undirected graph G is a subset
of vertices, every pair of which is joined by an edge; the clique decision problem asks whether G
contains a clique of size at least k. Conjunctive Normal Form (CNF) satisfiability reduces to the
clique decision problem. Using transitivity of reducibility and the above result, we conclude that
satisfiability reduces to CNF-satisfiability, and hence satisfiability reduces to the clique
decision problem, so the clique decision problem is NP-hard. Since the clique decision problem belongs
to NP, the clique decision problem is also NP-complete.
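What puts CDP in NP is that a claimed clique is easy to verify. A minimal sketch of that certificate check (the adjacency-set graph encoding is illustrative):

```python
from itertools import combinations

# Given a graph, a candidate vertex subset, and k, verify the subset has
# size >= k and every pair of its vertices is adjacent. This runs in
# polynomial time, which is why CDP is in NP.
def is_clique(adj, subset, k):
    return (len(subset) >= k and
            all(v in adj[u] for u, v in combinations(subset, 2)))

# Triangle 0-1-2 plus a pendant vertex 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(is_clique(adj, [0, 1, 2], 3))  # True
print(is_clique(adj, [1, 2, 3], 3))  # False: 1 and 3 are not adjacent
```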

Ans. (c) Reducibility: To prove whether a particular problem is NP-complete or not, we
use polynomial-time reducibility, which is transitive: if A <=poly B and B <=poly C, then
A <=poly C.
Reduction is an important task in NP-completeness proofs. This can be illustrated by
Fig.(1). Various types of reductions are:
Local replacement: In this reduction A -> B by dividing the input to A into
components, and then these components can be converted into components of B.
Component design: In this reduction A -> B by building special components for the input
of B that enforce properties required by A.
[Fig.(1): Reductions in NP-completeness. Every problem in NP reduces to CIRCUIT-SAT,
which reduces to CNF-SAT, which reduces to 3SAT; 3SAT in turn reduces to problems such as
VERTEX COVER, CLIQUE, SET COVER, SUBSET SUM and HAMILTONIAN-CYCLE, which
further reduce to KNAPSACK and TSP.]


Q.9.(a) State & prove Cook's theorem. (10)
Ans. Cook's theorem states that the Boolean satisfiability problem (SAT) is NP-complete:
any problem in NP can be reduced in polynomial time by a deterministic Turing machine to
the problem of determining whether a Boolean formula is satisfiable.
Proof idea:
(i) First show that SAT is in NP.
(ii) Then show that every problem in NP is polynomial-time reducible to SAT.
Proof (i):
It is easy to see that SAT is in NP. Simply non-deterministically choose Boolean values
for each variable. Then check whether the expression is true or false for those values.
If it is true, the answer is "yes."
If it is false, the answer is "no."
Proof (ii):
Let A be a problem in NP.
Then there is a non-deterministic Turing machine M that decides A in polynomial time.
Let the run time of M be at most n^k, for some k > 0. When M is run, it can access at most the
first n^k + 1 cells.
Example: Cook's theorem proved SATISFIABILITY hard by using a polynomial-
time reduction translating each problem in NP into an instance of SAT. Since a polynomial-time
algorithm for SAT would imply a polynomial-time algorithm for everything in NP, SAT is NP-hard.
Since we can guess a solution to SAT, it is in NP and thus NP-complete.
The proof of Cook's theorem, while quite clever, was certainly difficult and complicated.
We had to show that all problems in NP could be reduced to SAT to make sure we did not miss
a hard one. But now we have a known NP-complete problem in SAT. For any other problem X,
since the composition of two polynomial-time reductions can be done in polynomial time, all
we need to show is that SAT reduces to X, that is, any instance of SAT can be translated to an
instance of X in polynomial time.
[Figure: all problems in NP reduce to SAT (Cook's theorem); a polynomial-time reduction
from SAT to X, followed by a polynomial-time solver for X answering yes/no, would give a
polynomial algorithm for everything in NP, i.e. P = NP.]

Q.9.(b) Show that job sequencing with deadlines is an NP-hard problem. (10)
Ans. The problem is stated as below.
There are n jobs to be processed on a machine.
Each job i has a deadline di >= 0 and a profit pi > 0.
The profit pi is earned iff the job is completed by its deadline.
A job is completed if it is processed on the machine for unit time.
Only one machine is available for processing jobs.
Only one job is processed at a time on the machine.
A feasible solution is a subset of jobs J such that each job in J is completed
by its deadline.
An optimal solution is a feasible solution with maximum profit value.
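The feasibility test above can be sketched as follows (a hypothetical helper, not from the paper; it uses the standard fact that a subset is feasible iff processing its jobs in nondecreasing deadline order meets every deadline):

```python
# jobs: list of (deadline, profit) pairs for a chosen subset J.
# Returns the total profit if J is feasible, else None. Each job takes
# one unit of processing time on the single machine.
def feasible_profit(jobs):
    total, t = 0, 0
    for d, pft in sorted(jobs):   # process in nondecreasing deadline order
        t += 1                    # this job finishes at time t
        if t > d:
            return None           # some job misses its deadline
        total += pft
    return total

print(feasible_profit([(2, 100), (1, 27)]))  # 127: both jobs fit
print(feasible_profit([(1, 100), (1, 27)]))  # None: both need the first slot
```

Deciding whether some feasible subset reaches a given profit is the decision form whose hardness is argued below.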
A scheduling problem is NP-hard in the ordinary sense if partition (or a similar problem)
can be reduced to this problem with a polynomial-time algorithm, and there is an algorithm with
pseudo-polynomial time complexity that solves the scheduling problem.
