UNIT IV:
Backtracking: General method, applications - n-queen problem, sum of subsets problem, graph
coloring, Hamiltonian cycles.
Branch and Bound: General method, applications - Travelling salesperson problem, 0/1
knapsack problem - LC Branch and Bound solution, FIFO Branch and Bound solution.
Backtracking (General method)
Many problems are difficult to solve algorithmically. Backtracking makes it possible to solve at
least some large instances of difficult combinatorial problems.
Suppose you have to make a series of decisions among various choices, where
     You don’t have enough information to know what to choose
     Each decision leads to a new set of choices.
     Some sequence of choices (possibly more than one) may be a solution to your problem.
Backtracking is a methodical (logical) way of trying out various sequences of decisions until
you find one that “works”.
Example 1: Maze (a tour puzzle)
Given a maze, find a path from start to finish.
    In a maze, at each intersection you have to decide among three or fewer choices:
               Go straight
               Go left
               Go right
    You don’t have enough information to choose correctly
    Each choice leads to another set of choices.
    One or more sequences of choices may or may not lead to a solution.
    Many types of maze problem can be solved with backtracking.
Example 2 (textbook): Sorting
Sorting the array of integers a[1:n] is a problem whose solution is expressible as an n-tuple,
where xi is the index in 'a' of the ith smallest element.
The criterion function P is the inequality a[xi] ≤ a[xi+1] for 1 ≤ i < n.
Each Si is finite and includes the integers 1 through n.
mi = size of set Si
m = m1·m2·…·mn is the number of n-tuples that are possible candidates for satisfying the function P.
A brute force approach would be to form all these n-tuples, evaluate (judge) each one with P,
and save those which yield the optimum.
The backtracking algorithm yields the same answer with far fewer than m trials.
Many of the problems we solve using backtracking require that all the solutions satisfy a
complex set of constraints.
For any problem these constraints can be divided into two categories:
    Explicit constraints.
    Implicit constraints.
Explicit constraints: Explicit constraints are rules that restrict each xi to take on values only
from a given set.
Example: xi ≥ 0, i.e., Si = {all non-negative real numbers}
xi = 0 or 1, i.e., Si = {0, 1}
li ≤ xi ≤ ui, i.e., Si = {a : li ≤ a ≤ ui}
The explicit constraints depend on the particular instance I of the problem being solved. All
tuples that satisfy the explicit constraints define a possible solution space for I.
Implicit Constraints:
The implicit constraints are rules that determine which of the tuples in the solution space of I
satisfy the criterion function. Thus implicit constraints describe the way in which the xi must
relate to each other.
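To make the general method concrete, the following is a minimal Python sketch of the generic
backtracking scheme (the names backtrack, candidates, is_feasible and report are illustrative,
not from the textbook): candidates encodes the explicit constraints, is_feasible the implicit
constraints (the bounding test), and report is called on each complete solution tuple.

def backtrack(k, x, n, candidates, is_feasible, report):
    # x[0:k] holds the decisions made so far
    if k == n:
        report(list(x))              # a complete n-tuple satisfying all constraints
        return
    for value in candidates(k, x):   # explicit constraints: values x[k] may take
        x.append(value)
        if is_feasible(x):           # implicit constraints: prune this subtree otherwise
            backtrack(k + 1, x, n, candidates, is_feasible, report)
        x.pop()                      # undo the decision and try the next value

Each application below (n-queens, sum of subsets, graph coloring, Hamiltonian cycles) is an
instantiation of this template with its own candidate set and feasibility test.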
Applications of Backtracking:
     N Queens Problem
     Sum of subsets problem
     Graph coloring
     Hamiltonian cycles.
N-Queens Problem:
It is a classic combinatorial problem. The eight queens puzzle is the problem of placing eight
queens on an 8×8 chessboard so that no two queens attack each other; that is, no two of them
are on the same row, column, or diagonal.
The 8-queens puzzle is an example of the more general n-queens problem of placing n queens on
an n×n chessboard.
The queens can be numbered 1 through 8.
Each queen must be on a different row; assume queen i is to be placed on row i.
All solutions to the 8-queens problem can therefore be represented as 8-tuples (x1, x2, …, x8),
where xi is the column on which queen i is placed, and Si = {1, 2, 3, 4, 5, 6, 7, 8}, 1 ≤ i ≤ 8.
Therefore the solution space consists of 8^8 8-tuples.
The implicit constraints for this problem are that no two xi's can be the same column and no two
queens can be on the same diagonal.
These two constraints reduce the size of the solution space from 8^8 tuples to 8! tuples.
For example, (4, 6, 8, 2, 7, 1, 3, 5) is one such 8-tuple.
In the same way, when n queens are to be placed on an n×n chessboard, the solution space
consists of all n! permutations of the n-tuple (1, 2, …, n).
                              Some solution to the 8-Queens problem
Algorithm for placing a new queen:

Algorithm Place(k, i)
// Returns true if a queen can be placed in the kth row and ith column, and false otherwise.
{
    for j := 1 to k-1 do
        if ((x[j] = i) or (Abs(x[j] - i) = Abs(j - k))) then return false;
    return true;
}

All solutions to the n-queens problem:

Algorithm NQueens(k, n)
// Prints all possible placements of n queens on an n×n chessboard.
{
    for i := 1 to n do
    {
        if Place(k, i) then
        {
            x[k] := i;
            if (k = n) then write(x[1:n]);
            else NQueens(k+1, n);
        }
    }
}
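The pseudocode above translates almost line for line into Python; the sketch below (0-indexed,
with my own function names) counts the solutions for n = 8.

def place(x, k, i):
    # True if a queen can go in row k, column i, given queens in rows 0..k-1.
    for j in range(k):
        if x[j] == i or abs(x[j] - i) == abs(j - k):
            return False
    return True

def n_queens(k, n, x, solutions):
    # Try every column for the queen in row k.
    for i in range(n):
        if place(x, k, i):
            x[k] = i
            if k == n - 1:
                solutions.append(list(x))
            else:
                n_queens(k + 1, n, x, solutions)

solutions = []
n_queens(0, 8, [0] * 8, solutions)
print(len(solutions))   # prints 92, the number of solutions to the 8-queens problem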
Sum of Subsets Problem:
Given positive numbers wi, 1 ≤ i ≤ n, and m, the sum of subsets problem is to find all subsets of
the wi whose sums are m.
Definition: Given n distinct positive numbers (usually called weights), we want to find all
combinations of these numbers whose sums are m. This is called the sum of subsets problem.
The problem can be formulated using either fixed-sized tuples or variable-sized tuples.
The backtracking solution below uses the fixed-size tuple strategy.
For example:
If n=4 (w1, w2, w3, w4)=(11,13,24,7) and m=31.
Then desired subsets are (11, 13, 7) & (24, 7).
The two solutions are described by the vectors (1, 2, 4) and (3, 4).
In general all solution are k-tuples (x1, x2, x3---xk) 1 ≤ k ≤ n, different solutions may have
different sized tuples.
    Explicit constraints require xi ∈ {j | j is an integer and 1 ≤ j ≤ n}.
    Implicit constraints require:
     no two xi be the same, and the sum of the corresponding wi's be m;
     e.g., (1, 2, 4) and (1, 4, 2) represent the same subset. Another constraint, which avoids
     such duplicates, is xi < xi+1 for 1 ≤ i < k.
wi = weight of item i
m = capacity of the bag (the required subset sum)
xi = the ith element of the solution vector, which is either one or zero,
depending on whether the weight wi is included or not:
if xi = 1 then wi is chosen; if xi = 0 then wi is not chosen.
The bounding function Bk(x1, …, xk) used below is true only if
    (w1x1 + … + wkxk) + (wk+1 + … + wn) ≥ m   and   (w1x1 + … + wkxk) + wk+1 ≤ m.
If either condition fails, the partial vector x1, x2, …, xk cannot lead to an answer node, so the
corresponding subtree need not be explored.
Recursive backtracking algorithm for the sum of subsets problem

Algorithm SumOfSub(s, k, r)
// Find all subsets of w[1:n] that sum to m. The w[j]'s are in nondecreasing order.
// s = w[1]x[1] + … + w[k-1]x[k-1] and r = w[k] + … + w[n].
{
    // Generate the left child; note s + w[k] ≤ m because B(k-1) is true.
    x[k] := 1;
    if (s + w[k] = m) then write(x[1:k]);   // subset found
    else if (s + w[k] + w[k+1] ≤ m) then
        SumOfSub(s + w[k], k+1, r - w[k]);
    // Generate the right child and evaluate B(k).
    if ((s + r - w[k] ≥ m) and (s + w[k+1] ≤ m)) then
    {
        x[k] := 0;
        SumOfSub(s, k+1, r - w[k]);
    }
}
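A Python transcription of SumOfSub is sketched below (0-based indexing and my own names; it
assumes the weights are sorted in nondecreasing order, as the bounding tests require).

def sum_of_subsets(w, m):
    # Collect each solution as a 0/1 prefix vector x[0..k] whose chosen weights sum to m.
    n, x, found = len(w), [0] * len(w), []

    def rec(s, k, r):
        # s = weight included so far, r = total weight still available in w[k:].
        x[k] = 1
        if s + w[k] == m:
            found.append(list(x[:k + 1]))                 # subset found
        elif k + 1 < n and s + w[k] + w[k + 1] <= m:
            rec(s + w[k], k + 1, r - w[k])
        # try leaving w[k] out, if a solution can still be reached
        if s + r - w[k] >= m and k + 1 < n and s + w[k + 1] <= m:
            x[k] = 0
            rec(s, k + 1, r - w[k])

    rec(0, 0, sum(w))
    return found

print(sum_of_subsets([7, 11, 13, 24], 31))   # finds {7, 11, 13} and {7, 24}

With the example weights (11, 13, 24, 7) sorted to (7, 11, 13, 24) and m = 31, this reports the two
desired subsets found above.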
Graph Coloring:
Let G be an undirected graph and m a given positive integer. The graph coloring problem is to
assign colors to the vertices of the graph so that no two adjacent vertices are assigned the same
color, yet only m colors are used.
The optimization version calls for coloring a graph using the minimum number of colors.
The decision version, known as k-coloring, asks whether a graph is colorable using at most k
colors.
Note that if d is the maximum degree of the given graph, then it can be colored with d+1 colors.
The m-colorability optimization problem asks for the smallest integer m for which the graph G
can be colored. This integer is referred to as the “chromatic number” of the graph.
Example
            The above graph can be colored with 3 colors: 1, 2, and 3.
            The color of each node is indicated next to it.
            3 colors are needed to color this graph, and hence this graph's chromatic number
               is 3.
      A graph is said to be planar iff it can be drawn in a plane (flat) in such a way that no two
       edges cross each other.
      A special case of the m-colorability decision problem is the 4-color problem for planar
       graphs: given any map, can the regions be colored in such a way that no two adjacent
       regions have the same color, yet only 4 colors are used?
      To solve this problem, graphs are very useful, because a map can easily be transformed
       into a graph.
      Each region of the map becomes a node, and if two regions are adjacent, then the
       corresponding nodes are joined by an edge.
           o Example:
                    o
    The above map requires 4 colors.
     For many years it was known that 5 colors suffice to color any map, but whether 4 colors
       are always enough remained open.
    More than a century after the question was posed, the problem was settled by a group of
       mathematicians with the help of a computer: they showed that 4 colors are sufficient.
Suppose we represent a graph by its adjacency matrix G[1:n, 1:n]
       Ex:
Here G[i, j]=1 if (i, j) is an edge of G, and G[i, j]=0 otherwise.
Colors are represented by the integers 1, 2, …, m and the solutions are given by the n-tuple
(x1, x2, …, xn), where xi = the color of node i.
State space tree for n = 3 nodes and m = 3 colors:
 the 1st node can be colored in 3 ways,
 the 2nd node can be colored in 3 ways,
 the 3rd node can be colored in 3 ways,
 so the graph can be colored in 3^3 = 27 possible ways.
Finding all m-colorings of a graph:

Algorithm mColoring(k)
// g[1:n, 1:n] is the boolean adjacency matrix of the graph.
// k is the index (node) of the next vertex to color.
{
    repeat
    {
        NextValue(k);                 // assign to x[k] a legal color
        if (x[k] = 0) then return;    // no new color possible
        if (k = n) then write(x[1:n]);
        else mColoring(k+1);
    } until (false);
}

Getting the next color:

Algorithm NextValue(k)
// x[1], …, x[k-1] have been assigned integer values in the range [1, m].
{
    repeat
    {
        x[k] := (x[k] + 1) mod (m + 1);   // next highest color
        if (x[k] = 0) then return;        // all colors have been used
        for j := 1 to n do
        {
            if ((g[k, j] ≠ 0) and (x[k] = x[j])) then break;
        }
        if (j = n + 1) then return;       // new color found
    } until (false);
}
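For reference, a compact Python sketch of the same search (my own naming, colors 1..m,
adjacency matrix g) is given below.

def m_colorings(g, m):
    # Return all assignments of colors 1..m to the n vertices of g (an n x n 0/1
    # adjacency matrix) such that adjacent vertices receive different colors.
    n, x, out = len(g), [0] * len(g), []

    def color(k):
        for c in range(1, m + 1):
            # c is legal for vertex k if no already-colored neighbour uses it
            if all(not (g[k][j] and x[j] == c) for j in range(n)):
                x[k] = c
                if k == n - 1:
                    out.append(list(x))
                else:
                    color(k + 1)
                x[k] = 0          # backtrack

    color(0)
    return out

# A triangle (3 mutually adjacent vertices) has chromatic number 3:
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(len(m_colorings(triangle, 3)))   # 6 proper colorings (and none with only 2 colors)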
 Example (from a previous question paper): the adjacency matrix is
Hamiltonian Cycles:
   Def: Let G = (V, E) be a connected graph with n vertices. A Hamiltonian cycle is a round-
       trip path along n edges of G that visits every vertex once and returns to its starting
       position.
   It is also called the Hamiltonian circuit.
   Hamiltonian circuit is a graph cycle (i.e., closed loop) through a graph that visits each
      node exactly once.
   A graph possessing a Hamiltonian cycle is said to be a Hamiltonian graph.
      Example:
            In a graph G, a Hamiltonian cycle begins at some vertex v1 ∈ G, the vertices
               of G are visited in the order v1, v2, …, vn+1 with vn+1 = v1, and each edge
               (vi, vi+1) is in E, 1 ≤ i ≤ n.
 The first graph above (g1) contains the Hamiltonian cycle 1, 2, 8, 7, 6, 5, 4, 3, 1.
 The second graph contains no Hamiltonian cycle.
      There is no known easy way to determine whether a given graph contains a
       Hamiltonian cycle.
      Using the backtracking method, however, we can search for one systematically.
         A backtracking algorithm can find all the Hamiltonian cycles in a graph.
         The graph may be directed or undirected. Only distinct cycles are output.
         For graph g1 the backtracking solution vector is (1, 2, 8, 7, 6, 5, 4, 3); returning to
            vertex 1 closes the cycle.
         The backtracking solution vector is (x1, x2, …, xn), where
            xi = the ith visited vertex of the proposed cycle.
              Using backtracking, we need to determine how to compute the set of possible
                 vertices for xk, given that x1, x2, …, xk-1 have already been chosen.
                 If k = 1, then x1 can be any of the n vertices.
  Using the “NextValue” algorithm, the recursive backtracking scheme finds all Hamiltonian
cycles.
  The algorithm is started by first initializing the adjacency matrix G[1:n, 1:n], then setting x[2:n]
to zero and x[1] to 1, and then executing Hamiltonian(2).
            Generating the next vertex:

Algorithm NextValue(k)
// x[1:k-1] is a path of k-1 distinct vertices.
// If x[k] = 0, then no vertex has yet been assigned to x[k].
{
    repeat
    {
        x[k] := (x[k] + 1) mod (n + 1);    // next vertex
        if (x[k] = 0) then return;
        if (G[x[k-1], x[k]] ≠ 0) then      // is there an edge from x[k-1] to x[k]?
        {
            for j := 1 to k-1 do            // check for distinctness
                if (x[j] = x[k]) then break;
            if (j = k) then                 // if true, the vertex is distinct
                if ((k < n) or ((k = n) and (G[x[n], x[1]] ≠ 0))) then
                    return;
        }
    } until (false);
}

            Finding all Hamiltonian cycles:

Algorithm Hamiltonian(k)
{
    repeat
    {
        NextValue(k);       // assign a legal next value to x[k]
        if (x[k] = 0) then return;
        if (k = n) then write(x[1:n]);
        else Hamiltonian(k+1);
    } until (false);
}
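As a sketch, the same search in Python (vertex 0 fixed as the start, adjacency matrix G; function
and variable names are mine):

def hamiltonian_cycles(G):
    # Return every Hamiltonian cycle of G as a vertex sequence starting and
    # ending at vertex 0 (each undirected cycle appears once per direction).
    n, x, cycles = len(G), [0] * len(G), []

    def extend(k):
        for v in range(1, n):
            # v must be adjacent to the previous vertex and not already used
            if G[x[k - 1]][v] and v not in x[:k]:
                x[k] = v
                if k == n - 1:
                    if G[v][x[0]]:              # closing edge back to the start
                        cycles.append(x + [x[0]])
                else:
                    extend(k + 1)
                x[k] = 0

    extend(1)
    return cycles

# The 4-cycle 0-1-2-3-0:
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(hamiltonian_cycles(C4))   # [[0, 1, 2, 3, 0], [0, 3, 2, 1, 0]]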
                                         Branch & Bound
   Branch & Bound (B&B) is a general algorithm (or systematic method) for finding an optimal
   solution to various optimization problems, especially in discrete and combinatorial
   optimization.
    The B&B strategy is very similar to backtracking in that a state space tree is used to solve
   a problem.
    The differences are that the B&B method:
    does not limit us to any particular way of traversing the tree;
    is used only for optimization problems;
    is applicable to a wide variety of discrete combinatorial problems.
    B&B is a rather general optimization technique that applies where the greedy method and
   dynamic programming fail.
    It is much slower; indeed, it often leads to exponential time complexities in the worst case.
    The term B&B refers to all state space search methods in which all children of the “E-
   node” are generated before any other “live node” can become the “E-node”.
    A live node is a node that has been generated but whose children have not yet all been
   generated.
    An E-node is a live node whose children are currently being explored.
    A dead node is a generated node that is not to be expanded or explored any further. All
   children of a dead node have already been expanded.
    BFS and D-search (DFS) are two graph search strategies in which the exploration of a new
   node cannot begin until the node currently being explored is fully explored.
    Both BFS and D-search (DFS) generalize to B&B strategies.
    A BFS-like state space search is called FIFO (First In First Out) search, as the list of
   live nodes is a first-in-first-out list (queue).
    A D-search (DFS)-like state space search is called LIFO (Last In First Out) search,
   as the list of live nodes is a last-in-first-out list (stack).
    In branch and bound, as in backtracking, bounding functions are used to help avoid the
   generation of sub-trees that do not contain an answer node.
    We use 3 types of search strategies in branch and bound:
   1) FIFO (First In First Out) search
   2) LIFO (Last In First Out) search
   3) LC (Least Cost) search
   FIFO B&B:
   FIFO Branch & Bound is a BFS-like search.
   Here the children of the E-node (live nodes) are inserted into a queue.
   Implementation of the list of live nodes as a queue:
    Delete() removes the head of the queue
    Add() adds a node to the end of the queue
   Assume that node 12 is an answer node. In FIFO search, node 1 is taken as the first E-node.
   LIFO B&B:
   LIFO Branch & Bound is a D-search (DFS-like) search.
   Here the children of the E-node (live nodes) are inserted into a stack.
   Implementation of the list of live nodes as a stack:
    Delete() removes the top of the stack
    Add() adds a node to the top of the stack
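   The only difference between FIFO and LIFO branch and bound is the structure holding the
   live nodes. A minimal Python sketch follows (children and is_answer are hypothetical,
   problem-specific callbacks; bounding functions that kill unpromising children are omitted):

   from collections import deque

   def fifo_bb(root, children, is_answer):
       live = deque([root])               # queue of live nodes
       while live:
           e_node = live.popleft()        # Delete(): remove the head of the queue
           for child in children(e_node):
               if is_answer(child):
                   return child
               live.append(child)         # Add(): insert at the end of the queue
       return None

   def lifo_bb(root, children, is_answer):
       live = [root]                      # stack of live nodes
       while live:
           e_node = live.pop()            # Delete(): remove the top of the stack
           for child in children(e_node):
               if is_answer(child):
                   return child
               live.append(child)         # Add(): push onto the top of the stack
       return None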
   Least Cost (LC) Search:
   The selection rule for the next E-node in FIFO or LIFO branch and bound is sometimes
   “blind”, i.e., the selection rule does not give any preference to a node that has a very good
   chance of getting the search to an answer node quickly.
   The search for an answer node can often be speeded up by using an “intelligent” ranking
   function, also called an approximate cost function Ĉ.
   The next E-node is the live node with the best Ĉ value.
   Branching: A set of solutions, which is represented by a node, can be partitioned into
   mutually exclusive subsets. Each subset in the partition is represented by a child of the
   original node.
   Lower bounding: An algorithm is available for calculating a lower bound on the cost of any
   solution in a given subset.
   Each node X in the search tree is associated with a cost C(X):
   C(X) = the cost of reaching the current node X (the E-node) from the root + the cost of
   reaching an answer node from X.
   Since the second term is usually not known, we use the estimate Ĉ(X) = g(X) + H(X), where
   g(X) is the cost of reaching X from the root and H(X) estimates the cost of reaching an
   answer node from X.
   Example: the 8-puzzle
   Cost function: Ĉ = g(x) + h(x),
   where h(x) = the number of misplaced tiles
   and g(x) = the number of moves made so far.
   Assumption: moving one tile in any direction costs 1.
  Note: In case of tie, choose the leftmost node.
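   As an illustration only, here is a self-contained Python sketch of LC search on the 8-puzzle
   with ĉ(x) = g(x) + h(x), h(x) being the number of misplaced tiles. The state encoding, helper
   names, and the visited set used to guarantee termination are my own additions, not the
   textbook's.

   import heapq

   GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)          # 0 denotes the blank square

   def h(state):
       # number of misplaced tiles (the blank is not counted)
       return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

   def moves(state):
       # states reachable by sliding one tile into the blank (each move costs 1)
       s, b = list(state), state.index(0)
       r, c = divmod(b, 3)
       for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
           nr, nc = r + dr, c + dc
           if 0 <= nr < 3 and 0 <= nc < 3:
               n = nr * 3 + nc
               s[b], s[n] = s[n], s[b]
               yield tuple(s)
               s[b], s[n] = s[n], s[b]

   def lc_search(start):
       # always expand the live node with the smallest c-hat = g + h
       live = [(h(start), 0, start)]
       best_g = {start: 0}
       while live:
           c_hat, g, state = heapq.heappop(live)     # E-node: best c-hat value
           if state == GOAL:
               return g                              # number of moves used
           for nxt in moves(state):
               if g + 1 < best_g.get(nxt, float("inf")):
                   best_g[nxt] = g + 1
                   heapq.heappush(live, (g + 1 + h(nxt), g + 1, nxt))
       return None

   print(lc_search((1, 2, 3, 4, 5, 6, 7, 0, 8)))     # one move from the goal -> prints 1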
   Travelling Salesman Problem:
   Def: Find a tour of minimum cost starting from a node S, going through all other nodes
   exactly once, and returning to the starting point S.
   The time complexity of TSP for the dynamic programming algorithm is O(n^2 2^n).
   For B&B algorithms on this problem, the worst-case complexity will not be any better than
   O(n^2 2^n), but good bounding functions enable these B&B algorithms to solve some
   problem instances in much less time than required by the dynamic programming algorithm.
   Let G = (V, E) be a directed graph defining an instance of TSP.
   Let cij = cost of edge <i, j>, with cij = ∞ if <i, j> ∉ E,
   and |V| = n, the total number of vertices.
   Assume that every tour starts and ends at vertex 1.
   Solution space S = {(1, Π, 1) | Π is a permutation of (2, 3, …, n)}; then |S| = (n-1)!.
   The size of S can be reduced by restricting S so that (1, i1, i2, …, in-1, 1) ∈ S iff
   <ij, ij+1> ∈ E for 0 ≤ j ≤ n-1, with i0 = in = 1.
   S can be organized into a state space tree.
  Consider the following Example
           State space tree for the travelling salesperson problem with n=4 and i0=i4=1
  The above diagram shows tree organization of a complete graph with |V|=4.
  Each leaf node ‘L’ is a solution node and represents the tour defined by the path from the root
  to L.
   Node 12 represents the tour i0=1, i1=2, i2=4, i3=3, i4=1.
   Node 14 represents the tour i0=1, i1=3, i2=4, i3=2, i4=1.
   TSP solved using LC Branch & Bound:
   To use LCBB to search the travelling salesperson state space tree, we first define a cost
   function C(·) and two other functions Ĉ(·) and u(·)
   such that Ĉ(r) ≤ C(r) ≤ u(r) for all nodes r.
   The cost C(·) is such that the solution node with least C(·) corresponds to a shortest tour in G.
   C(A) = {the length of the tour defined by the path from the root to A, if A is a leaf;
           the cost of a minimum-cost leaf in the sub-tree rooted at A, if A is not a leaf}.
   Since Ĉ(r) ≤ C(r) is required, a simple choice for Ĉ(A) is the length of the path defined at node A.
   From the previous example, the path defined at node 6 is i0, i1, i2 = 1, 2, 4, and it consists of the
   edges <1,2> and <2,4>.
   A better Ĉ(·) can be obtained by using the reduced cost matrix corresponding to G:
    A row (column) is said to be reduced iff it contains at least one zero and all remaining
   entries are non-negative.
    A matrix is reduced iff every row and every column is reduced.
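   The row/column reduction used in the worked example below can be written as a small
   helper. This is a sketch (with float('inf') standing for missing edges) that returns the reduced
   matrix together with the total amount subtracted, which is the lower bound RCL.

   INF = float("inf")

   def reduce_matrix(a):
       # Subtract the row minimum from every row and then the column minimum from
       # every column; return (reduced copy, total reduction = RCL lower bound).
       n, total = len(a), 0
       m = [row[:] for row in a]
       for i in range(n):
           least = min(m[i])
           if 0 < least < INF:
               total += least
               m[i] = [v - least if v < INF else v for v in m[i]]
       for j in range(n):
           least = min(m[i][j] for i in range(n))
           if 0 < least < INF:
               total += least
               for i in range(n):
                   if m[i][j] < INF:
                       m[i][j] -= least
       return m, total

   For the cost matrix of the example that follows (not reproduced here), the total reduction
   works out to 25, which becomes the cost of node 1.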
  Given the following cost matrix:
     The TSP starts from node 1: Node 1
     Reduced Matrix: To get the lower bound of the path starting at node 1
         Row 1: reduce by 10.      Row 2: reduce by 2.       Row 3: reduce by 2.
         Row 4: reduce by 3.       Row 5: reduce by 4.       Column 1: reduce by 1.
         Column 2: already reduced.   Column 3: reduce by 3.   Column 4: already reduced.
         Column 5: already reduced.
  The reduced cost is: RCL = 25
  So the cost of node 1 is: Cost (1) = 25
  The reduced matrix is:
     Choose to go to vertex 2: Node 2
        -   Cost of edge <1,2> is: A(1,2) = 10
        -   Set row #1 = inf since we are choosing edge <1,2>
        -   Set column # 2 = inf since we are choosing edge <1,2>
        -   Set A(2,1) = inf
        -   The resulting cost matrix is:
         -   The matrix is reduced:
         -   RCL = 0
         -   The cost of node 2 (Considering vertex 2 from vertex 1) is:
         Cost(2) = cost(1) + A(1,2) = 25 + 10 = 35
     Choose to go to vertex 3: Node 3
         -   Cost of edge <1,3> is: A(1,3) = 17 (in the reduced matrix)
         -   Set row #1 = inf since we are starting from node 1
         -   Set column # 3 = inf since we are choosing edge <1,3>
         -   Set A(3,1) = inf
         -   The resulting cost matrix is:
         Reduce the matrix: Rows are reduced
                            The columns are reduced except for column # 1:
                            Reduce column 1 by 11:
         The lower bound is: RCL = 11
         The cost of going through node 3 is:
         cost(3) = cost(1) + RCL + A(1,3) = 25 + 11 + 17 = 53
     Choose to go to vertex 4: Node 4
        Remember that the cost matrix is the one that was reduced at the starting vertex 1
        Cost of edge <1,4> is: A(1,4) = 0
        Set row #1 = inf since we are starting from node 1
        Set column # 4 = inf since we are choosing edge <1,4>
        Set A(4,1) = inf
        The resulting cost matrix is:
         Reduce the matrix: Rows are reduced
                             Columns are reduced
         The lower bound is: RCL = 0
         The cost of going through node 4 is:
         cost(4) = cost(1) + RCL + A(1,4) = 25 + 0 + 0 = 25
     Choose to go to vertex 5: Node 5
        -   Remember that the cost matrix is the one that was reduced at starting vertex 1
        -   Cost of edge <1,5> is: A(1,5) = 1
        -   Set row #1 = inf since we are starting from node 1
        -   Set column # 5 = inf since we are choosing edge <1,5>
        -   Set A(5,1) = inf
        -   The resulting cost matrix is:
         Reduce the matrix:
                   Reduce rows:
                   Reduce row #2: Reduce by 2
         Reduce row #4: Reduce by 3
         Columns are reduced
         The lower bound is: RCL = 2 + 3 = 5
         The cost of going through node 5 is:
         cost(5) = cost(1) + RCL + A(1,5) = 25 + 5 + 1 = 31
  In summary:
    So the live nodes we have so far are:
   2: cost(2) = 35, path: 1->2
   3: cost(3) = 53, path: 1->3
   4: cost(4) = 25, path: 1->4
   5: cost(5) = 31, path: 1->5
  Explore the node with the lowest cost: Node 4 has a cost of 25
  Vertices to be explored from node 4: 2, 3, and 5
   Now we start from the cost matrix at node 4:
     Choose to go to vertex 2: Node 6 (path is 1->4->2)
  Cost of edge <4,2> is: A(4,2) = 3
  Set row #4 = inf since we are considering edge <4,2>
  Set column # 2 = inf since we are considering edge <4,2>
  Set A(2,1) = inf
  The resulting cost matrix is:
  Reduce the matrix: Rows are reduced
                     Columns are reduced
  The lower bound is: RCL = 0
   The cost of node 6 (considering vertex 2 from vertex 4) is:
   cost(6) = cost(4) + RCL + A(4,2) = 25 + 0 + 3 = 28
     Choose to go to vertex 3: Node 7 ( path is 1->4->3 )
        Cost of edge <4,3> is: A(4,3) = 12
        Set row #4 = inf since we are considering edge <4,3>
        Set column # 3 = inf since we are considering edge <4,3>
        Set A(3,1) = inf
        The resulting cost matrix is:
         Reduce the matrix:
              Reduce row #3: by 2:
               Reduce column # 1: by 11
         The lower bound is: RCL = 13
          So the cost of node 7 (considering vertex 3 from vertex 4) is:
         Cost(7) = cost(4) + RCL + A(4,3) = 25 + 13 + 12 = 50
     Choose to go to vertex 5: Node 8 ( path is 1->4->5 )
        Cost of edge <4,5> is: A(4,5) = 0
        Set row #4 = inf since we are considering edge <4,5>
        Set column # 5 = inf since we are considering edge <4,5>
        Set A(5,1) = inf
        The resulting cost matrix is:
         Reduce the matrix:
         Reduced row 2: by 11
         Columns are reduced
         The lower bound is: RCL = 11
         So the cost of node 8 (Considering vertex 5 from vertex 4) is:
         Cost(8) = cost(4) + RCL + A(4,5) = 25 + 11 + 0 = 36
  In summary: So the live nodes we have so far are:
   2: cost(2) = 35, path: 1->2
   3: cost(3) = 53, path: 1->3
   5: cost(5) = 31, path: 1->5
   6: cost(6) = 28, path: 1->4->2
   7: cost(7) = 50, path: 1->4->3
   8: cost(8) = 36, path: 1->4->5
   Explore the node with the lowest cost: Node 6 has a cost of 28
   Vertices to be explored from node 6: 3 and 5
    Now we start from the cost matrix at node 6:
     Choose to go to vertex 3: Node 9 ( path is 1->4->2->3 )
        Cost of edge <2,3> is: A(2,3) = 11
        Set row #2 = inf since we are considering edge <2,3>
        Set column # 3 = inf since we are considering edge <2,3>
        Set A(3,1) = inf
        The resulting cost matrix is:
         Reduce the matrix: Reduce row #3: by 2
         Reduce column # 1: by 11
         The lower bound is: RCL = 2 +11 = 13
         So the cost of node 9 (Considering vertex 3 from vertex 2) is:
         Cost(9) = cost(6) + RCL + A(2,3) = 28 + 13 + 11 = 52
     Choose to go to vertex 5: Node 10 ( path is 1->4->2->5 )
        Cost of edge <2,5> is: A(2,5) = 0
         Set row #2 = inf since we are considering edge <2,5>
         Set column # 5 = inf since we are considering edge <2,5>
        Set A(5,1) = inf
        The resulting cost matrix is:
         Reduce the matrix:   Rows reduced
                              Columns reduced
         The lower bound is: RCL = 0
          So the cost of node 10 (considering vertex 5 from vertex 2) is:
          Cost(10) = cost(6) + RCL + A(2,5) = 28 + 0 + 0 = 28
  In summary: So the live nodes we have so far are:
   2: cost(2) = 35, path: 1->2
   3: cost(3) = 53, path: 1->3
   5: cost(5) = 31, path: 1->5
   7: cost(7) = 50, path: 1->4->3
   8: cost(8) = 36, path: 1->4->5
   9: cost(9) = 52, path: 1->4->2->3
    10: cost(10) = 28, path: 1->4->2->5
   Explore the node with the lowest cost: Node 10 has a cost of 28
   Vertices to be explored from node 10: 3
    Now we start from the cost matrix at node 10:
     Choose to go to vertex 3: Node 11 ( path is 1->4->2->5->3 )
        Cost of edge <5,3> is: A(5,3) = 0
        Set row #5 = inf since we are considering edge <5,3>
        Set column # 3 = inf since we are considering edge <5,3>
        Set A(3,1) = inf
        The resulting cost matrix is:
         Reduce the matrix: Rows reduced
                            Columns reduced
         The lower bound is: RCL = 0
          So the cost of node 11 (considering vertex 3 from vertex 5) is:
         Cost(11) = cost(10) + RCL + A(5,3) = 28 + 0 + 0 = 28
   Node 11 is a solution node with cost 28, and every remaining live node has a larger cost, so
   the search terminates: the optimal tour is 1 -> 4 -> 2 -> 5 -> 3 -> 1 with length 28.
   State Space Tree:
   0/1 Knapsack Problem
   What is the knapsack problem: the knapsack problem is a problem in combinatorial optimization.
   Given a set of items, each with a weight and a value, determine the number of each item to
   include in a collection so that the total weight is less than or equal to a given limit and the total
   value is as large as possible.
   The 0/1 knapsack problem can be formulated as follows. Let there be n items z1 to zn, where zi
   has value pi and weight wi. The maximum weight that the bag can carry is m.
   All values and weights are non-negative.
   Maximize the sum of the values of the items in the knapsack, so that the sum of the weights is
   at most the knapsack's capacity m.
   The formulation is:
      maximize Σ(i=1..n) pi·xi   subject to   Σ(i=1..n) wi·xi ≤ m,   xi = 0 or 1, 1 ≤ i ≤ n.
   To solve the 0/1 knapsack problem using B&B:
     Knapsack is a maximization problem.
    Replace the objective function Σ pi·xi by -Σ pi·xi to convert it into a minimization problem.
    The modified knapsack problem is stated as:
      minimize -Σ(i=1..n) pi·xi   subject to   Σ(i=1..n) wi·xi ≤ m,   xi = 0 or 1, 1 ≤ i ≤ n.
      Fixed tuple size solution space:
   o   Every leaf node in the state space tree representing an assignment for which
       Σ(i=1..n) wi·xi ≤ m is an answer node; all other leaf nodes are infeasible.
   o   For a minimum-cost answer node to correspond to an optimal solution, define
       c(x) = -Σ(i=1..n) pi·xi for every answer node x.
      For infeasible leaf nodes, c(x) = ∞.
      For non-leaf nodes,
              c(x) = min{c(lchild(x)), c(rchild(x))}.
    Define two functions ĉ(x) and u(x) such that for every node x,
                   ĉ(x) ≤ c(x) ≤ u(x).
      Computing ĉ(·) and u(·)

   Algorithm UBound(cp, cw, k, m)
   // cp: current (negated) profit total; cw: current weight total;
   // k: index of the last considered item; m: knapsack capacity.
   {
       b := cp; c := cw;
       for i := k+1 to n do
       {
           if (c + w[i] ≤ m) then
           {
               c := c + w[i]; b := b - p[i];
           }
       }
       return b;
   }
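   In Python the two bounds might be sketched as below (my own names; items are assumed
   sorted by nonincreasing p[i]/w[i]). ubound adds only whole items that still fit, exactly as
   UBound above, while cbound additionally takes a fraction of the first item that does not fit,
   giving the smaller value ĉ(x); both work on negated profits so that smaller means better,
   matching the minimization form of c(·).

   def ubound(cp, cw, k, m, p, w):
       # u(x): negated profit after greedily adding whole items k+1 .. n-1 that fit
       b, c = cp, cw
       for i in range(k + 1, len(w)):
           if c + w[i] <= m:
               c += w[i]
               b -= p[i]
       return b

   def cbound(cp, cw, k, m, p, w):
       # c-hat(x): like ubound, but a fraction of the first item that does not fit
       # may also be taken, so cbound(...) <= ubound(...)
       b, c = cp, cw
       for i in range(k + 1, len(w)):
           if c + w[i] <= m:
               c += w[i]
               b -= p[i]
           else:
               return b - (m - c) / w[i] * p[i]
       return b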
UNIT V:
NP-Hard and NP-Complete problems: Basic concepts, nondeterministic algorithms, NP-Hard
and NP-Complete classes, Cook's theorem.
Basic concepts:
NP stands for Nondeterministic Polynomial time.
Based on the computing times of their best-known algorithms, problems cluster into two groups:
Group 1:
   > Problems with solution times bounded by a polynomial of small degree.
   > These are also called tractable problems.
   > Most searching and sorting algorithms are polynomial-time algorithms.
   > Examples: ordered search O(log n), polynomial evaluation O(n), sorting O(n log n).

Group 2:
   > Problems with solution times not bounded by any polynomial (simply non-polynomial).
   > These are hard or intractable problems.
   > None of the problems in this group has been solved by any polynomial-time algorithm.
   > Examples: travelling salesperson O(n^2 2^n), knapsack O(2^(n/2)).
No one has been able to develop a polynomial time algorithm for any problem in the 2nd
group (i.e., group 2).
This matters because algorithms whose computing times grow faster than any polynomial
quickly require such vast amounts of time that even moderate-size problem instances cannot
be solved.
Theory of NP-Completeness:
It shows that many of the problems with no known polynomial-time algorithms are
computationally related.
There are two classes of such problems:
 1. NP-Hard
 2. NP-Complete
NP-Complete problem: A problem that is NP-Complete can be solved in polynomial time if
and only if (iff) all other NP-Complete problems can also be solved in polynomial time.
NP-Hard: If an NP-Hard problem can be solved in polynomial time, then all NP-Complete
problems can be solved in polynomial time.
All NP-Complete problems are NP-Hard, but some NP-Hard problems are not known to be NP-
Complete.
Nondeterministic Algorithms:
Algorithms with the property that the result of every operation is uniquely defined are termed
as deterministic algorithms. Such algorithms agree with the way programs are executed on a
computer.
Algorithms which contain operations whose outcomes are not uniquely defined but are
limited to a specified set of possibilities are called nondeterministic algorithms.
The machine executing such operations is allowed to choose any one of these outcomes
subject to a termination condition to be defined later.
To specify nondeterministic algorithms, three new functions are used:
 Choice(S): arbitrarily chooses one of the elements of the set S.
 Failure(): signals an unsuccessful completion.
 Success(): signals a successful completion.
Example of a nondeterministic algorithm:

Algorithm Search(x)
// Problem: search for an element x in A[1:n].
// Output J such that A[J] = x, or J = 0 if x is not in A.
{
    J := Choice(1, n);
    if (A[J] = x) then
    {
        write(J);
        Success();
    }
    else
    {
        write(0);
        Failure();
    }
}

Whenever there is a set of choices that leads to a successful completion, then one such set of
choices is always made and the algorithm terminates.
A nondeterministic algorithm terminates unsuccessfully if and only if (iff) there exists no set of
choices leading to a success signal.
                                   Nondeterministic knapsack algorithm

Algorithm DKP(p, w, n, m, r, x)
// p: given profits; w: given weights; n: number of elements;
// m: weight limit of the bag; r: a target profit.
// On completion, P is the final profit and W the final weight.
{
    W := 0; P := 0;
    for i := 1 to n do
    {
        x[i] := Choice(0, 1);
        W := W + x[i]*w[i];
        P := P + x[i]*p[i];
    }
    if ((W > m) or (P < r)) then Failure();
    else Success();
}
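A nondeterministic machine "guesses" the whole vector x in n Choice steps; a deterministic
simulation has to try the possibilities explicitly. The brute-force Python sketch below
(hypothetical naming) decides the same question DKP decides, but in exponential rather than
polynomial time, which is exactly the gap the P versus NP question is about.

from itertools import product

def dkp_decides(p, w, m, r):
    # True iff some 0/1 vector x has total weight <= m and total profit >= r,
    # i.e. iff some sequence of Choice(0, 1) outcomes leads DKP to Success().
    for x in product((0, 1), repeat=len(p)):
        W = sum(xi * wi for xi, wi in zip(x, w))
        P = sum(xi * pi for xi, pi in zip(x, p))
        if W <= m and P >= r:
            return True
    return False

print(dkp_decides(p=[1, 2, 5], w=[2, 3, 4], m=6, r=6))   # True: items 1 and 3 give weight 6, profit 6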
The Classes NP-Hard & NP-Complete:
For measuring the complexity of an algorithm, we use the input length as the parameter. For
example, an algorithm A is of polynomial complexity if there is a polynomial p() such that the
computing time of A is O(p(n)) for every input of size n.
Decision problem / decision algorithm: Any problem for which the answer is either zero or
one is a decision problem. Any algorithm for a decision problem is termed a decision
algorithm.
Optimization problem/ Optimization algorithm: Any problem that involves the
identification of an optimal (either minimum or maximum) value of a given cost function is
known as an optimization problem. An optimization algorithm is used to solve an
optimization problem.
P is the set of all decision problems solvable by deterministic algorithms in polynomial
time.
NP is the set of all decision problems solvable by nondeterministic algorithms in
polynomial time.
Since deterministic algorithms are just a special case of nondeterministic ones, we
conclude that P ⊆ NP.
            Commonly believed relationship between P & NP
 The most famous unsolved problem in computer science is whether P = NP or P ≠ NP.
 In considering this problem, S. Cook formulated the following question:
 Is there any single problem in NP such that, if we showed it to be in P, then that would
 imply that P = NP?
 Cook answered this question with the following theorem.
 Theorem: Satisfiability is in P if and only if (iff) P = NP.
 Notation of reducibility:
 Let L1 and L2 be problems. Problem L1 reduces to L2 (written L1 ∝ L2) iff there is a way to
 solve L1 by a deterministic polynomial-time algorithm that uses a deterministic algorithm
 solving L2 in polynomial time.
 This implies that if we have a polynomial-time algorithm for L2, then we can solve L1 in
 polynomial time.
 ∝ is a transitive relation, i.e., if L1 ∝ L2 and L2 ∝ L3 then L1 ∝ L3.
 A problem L is NP-Hard if and only if (iff) satisfiability reduces to L, i.e., Satisfiability ∝ L.
 A problem L is NP-Complete if and only if (iff) L is NP-Hard and L ∈ NP.
          Commonly believed relationship among P, NP, NP-Complete and NP-Hard
Most natural problems in NP are either in P or NP-complete.
Examples of NP-complete problems:
   > Packing problems: SET-PACKING, INDEPENDENT-SET.
   > Covering problems: SET-COVER, VERTEX-COVER.
   > Sequencing problems: HAMILTONIAN-CYCLE, TSP.
   > Partitioning problems: 3-COLOR, CLIQUE.
   > Constraint satisfaction problems: SAT, 3-SAT.
   > Numerical problems: SUBSET-SUM, PARTITION, KNAPSACK.
Cook's Theorem: States that satisfiability is in P if and only if P = NP.
(i) If P = NP, then satisfiability is in P (this direction is immediate, since satisfiability is in NP).
(ii) If satisfiability is in P, then P = NP.
To prove the second direction:
    > Let A be any polynomial-time nondeterministic decision algorithm and
       I an input of that algorithm.
       Then a formula Q(A, I) can be constructed such that Q is satisfiable iff A has a successful
       termination with input I.
    > If the length of I is n and the time complexity of A is p(n) for some polynomial p(),
       then the length of Q is O(p^3(n) log n) = O(p^4(n)).
       The time needed to construct Q is also O(p^3(n) log n).
    > A deterministic algorithm Z can determine the outcome of A on any input I:
       algorithm Z computes Q and then uses a deterministic algorithm for the
       satisfiability problem to determine whether Q is satisfiable.
           > If O(q(m)) is the time needed to determine whether a formula of length m is
       satisfiable, then the complexity of Z is O(p^3(n) log n + q(p^3(n) log n)).
           > If satisfiability is in P, then q(m) is a polynomial function of m and the
       complexity of Z becomes O(r(n)) for some polynomial r().
    > Hence, if satisfiability is in P, then for every nondeterministic algorithm A in NP, we
       can obtain a deterministic algorithm Z in P.
    This shows that if satisfiability is in P, then P = NP.