
MODULE IV

• Dynamic Programming
o Dynamic programming is an algorithm design method that can be used when the solution to a
problem can be viewed as the result of a sequence of decisions.
o Dynamic Programming is mainly an optimization over plain recursion.
o Wherever we see a recursive solution that has repeated calls for the same inputs, we can
optimize it using Dynamic Programming.
o The idea is to simply store the results of subproblems, so that we do not have to re-compute
them when needed later.
o This simple optimization reduces time complexities from exponential to polynomial.
o For example, if we write a simple recursive solution for Fibonacci Numbers, we get
exponential time complexity; if we optimize it by storing solutions of subproblems, the time
complexity reduces to linear.
o In dynamic programming an optimal sequence of decisions is obtained by making explicit
appeal to the principle of optimality.
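The Fibonacci example above can be sketched in Python: the plain recursion recomputes overlapping subproblems (exponential time), while the bottom-up version solves each subproblem exactly once (linear time).

```python
def fib_naive(n):
    # plain recursion: fib_naive(n-1) and fib_naive(n-2) both recompute
    # the same smaller subproblems, giving exponential time
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_dp(n):
    # bottom-up dynamic programming: keep only the last two subproblem
    # results, so each Fibonacci number is computed exactly once -- O(n)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_dp(30))   # 832040
```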

• The Optimality Principle


▪ Definition: The principle of optimality states that an optimal sequence of decisions has the
property that whatever the initial state and decision are, the remaining decisions must
constitute an optimal decision sequence with regard to the state resulting from the first
decision.
▪ A problem is said to satisfy the Principle of Optimality if the subsolutions of an optimal
solution of the problem are themselves optimal solutions for their subproblems.
▪ Examples:
• The shortest path problem satisfies the Principle of Optimality.
• This is because if a, x1, x2, . . . xn, b is a shortest path from node a to node b in a graph, then
the portion of that path from xi to xj is a shortest path from xi to xj.

• Characteristics of Dynamic Programming


1. Overlapping Subproblems
• Subproblems are smaller versions of the original problem. Any problem has
overlapping sub-problems if finding its solution involves solving the same subproblem
multiple times.
• Dynamic Programming also combines solutions to sub-problems. It is mainly used
where the solution of one sub-problem is needed repeatedly. The computed solutions
are stored in a table, so that these don’t have to be re-computed. Hence, this technique
is needed where overlapping sub-problem exists.
• For example, Binary Search does not have overlapping sub-problems, whereas the
recursive program for Fibonacci numbers has many overlapping sub-problems.

2. Optimal Substructure
• A given problem has Optimal Substructure Property, if the optimal solution of the given
problem can be obtained using optimal solutions of its sub-problems.
• For example, the Shortest Path problem has the following optimal substructure
property: If a node x lies in the shortest path from a source node u to destination
node v, then the shortest path from u to v is the combination of the shortest path
from u to x, and the shortest path from x to v.

Steps of Dynamic Programming


▪ Dynamic programming design involves 4 major steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from computed information.

Optimal matrix multiplication


▪ Suppose we wish to compute the product of 4 matrices A1 x A2 x A3 x A4
▪ The different parenthesizations are
( A1(A2( A3 A4)))
( A1((A2 A3) A4))
( (A1A2)( A3 A4))
(( A1(A2 A3))A4)
((( A1A2) A3)A4)

▪ We can multiply two matrices A and B if and only if they are compatible: The number of
columns of A must be equal to the number of rows of B.

▪ If A is a p X q matrix and B is a q X r matrix, the resulting matrix C is a p X r matrix.


▪ The time to compute C is pqr.
▪ We shall express costs in terms of the number of scalar multiplications
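As a quick sanity check (a sketch, not part of the original notes), a straightforward triple-loop multiplication of a p x q matrix by a q x r matrix performs exactly p·q·r scalar multiplications:

```python
def multiply(A, B):
    # A is p x q, B is q x r; returns the p x r product C together with
    # the number of scalar multiplications performed (always p*q*r)
    p, q, r = len(A), len(B), len(B[0])
    C = [[0] * r for _ in range(p)]
    muls = 0
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
                muls += 1
    return C, muls

C, muls = multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(C, muls)      # [[19, 22], [43, 50]] 8
```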

Example:
Consider 3 matrices A1, A2 and A3. Its dimensions are 10X100, 100X5, 5X50 respectively.
Compute the product (A1 A2 A3).

Ans) Two ways ((A1 A2 ) A3) and (A1 (A2 A3))

Number of scalar multiplications for


• ( (A1A2) A3) is 7500
• (A1 (A2A3)) is 75000

Thus, computing the product according to the first parenthesization is 10 times faster.
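The two costs can be checked directly from the p·q·r rule:

```python
# A1 is 10x100, A2 is 100x5, A3 is 5x50
cost_first = 10 * 100 * 5 + 10 * 5 * 50      # ((A1 A2) A3): 5000 + 2500
cost_second = 100 * 5 * 50 + 10 * 100 * 50   # (A1 (A2 A3)): 25000 + 50000
print(cost_first, cost_second)               # 7500 75000
```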
………………………………………………………………………………………………

Matrix-Chain Multiplication Problem:

• Given a chain (A1, A2, . . . An) of n matrices, where for i = 1, 2, . . . n, matrix Ai has dimension
pi-1 X pi, fully parenthesize the product A1A2 . . . An in a way that minimizes the number of
scalar multiplications.
• In the matrix-chain multiplication problem, we are not actually multiplying matrices.
• Our goal is only to determine an order for multiplying matrices that has the lowest cost

Matrix Chain Multiplication : Dynamic Programming Method

Step 1: The structure of an optimal parenthesization


• Let Ai. .j denote the matrix that results from evaluating the product AiAi+1 . . . Aj where i <= j
• If i < j, we must split the problem into two subproblems
(Ai Ai+1 . . . Ak and Ak+1 Ak+2 . . . Aj), for some integer k in the range i <= k < j
1. Compute matrix Ai. .k
2. Compute matrix Ak+1. .j.
3. Product, Ai. .j = Ai. .k X Ak+1. .j.
• Total cost = Cost of computing the matrix Ai. .k + Cost of computing Ak+1. .j + Cost of multiplying
them together.
• Step 2: A recursive solution
o We can define the cost of an optimal solution recursively in terms of the optimal
solutions to subproblems.
o Let m[i, j] be the minimum number of scalar multiplications needed to compute
the matrix Ai. .j ; then m[1, n] is the lowest cost to compute A1. .n.
o Since Ai. .i = Ai (only one matrix), m[i, i] = 0 for i = 1, 2, . . . n.
o For i < j, a split at k costs m[i, k] + m[k+1, j] + pi-1 pk pj, because Ai. .k is a
pi-1 X pk matrix and Ak+1. .j is a pk X pj matrix. Hence
m[i, j] = min{ m[i, k] + m[k+1, j] + pi-1 pk pj }, taken over i <= k < j.
o Let s[i, j] be a value of k at which we split the product AiAi+1 . . . Aj in an optimal
parenthesization.
o Computing this recursion directly takes exponential time.

• Step 3: Computing the optimal costs


o Compute the optimal cost by using a tabular, bottom-up approach

Algorithm Matrix_Chain_Order(p)
// p = (p0, p1, . . . pn) holds the matrix dimensions; n = total number of matrices
{
n = p.length – 1
Let m[1..n, 1..n] and s[1..n-1, 2..n] be new tables
for i = 1 to n do
m[i, i] = 0                       // diagonal elements: one matrix needs no multiplication
for x = 2 to n do                 // x = length of the matrix chain (2-matrix combinations first)
{
for i = 1 to n-x+1 do
{
j = i + x - 1                 // (i, j) = table index of the chain Ai . . . Aj
m[i, j] = ∞
for k = i to j-1 do           // k = split point
{
q = m[i, k] + m[k+1, j] + pi-1 pk pj
if q < m[i, j] then
{
m[i, j] = q
s[i, j] = k               // remember from where to split
}
}
}
}
return m[][] and s[][]
}

o Matrix Ai has dimensions pi-1 X pi for i =1,2, . . . n.


o Input to this algorithm is a sequence p = (p0, p1, . . . pn), where p.length = n + 1.
o The procedure uses 2 auxiliary tables:
o m[1..n, 1..n] for storing the costs of matrix multiplication
o s[1..n-1, 2..n], which records the index k that achieved the optimal cost in
computing m[i, j].

• Step 4: Constructing an optimal solution

Algorithm Print_Optimal_Parens(s, i, j)
{
if i == j then
print "A"i
else
{
print "("
Print_Optimal_Parens(s, i, s[i,j])       // recursive call at split point
Print_Optimal_Parens(s, s[i,j]+1, j)
print ")"
}
}
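The two procedures above can be sketched together in Python (a sketch keeping the notes' convention that matrices are numbered 1..n and Ai has dimensions p[i-1] x p[i]):

```python
def matrix_chain_order(p):
    # p = [p0, p1, ..., pn]; matrix A_i has dimensions p[i-1] x p[i]
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j] = min scalar multiplications
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j] = optimal split point k
    for x in range(2, n + 1):                   # x = chain length
        for i in range(1, n - x + 2):
            j = i + x - 1
            m[i][j] = float('inf')
            for k in range(i, j):               # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def optimal_parens(s, i, j):
    # rebuild the optimal parenthesization from the split table s
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"

m, s = matrix_chain_order([10, 100, 5, 50])     # the 3-matrix example above
print(m[1][3], optimal_parens(s, 1, 3))         # 7500 ((A1A2)A3)
```

The same call with p = [5, 4, 6, 2, 7] reproduces the 4-matrix exercise that follows.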


1. Given a chain of 4 matrices <A1,A2,A3,A4> with dimensions


<5X4>,<4X6>,<6X2>,<2X7> respectively. Using Dynamic programming find the
minimum number of scalar multiplications needed and also write the optimal
multiplication order.

Ans)
m[i, j] = minimum number of scalar multiplications; s[i, j] = split point.
Put the diagonal elements of m as 0: m(1,1) = m(2,2) = m(3,3) = m(4,4) = 0.

Take 2-matrix combinations:
• A1 x A2 (5X4 times 4X6): 5x4x6 = 120. The chain is cut/split after 1, so place 120 in
m(1,2) and 1 in s(1,2).
• A2 x A3 (4X6 times 6X2): 4x6x2 = 48. Cut at 2, so m(2,3) = 48 and s(2,3) = 2.
• A3 x A4 (6X2 times 2X7): 6x2x7 = 84. Cut at 3, so m(3,4) = 84 and s(3,4) = 3.

Take 3-matrix combinations (2 ways to cut each):
• (A1 A2 A3):
cut at 1: A1 x (A2 A3) → m(2,3) + 5x4x2 = 48 + 40 = 88
cut at 2: (A1 A2) x A3 → m(1,2) + 5x6x2 = 120 + 60 = 180
The minimum is 88, so place 88 in m(1,3) and 1 in s(1,3).
• (A2 A3 A4):
cut at 2: A2 x (A3 A4) → m(3,4) + 4x6x7 = 84 + 168 = 252
cut at 3: (A2 A3) x A4 → m(2,3) + 4x2x7 = 48 + 56 = 104
The minimum is 104, so place 104 in m(2,4) and 3 in s(2,4).

Now the 4-matrix combination (3 ways to cut):
• (A1 A2 A3 A4):
cut at 1: A1 x (A2 A3 A4) → m(2,4) + 5x4x7 = 104 + 140 = 244
cut at 2: (A1 A2) x (A3 A4) → m(1,2) + m(3,4) + 5x6x7 = 120 + 84 + 210 = 414
cut at 3: (A1 A2 A3) x A4 → m(1,3) + 5x2x7 = 88 + 70 = 158
The minimum is 158, so place 158 in m(1,4) and 3 in s(1,4).

Depending on s[ ], we find the positions to put parentheses in A1 A2 A3 A4:

Look at the rightmost top cell:
s(1,4) = 3, cut at 3 ---> (A1 A2 A3) A4

Now to cut A1 A2 A3, look at s(1,3):
s(1,3) = 1, cut at 1 ---> ((A1 (A2 A3)) A4)

Minimum number of scalar multiplications: m(1,4) = 158

All Pairs Shortest Path Algorithm

• Find the shortest distances between every pair of vertices in a given weighted directed
graph.
• The Floyd Warshall Algorithm solves the All Pairs Shortest Path problem. Negative edge
weights are also allowed.
• Let Ak(i, j) be the length of a shortest path from vertex i to vertex j that goes through
intermediate vertices drawn only from {1, 2, . . . k}. Then A0(i, j) = cost(i, j), and we obtain
the recurrence
Ak(i, j) = min{ Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j) }, k >= 1
• As a result, this algorithm generates a matrix representing the minimum distance from any
node to every other node in the graph.

Floyd Warshall Algorithm


Inputs are the adjacency matrix of the given graph and the total number of vertices.
Algorithm FloydWarshall(cost[][], n)
{
for i = 1 to n do
for j = 1 to n do
D[i, j] = cost[i, j]
for k = 1 to n do
for i = 1 to n do
for j = 1 to n do
D[i, j] = min{ D[i, j], D[i, k] + D[k, j] }
return D
}

Time Complexity
• Floyd Warshall Algorithm consists of three nested loops over all the nodes, and the
innermost statement takes constant time.
• Hence, the time complexity of Floyd Warshall algorithm = O(n3), where n is the number
of nodes in the given graph.
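A direct Python rendering of the algorithm (a sketch; vertices are 0-indexed here, and the example matrix is a hypothetical 4-vertex graph):

```python
INF = float('inf')

def floyd_warshall(cost):
    # cost is the adjacency matrix: cost[i][j] = edge weight, INF if no
    # edge, 0 on the diagonal; returns the all-pairs distance matrix
    n = len(cost)
    d = [row[:] for row in cost]        # copy so cost is left untouched
    for k in range(n):                  # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

cost = [[0, 3, INF, 7],
        [8, 0, 2, INF],
        [5, INF, 0, 1],
        [2, INF, INF, 0]]
print(floyd_warshall(cost)[0][3])       # 6, via the path 0 -> 1 -> 2 -> 3
```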
Initialization of D (the k = 0 matrix):
D(i, i) = 0                          // diagonal
D(i, j) = length/cost of edge (i, j)
D(i, j) = infinity                   // if there is no edge (i, j)
The matrix D is then updated once for each k = 1, 2, . . . n.
• Back Tracking
o The backtracking method expresses the solution as an n-tuple (x1, x2, . . . xn), where the
xi's are chosen from some finite set Si.
o The problem to be solved calls for finding one vector that maximizes (or minimizes or
satisfies) a criterion function P(x1, x2, . . . xn).
o Example: Sorting the array of integers in a[1 : n]
▪ The solution to the problem is expressed as an n-tuple, where xi is the index of the ith
smallest element.
▪ The criterion function P is: a[xi] ≤ a[xi+1], for 1 ≤ i < n.
▪ The set Si = {1, 2, . . . n}
▪ Different methods for solving this problem
• Brute Force approach
o Suppose mi is the size of set Si.
o The number of tuples (of size n) that are possible candidates for satisfying the
function P is m = m1 x m2 x m3 x . . . x mn.
o The Brute Force approach evaluates each one with P, and saves those which yield the
optimum.

• Backtracking algorithm
o It yields the same answer with far fewer than m trials.
o Its basic idea is to build up the solution vector one component at a time and to use
modified criterion functions (bounding functions) Pi(x1, x2.,…… xi ) to test
whether the vector being formed has any chance of success.
o The major advantage of this method is that if it is realized that the partial vector
(x1, x2, . . . xi) can in no way lead to an optimal solution, then the mi+1 x mi+2 x . . . x mn
possible test vectors that extend it can be ignored entirely.

o Backtracking method require that all the solutions satisfy a complex set of constraints. These
constraints can be divided into two categories:

▪ Explicit Constraints
• Explicit constraints are rules that restrict each xi to take on values only from a given
set
• The explicit constraints depend on the particular instance I of the problem being
solved. All tuples that satisfy the explicit constraints define a possible solution space
for I. (Explicit constraints describe the values that are possible for each xi.)

▪ Implicit Constraints
• These are rules that determine which of the tuples in the solution space of I satisfy the
criterion function (Bounding Function). (Implicit constraints pick out the answers within
the solution space.)

o N-Queens Problem
▪ n queens are to be placed on a n x n chessboard so that no two attack. That is, no two
queens are on the same row, column, or diagonal.
▪ Number the rows and columns of the chessboard 1 through n.
▪ The queens can also be numbered 1 through n.
▪ Since each queen must be on a different row, we can assume that queen i is to be placed
on row i.
▪ All solutions to the n-queens problem can therefore be represented as n-tuples
(x1, x2, . . . xn), where xi is the column on which queen i is placed.
▪ Explicit constraint: Si = {1, 2, 3, . . . n}, 1 ≤ i ≤ n, so each |Si| = n.
• The solution space contains |S1| x |S2| x . . . x |Sn| = n x n x . . . x n = n^n tuples.
▪ Implicit constraints:
• No two xi's can be the same (i.e. all queens must be on different columns).
o The solution space then contains n(n-1)(n-2) . . . 1 = n! tuples: the first queen
can be placed in n columns, the second in (n-1), and so on down to 1.
o This reduces the solution space from n^n to n! tuples.
• No two queens can be on the same diagonal.
4 Queen's solutions: (2,4,1,3) and (3,1,4,2)
(Queen i is placed in row i, column xi; e.g. for (2,4,1,3), Q1 is at (1,2), Q2 at (2,4),
Q3 at (3,1) and Q4 at (4,3).)
▪ Following is a tree organization (permutation tree/State Space Tree) for the 4-queen
problem without considering diagonal collision:
• The first queen (x1) has 4 options: column 1, 2, 3 or 4.
• If Q1 (x1) is placed in the 1st column, then Q2 (x2) has only the 2nd/3rd/4th
columns as options.
• If Q1 is in the 1st column and Q2 is in the 2nd column, then Q3 (x3) can be placed
only in the 3rd/4th column, and so on.
The answer tuples (2,4,1,3) and (3,1,4,2) appear as leaves of this tree.
Fig: Tree organization of the 4-queen’s solution space. Nodes are numbered in depth first search
• The edges are labeled by possible values of xi.
• Edges from level 1 to level 2 nodes specify the values for x1
• Edges from level i to level i +1 are labeled with the values of xi
• Thus, the leftmost sub-tree contains all solutions with x1 = 1; its leftmost sub-tree
contains all solutions with x1 = 1 and x2 = 2, and so on.
• The solution space is defined by all paths from the root node to a leaf node. There are
n! = 4!= 24 leaf nodes in the above tree.

o Permutation tree or State Space Tree


▪ Tree organization of solution space is called state space tree.
▪ Problem State: Each node in this tree defines a problem state.
▪ State Space of the problem: All paths from the root to other nodes define the State
Space of the problem.
▪ Solution States: Those problem states s for which the path from the root to s defines a
tuple in the solution space.
▪ Answer states: Those solution states s for which the path from the root to s defines a
tuple that satisfies all implicit constraints of the problem (for the 4-queens tree, the
tuples (2,4,1,3) and (3,1,4,2)).
▪ Example: consider a binary tree with nodes 1 to 15. There are different ways to
generate its nodes:
1. Depth First generation (stack data structure)
2. Breadth First generation (queue data structure)
3. D-search (stack data structure)

▪ Problem States can be generated by:


• Depth First generation of the problem states:
o As soon as a new child C of the current E-node R is generated, this child will
become the new E-node.
o Then R will become the E-node again when the sub-tree C has been fully
explored.

o Backtracking: Depth first node generation with bounding functions.

• Breadth First generation of the problem states:


o The E-node remains the E-node until it is dead.
o Branch-and-bound methods: Breadth first node generation method with
bounding function is called Branch and Bound method. There are two alternatives

▪ Breadth First Generation Method: Each new node is placed into a queue.
When all the children of the current-E-node have been generated, the next
node at the front of the queue becomes the new E-node.

• D-search(depth search): Each new node is placed into a stack. When all the children of
the current-E-node have been generated, the next node at the top of the stack becomes the
new E-node.

▪ At the conclusion of the process at least one answer node is always generated or all
answer nodes are generated if the problem requires us to find all solutions.

▪ Live Node: A node which has been generated and all of whose children have not yet been
generated is called a live node.

▪ E-node : The live node whose children are currently being generated is called the E-node

▪ Dead node: It is a generated node which is not to be expanded further or all of whose
children have been generated.

▪ Bounding functions are used to kill live nodes without generating all their children.

o Backtracking works on 4-Queens Problem

▪ If (x1, x2, . . . xi) is the path to the current E-node, then all children nodes with parent-
child labeling xi+1 are such that (x1, x2, . . . xi+1) represents a chessboard configuration in
which no two queens are attacking.
▪ We start with the root node as the only live node. This becomes the E-node and the path is
(). We generate one child (node 2) and the path is now (1). This corresponds to placing
queen 1 on column 1.
▪ Node 2 becomes the E-node. Node 3 is generated and immediately killed.
▪ The next node generated is node 8 and the path becomes (1,3). Node 8 becomes the E-node.
However, it gets killed as all its children represent board configurations that cannot lead
to an answer node.
▪ We backtrack to node 2 and generate another child, node 13. The path is now (1,4). This
process continues until it generates a proper arrangement.
Backtracking - N Queens Problem

bounding function B - diagonal attack

State Space Tree of 4 Queens Problem

o Backtracking Control Abstraction
▪ Let (x1, x2, . . . xi) be a path from the root to a node in a state space tree.
▪ Generating function T(x1, x2, . . . xi) is the set of all possible values for xi+1 such that
(x1, x2, . . . xi+1) is also a path to a problem state.
▪ T(x1, x2, . . . xn) = φ
▪ If the bounding function Bi+1(x1, x2, . . . xi+1) is false for a path (x1, x2, . . . xi+1) from the
root node to a problem state, then the path cannot be extended to reach an answer node.
▪ Thus the candidates for position i+1 of the solution vector (x1, x2, . . . xn) are those
values which are generated by T and satisfy Bi+1.
▪ The recursive version is initially invoked by Backtrack(1).

▪ Backtracking Control Abstraction


Algorithm Backtrack(k)
// x[1:n] - solution vector, T() - generating function, B() - bounding function
{
for each x[k] ϵ T(x[1], . . . x[k-1]) do
{
if Bk(x[1], x[2], . . . x[k]) != 0 then        // bounding function is true
{
if (x[1], x[2], . . . x[k]) is a path to an answer node
then write(x[1:k])                         // write the path
if k < n then Backtrack(k+1)
}
}
}

o All the possible elements for the kth position of the tuple that satisfy Bk are generated one by
one, and adjoined to the current vector (x1, x2.,…… xk-1 ).
o Each time xk is attached, a check is made to determine whether a solution has been found.
Then the algorithm is recursively invoked.
o When the for loop is exited, no more values for xk exist and the current copy of Backtrack
ends. The last unresolved call now resumes.
o This algorithm causes all solutions to be printed. If only a single solution is desired, then a flag
can be added as a parameter to indicate the first occurrence of success.

• N-Queens Problem (Cont…)
o Consider an n x n chessboard and try to find all possible ways to place n non-attacking
queens.
o Let (x1, x2, . . . xn) be the solution vector: queen i is placed in the ith row and xith column.
The xi will all be distinct since no two queens can be placed in the same column.
o There are 2 types of diagonals:
▪ Positive Diagonal
• Diagonal from upper left to lower right
• Every element on the same diagonal has the same row - column value
• If 2 queens are placed at positions (i, j) and (k, l) on such a diagonal, then i - j = k - l
▪ Negative Diagonal
• Diagonal from upper right to lower left
• Every element on the same diagonal has the same row + column value
• If 2 queens are placed at positions (i, j) and (k, l) on such a diagonal, then i + j = k + l
▪ The 1st equation implies: i - k = j - l
▪ The 2nd equation implies: i - k = l - j
▪ Combining these two, we get: | i - k | = | j - l |     // diagonal attack condition
The absolute value of the column difference is equal to the absolute value of the row
difference.
Algorithm NQueens(k, n)
// k - queen to place, n - number of columns
{
for i = 1 to n do
{
if Place(k, i) then          // Place(k, i) is true if queen k can go in column i
{
x[k] = i
if k == n then write(x[1:n])
else NQueens(k+1, n)
}
}
}

Algorithm Place(k, i)
{
for j = 1 to k-1 do
{
if (x[j] == i) or (Abs(j-k) == Abs(x[j]-i)) then    // check column and diagonal attack
return false
}
return true
}

o Place(k, i) returns true if the kth queen can be placed in column i.
▪ i should be distinct from all previous values x[1], x[2], . . . x[k-1]
▪ And no 2 queens are to be on the same diagonal
▪ Time complexity of Place() is O(k)
o NQueens() is initially invoked by NQueens(1, n)
▪ Time complexity of NQueens() is O(n!)
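The two algorithms above translate directly to Python (a sketch keeping the notes' 1-indexed convention):

```python
def place(x, k, i):
    # queen k can sit in column i only if no earlier queen j uses the same
    # column, or the same diagonal: Abs(j-k) == Abs(x[j]-i)
    for j in range(1, k):
        if x[j] == i or abs(j - k) == abs(x[j] - i):
            return False
    return True

def n_queens(n):
    solutions = []
    x = [0] * (n + 1)                  # x[1..n]; x[k] = column of queen k
    def backtrack(k):
        for i in range(1, n + 1):
            if place(x, k, i):         # bounding function kills bad branches
                x[k] = i
                if k == n:
                    solutions.append(x[1:])
                else:
                    backtrack(k + 1)
    backtrack(1)
    return solutions

print(n_queens(4))     # [[2, 4, 1, 3], [3, 1, 4, 2]]
```

For n = 4 this finds exactly the two answer tuples shown earlier.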
8 Queen's problem: the same procedure, invoked as NQueens(1, 8), solves the case n = 8.
• Branch and Bound Technique
o During state space tree generation E-node remains E-node until it is dead.
o Two strategies:
▪ Breadth First Search(BFS)
• It is called FIFO(First In First Out). Here the live nodes are placed in a
queue.
▪ Depth Search(D-Search)
• It is called LIFO(Last In First Out). Here the live nodes are placed in a
stack.
o Least Cost Search(LC Search)
▪ To improve the searching speed, we can use a ranking function ĉ(.) for live
nodes.
▪ ĉ(.) value of each live node is calculated. The next E-node is the live node with
least ĉ(.). Such a search strategy is called LC Search.
▪ BFS and D-Search are the special cases of LC-Search.
▪ LC-Search coupled with bounding function is called LC Branch and Bound
Search.
o LC-Search Control Abstraction
Algorithm LCSearch(t)
// t - root node of the state space tree
{
if t is an answer node then output t and return
E = t                                    // set t as the E-node
Initialize the list of live nodes to be empty
repeat
{
for each child x of E do
{
if x is an answer node then output the path from x to t and return
Add(x)                           // x is not an answer node; add it to the live node list
x → parent = E
}
if there are no more live nodes then
{
write "no answer node"
return
}
E = Least()                          // find the live node with minimum cost; it becomes the next E-node
} until (false)
}
• Least() finds a live node with least ĉ. This node is deleted from the list of live nodes and returned.
▪ Add(x) adds the new live node x to the list of live nodes.
▪ LCSearch outputs the path from the answer node to the root t.
▪ LCSearch terminates only when either an answer node is found or the entire
state space tree has been generated and searched.
▪ The control abstraction for LC, FIFO and LIFO are same. The only difference is
the implementation of the list of live nodes.
• FIFO Search scheme:
o The list of live nodes is implemented as queue.
o Least() and Add(x) being algorithms to delete an element from
and add an element to the queue.
• LIFO Search scheme:
o The list of live nodes is implemented as stack.
o Least() and Add(x) being algorithms to delete an element from
and add an element to the stack.
• LC-Search Scheme:
o Add(x) is an algorithm to add elements to the list. Least() returns
a live node with least ĉ(.) from the list.
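A minimal LC-search sketch using a min-heap as the live-node list (assumptions: the caller supplies children, c_hat and is_answer; and, unlike the abstraction above, the answer test is done when a node becomes the E-node rather than when it is generated):

```python
import heapq

def lc_search(root, children, c_hat, is_answer):
    # live nodes are kept in a min-heap keyed by the ranking function c_hat,
    # so Least() is just a heap pop; parent links let us output the path
    live = [(c_hat(root), root)]
    parent = {root: None}
    while live:
        _, e = heapq.heappop(live)      # next E-node: live node with least c_hat
        if is_answer(e):
            path = []
            while e is not None:        # walk parent links back to the root
                path.append(e)
                e = parent[e]
            return path[::-1]
        for x in children(e):
            parent[x] = e
            heapq.heappush(live, (c_hat(x), x))
    return None                         # no answer node

# toy state space: the binary tree with nodes 1 to 15 mentioned earlier
kids = lambda n: [2 * n, 2 * n + 1] if 2 * n + 1 <= 15 else []
print(lc_search(1, kids, c_hat=lambda n: n, is_answer=lambda n: n == 11))
# [1, 2, 5, 11]
```

Replacing the heap with a queue gives FIFO (branch and bound) search, and with a stack gives D-search, which is exactly the point made above.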

o Branch and Bound Algorithm for Travelling Salesman Problem


• Given a set of cities and distance between every pair of cities, the problem is to find the
shortest possible tour that visits every city exactly once and returns to the starting point.
• Example: Apply the branch and bound algorithm to solve TSP for the following graph,
assuming the start city is 'a'. Draw the state space tree.
(From a, visit b, c and d exactly once and return to a.)
• Solution
• Write down the adjacency matrix of the graph (where there is no edge, the entry is
infinity).
• Perform row reduction (subtract the minimum value in each row), then column
reduction (subtract the minimum value in each column).
• Cost of node 1 = total cost reduced = (2 + 2 + 1 + 1) from the rows +
(0 + 0 + 0 + 0) from the columns = 6
• The resulting reduced matrix M1 is the matrix for node 1, the root of the state space
tree.


• Generate the child nodes of node 1 (the current E-node). Node 1 represents city a; from a
we can go to b, c or d (nodes 2, 3 and 4). For each child we must (1) find its cost and
(2) find its corresponding matrix.

• Find the matrix and cost of node 2 (path a → b)
o Node 2 is generated from node 1, so start from M1.
o Set all elements of row a and column b to ∞
o Set M1[b, a] = ∞        // b cannot return directly to the root city a
o Perform row reduction, then column reduction on the resultant matrix:
Cost reduced = r = 1 + 4 = 5
The result, M2, is the matrix for node 2.
o Cost of node 2 = Cost of parent node 1 + M1[a, b] + r = 6 + 0 + 5 = 11
• Find the matrix and cost of node 3 (path a → c)
o Node 3 is generated from node 1, so start from M1.
o Set all elements of row a and column c to ∞
o Set M1[c, a] = ∞        // no return to the root city a
o Perform row reduction, then column reduction on the resultant matrix:
Cost reduced = r = 2
The result, M3, is the matrix for node 3.
o Cost of node 3 = Cost of parent node 1 + M1[a, c] + r = 6 + 3 + 2 = 11
• Find the matrix and cost of node 4 (path a → d)
o Set all elements of row a and column d to ∞
o Set M1[d, a] = ∞        // no return to the root city a
o Perform row reduction, then column reduction on the resultant matrix:
Cost reduced = r = 6
The result, M4, is the matrix for node 4.
o Cost of node 4 = Cost of parent node 1 + M1[a, d] + r = 6 + 5 + 6 = 17

o Now the live nodes are 2, 3 and 4. The minimum cost is shared by nodes 2 and 3;
choose one (say node 2) as the next E-node.
o Generate the child nodes of node 2. Node 2 represents city b on the path a → b; from b
we generate the children c and d (nodes 5 and 6), and find the cost and matrix of each.

• Find the matrix and cost of node 5 (path a → b → c)
o The parent of node 5 is node 2, so start from M2.
o Set all elements of row b and column c to ∞
o Set M2[c, a] = ∞        // no return to the root city a
o Perform row reduction, then column reduction on the resultant matrix:
Cost reduced = r = 2
The result, M5, is the matrix for node 5.
o Cost of node 5 = Cost of parent node 2 + M2[b, c] + r = 11 + 5 + 2 = 18
• Find the matrix and cost of node 6 (path a → b → d)
o The parent of node 6 is node 2, so start from M2.
o Set all elements of row b and column d to ∞
o Set M2[d, a] = ∞        // no return to the root city a
o Perform row reduction, then column reduction on the resultant matrix:
Cost reduced = r = 0
The result, M6, is the matrix for node 6.
o Cost of node 6 = Cost of parent node 2 + M2[b, d] + r = 11 + 0 + 0 = 11

o Now the live nodes are 5, 6, 3 and 4. Choose the node having minimum cost (node 6) as
the next E-node.
o Generate the child node of node 6. The path so far is a → b → d, so d's only remaining
child is c (node 7).

• Find the matrix and cost of node 7 (path a → b → d → c)
o The parent of node 7 is node 6, so start from M6.
o Set all elements of row d and column c to ∞
o Set M6[c, a] = ∞        // no return to the root city a
o Perform row reduction, then column reduction on the resultant matrix:
Cost reduced = r = 0
The result, M7, is the matrix for node 7.
o Cost of node 7 = Cost of parent node 6 + M6[d, c] + r = 11 + 0 + 0 = 11

o Now the live nodes are 5, 7, 3 and 4. Choose the node having minimum cost (node 7) as
the next E-node. Node 7 has no child node, because the path a - b - d - c already visits
every city. The path from node 1 to node 7 is therefore the traveling salesperson tour.

The TSP path = a - b - d - c - a

The TSP cost = 11 (cost of node 7)
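The row/column reduction that produces each node's cost bound can be sketched as follows. The matrix below is a hypothetical 4-city example (not the graph from the figures above); the assertion at the end checks that the root bound never exceeds the true optimal tour, found here by brute force over all permutations.

```python
from itertools import permutations

INF = float('inf')

def reduce_matrix(m):
    # subtract the minimum of every row, then of every column; the total
    # amount subtracted (r) is a lower bound on any tour using this matrix
    m = [row[:] for row in m]
    n, r = len(m), 0
    for i in range(n):
        lo = min(m[i])
        if 0 < lo < INF:
            r += lo
            m[i] = [v - lo if v < INF else v for v in m[i]]
    for j in range(n):
        lo = min(m[i][j] for i in range(n))
        if 0 < lo < INF:
            r += lo
            for i in range(n):
                if m[i][j] < INF:
                    m[i][j] -= lo
    return m, r

# hypothetical cost matrix for 4 cities 0..3 (INF = no edge / diagonal)
cost = [[INF, 10, 15, 20],
        [5, INF, 9, 10],
        [6, 13, INF, 12],
        [8, 8, 9, INF]]

_, bound = reduce_matrix(cost)              # cost of the root node
best = min(sum(cost[a][b] for a, b in zip((0,) + p, p + (0,)))
           for p in permutations(range(1, 4)))
print(bound, best)                          # 35 35 -- the bound is tight here
```

In the full algorithm this reduction is reapplied to each child's matrix (after crossing out the chosen row, column, and return edge), exactly as in the worked example above.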
