DESIGN AND ANALYSIS OF ALGORITHMS
(20IS42)
Module 5
• By,
MANJESH R,
Assistant Professor,
IS&E, VVCE, Mysuru - 02
Back-Tracking & Branch-and-Bound
• Backtracking and branch-and-bound are two design techniques that often make it
possible to solve at least some large instances of difficult combinatorial problems.
• Both strategies can be considered an improvement over exhaustive search.
• Unlike exhaustive search, they construct candidate solutions one component
at a time and evaluate the partially constructed solutions: if no potential
values of the remaining components can lead to a solution, the remaining
components are not generated at all.
• This approach makes it possible to solve some large instances of difficult
combinatorial problems, though, in the worst case, we still face the same
curse of exponential explosion encountered in exhaustive search.
• Both backtracking and branch-and-bound are based on the construction of a
state-space tree whose nodes reflect specific choices made for a solution’s
components.
• Both techniques terminate a node as soon as it can be guaranteed that no
solution to the problem can be obtained by considering choices that
correspond to the node’s descendants.
• The techniques differ in the nature of problems they can be applied to.
• Branch-and-bound is applicable only to optimization problems because it is
based on computing a bound on possible values of the problem’s objective
function.
• Backtracking is not constrained by this demand, but more often than not, it
applies to non-optimization problems.
• The other distinction between backtracking and branch-and-bound lies in the
order in which nodes of the state-space tree are generated.
• For backtracking, this tree is usually developed depth-first (i.e., similar to DFS).
• Branch-and-bound can generate nodes according to several rules: the most
natural one is the so-called best-first rule (i.e., similar to BFS).
• Examples of problems solved by backtracking are the N-Queens problem, the
Hamiltonian circuit problem, and the subset-sum problem.
• Examples of problems solved by branch-and-bound are the assignment problem, the
knapsack problem, and the travelling salesman problem.
Backtracking vs. Branch-and-Bound
• Approach: Backtracking is used to find all possible solutions available to a problem.
When it realises that it has made a bad choice, it undoes the last choice by backing up,
and it searches the state-space tree until it has found a solution to the problem.
Branch-and-bound is used to solve optimisation problems. When it realises that it
already has a better solution than the one the current partial solution can lead to, it
abandons that partial solution, and it searches the state-space tree completely to get
the optimal solution.
• Traversal: Backtracking traverses the state-space tree in DFS (depth-first search)
manner. Branch-and-bound can traverse the tree in other manners as well, e.g.,
BFS/best-first.
• Function: Backtracking involves a feasibility function. Branch-and-bound involves a
bounding function.
• Problems: Backtracking is used for solving decision problems. Branch-and-bound is
used for solving optimisation problems.
• Searching: In backtracking, the state-space tree is searched until a solution is
obtained. In branch-and-bound, as the optimum solution may be present anywhere in
the state-space tree, the tree needs to be searched completely.
• Efficiency: Backtracking is generally more efficient. Branch-and-bound is less efficient.
• Application: Backtracking is useful in solving the N-Queens problem and the
sum-of-subsets problem. Branch-and-bound is useful in solving the knapsack problem
and the travelling salesman problem.
Backtracking
• The exhaustive-search technique suggests generating all candidate solutions
and then identifying the one (or the ones) with a desired property.
• Backtracking is a more intelligent variation of this approach.
• The principal idea is to construct solutions one component at a time and
evaluate such partially constructed candidates as follows.
• If a partially constructed solution can be developed further without
violating the problem’s constraints, it is done by taking the first remaining
legitimate option for the next component.
• If there is no legitimate option for the next component, no alternatives
for any remaining component need to be considered.
• In this case, the algorithm backtracks to replace the last component of
the partially constructed solution with its next option.
• It is convenient to implement this kind of processing by constructing a tree of
choices being made, called the state-space tree.
• Its root represents an initial state before the search for a solution begins.
The nodes of the first level in the tree represent the choices made for the
first component of a solution, the nodes of the second level represent the
choices for the second component, and so on.
• A node in a state-space tree is said to be promising if it corresponds to a
partially constructed solution that may still lead to a complete solution;
otherwise, it is called nonpromising.
• Leaves represent either nonpromising dead ends or complete solutions
found by the algorithm.
• In the majority of cases, a state-space tree for a backtracking algorithm is
constructed in the manner of depth-first search.
• If the current node is promising, its child is generated by adding the first
remaining legitimate option for the next component of a solution, and the
processing moves to this child.
• If the current node turns out to be nonpromising, the algorithm backtracks to
the node’s parent to consider the next possible option for its last component; if
there is no such option, it backtracks one more level up the tree, and so on.
• Finally, if the algorithm reaches a complete solution to the problem, it either
stops (if just one solution is required) or continues searching for other possible
solutions.
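• A minimal Python sketch of this generic backtracking scheme is given below (the function
and parameter names are illustrative, not part of the text): it extends a partial solution one
component at a time and undoes the last choice when no legitimate option remains.

    # Generic backtracking skeleton (illustrative sketch).
    # options(partial) must yield the legitimate values for the next component;
    # is_complete(partial) must report whether a full solution has been built.
    def backtrack(partial, options, is_complete, solutions):
        if is_complete(partial):
            solutions.append(list(partial))     # record a complete solution
            return
        for choice in options(partial):         # first remaining legitimate option, then the next, ...
            partial.append(choice)              # extend the partially constructed solution
            backtrack(partial, options, is_complete, solutions)
            partial.pop()                       # backtrack: undo the last component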
N-Queens problem
• The problem is to place n queens on an n × n chessboard so that no two
queens attack each other by being in the same row or in the same column or
on the same diagonal.
• For n = 1, the problem has a trivial solution, and it is easy to see that there is
no solution for n = 2 and n = 3.
• So let us consider the four-queens problem and solve it by the backtracking
technique.
• Since each of the four queens has to be placed in its own row, all we need to
do is to assign a column for each queen on the board.
• We start with the empty board and then place queen 1 in the first possible
position of its row, which is in column 1 of row 1.
• Then we place queen 2, after trying unsuccessfully columns 1 and 2, in the first
acceptable position for it, which is square (2, 3), the square in row 2 and column
3.
• This proves to be a dead end because there is no acceptable position for queen 3.
• So, the algorithm backtracks and puts queen 2 in the next possible position at (2,
4).
• Then queen 3 is placed at (3, 2), which proves to be another dead end.
• The algorithm then backtracks all the way to queen 1 and moves it to (1, 2).
• Queen 2 then goes to (2, 4), queen 3 to (3, 1), and queen 4 to (4, 3), which is a
solution to the problem.
• If other solutions need to be found (how many of them are there for the four-queens
problem?), the algorithm can simply resume its operations at the leaf at which it stopped.
• Alternatively, we can use the board’s symmetry for this purpose.
• Finally, it should be pointed out that a single solution to the n-queens problem for
any n ≥ 4 can be found in linear time.
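• A short Python sketch of this backtracking process is given below; board[i] holds the
column (0-based) assigned to the queen in row i, and the names are illustrative rather than
taken from the slides.

    def solve_n_queens(n):
        """Return all solutions; each solution lists the column of the queen in each row."""
        solutions = []

        def is_safe(board, row, col):
            # Unsafe if the square shares a column or a diagonal with a queen already placed.
            for r in range(row):
                if board[r] == col or abs(board[r] - col) == row - r:
                    return False
            return True

        def place(board, row):
            if row == n:                        # all n queens placed: a complete solution
                solutions.append(list(board))
                return
            for col in range(n):                # try each column of the current row in turn
                if is_safe(board, row, col):
                    board.append(col)
                    place(board, row + 1)
                    board.pop()                 # backtrack

        place([], 0)
        return solutions

    # For n = 4 this returns [[1, 3, 0, 2], [2, 0, 3, 1]]; the first is the solution
    # (1, 2), (2, 4), (3, 1), (4, 3) described above, written with 0-based columns.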
Subset-sum problem
• The subset-sum problem: find a subset of a given set A = {a1, . . . , an} of n
positive integers whose sum is equal to a given positive integer d.
• For example, for A = {1, 2, 5, 6, 8} and d = 9, there are two solutions: {1, 2,
6} and {1, 8}.
• Of course, some instances of this problem may have no solutions.
• It is convenient to sort the set’s elements in increasing order. So, we will
assume that a1 < a2 < . . . < an.
• The state-space tree can be constructed as a binary tree for the instance A
= {3, 5, 6, 7} and d = 15.
• The root of the tree represents the starting point, with no decisions about
the given elements made as yet.
• Its left and right children represent, respectively, inclusion and exclusion of
a1 in a set being sought.
• Similarly, going to the left from a node of the first level corresponds to
inclusion of a2 while going to the right corresponds to its exclusion, and so
on.
• Thus, a path from the root to a node on the ith level of the tree indicates
which of the first i numbers have been included in the subsets represented
by that node.
• We record the value of s, the sum of these numbers, in the node. If s is equal
to d, we have a solution to the problem.
• We can either report this result and stop or, if all the solutions need to be
found, continue by backtracking to the node’s parent.
• If s is not equal to d, we can terminate the node as nonpromising if either of
the following two inequalities holds:
s + ai+1 > d (the sum s is too large: even adding the smallest remaining element exceeds d), or
s + ai+1 + . . . + an < d (the sum s is too small: even adding all the remaining elements cannot reach d).
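• A compact Python sketch of this backtracking scheme, applying both pruning tests, might
look as follows (the function and variable names are illustrative):

    def subset_sum(a, d):
        """Backtracking for the subset-sum problem; the elements of a are positive integers."""
        a = sorted(a)                               # work with a1 < a2 < ... < an
        solutions = []

        def explore(i, chosen, s, remaining):
            if s == d:                              # the running sum equals d: a solution
                solutions.append(list(chosen))
                return
            if i == len(a):
                return
            # Terminate the node as nonpromising if the sum is already too large even
            # with only the smallest remaining element, or too small even with all of them.
            if s + a[i] > d or s + remaining < d:
                return
            chosen.append(a[i])                     # include a[i]
            explore(i + 1, chosen, s + a[i], remaining - a[i])
            chosen.pop()                            # exclude a[i] (backtrack)
            explore(i + 1, chosen, s, remaining - a[i])

        explore(0, [], 0, sum(a))
        return solutions

    # Example from the text: subset_sum([1, 2, 5, 6, 8], 9) returns [[1, 2, 6], [1, 8]].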
Branch and bound
• An optimization problem seeks to minimize or maximize some objective
function usually subject to some constraints.
• Note that in the standard terminology of optimization problems, a feasible
solution is a point in the problem’s search space that satisfies all the problem’s
constraints, whereas an optimal solution is a feasible solution with the best
value of the objective function.
• Compared to backtracking, branch-and-bound requires two additional items:
• a way to provide, for every node of a state-space tree, a bound on the best
value of the objective function on any solution that can be obtained by
adding further components to the partially constructed solution
represented by the node
• the value of the best solution seen so far
• If this information is available, we can compare a node’s bound value with the
value of the best solution seen so far.
• If the bound value is not better than the value of the best solution seen so far—
i.e., not smaller for a minimization problem and not larger for a maximization
problem—the node is nonpromising and can be terminated (some people say
the branch is “pruned”).
• Indeed, no solution obtained from it can yield a better solution than the one
already available.
• This is the principal idea of the branch-and-bound technique.
• In general, we terminate a search path at the current node in a state-space tree
of a branch-and-bound algorithm for any one of the following three reasons:
• The value of the node’s bound is not better than the value of the best
solution seen so far.
• The node represents no feasible solutions because the constraints of the
problem are already violated.
• The subset of feasible solutions represented by the node consists of a
single point (and hence no further choices can be made)—in this case, we
compare the value of the objective function for this feasible solution with
that of the best solution seen so far and update the latter with the former
if the new solution is better.
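• A minimal best-first branch-and-bound skeleton for a minimization problem is sketched
below in Python (the node interface — children, bound, cost, is_complete — is an illustrative
assumption, not part of the slides); it keeps the live nodes in a priority queue ordered by
their bounds and applies the three termination tests listed above.

    import heapq

    def best_first_branch_and_bound(root, children, bound, cost, is_complete):
        """Best-first branch-and-bound sketch for a minimization problem.
        children(node) yields only feasible child nodes, bound(node) is a lower bound on
        any solution below the node, cost(node) is the objective value of a complete node."""
        best_value, best_node = float("inf"), None
        live = [(bound(root), 0, root)]          # live (nonterminated) nodes keyed by bound
        counter = 1                              # tie-breaker so heapq never compares nodes
        while live:
            b, _, node = heapq.heappop(live)     # most promising live node: smallest bound
            if b >= best_value:                  # bound not better than the best so far: prune
                continue
            for child in children(node):
                if is_complete(child):           # a single feasible point: update the best
                    if cost(child) < best_value:
                        best_value, best_node = cost(child), child
                elif bound(child) < best_value:  # still promising: keep it live
                    heapq.heappush(live, (bound(child), counter, child))
                    counter += 1
        return best_node, best_value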
Assignment problem
• The problem of assigning n people to n jobs so that the total cost of the
assignment is as small as possible.
• The assignment problem is specified by an n × n cost matrix C so that we
can state the problem as follows:
• Select one element in each row of the matrix so that no two selected
elements are in the same column and their sum is the smallest possible.
• How can we find a lower bound on the cost of an optimal selection
without actually solving the problem?
• We can do this by several methods.
• For example, it is clear that the cost of any solution, including an optimal one,
cannot be smaller than the sum of the smallest elements in each of the matrix’s
rows.
• For the instance here, this sum is 2 + 3 + 1 + 4 = 10.
• It is important to stress that this is not the cost of any legitimate selection (3
and 1 came from the same column of the matrix); it is just a lower bound on
the cost of any legitimate selection.
• We can and will apply the same thinking to partially constructed solutions. For
example, for any legitimate selection that selects 9 from the first row, the lower
bound will be 9 + 3 + 1 + 4 = 17.
• One more comment is in order before we embark on constructing the problem’s
state-space tree.
• It deals with the order in which the tree nodes will be generated.
• Rather than generating a single child of the last promising node as we did in
backtracking, we will generate all the children of the most promising node
among nonterminated leaves in the current tree. (Nonterminated, i.e., still
promising, leaves are also called live).
• How can we tell which of the nodes is most promising?
• We can do this by comparing the lower bounds of the live nodes.
• It is sensible to consider a node with the best bound as most promising,
although this does not, of course, preclude the possibility that an optimal
solution will ultimately belong to a different branch of the state-space tree.
• This variation of the strategy is called the best-first branch-and-bound.
• So, returning to the instance of the assignment problem given earlier, we start
with the root that corresponds to no elements selected from the cost matrix.
• The lower-bound value for the root, denoted lb, is 10.
• The nodes on the first level of the tree correspond to selections of an element in
the first row of the matrix, i.e., a job for person a.
• So we have four live leaves—nodes 1 through 4—that may contain an optimal
solution.
• The most promising of them is node 2 because it has the smallest lower-bound
value.
• Following our best-first search strategy, we branch out from that node first by
considering the three different ways of selecting an element from the second
row and not in the second column—the three different jobs that can be
assigned to person b.
• Of the six live leaves—nodes 1, 3, 4, 5, 6, and 7—that may contain an optimal
solution, we again choose the one with the smallest lower bound, node 5.
• First, we consider selecting the third column’s element from c’s row (i.e.,
assigning person c to job 3); this leaves us with no choice but to select the
element from the fourth column of d’s row (assigning person d to job 4).
• This yields leaf 8, which corresponds to the feasible solution {a→2, b→1, c→3,
d →4} with the total cost of 13.
• Its sibling, node 9, corresponds to the feasible solution {a→2,b→1, c→4, d →3}
with the total cost of 25.
• Since its cost is larger than the cost of the solution represented by leaf 8, node
9 is simply terminated. (Of course, if its cost were smaller than 13, we would
have to replace the information about the best solution seen so far with the
data provided by this node.)
• Now, as we inspect each of the live leaves of the last state-space tree—nodes
1, 3, 4, 6, and 7—we discover that their lower-bound values are not smaller
than 13, the value of the best selection seen so far (leaf 8).
• Hence, we terminate all of them and recognize the solution represented by
leaf 8 as the optimal solution to the problem.
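• The cost matrix for this instance appeared as a figure on the original slides; the matrix in
the Python sketch below is the standard textbook instance, consistent with all the numbers
quoted above (row minima 2 + 3 + 1 + 4 = 10, the bound 9 + 3 + 1 + 4 = 17, and the optimal
assignment a→2, b→1, c→3, d→4 of cost 13). The sketch shows only the lower-bound
computation: for each still-unassigned person it adds the smallest cost among the columns
not yet used.

    # Cost matrix assumed from the worked numbers above (rows: persons a-d, columns: jobs 1-4).
    C = [[9, 2, 7, 8],    # person a
         [6, 4, 3, 7],    # person b
         [5, 8, 1, 8],    # person c
         [7, 6, 9, 4]]    # person d

    def lower_bound(C, partial):
        """Lower bound for a partial assignment given as a list of chosen columns,
        one per already-assigned row: the cost of those choices plus, for every
        remaining row, the smallest element among the still-unused columns."""
        used = set(partial)
        lb = sum(C[row][col] for row, col in enumerate(partial))
        for row in range(len(partial), len(C)):
            lb += min(C[row][col] for col in range(len(C)) if col not in used)
        return lb

    print(lower_bound(C, []))     # 10 - the bound at the root
    print(lower_bound(C, [0]))    # 17 - after selecting 9 (job 1) for person a
    print(lower_bound(C, [1]))    # 10 - node 2, the most promising first-level node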
Knapsack problem
• We can apply the branch-and-bound technique to solving the knapsack problem.
• Given n items of known weights wi and values (profit) vi , i = 1, 2, . . . , n, and a
knapsack of capacity W, find the most valuable subset of the items that fit in the
knapsack.
• It is convenient to order the items of a given instance in descending order by their
value-to-weight ratios.
• Then the first item gives the best payoff per weight unit and the last one gives the
worst payoff per weight unit, with ties resolved arbitrarily:
v1/w1 ≥ v2/w2 ≥ . . . ≥ vn/wn.
• It is natural to structure the state-space tree for this problem as a binary tree.
• Each node on the ith level of this tree, 0 ≤ i ≤ n, represents all the subsets of n
items that include a particular selection made from the first i ordered items.
• This particular selection is uniquely determined by the path from the root to the
node: a branch going to the left indicates the inclusion of the next item, and a
branch going to the right indicates its exclusion.
• We record the total weight w and the total value v of this selection in the node,
along with some upper bound ub on the value of any subset that can be obtained
by adding zero or more items to this selection.
• A simple way to compute the upper bound ub is to add to v, the total value of the
items already selected, the product of the remaining capacity of the knapsack
W−w and the best per unit payoff among the remaining items, which is vi+1/wi+1:
ub = v + (W − w)(vi+1/wi+1)
• As a specific example, let us apply the branch-and-bound algorithm to the same
instance of the knapsack problem. (We reorder the items in descending order of
their value-to-weight ratios, though.)
• At the root of the state-space tree, no items have been selected as yet.
• Hence, both the total weight of the items already selected w and their total
value v are equal to 0.
• The value of the upper bound computed by formula is $100.
• Node 1, the left child of the root, represents the subsets that include item 1.
• The total weight and value of the items already included are 4 and $40,
respectively; the value of the upper bound is 40 + (10 − 4) ∗ 6 = $76.
• Node 2 represents the subsets that do not include item 1.
• Accordingly, w = 0, v = $0, and ub = 0 + (10 − 0) ∗ 6 = $60.
• Since node 1 has a larger upper bound than the upper bound of node 2, it is
more promising for this maximization problem, and we branch from node 1
first.
• Its children—nodes 3 and 4—represent subsets with item 1 and with and
without item 2, respectively.
• Since the total weight w of every subset represented by node 3 exceeds the
knapsack’s capacity, node 3 can be terminated immediately.
• Node 4 has the same values of w and v as its parent; the upper bound ub is
equal to 40 + (10 − 4) ∗ 5 = $70.
• Selecting node 4 over node 2 for the next branching (why?), we get nodes 5 and
6 by respectively including and excluding item 3.
• The total weights and values as well as the upper bounds for these nodes are
computed in the same way as for the preceding nodes.
• Branching from node 5 yields node 7, which represents no feasible solutions,
and node 8, which represents just a single subset {1, 3} of value $65.
• The remaining live nodes 2 and 6 have smaller upper-bound values than the
value of the solution represented by node 8.
• Hence, both can be terminated, making the subset {1, 3} of node 8 the optimal
solution to the problem.
• Solving the knapsack problem by a branch-and-bound algorithm has a rather
unusual characteristic.
• Typically, internal nodes of a state-space tree do not define a point of the
problem’s search space, because some of the solution’s components remain
undefined.
• For the knapsack problem, however, every node of the tree represents a subset
of the items given.
• We can use this fact to update the information about the best subset seen so
far after generating each new node in the tree.
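• The item data for this instance also appeared only as a figure on the original slides; the
values used below are inferred from the arithmetic quoted above (item 1: w = 4, v = $40;
item 2: w = 7, v = $42; item 3: w = 5, v = $25; item 4: w = 3, v = $12; W = 10, already ordered
by value-to-weight ratio 10 ≥ 6 ≥ 5 ≥ 4). The short Python sketch shows how the upper
bound ub is computed at each node.

    # Instance assumed from the worked numbers above (items already ordered by v/w).
    weights, values, W = [4, 7, 5, 3], [40, 42, 25, 12], 10

    def upper_bound(i, w, v):
        """ub = v + (W - w) * (v_{i+1} / w_{i+1}), where i items have been decided on,
        w and v are the weight and value already collected, and item i (0-based) is the
        best remaining item; if no items remain, the bound is just v."""
        if i < len(weights):
            return v + (W - w) * (values[i] / weights[i])
        return v

    print(upper_bound(0, 0, 0))    # 100.0 - the root
    print(upper_bound(1, 4, 40))   # 76.0  - node 1: item 1 included
    print(upper_bound(1, 0, 0))    # 60.0  - node 2: item 1 excluded
    print(upper_bound(2, 4, 40))   # 70.0  - node 4: item 1 included, item 2 excluded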
P, NP, and NP-Complete Problems
• In the study of the computational complexity of problems, the first concern of
both computer scientists and computing professionals is whether a given
problem can be solved in polynomial time by some algorithm.
• DEFINITION 1 We say that an algorithm solves a problem in polynomial time if
its worst-case time efficiency belongs to O(p(n)) where p(n) is a polynomial of
the problem’s input size n. (Note that since we are using big-oh notation here,
problems solvable in, say, logarithmic time are solvable in polynomial time as
well.) Problems that can be solved in polynomial time are called tractable, and
problems that cannot be solved in polynomial time are called intractable.
• Most problems discussed can be solved in polynomial time by some algorithm.
• They include computing the product and the greatest common divisor of two
integers, sorting a list, searching for a key in a list or for a pattern in a text
string, checking connectivity and acyclicity of a graph, and finding a minimum
spanning tree and shortest paths in a weighted graph.
• Informally, we can think about problems that can be solved in polynomial time
as the set that computer science theoreticians call P.
• A more formal definition includes in P only decision problems, which are
problems with yes/no answers.
• DEFINITION 2 Class P is a class of decision problems that can be solved in
polynomial time by (deterministic) algorithms. This class of problems is called
polynomial.
• It is natural to wonder whether every decision problem can be solved in
polynomial time. The answer to this question turns out to be no. In fact, some
decision problems cannot be solved at all by any algorithm. Such problems are
called undecidable, as opposed to decidable problems that can be solved by
an algorithm.
• A famous example of an undecidable problem was given by Alan Turing in
1936.
• The problem in question is called the halting problem: given a computer
program and an input to it, determine whether the program will halt on that
input or continue working indefinitely on it.
• DEFINITION 3 A nondeterministic algorithm is a two-stage procedure that
takes as its input an instance I of a decision problem and does the following:
– Nondeterministic (“guessing”) stage: An arbitrary string S is generated
that can be thought of as a candidate solution to the given instance I (but
may be complete gibberish as well).
– Deterministic (“verification”) stage: A deterministic algorithm takes both I
and S as its input and outputs yes if S represents a solution to instance I.
(If S is not a solution to instance I , the algorithm either returns no or is
allowed not to halt at all.)
• We say that a nondeterministic algorithm solves a decision problem if and
only if for every yes instance of the problem it returns yes on some execution.
(In other words, we require a nondeterministic algorithm to be capable of
“guessing” a solution at least once and to be able to verify its validity. And, of
course, we do not want it to ever output a yes answer on an instance for
which the answer should be no.) Finally, a nondeterministic algorithm is said
to be nondeterministic polynomial if the time efficiency of its verification
stage is polynomial.
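• As a concrete illustration (not from the slides), the deterministic verification stage for the
subset-sum decision problem runs in polynomial time: given an instance (A, d) and a
candidate string S describing a subset, it simply checks that S uses only elements of A and
that those elements sum to d.

    def verify_subset_sum(A, d, S):
        """Polynomial-time verification stage: accept the candidate certificate S only if
        it is drawn from the elements of A and its elements sum exactly to d."""
        remaining = list(A)
        for x in S:
            if x not in remaining:       # S uses an element that is not available in A
                return False
            remaining.remove(x)
        return sum(S) == d

    # verify_subset_sum([1, 2, 5, 6, 8], 9, [1, 8]) -> True  (a "yes" certificate)
    # verify_subset_sum([1, 2, 5, 6, 8], 9, [2, 5]) -> False (not a solution)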
• DEFINITION 4 Class NP is the class of decision problems that can be solved by
nondeterministic polynomial algorithms. This class of problems is called
nondeterministic polynomial.
• Most decision problems are in NP.
• If a problem is in P, we can use the deterministic polynomial-time algorithm that
solves it in the verification stage of a nondeterministic algorithm that simply
ignores the string S generated in its nondeterministic (“guessing”) stage.
• Informally, an NP-complete problem is a problem in NP that is as difficult as any
other problem in this class because, by definition, any other problem in NP can
be reduced to it in polynomial time.
• DEFINITION 5 A decision problem D1 is said to be polynomially reducible to a
decision problem D2, if there exists a function t that transforms instances of D1
to instances of D2 such that:
– t maps all yes instances of D1 to yes instances of D2 and all no instances of
D1 to no instances of D2
– t is computable by a polynomial time algorithm.
• This definition immediately implies that if a problem D1 is polynomially
reducible to some problem D2 that can be solved in polynomial time, then
problem D1 can also be solved in polynomial time.
• DEFINITION 6 A decision problem D is said to be NP-complete if:
– it belongs to class NP
– every problem in NP is polynomially reducible to D.
• The notion of NP-completeness requires, however, polynomial reducibility of
all problems in NP, both known and unknown, to the problem in question.