Design and Analysis of Algorithms
UNIT 5: Limitations of Algorithm Power
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
Lower-Bound Arguments

1. Introduction to Lower-Bound Arguments:
   1. Lower-bound arguments assess the efficiency of algorithms by comparing them to other algorithms that solve the same problem.
   2. It's essential to understand both the efficiency class of an algorithm and the minimum number of operations required to solve a problem.
2. Trivial Lower Bounds:
   1. The simplest method for obtaining a lower-bound class is by counting the number of input and output items processed by any algorithm.
   2. For example, generating all permutations of n items requires Ω(n!) operations, because there are n! possible permutations.
   3. Similarly, evaluating a polynomial of degree n requires processing all n coefficients, resulting in a lower bound of Ω(n).

3. Information-Theoretic Arguments:
   1. These arguments establish lower bounds based on the amount of information an algorithm needs to produce.
   2. For example, in a number-guessing game, an algorithm needs at least ⌈log₂ n⌉ steps to guess a number between 1 and n.
   3. Each question in the game yields at most 1 bit of information, hence the lower bound of ⌈log₂ n⌉ steps.
4. Adversary Arguments:
   1. Adversary arguments involve playing the role of a hostile adversary to prove lower bounds.
   2. For example, in merging two sorted lists of n elements each, an adversary can force any correct algorithm to make at least 2n − 1 key comparisons.
   3. By providing specific rules for answering comparisons, the adversary ensures that the algorithm follows the most time-consuming path.
   4. Adversary arguments establish lower bounds by showing that no algorithm can perform better than the level of difficulty set by the adversary.

5. Problem Reduction:
   1. This approach compares the complexity of one problem to another by reducing one problem to the other.
   2. If problem Q, with a known lower bound, can be reduced to problem P, then P is at least as hard as Q, and Q's lower bound applies to problem P as well.
   3. For example, the element uniqueness problem, with its known Ω(n log n) lower bound, can be reduced to the Euclidean minimum spanning tree problem, establishing a lower bound of Ω(n log n) for the latter.

By employing these different methods, we can establish lower bounds for various problems, providing insights into the inherent complexity of algorithmic tasks.
Decision Trees

Decision Trees Overview:
1. Decision trees are a visual representation of how an algorithm makes decisions based on comparisons of input elements.
2. Each internal node in the tree represents a comparison operation, usually denoted as a condition (e.g., "a < b").
3. The branches from each node represent the possible outcomes of the comparison (e.g., "yes" or "no").
4. The leaves of the tree represent the possible outcomes or final states of the algorithm for a given input.
Analyzing the Decision Tree:
• The height of the decision tree corresponds to the maximum number of comparisons needed to reach a final state.
• The worst-case scenario for the algorithm occurs when it follows the longest path from the root to a leaf node.
• The number of comparisons made by the algorithm in the worst case is therefore equal to the height of the decision tree.
• The height of a binary tree with l leaves is at least ⌈log₂ l⌉, as determined by the inequality below.

Lower Bound on Decision Tree Height:
1. The inequality h ≥ ⌈log₂ l⌉ provides a lower bound on the height (or depth) of binary decision trees.
2. Here, h represents the height of the tree, and l represents the number of leaves (final outcomes) in the tree.
3. This inequality implies that the height of a decision tree must be at least log₂ of the number of its leaves.
4. In other words, it provides a benchmark for assessing the performance of comparison-based algorithms: they cannot be more efficient than this lower bound.
Application to Sorting and Searching Algorithms:
1. Decision trees can be used to analyze the performance of sorting and searching algorithms by considering the number of comparisons they make.
2. By constructing decision trees for these algorithms, we can determine their worst-case time complexity based on the height of the trees and the number of leaves, as the sketch below illustrates.
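For instance, any comparison-based sort of n items has a decision tree with at least n! leaves, so its worst-case comparison count is at least ⌈log₂(n!)⌉. A minimal Python sketch (function names are illustrative, not from the slides) that computes this bound:

```python
import math

def sorting_lower_bound(n: int) -> int:
    """Minimum worst-case comparisons for any comparison-based sort:
    the decision tree has at least n! leaves, so height >= ceil(log2(n!))."""
    return math.ceil(math.log2(math.factorial(n)))

for n in (4, 8, 16):
    print(n, sorting_lower_bound(n))  # e.g. n=4 -> 5 comparisons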
P, NP, and NP-Complete Problems

● Can a computer solve all computational problems? The answer is no; but for solvable problems, how much time would a solution take?
● In computer science, there exist problems whose efficient solutions have not yet been found; problems are divided into classes known as complexity classes.
● P, NP, NP-Hard, and NP-Complete are the complexity classes/sets for such problems. Any solvable computational problem falls into one of these categories.
P (Polynomial Time):
• P refers to the class of decision problems (problems with yes/no answers) that can be solved by algorithms running in polynomial time.
• In simpler terms, these are problems where the time it takes to solve them grows at most polynomially with the size of the input.
• For example, sorting a list of numbers or finding the shortest path in a graph are problems that belong to P, because algorithms like quicksort and Dijkstra's algorithm solve them in polynomial time.

NP (Nondeterministic Polynomial Time):
• NP refers to the class of decision problems where a proposed solution can be verified in polynomial time.
• In other words, if someone gives you a solution, you can quickly check whether it's correct or not.
• However, finding the solution itself may not be easy and may require exponential time.
• An example of an NP problem is the traveling salesman problem (in its decision version, with a bound on the tour length): given a list of cities and the distances between them, it's easy to verify that a proposed route visits each city exactly once, returns to the starting city, and stays within the bound, but finding the shortest route is computationally difficult.
NP-Complete:
• NP-complete (NPC) problems are a special class of NP problems that are believed to be among the hardest in NP.
• If you can find a polynomial-time algorithm for any one of these problems, you can solve all NP problems in polynomial time.
• An example of an NP-complete problem is the Boolean satisfiability problem (SAT), where you're given a Boolean formula and asked whether there is an assignment of truth values to its variables that makes the whole formula true. If you can solve SAT in polynomial time, you can solve any NP problem in polynomial time.

NP-Hard:
• NP-Hard problems are at least as hard as the hardest problems in NP but may not necessarily be in NP themselves.
• Essentially, an NP-Hard problem is a problem for which no polynomial-time algorithm is known and which is at least as hard as any problem in NP.
• An example of an NP-Hard problem is the halting problem, which asks whether a given program halts or runs forever on a given input. It's undecidable, meaning no algorithm can solve it for all possible inputs.
Class         Nature                                Example problems                               Difficulty
P             Deterministic polynomial time         Basic math, string operations, sorting,       Easy
                                                    shortest path, linear and binary search
NP            Nondeterministic polynomial time      Integer factorization, graph isomorphism      Medium
NP-Complete   NP-Hard + nondeterministic            Traveling salesman, graph coloring            Hard
              polynomial time
NP-Hard       Nondeterministic, no known            K-means clustering, traveling salesman,       Hardest
              polynomial time                       graph coloring
[Diagram: P lies inside NP; NP-Complete is the intersection of NP and NP-Hard.]
Challenges of Numerical Algorithms

1. Approximation Instead of Exact Solutions:
   1. Most numerical analysis problems cannot be solved exactly.
   2. They require approximate solutions because they involve continuous mathematics, where infinite precision is not feasible.
   3. For example, computing the value of e^x or evaluating definite integrals.

2. Truncation Errors:
   1. When we approximate infinite processes with finite ones, errors occur due to truncation.
   2. For instance, using Taylor polynomials to approximate functions, or numerical integration methods like the trapezoidal rule; the sketch below illustrates this.
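A minimal sketch of truncation error (all names are mine): approximating e^x by the degree-n Taylor polynomial Σ x^k/k! and watching the error shrink as n grows.

```python
import math

def taylor_exp(x: float, n: int) -> float:
    """Degree-n Taylor polynomial of e^x about 0: sum of x^k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x = 1.0
for n in (2, 4, 8):
    approx = taylor_exp(x, n)
    print(n, approx, abs(math.exp(x) - approx))  # truncation error shrinks with n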
3. Round-off Errors:
   1. Computers represent real numbers with finite precision due to limited storage.
   2. Round-off errors arise from the discrepancy between the true value of a number and its representation in the computer's memory, particularly in floating-point arithmetic.
   3. These errors occur during arithmetic operations and can lead to inaccuracies in computations, especially for very large or very small numbers.
   4. Overflow and underflow are specific issues related to round-off errors, occurring when numbers exceed the range of representable values or become too small to be accurately represented, respectively.

4. Instability and Ill-conditioning:
   1. Instability occurs when round-off errors propagate and amplify throughout the computation, leading to inaccurate results.
   2. Ill-conditioned problems are highly sensitive to small changes in input, making it challenging to design stable algorithms that produce reliable solutions.

Overall, the challenges in numerical algorithms stem from the need to balance approximation accuracy with computational efficiency while mitigating the effects of truncation and round-off errors to obtain reliable solutions to mathematical problems.
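A quick round-off illustration in Python (standard 64-bit floats): 0.1 has no exact binary representation, so an accumulated sum drifts from the true value.

```python
# 0.1 cannot be represented exactly in binary floating point,
# so repeated addition accumulates round-off error.
total = sum(0.1 for _ in range(10))
print(total)          # 0.9999999999999999, not 1.0
print(total == 1.0)   # False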
Coping with Limitations of Algorithm Power
Backtracking
● N-Queens Problem
● Hamiltonian Cycle
● Sum of Subsets
● Initialization: Begin with an initial state before searching for a solution. This state represents the starting point of the problem.
● State-Space Tree: Construct a tree of the choices being made, called the state-space tree. Each node in the tree represents a partially constructed solution.
● Promising Nodes: Nodes in the state-space tree are classified as promising or non-promising. A promising node indicates that the partial solution it represents could potentially lead to a complete solution.
● Depth-First Search (DFS): Traverse the state-space tree using depth-first search, exploring as far as possible along each branch before backtracking.
● Generate Child Nodes: If the current node is promising, generate its child by adding the first remaining legitimate option for the next component of the solution. Move the processing to this child node.
● Backtracking: If the current node turns out to be non-promising, backtrack to the node's parent to consider alternative options for its last component. If no alternatives exist, backtrack one level up the tree, and so on.
● Stop Condition: If the algorithm reaches a complete solution to the problem, it can either stop (if only one solution is required) or continue searching for additional solutions.
N-Queens problem
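The slides' worked N-Queens figures are not reproduced here; as a stand-in, a compact backtracking sketch in Python (function and variable names are mine). A placement is promising when no previously placed queen shares its column or diagonal.

```python
def solve_n_queens(n: int, row: int = 0, cols: tuple = ()) -> list:
    """Return all solutions; cols[i] is the column of the queen in row i."""
    if row == n:
        return [cols]                      # complete solution reached
    solutions = []
    for col in range(n):
        # promising iff no column clash and no diagonal clash
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            solutions += solve_n_queens(n, row + 1, cols + (col,))
        # a non-promising column is simply skipped: that is the backtracking step
    return solutions

print(len(solve_n_queens(8)))  # 92 solutions for the classic 8-queens board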
Hamiltonian Cycle
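A minimal backtracking sketch for the Hamiltonian cycle problem (the adjacency-set representation and all names are my assumptions): extend the path vertex by vertex, backtracking when the next vertex is already used or not adjacent.

```python
def hamiltonian_cycle(adj: dict, start):
    """Return one Hamiltonian cycle as a vertex list, or None."""
    n = len(adj)

    def extend(path):
        if len(path) == n:                         # all vertices used:
            return path + [start] if start in adj[path[-1]] else None
        for v in adj[path[-1]]:
            if v not in path:                      # promising: unvisited neighbor
                result = extend(path + [v])
                if result:
                    return result
        return None                                # dead end: backtrack

    return extend([start])

graph = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "a"}}
print(hamiltonian_cycle(graph, "a"))  # e.g. ['a', 'b', 'c', 'd', 'a']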
Sum of Subsets
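Again the worked figures are omitted; a small backtracking sketch for sum of subsets (names mine). Sorting the items lets us prune a node as non-promising when even the smallest remaining item overshoots the target.

```python
def subset_sums(items, target):
    """Return all subsets (as lists) of items summing exactly to target."""
    items = sorted(items)
    results = []

    def explore(i, chosen, remaining):
        if remaining == 0:
            results.append(list(chosen))
            return
        # prune: no items left, or smallest remaining item already too big
        if i == len(items) or items[i] > remaining:
            return
        chosen.append(items[i])
        explore(i + 1, chosen, remaining - items[i])  # include items[i]
        chosen.pop()                                  # backtrack
        explore(i + 1, chosen, remaining)             # exclude items[i]

    explore(0, [], target)
    return results

print(subset_sums([3, 5, 6, 7], 15))  # [[3, 5, 7]]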
Branch & Bound: Assignment Problem

● The Assignment Problem involves assigning tasks to agents in such a way that the total cost or time is minimized, with each task being assigned to exactly one agent and each agent handling exactly one task.
● The Branch and Bound method is an efficient algorithmic approach to solve this problem.
● Here are the steps involved in solving the Assignment Problem using the Branch and Bound method:
1. Initialization:
   1. Define the cost matrix C of size n × n, where C[i][j] represents the cost of assigning task i to agent j.
   2. Initialize the lower bound (LB) for the root node, which is typically set to 0.
   3. Initialize the solution path and the minimum cost found so far.

2. Node Representation:
   1. Each node in the search tree represents a partial assignment of tasks to agents.
   2. A node is represented by the current partial assignment, the current cost, and the reduced cost matrix.

3. Calculate Reduced Cost Matrix:
   1. For the initial node, reduce the cost matrix by subtracting the smallest element in each row from all elements in that row, and then subtracting the smallest element in each column from all elements in that column.
   2. The sum of all subtracted elements gives the lower bound for the node.

4. Branching:
   1. Select the most promising node (usually the node with the smallest lower bound) to expand.
   2. For the selected node, create child nodes by assigning the next task to each possible agent not yet assigned.
   3. Update the cost matrix by fixing the assignment and reducing the remaining matrix.
5. Bounding:
   1. For each child node, compute a new lower bound by reducing the cost matrix again.
   2. If the lower bound of a node is greater than the minimum cost found so far, prune the node (i.e., do not consider it further).

6. Update Solution:
   1. If a complete assignment is found (i.e., all tasks are assigned to agents), check if its total cost is less than the current minimum cost.
   2. If it is, update the minimum cost and the best assignment found so far.

7. Repeat:
   1. Continue the process of selecting the most promising node, branching, and bounding until all nodes are either pruned or a complete assignment is found.

8. Terminate:
   1. The algorithm terminates when all nodes have been processed or pruned.
   2. The current best assignment and its cost are the optimal solution to the Assignment Problem.
Example cost matrix (tasks A-D, agents 1-4):

      1   2   3   4
A     9   2   7   8
B     6   4   3   7
C     5   8   1   8
D     7   6   2   4

Root lower bound: lb = 2 + 3 + 1 + 2 = 8 (the sum of the row minima).

[State-space tree: the root branches on A's agent (A=1, A=2, A=3, A=4); under A=2 it branches on B (B=1, B=3, B=4); the complete leaves reached are A=2, B=1, C=3, D=4 and A=2, B=1, C=4, D=3.]
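A best-first branch-and-bound sketch for this instance (all names mine). It uses a simpler optimistic bound than the slides' matrix reduction: cost so far plus, for each unassigned task, the cheapest still-free agent; at the root this also gives lb = 8.

```python
import heapq

def assign(cost):
    """Best-first branch and bound for the assignment problem.
    cost[i][j] = cost of giving task i to agent j."""
    n = len(cost)

    def bound(level, used, total):
        # optimistic bound: cheapest free agent for each remaining task
        return total + sum(min(cost[i][j] for j in range(n) if j not in used)
                           for i in range(level, n))

    best_cost, best_plan = float("inf"), None
    heap = [(bound(0, frozenset(), 0), 0, 0, (), frozenset())]
    while heap:
        lb, total, level, plan, used = heapq.heappop(heap)
        if lb >= best_cost:
            continue                      # prune: cannot beat incumbent
        if level == n:
            best_cost, best_plan = total, plan
            continue
        for j in range(n):
            if j not in used:
                t = total + cost[level][j]
                child = (bound(level + 1, used | {j}, t), t,
                         level + 1, plan + (j,), used | {j})
                if child[0] < best_cost:
                    heapq.heappush(heap, child)
    return best_cost, best_plan

c = [[9, 2, 7, 8], [6, 4, 3, 7], [5, 8, 1, 8], [7, 6, 2, 4]]
print(assign(c))  # (13, (1, 0, 2, 3)): A=2, B=1, C=3, D=4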
Knapsack Problem

• Given n items of known weights wi and values vi, i = 1, 2, ..., n, and a knapsack of capacity W, find the most valuable subset of the items that fits in the knapsack.
• It is convenient to order the items of a given instance in descending order by their value-to-weight ratios.
• Then the first item gives the best payoff per weight unit and the last one gives the worst payoff per weight unit, with ties resolved arbitrarily:

  v1/w1 ≥ v2/w2 ≥ ... ≥ vn/wn — (1)
Each node on the ith level of this tree, 0 ≤ i ≤ n, represents all the subsets of n items that include a particular selection made from the first i ordered items.

• This particular selection is uniquely determined by the path from the root to the node.
• A branch going to the left indicates the inclusion of the next item, and a branch going to the right indicates its exclusion.
• A simple way to compute the upper bound (ub) is to add to v, the total value of the items already selected, the product of the remaining capacity of the knapsack, W − w, and the best per-unit payoff among the remaining items, which is vi+1/wi+1:

  ub = v + (W − w)(vi+1/wi+1) — (2)

Example: Knapsack capacity W = 10

Item   Weight   Value   Value/Weight
1      4        $40     10
2      7        $42     6
3      5        $25     5
4      3        $12     4
• At the root of the state-space tree, no items have been selected as yet.
• Hence, both the total weight of the items already selected, w, and their total value, v, are equal to 0.
• The value of the upper bound computed by formula (2) is ub = 0 + (10 − 0)(10) = $100.
• The state-space tree of the best-first branch-and-bound algorithm for this instance of the knapsack problem unfolds as follows [tree figure omitted]:

1. Node 1, the left child of the root, represents the subsets that include item 1.
2. The total weight and value of the items already included are 4 and $40, respectively; the value of the upper bound is 40 + (10 − 4)(6) = $76.
3. Node 2 represents the subsets that do not include item 1.
4. Accordingly, w = 0, v = $0, and ub = 0 + (10 − 0)(6) = $60. Since node 1 has a larger upper bound than node 2, it is more promising for this maximization problem, and we branch from node 1 first.
5. Its children, nodes 3 and 4, represent subsets with item 1 and with and without item 2, respectively.
6. Since the total weight w of every subset represented by node 3 exceeds the knapsack's capacity (4 + 7 = 11 > 10), node 3 can be terminated immediately.

A runnable sketch of this best-first search follows.
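The sketch below uses upper bound (2) and assumes the items are pre-sorted by value-to-weight ratio as in (1); names are mine.

```python
import heapq

def knapsack_bb(weights, values, W):
    """Best-first branch and bound; items pre-sorted by value/weight ratio.
    A node is (level i, weight w, value v); ub follows formula (2)."""
    n = len(weights)

    def ub(i, w, v):
        ratio = values[i] / weights[i] if i < n else 0
        return v + (W - w) * ratio         # formula (2)

    best_v, best_set = 0, ()
    heap = [(-ub(0, 0, 0), 0, 0, 0, ())]   # max-heap via negated upper bound
    while heap:
        neg_ub, i, w, v, taken = heapq.heappop(heap)
        if -neg_ub <= best_v:              # cannot beat incumbent: prune
            continue
        if i == n:
            best_v, best_set = v, taken
            continue
        # left child: include item i (only if it fits)
        if w + weights[i] <= W:
            heapq.heappush(heap, (-ub(i + 1, w + weights[i], v + values[i]),
                                  i + 1, w + weights[i], v + values[i],
                                  taken + (i + 1,)))
        # right child: exclude item i
        heapq.heappush(heap, (-ub(i + 1, w, v), i + 1, w, v, taken))
    return best_v, best_set

w = [4, 7, 5, 3]; v = [40, 42, 25, 12]
print(knapsack_bb(w, v, 10))  # (65, (1, 3)): items 1 and 3, value $65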
Traveling Salesman Problem (TSP): Branch and Bound Method

Problem Statement: Given a set of cities and the distances between each pair of cities, find the shortest possible route that visits each city exactly once and returns to the origin city.

The Branch and Bound method is an optimization algorithm that systematically explores and prunes the search tree to find the optimal solution for the TSP.
Steps Involved

1. Initialization:
   1. Define the distance matrix D, where D[i][j] represents the distance between city i and city j.
   2. Initialize a priority queue to manage the nodes of the search tree, starting with an initial node representing the start city.

2. Node Representation:
   1. Each node in the search tree represents a partial tour.
   2. The node includes:
      1. The current partial tour.
      2. The current cost of the partial tour.
      3. The level (number of cities visited so far).
      4. The reduced distance matrix.
      5. The lower bound (LB) of the node.

3. Calculate Reduced Cost Matrix:
   • For the initial node, reduce the distance matrix by subtracting the smallest element in each row from all elements in that row, and then subtracting the smallest element in each column from all elements in that column.
   • The sum of all subtracted elements gives the lower bound for the root node.

4. Branching:
   • Select the most promising node (the node with the lowest LB) from the priority queue.
   • Generate child nodes by extending the current partial tour to each unvisited city.
   • For each child node, update the partial tour, increase the level, and update the reduced distance matrix.
   • Total cost = parent matrix cost + reduction + path cost.
5. Calculate the Lower Bound for Each Node:
   • For each child node, calculate a new lower bound by further reducing the distance matrix.
   • Add the reduced costs to the current cost of the tour to get the lower bound.

6. Bounding:
   • If the lower bound of a node is greater than or equal to the current minimum tour cost found, prune the node.
   • Otherwise, add the node to the priority queue.

7. Repeat:
   • Select the next most promising node from the priority queue and repeat the branching and bounding steps.

8. Update Solution:
   • If a node represents a complete tour (visiting all cities and returning to the origin city), compare its cost to the current minimum tour cost.
   • If it is lower, update the minimum tour cost and the best tour.

9. Terminate:
   • The algorithm terminates when the priority queue is empty or a complete tour with the minimum cost is found.
   • The current best tour and its cost are the optimal solution.

A simplified runnable sketch follows.
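The sketch below follows the best-first control flow above but, for brevity, uses a simpler lower bound of my own (cost so far plus the cheapest outgoing edge of every city not yet departed from) instead of full matrix reduction; all names are mine.

```python
import heapq

def tsp_bb(D):
    """Best-first branch and bound for TSP on distance matrix D (city 0 = start).
    Lower bound: tour cost so far + cheapest outgoing edge of each city
    that has not yet been departed from."""
    n = len(D)
    cheapest = [min(D[i][j] for j in range(n) if j != i) for i in range(n)]

    def lb(cost, path):
        # cities in path[:-1] have already paid their outgoing edge
        return cost + sum(cheapest[i] for i in range(n) if i not in path[:-1])

    best_cost, best_tour = float("inf"), None
    heap = [(lb(0, (0,)), 0, (0,))]
    while heap:
        bound, cost, path = heapq.heappop(heap)
        if bound >= best_cost:
            continue                                   # prune
        if len(path) == n:                             # complete: close the tour
            total = cost + D[path[-1]][0]
            if total < best_cost:
                best_cost, best_tour = total, path + (0,)
            continue
        for city in range(n):
            if city not in path:
                c = cost + D[path[-1]][city]
                child = (lb(c, path + (city,)), c, path + (city,))
                if child[0] < best_cost:
                    heapq.heappush(heap, child)
    return best_cost, best_tour

# the 4-city instance used in the approximation examples below (A,B,C,D = 0..3)
D = [[0, 1, 3, 6], [1, 0, 2, 3], [3, 2, 0, 1], [6, 3, 1, 0]]
print(tsp_bb(D))  # (8, (0, 1, 3, 2, 0)): A-B-D-C-A, length 8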
Approximation Algorithms for NP-Hard Problems

Approximation Algorithms as a Solution

Approximating Solutions:
• Fast Algorithms: Instead of exact solutions, we use fast algorithms to get approximate solutions.
• Good Enough Solutions: In many practical applications, an approximate solution is sufficient.

Heuristics:
• Definition: A heuristic is a common-sense rule or strategy derived from experience.

Given an optimization problem P, algorithm A is said to be an approximation algorithm for P if, for any given instance I, it returns an approximate solution, that is, a feasible solution.
● Guaranteed to run in polynomial time.
● Guaranteed to get a solution that is close to the optimal solution (near-optimal).
● Approximation Algorithms: Provide practical solutions to NP-Hard problems by trading off exactness for efficiency.
● Heuristics and Performance: Use problem-specific heuristics to develop fast algorithms and measure their performance using accuracy ratios.
● Real-Life Application: These algorithms are particularly useful when exact solutions are impractical and approximate solutions are good enough.

Accuracy Ratio r(sa):
• sa — the approximate solution.
• f(sa) — the value of the objective function for the solution given by the approximation algorithm.
• f(s*) — the value of the objective function for the exact (optimal) solution of the problem.
• r(sa) = f(sa)/f(s*) when minimizing the objective function.
• r(sa) = f(s*)/f(sa) when maximizing the objective function.

Desirable Properties:
• The closer r(sa) is to 1, the better the approximation.
Polynomial-Time Approximation Algorithms

Definition:
• An algorithm is a c-approximation algorithm if the accuracy ratio r(sa) does not exceed c for any instance.
• The performance ratio RA is the smallest c for which the above condition holds.

Performance Ratio:
• It indicates the quality of the approximation algorithm. The goal is to have RA as close to 1 as possible.

[Diagram: approximation algorithms for the TSP split into the greedy approach (Nearest-Neighbor algorithm, Multifragment-heuristic algorithm) and minimum spanning tree-based algorithms (twice-around-the-tree algorithm, Christofides algorithm).]
Approximation Algorithms for TSP

Greedy Approach: 1. Nearest-Neighbor Algorithm
1. Select a random city.
2. Find the nearest unvisited city and go there.
3. Are there any unvisited cities left? If yes, repeat step 2.
4. Return to the first city.

Advantages:
• Simplicity: Easy to understand and implement.
• Speed: Runs in O(n²) time, where n is the number of cities.

Limitations:
• Suboptimal: May not find the shortest possible tour.
• Greedy Nature: Locally optimal choices may lead to a globally suboptimal solution.
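A minimal nearest-neighbor sketch (distance-matrix representation and names are my assumptions), run on the 4-city instance of the example below:

```python
def nearest_neighbor(D, start=0):
    """Greedy TSP tour: repeatedly visit the closest unvisited city."""
    n = len(D)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: D[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)                    # return to the first city
    length = sum(D[a][b] for a, b in zip(tour, tour[1:]))
    return tour, length

# A,B,C,D = 0..3: A-B=1, B-C=2, C-D=1, A-C=3, B-D=3, A-D=6
D = [[0, 1, 3, 6], [1, 0, 2, 3], [3, 2, 0, 1], [6, 3, 1, 0]]
print(nearest_neighbor(D))  # ([0, 1, 2, 3, 0], 10): A-B-C-D-A, length 10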
[Graph: cities A, B, C, D with edge lengths A-B = 1, B-C = 2, C-D = 1, A-C = 3, B-D = 3, A-D = 6.]

● sa: approximate solution (nearest neighbor from A): A-B-C-D-A = 1 + 2 + 1 + 6 = length 10.
[Graph: the same instance, with the optimal tour highlighted.]

● s*: optimal solution: A-B-D-C-A = 1 + 3 + 1 + 3 = length 8.

r(sa) = f(sa)/f(s*) = 10/8 = 1.25

Tour sa is 25% longer than the optimal tour s*.
2. Multifragment-Heuristic Algorithm

Problem Statement: Given a set of cities and the distances between each pair of cities, the objective is to find the shortest possible route that visits each city exactly once and returns to the starting city.

Multifragment-Heuristic Approximation Algorithm: The multifragment heuristic builds the tour by connecting edges (fragments) in a way that avoids forming cycles prematurely, until the end, when all cities are connected into a single tour.
Steps Involved in the Multifragment Heuristic

Initialization:
• Start with the set of all edges, each edge representing the distance between two cities.
• Sort all edges in increasing order of their weights (distances).
Building the Tour:
• Initialize an empty set for the tour edges.
• Add edges one by one to the tour, following the sorted order, with the following constraints:
  • Avoid adding an edge that would create a cycle (except when it completes the tour).
  • Avoid adding an edge that would increase the degree of any vertex (city) to more than 2.
• Continue adding edges until all cities are included in the tour.

Completion:
• The process continues until all cities are connected in a single tour that visits each city exactly once and returns to the starting city. A sketch follows.
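A small sketch of the multifragment heuristic under the constraints above (union-find is my choice for cycle detection; names mine):

```python
def multifragment(D):
    """Build a TSP tour from cheapest edges, keeping every degree <= 2
    and avoiding premature cycles (union-find tracks the fragments)."""
    n = len(D)
    edges = sorted((D[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    degree, chosen = [0] * n, []
    for w, i, j in edges:
        if degree[i] < 2 and degree[j] < 2:
            same_fragment = find(i) == find(j)
            # a cycle is allowed only for the final, tour-closing edge
            if not same_fragment or len(chosen) == n - 1:
                chosen.append((i, j))
                degree[i] += 1; degree[j] += 1
                parent[find(i)] = find(j)
                if len(chosen) == n:
                    break
    return chosen, sum(D[i][j] for i, j in chosen)

D = [[0, 1, 3, 6], [1, 0, 2, 3], [3, 2, 0, 1], [6, 3, 1, 0]]
print(multifragment(D))  # ([(0, 1), (2, 3), (1, 2), (0, 3)], 10)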
Worked example on the same 4-city instance. Edges sorted by weight:

Edge   Weight
A-B    1
C-D    1
B-C    2
B-D    3
C-A    3
D-A    6
● Applying the rules: A-B and C-D are added first, then B-C (no cycle, all degrees stay ≤ 2); B-D and C-A are rejected because they would raise the degree of B or C to 3; D-A then closes the tour.
● sa: approximate solution: A-B-C-D-A = 1 + 2 + 1 + 6 = length 10.
● s*: optimal solution: A-B-D-C-A = 1 + 3 + 1 + 3 = length 8.

r(sa) = f(sa)/f(s*) = 10/8 = 1.25

Tour sa is 25% longer than the optimal tour s*.
Advantages:
• Simple Implementation: Relatively straightforward to implement.
• Efficiency: Works in O(n² log n) time due to the sorting and processing of edges.

Limitations:
• Suboptimal: While it often finds good solutions, it may not always yield the optimal solution.
• Potential for Poor Performance: In some cases, the quality of the solution can vary significantly from the optimal.
Minimum Spanning Tree-Based Algorithms: 1. Twice-Around-the-Tree Algorithm

Step 1: Construct a minimum spanning tree.
Step 2: Let the root be an arbitrary vertex.
Step 3: Traverse all the vertices by DFS and record the sequence of vertices (both visited and unvisited).
Step 4: Use a shortcut strategy to generate a feasible tour; remember that cycles (other than the final tour) must not be formed.
Step 1: Construct a minimum spanning tree. [Figure omitted.]

Step 3: Traverse all the vertices by DFS. [Figure omitted.]

Step 4: Use the shortcut strategy to generate a feasible tour.
• Record the visited nodes: a-b-c-b-d-e-d-b-a (then remove the duplicates).
• Resulting tour: a-b-c-d-e-a = length 39.

A sketch of the whole algorithm follows.
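A compact sketch of twice-around-the-tree (Prim's MST, then a DFS preorder, which is exactly the shortcut walk; the slides' 5-vertex instance is not reproduced, so the earlier 4-city matrix is reused; names mine):

```python
def twice_around_the_tree(D, root=0):
    """MST (Prim) -> DFS preorder -> shortcut repeated vertices into a tour."""
    n = len(D)
    # Prim's algorithm: grow the tree from the root, one cheapest edge at a time
    in_tree, children = {root}, {i: [] for i in range(n)}
    while len(in_tree) < n:
        _, u, v = min((D[u][v], u, v) for u in in_tree
                      for v in range(n) if v not in in_tree)
        children[u].append(v)
        in_tree.add(v)
    # DFS preorder visits each vertex once: this is the shortcut tour
    tour, stack = [], [root]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    tour.append(root)                     # return to the root
    return tour, sum(D[a][b] for a, b in zip(tour, tour[1:]))

D = [[0, 1, 3, 6], [1, 0, 2, 3], [3, 2, 0, 1], [6, 3, 1, 0]]
print(twice_around_the_tree(D))  # ([0, 1, 2, 3, 0], 10)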
Minimum Spanning Tree-Based Algorithms: 2. Christofides Algorithm

Step 1: Construct a minimum spanning tree.
Step 2: Find the odd-degree vertices in the tree.
Step 3: Find a minimum-weight matching on the odd-degree vertices and add its edges to the tree.
Step 4: Find an Euler cycle in the resulting multigraph.
Step 5: Shortcut the Euler cycle into a TSP tour.
● The MST has four odd-degree vertices: a, b, c, and e.
  ● a: degree 1
  ● b: degree 3
  ● c: degree 1
  ● d: degree 2 (even)
  ● e: degree 1

● Find the Euler cycle and the TSP tour: a-b-c-e-d-a = length 37.
Approximation Algorithms for the Knapsack Problem

[Diagram: approximation algorithms for the knapsack problem split into the greedy approach (for the discrete knapsack and the continuous knapsack) and approximation schemes.]
Greedy Algorithm for the Discrete Knapsack Problem

Step 1: Compute the value-to-weight ratios ri = vi/wi, i = 1, 2, ..., n, for the items given.
Step 2: Sort the items in nonincreasing order of the ratios computed in Step 1.
Step 3: Repeat the following operation until no item is left in the sorted list:
• If the current item on the list fits into the knapsack, place it in the knapsack and proceed to the next item.
• Otherwise, just proceed to the next item.
Example: Knapsack capacity W = 10

Step 1: Find the value-to-weight ratios.

Item   Weight   Value   Value/Weight
1      7        $42     6
2      3        $12     4
3      4        $40     10
4      5        $25     5

Step 2: Arrange the items in nonincreasing order of ratio.

Item   Weight   Value   Value/Weight
3      4        $40     10
1      7        $42     6
4      5        $25     5
2      3        $12     4
Step 3: Take the items in sorted order.

Item   Weight   Value   Value/Weight
3      4        $40     10      (fits: weight 4)
1      7        $42     6       (skipped: 4 + 7 > 10)
4      5        $25     5       (fits: weight 9)
2      3        $12     4       (skipped: 9 + 3 > 10)

Total weight = 9/10
Profit earned = $40 + $25 = $65

A sketch of this greedy procedure follows.
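A minimal sketch of the discrete greedy procedure above (names mine), reproducing the $65 result:

```python
def greedy_knapsack(items, W):
    """items: list of (weight, value). Greedy by value/weight ratio;
    skip anything that no longer fits (discrete version)."""
    order = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    total_w = total_v = 0
    for w, v in order:
        if total_w + w <= W:      # place the item if it fits
            total_w += w
            total_v += v
    return total_w, total_v

items = [(7, 42), (3, 12), (4, 40), (5, 25)]
print(greedy_knapsack(items, 10))  # (9, 65)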
Greedy Algorithm for the Continuous Knapsack Problem

Step 1: Compute the value-to-weight ratios ri = vi/wi, i = 1, 2, ..., n, for the items given.
Step 2: Sort the items in nonincreasing order of the ratios computed in Step 1.
Step 3: Repeat the following operation until no item is left in the sorted list:
• If the current item on the list fits into the knapsack, place it in the knapsack and proceed to the next item.
• Otherwise, take the largest fraction of it that fills the knapsack to its full capacity, and stop.
Example: Knapsack capacity W = 10

Step 1: Find the value-to-weight ratios.

Item   Weight   Value   Value/Weight
1      7        $42     6
2      3        $12     4
3      4        $40     10
4      5        $25     5

Step 2: Arrange the items in nonincreasing order of ratio.

Item   Weight   Value   Value/Weight
3      4        $40     10
1      7        $42     6
4      5        $25     5
2      3        $12     4
Step 3: Take the items in sorted order, splitting the first item that does not fit.

Item   Weight   Value   Value/Weight
3      4        $40     10      (taken whole: weight 4)
1      7        $42     6       (take 6/7 of it: weight 6, value 42 × 6/7 = $36)
4      5        $25     5
2      3        $12     4

Total weight = 10
Profit earned = $40 + $36 = $76
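The continuous variant changes only the last step of the sketch above: split the first item that overflows the remaining capacity. A minimal version (names mine):

```python
def greedy_fractional_knapsack(items, W):
    """items: list of (weight, value). Greedy by ratio; split the first
    item that overflows the remaining capacity, then stop."""
    order = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    total_w = total_v = 0.0
    for w, v in order:
        if total_w + w <= W:
            total_w += w
            total_v += v
        else:
            fraction = (W - total_w) / w     # largest fraction that fits
            total_v += v * fraction
            total_w = W
            break
    return total_w, total_v

items = [(7, 42), (3, 12), (4, 40), (5, 25)]
print(greedy_fractional_knapsack(items, 10))  # (10.0, 76.0)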
Approximation Scheme for the Knapsack Problem

● This scheme was suggested by Prof. Sartaj Sahni.
● The algorithm generates all subsets of k items or fewer, and for each one that fits into the knapsack it adds the remaining items as the greedy algorithm would do (i.e., in nonincreasing order of their value-to-weight ratios).
● The subset of the highest value obtained in this fashion is returned as the algorithm's output.
Example: Knapsack capacity W = 10, k = 2

Item   Weight   Value   Value/Weight
1      4        $40     10
2      7        $42     6
3      5        $25     5
4      1        $4      4

Subset    Weight          Items added greedily   Profit
{}        4+5+1           1, 3, 4                $69
{1}       4+5+1           3, 4                   $69
{2}       7+1             4                      $46
{3}       5+4+1           1, 4                   $69
{4}       1+4+5           1, 3                   $69
{1,2}     11 > W          not feasible
{1,3}     9+1             4                      $69
{1,4}     5+5             3                      $69
{2,3}     12 > W          not feasible
{2,4}     8               —                      $46
{3,4}     6+4             1                      $69

Optimal solution: {1, 3, 4} with W = 10 and profit = $69. A sketch of the scheme follows.
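A short sketch of Sahni's scheme (itertools generates the subsets of size ≤ k; names mine), reproducing the $69 answer:

```python
from itertools import combinations

def sahni_scheme(items, W, k):
    """items: list of (weight, value). Try every subset of <= k items,
    greedily top up each feasible one, return the best value found."""
    n = len(items)
    order = sorted(range(n), key=lambda i: items[i][1] / items[i][0],
                   reverse=True)
    best = 0
    for r in range(k + 1):
        for subset in combinations(range(n), r):
            w = sum(items[i][0] for i in subset)
            if w > W:
                continue                      # infeasible starting subset
            v = sum(items[i][1] for i in subset)
            for i in order:                   # greedy completion
                if i not in subset and w + items[i][0] <= W:
                    w += items[i][0]
                    v += items[i][1]
            best = max(best, v)
    return best

items = [(4, 40), (7, 42), (5, 25), (1, 4)]
print(sahni_scheme(items, 10, 2))  # 69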
Approximation Algorithms for Nonlinear Equations
• Bisection Method
• Method of False Position
• Newton's Method
Bisection Method

● A bracketing method that repeatedly divides an interval in half and selects the subinterval that contains the root.
● It is simple and robust but can be slow to converge.
● The bisection method is a numerical technique used to find the roots of a nonlinear equation f(x) = 0.
● It is a type of bracketing method that repeatedly narrows down an interval that contains the root.
● The method assumes that the function f is continuous on the interval [a, b] and that f(a) and f(b) have opposite signs (so the Intermediate Value Theorem guarantees that there is at least one root in [a, b]).
Steps of the Bisection Method:
1. Initial Interval: Start with an interval [a, b] where f(a) and f(b) have opposite signs.
2. Midpoint Calculation: Compute the midpoint of the interval: c = (a + b)/2.
3. Function Evaluation: Evaluate the function at the midpoint, f(c).
4. Interval Update:
   • If f(c) = 0, then c is the root.
   • If f(a) · f(c) < 0, then the root lies in the interval [a, c]. Set b = c.
   • If f(b) · f(c) < 0, then the root lies in the interval [c, b]. Set a = c.
5. Convergence Check: Repeat steps 2-4 until the interval [a, b] is sufficiently small, or until |f(c)| is below a predefined tolerance level.
Example: Solve the equation x³ − x − 1 = 0 using the bisection method, correct to three decimal places. [Worked iteration table omitted; see the sketch below.]
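A direct implementation of the steps above, applied to f(x) = x³ − x − 1 on [1, 2] (the bracketing interval is my choice; f(1) = −1 < 0 < f(2) = 5):

```python
def bisection(f, a, b, tol=1e-4):
    """Bisection: halve [a, b] while keeping a sign change inside it."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        c = (a + b) / 2
        if f(c) == 0:
            return c
        if f(a) * f(c) < 0:
            b = c          # root lies in [a, c]
        else:
            a = c          # root lies in [c, b]
    return (a + b) / 2

root = bisection(lambda x: x**3 - x - 1, 1, 2)
print(round(root, 3))  # 1.325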
Method of False Position (Regula Falsi)

● The method of false position, also known as the regula falsi method, is an iterative root-finding algorithm used to solve equations of the form f(x) = 0.
● It combines elements of the bisection method and the secant method, taking advantage of the fact that the function changes sign over an interval where a root exists.
Example: Solve the equation x log x − 1.2 = 0 using the regula falsi method, correct to four decimal places. [Worked iteration table omitted; see the sketch below.]
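A sketch of regula falsi: instead of the midpoint, the next estimate is where the secant through (a, f(a)) and (b, f(b)) crosses zero. For the example I assume the common (base-10) logarithm and the bracket [2, 3], as in the classic textbook instance:

```python
import math

def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """False position: replace the bracket endpoint that shares
    the sign of f at the secant's zero-crossing."""
    for _ in range(max_iter):
        c = (a * f(b) - b * f(a)) / (f(b) - f(a))  # secant zero-crossing
        if abs(f(c)) < tol:
            return c
        if f(a) * f(c) < 0:
            b = c
        else:
            a = c
    return c

f = lambda x: x * math.log10(x) - 1.2
print(round(regula_falsi(f, 2, 3), 4))  # 2.7406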
Newton-Raphson Method

● The Newton-Raphson method uses the first derivative of the function to iteratively find better approximations to the root.
● Given a function f(x) and its derivative f′(x), the method starts with an initial guess x₀ and generates a sequence of approximations that ideally converge to a root, via the iteration x_{n+1} = x_n − f(x_n)/f′(x_n).
Example: Solve the equation x³ − 3x − 5 = 0 using Newton's method. [Worked iterations omitted; see the sketch below.]
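A direct sketch of the iteration for f(x) = x³ − 3x − 5, f′(x) = 3x² − 3 (the starting guess x₀ = 2 is my choice):

```python
def newton(f, df, x0, tol=1e-8, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

f = lambda x: x**3 - 3*x - 5
df = lambda x: 3*x**2 - 3
print(round(newton(f, df, 2.0), 4))  # 2.279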