BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms
(CS F364)
Lecture No. 29
TSP with triangle inequality
Approx-TSP Tour
Input: A complete graph G = (V, E) with edge costs c satisfying the triangle inequality
Output: A Hamiltonian cycle
1. Select a “root” vertex r ∈ V[G].
2. Use MST-Prim(G, c, r) to compute a minimum
spanning tree T of G rooted at r.
3. Let L be the sequence of vertices visited in a
preorder tree walk of T.
4. Return the Hamiltonian cycle H that visits the vertices in
the order L.
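A minimal Python sketch of these four steps, assuming the networkx library is available; the helper name approx_tsp_tour and the use of 'weight' edge attributes are illustrative choices:

import networkx as nx

def approx_tsp_tour(g, root):
    """2-approximation for TSP with triangle inequality (sketch).
    g    : complete networkx.Graph with 'weight' edge attributes that
           satisfy the triangle inequality
    root : the chosen root vertex r
    """
    # Step 2: minimum spanning tree T of G, computed with Prim's algorithm.
    t = nx.minimum_spanning_tree(g, weight="weight", algorithm="prim")
    # Step 3: L = vertices visited in a preorder tree walk of T from the root.
    order = list(nx.dfs_preorder_nodes(t, source=root))
    # Step 4: the Hamiltonian cycle H visits the vertices in the order L
    # (repeat the root at the end to close the tour).
    return order + [root]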
TSP with triangle inequality
Theorem:
APPROX-TSP-TOUR is a polynomial-time 2-approximation
algorithm for the TSP problem with triangle inequality.
Theorem: There exists a 1.5-approximation algorithm
for TSP with triangle inequality (Christofides’ algorithm).
Theorem:
NNA (the nearest-neighbour algorithm) is an O(log n)-approximation algorithm for TSP with triangle inequality
Exercise:
NNA is not an O(1)-approximation algorithm.
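A minimal sketch of NNA, assuming it denotes the nearest-neighbour heuristic; the dict-of-dicts cost matrix g and the name nna_tour are illustrative:

def nna_tour(g, start):
    """Nearest-neighbour heuristic (sketch): repeatedly move to the
    closest unvisited vertex, then return to the start.
    g is a dict-of-dicts cost matrix, g[u][v] = c(u, v)."""
    unvisited = set(g) - {start}
    tour, current = [start], start
    while unvisited:
        # greedily take the cheapest edge out of the current vertex
        nearest = min(unvisited, key=lambda v: g[current][v])
        tour.append(nearest)
        unvisited.remove(nearest)
        current = nearest
    tour.append(start)  # close the cycle
    return tour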
General TSP (without triangle inequality)
Suppose we drop the assumption that the cost function c
satisfies the triangle inequality.
Then good approximate tours cannot be found in
polynomial time unless P = NP.
Theorem:
If P ≠ NP, then for any constant ρ ≥ 1, there is no
polynomial-time approximation algorithm with
approximation ratio ρ for the general traveling-salesman
problem.
General TSP (without triangle inequality)
Proof:
The proof is by contradiction.
Suppose to the contrary that for some number ρ ≥ 1,
there is a polynomial-time ρ-approximation algorithm A.
Without loss of generality, we assume that ρ is an integer,
by rounding it up if necessary.
We shall show how to use A to solve instances of the
hamiltonian-cycle problem in polynomial time.
Since the hamiltonian-cycle problem is NP-complete,
solving it in polynomial time implies that P = NP.
General TSP (without triangle inequality)
Let G = (V, E) be an instance of the hamiltonian-cycle
problem.
We wish to determine efficiently whether G contains a
hamiltonian cycle by making use of the hypothesized
approximation algorithm A.
We turn G into an instance of the traveling-salesman
problem as follows.
Let G' = (V, E') be the complete graph on V; that is,
E' = {(u, v) : u, v ∈ V and u ≠ v}.
Assign an integer cost to each edge in E' as follows:
c(u, v) = 1 if (u, v) ∈ E, and c(u, v) = ρ|V| + 1 otherwise.
General TSP (without triangle inequality)
We can construct G' and c from a representation of G in
time polynomial in |V| and |E|.
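A sketch of this construction in Python; the representation of G as a vertex list plus a set of frozenset edges, and the helper name tsp_instance_from_graph, are illustrative assumptions:

from itertools import combinations

def tsp_instance_from_graph(vertices, edges, rho):
    """Build the TSP instance (G', c) used in the reduction (sketch).
    vertices : the vertex set V of G
    edges    : the edge set E of G, as a set of frozenset({u, v})
    rho      : the (integer) ratio of the hypothesized algorithm A
    Returns the cost function c as a dict keyed by frozenset({u, v})."""
    n = len(vertices)
    c = {}
    for u, v in combinations(vertices, 2):      # E' = all pairs with u != v
        if frozenset((u, v)) in edges:
            c[frozenset((u, v))] = 1            # edge of G: cost 1
        else:
            c[frozenset((u, v))] = rho * n + 1  # non-edge: cost rho*|V| + 1
    return c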
Now, consider the traveling-salesman problem (G', c).
Observe:
• If the original graph G has a hamiltonian cycle H, then the
cost function c assigns to each edge of H a cost of 1, and
so (G', c) contains a tour of cost |V|.
• On the other hand, if G does not contain a hamiltonian cycle,
then any tour of G' must use some edge not in E. But any
tour that uses an edge not in E has a cost of at least
(ρ|V| + 1) + (|V| − 1) = ρ|V| + |V| > ρ|V|.
General TSP (without triangle inequality)
Now suppose we apply the approximation algorithm A
to the traveling-salesman problem (G', c)
• Because A is guaranteed to return a tour of cost no
more than ρ times the cost of an optimal tour, if G
contains a hamiltonian cycle, then A must return a tour
of cost at most ρ|V|; such a tour uses only edges of E
and is therefore itself a hamiltonian cycle of G.
• If G has no hamiltonian cycle, then A returns a tour
of cost more than ρ|V|.
Therefore, we can use A to solve the hamiltonian-cycle
problem in polynomial time.
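A sketch of the resulting decision procedure, reusing the tsp_instance_from_graph helper sketched above; the hypothesized algorithm A is passed in as a function approx_tsp, and all names are illustrative:

def has_hamiltonian_cycle(vertices, edges, rho, approx_tsp):
    """Decide the hamiltonian-cycle problem using A (sketch).
    approx_tsp(vertices, c) stands for the hypothesized rho-approximation
    algorithm A and is assumed to return a closed tour as a list of
    vertices with the start vertex repeated at the end."""
    n = len(vertices)
    c = tsp_instance_from_graph(vertices, edges, rho)   # G' and c as above
    tour = approx_tsp(vertices, c)
    cost = sum(c[frozenset((tour[i], tour[i + 1]))]
               for i in range(len(tour) - 1))
    # G has a hamiltonian cycle  <=>  A finds a tour of cost at most rho*|V|
    return cost <= rho * n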
0-1 Knapsack
Approximation Algorithm
Let’s try the basic greedy algorithm:
Greedy Algorithm
1. Sort items in non-increasing order of vᵢ / wᵢ
2. Greedily pick items in the above order
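A minimal Python sketch of this basic greedy rule; items are (value, weight) pairs, W is the knapsack capacity, and the name greedy_knapsack is illustrative:

def greedy_knapsack(items, W):
    """Basic greedy for 0-1 knapsack (sketch).
    items : list of (value, weight) pairs
    W     : knapsack capacity"""
    # Step 1: sort by value density v_i / w_i, largest first.
    order = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    total_value, remaining = 0, W
    for value, weight in order:
        # Step 2: greedily take every item that still fits.
        if weight <= remaining:
            total_value += value
            remaining -= weight
    return total_value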
0-1 Knapsack
Unfortunately, this algorithm is arbitrarily bad.
Consider the following input as a counterexample
• An item with size 1 and profit 2
• An item with size W and profit W
The small item has the larger value-to-weight ratio (2 versus 1),
so our greedy algorithm picks it first, after which the large item
no longer fits. The greedy profit is 2 while the optimum is W, so
the ratio W/2 can be made arbitrarily large.
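For instance, running the greedy_knapsack sketch above on this counterexample with an illustrative capacity W = 100:

W = 100
items = [(2, 1), (W, W)]     # (profit 2, size 1) and (profit W, size W)
greedy_knapsack(items, W)    # returns 2, while the optimum value is W = 100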
0-1 Knapsack
We make the following small adjustment to our greedy
algorithm
Greedy Algorithm Modified
1. Sort items in non-increasing order of vᵢ / wᵢ
2. Greedily add items in this order until we hit the first item
(say the kth item) that no longer fits, that is, w₁ + ⋯ + wₖ > W
3. Pick the better of {1, 2, …, k−1} and {k}
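A sketch of the modified rule in Python, assuming (as is standard) that every individual item fits on its own, i.e. wᵢ ≤ W; the name greedy_knapsack_modified is illustrative:

def greedy_knapsack_modified(items, W):
    """Modified greedy for 0-1 knapsack (sketch): take items by density
    until the first item k that no longer fits, then return the better of
    the prefix {1, ..., k-1} and the break item {k} alone.
    Assumes every individual item fits by itself (w_i <= W)."""
    order = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    prefix_value, used = 0, 0
    for value, weight in order:
        if used + weight > W:
            # this item is the break item k: compare prefix with {k} alone
            return max(prefix_value, value)
        prefix_value += value
        used += weight
    return prefix_value  # every item fits; there is no break item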
0-1 Knapsack
Theorem:
Greedy Algorithm Modified is a 2-approximation algorithm
for the 0-1 knapsack problem
Proof:
Let x* be an optimum solution for the 0-1 Knapsack instance.
Since every solution that is feasible for the 0-1 Knapsack instance is also
feasible for the corresponding Fractional Knapsack instance, we have
val(x*) ≤ val(z*)
where z* is the optimum solution for Fractional Knapsack.
0-1 Knapsack
Note that z* = (1, …, 1, α, 0, …, 0), where the fractional value α ∈ [0, 1)
occurs at the break item k.
The solutions x and y correspond to taking items 1, …, k−1 and item k alone:
x = (1, …, 1, 0, 0, …, 0) and y = (0, …, 0, 1, 0, …, 0)
We have
val(x*) ≤ val(z*) = val(x) + α·vₖ ≤ val(x) + val(y)
≤ 2 · max{val(x), val(y)},
which gives an approximation ratio of 2.