▪ When the time complexity of a problem can be expressed as a polynomial function of the input size, the problem is said to be tractable.
▪ Tractable problems have polynomial time algorithms, so their solutions can be implemented in practice.
▪ Example of tractable problem
• PATH problem: Given directed graph G, determine whether a directed path exists
from vertex s to vertex t.
o Time complexity = O(n)
Where n – total number of vertices
▪ When the complexity is expressed as an exponential function of the input size, the problem is said to be intractable.
▪ The complexity of an intractable problem grows much faster than that of a tractable problem.
▪ Example of intractable problem
• Knapsack Problem
o Time Complexity = O(2^n)
• Traveling Salesman Problem
o Time Complexity = O(n^2 · 2^n)
▪ According to the Cook-Karp thesis, a problem that is in P is called tractable and one that is not in
P is called intractable.
o Class P
▪ Class P consists of those problems that are solvable in polynomial time.
▪ P problems can be solved in time O(n^k), where n is the size of the input and k is some constant.
▪ Example:
• PATH Problem: Given directed graph G, determine whether a directed path exists
from s to t.
o Algorithm
▪ Inputs: <G,s,t> [G – directed graph, s,t – 2 nodes]
1. Place a mark on node s and enqueue it into an empty queue.
2. Repeat step 3 until the queue is empty
3. Dequeue the front element a. Mark all unvisited neighbors of a and enqueue
those into the queue.
4. If t is marked, then accept; otherwise reject. // i.e., if t is marked, there is a path from s to t
o Complexity Calculation
▪ Steps 1 & 4 execute exactly once.
▪ Steps 2 & 3 execute at most n times, where n is the number of nodes in G.
▪ Time complexity = O(n).
▪ This is a polynomial time algorithm.
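▪ A minimal Python sketch of the above marking algorithm (breadth-first search); the function name and the adjacency-list representation are illustrative choices, not from the notes:

    from collections import deque

    def path_exists(graph, s, t):
        # graph: dict mapping each node to the list of its successors
        marked = {s}                     # step 1: mark s
        queue = deque([s])               # ... and enqueue it
        while queue:                     # step 2: repeat until the queue is empty
            a = queue.popleft()          # step 3: dequeue the front element
            for b in graph.get(a, []):
                if b not in marked:      # mark unvisited neighbours and enqueue them
                    marked.add(b)
                    queue.append(b)
        return t in marked               # step 4: accept iff t is marked

    # e.g. path_exists({'s': ['a'], 'a': ['t'], 't': []}, 's', 't') returns True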
▪ Other Examples:
• Single Source Shortest Path problem using Dijkstra's Greedy method.
• Multistage Graph problem implemented using forward or backward dynamic
programming.
• Minimum cost spanning tree using Prim's or Kruskal's method.
• Network flow problem using Ford-Fulkerson algorithm.
o Class NP
▪ Some problems can be solved only in exponential or factorial time, i.e. they have no known
polynomial time solution, but a proposed solution (certificate) for such a problem can be verified
in polynomial time. These are called NP problems.
▪ NP is the class of problems that have a polynomial time verifier, even if their only known
algorithms are non-polynomial.
▪ Example:
• Hamiltonian path(HAMPATH) Problem
o A Hamiltonian path in a directed graph G is a directed path that goes through each
node exactly once.
o [Figure: example directed graph.] The Hamiltonian path of the example graph is s → 2 → 4 → 3 → 1 → 5 → 6 → t.
▪ Algorithm (verifier)
Inputs: <G, s, t> and a certificate P = (P1, P2, ..., Pm), a claimed Hamiltonian path
1. Check whether s = P1 and t = Pm. If either check fails, reject.
2. Check for repetition of the nodes in the list P. If any are found, reject.
3. For each i from 1 to m-1, check whether (Pi, Pi+1) is an edge in G. If any is not,
reject. // adjacent nodes in P must be joined by an edge
4. If all tests have passed, then accept.
▪ This verifier runs in polynomial time. Therefore the HAMPATH problem is an
NP problem.
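▪ A Python sketch of this verifier (names and the edge-set representation are illustrative); note that a full Hamiltonicity check also requires P to contain every node of G:

    def verify_hampath(nodes, edges, s, t, P):
        # nodes: all nodes of G; edges: set of directed edges (u, v); P: certificate path
        if not P or P[0] != s or P[-1] != t:    # step 1: endpoints must be s and t
            return False
        if len(P) != len(set(P)):               # step 2: reject repeated nodes
            return False
        if set(P) != set(nodes):                # the path must visit every node of G
            return False
        for u, v in zip(P, P[1:]):              # step 3: each consecutive pair must be an edge
            if (u, v) not in edges:
                return False
        return True                             # step 4: all tests passed, accept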
• CLIQUE Problem
o A clique in an undirected graph is a sub-graph where every two nodes are
connected by an edge.
o A k-clique is a clique that contains k nodes.
o Example:
[Figure: a 5-clique – a subgraph on 5 nodes in which there is an edge between every pair of nodes.]
▪ Whether P = NP is one of the greatest unsolved problems in theoretical computer science
o Class NP-Hard
▪ (A decision problem is a problem whose output is either yes or no.)
▪ Polynomial Time Reductions
• Consider a decision problem A that we would like to solve in polynomial time.
• Consider another decision problem B that already has a polynomial time algorithm.
• Suppose we have a procedure that transforms any instance α of A into some
instance β of B with the following characteristics:
1. The transformation takes polynomial time
2. The answers are the same. That is, the answer for α is “yes” iff the answer for β is
also “yes”.
• Such a procedure is called polynomial time reduction.
▪ NP- Hard
• A decision problem X is NP-Hard if every problem in NP is polynomial time
reducible to X:
Y ≤p X for every Y in NP (≤p denotes polynomial time reducibility)
• It means that X is at least as hard as every problem in NP.
• If X can be solved in polynomial time, then all problems in NP can also be solved in
polynomial time.
o Class NP-Complete
▪ If a problem is in NP and is also NP-Hard, then it is NP-Complete.
▪ Example:
• CIRCUIT-SAT problem: Given a Boolean circuit C, is there an assignment to the
variables that causes the circuit to output 1?
• SAT(Satisfiability) problem: Given a Boolean expression ɸ, is there an assignment to
the variables that causes the expression to output 1?
• 3-CNF-SAT problem: like SAT, but the formula is in 3-CNF – an AND of clauses, each containing exactly three literals.
o Literal: a variable or its negation in a Boolean formula
o Clause: the OR of one or more literals
▪ Ex: (x1 ∨ ¬x2 ∨ x3), where ¬x2 is the negation of x2
▪ CLIQUE problem is NP Complete: Proof
• Step 1 (prove NP): Write a polynomial time verification algorithm to prove that the given
problem is in NP
o Algorithm: Let G = (V,E); we use a set V' ⊆ V of k vertices in the clique as a
certificate for G
1. Test whether V' is a set of k vertices of the graph G
2. Check whether, for each pair of vertices u, v ∈ V', the edge (u,v) belongs to E.
3. If both steps pass, then accept. Otherwise reject.
o This algorithm executes in polynomial time. Therefore the CLIQUE problem is an
NP problem.
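o A small Python sketch of this verifier (illustrative names; undirected edges stored as frozensets):

    from itertools import combinations

    def verify_clique(V, E, k, V_prime):
        # V: vertices of G, E: set of undirected edges (frozensets), V_prime: certificate set
        if len(V_prime) != k or not set(V_prime) <= set(V):   # step 1: k vertices of G
            return False
        for u, v in combinations(V_prime, 2):                 # step 2: every pair is an edge
            if frozenset((u, v)) not in E:
                return False
        return True                                           # step 3: accept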
• Step 2 (prove NP-Hard): Write a polynomial time reduction from the 3-CNF-SAT problem to
the CLIQUE problem (3-CNF-SAT ≤p CLIQUE)
o Algorithm
▪ Let Φ = C1 ∧ C2 ∧ … ∧ Ck be a Boolean formula in 3-CNF with k clauses
▪ Each clause Cr has exactly three distinct literals lr1, lr2, lr3
▪ Construct a graph G such that Φ is satisfiable iff G has a clique of size k
▪ The graph G is constructed as follows
• For each clause Cr = (lr1 ∨ lr2 ∨ lr3) in Φ, we place a triple of vertices
vr1, vr2 and vr3 into the vertex set V
• Put an edge between two vertices vri and vsj only if they belong to different triples
(r ≠ s) and their literals are not negations of each other
• Example: Φ = (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ (x1 ∨ x2 ∨ x3)
o [Figure: the graph G constructed from Φ, with one triple of vertices for each of the
clauses C1, C2, C3; no edge joins a literal to its own negation, e.g. x1 and ¬x1 are not joined.]
o If G has a clique of size k, then Ф has a satisfying assignment. Here k=3.
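o A Python sketch of this construction (the literal encoding ('x1', True) for x1 and ('x1', False) for ¬x1, and the function name, are illustrative assumptions):

    def cnf3_to_clique(clauses):
        # Build G = (vertices, edges) so that the 3-CNF formula is satisfiable
        # iff G has a clique of size k = number of clauses.
        vertices = [(r, i) for r, clause in enumerate(clauses) for i in range(len(clause))]
        edges = set()
        for (r, i) in vertices:
            for (s, j) in vertices:
                if r >= s:
                    continue                      # never join vertices of the same triple
                var_a, sign_a = clauses[r][i]
                var_b, sign_b = clauses[s][j]
                if var_a == var_b and sign_a != sign_b:
                    continue                      # never join a literal to its negation
                edges.add(frozenset(((r, i), (s, j))))
        return vertices, edges, len(clauses)

    # Example formula from the notes: (x1 v ~x2 v ~x3) ^ (~x1 v x2 v x3) ^ (x1 v x2 v x3)
    phi = [[('x1', True), ('x2', False), ('x3', False)],
           [('x1', False), ('x2', True), ('x3', True)],
           [('x1', True), ('x2', True), ('x3', True)]]
    V, E, k = cnf3_to_clique(phi)   # G has a clique of size k = 3 iff phi is satisfiable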
▪ VERTEX-COVER problem is NP Complete: Proof
o Step 1 (prove NP): Write a polynomial time verification algorithm to prove that the given
problem is in NP
▪ Inputs: <G, k, V'>   [k – size of the vertex cover, V' – the certificate set of vertices]
▪ Verifier Algorithm:
1. count = 0
2. for each vertex v in V', remove all edges adjacent to v from the set E
a. increment count by 1
3. if count = k and E is empty then the given solution is correct
4. else the given solution is wrong
▪ This algorithm executes in polynomial time. Therefore the VERTEX-COVER
problem is an NP problem.
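▪ A compact Python sketch equivalent to this verifier (illustrative names; undirected edges as frozensets):

    def verify_vertex_cover(E, k, V_prime):
        # E: set of edges (frozensets), k: claimed cover size, V_prime: certificate vertex set
        if len(V_prime) != k:
            return False
        uncovered = [e for e in E if not (e & set(V_prime))]   # edges touched by no vertex of V'
        return len(uncovered) == 0                             # accept iff every edge is covered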
o Step 2 (prove NP-Hard): Write a polynomial time reduction from the CLIQUE problem to the
VERTEX-COVER problem (CLIQUE ≤p VERTEX-COVER)
▪ Algorithm
Inputs: <G=(V,E), k>
1. Construct a graph G', which is the complement of the graph G
2. G' has a vertex cover of size |V| - k if and only if G has a clique of size k.
▪ A vertex cover of a graph is a set of vertices that together cover (touch) every edge of the graph.
▪ Example: [Figure: a CLIQUE-to-VERTEX-COVER example on 7 vertices. The clique has size
k = 5, so the complement graph has a vertex cover of size |V| - k = 7 - 5 = 2; vertices 1 and 2
form this cover, with vertex 1 covering the edges to 5, 6, 3, 7 and vertex 2 covering the edges to 4, 5, 6.]
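▪ A short Python sketch of the CLIQUE-to-VERTEX-COVER reduction above (illustrative representation):

    from itertools import combinations

    def clique_to_vertex_cover(V, E, k):
        # Build the complement graph G' and the target cover size |V| - k.
        complement = {frozenset((u, v)) for u, v in combinations(V, 2)
                      if frozenset((u, v)) not in E}
        return complement, len(V) - k
    # G has a clique of size k  iff  G' has a vertex cover of size |V| - k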
▪ TSP problem is NP-Hard: Proof
o Step 2 (prove NP-Hard): Write a polynomial time reduction from the HAM-CYCLE
problem to the TSP problem (HAM-CYCLE ≤p TSP)
1. Let G=(V,E) be an instance of HAM-CYCLE
2. We can construct an instance of TSP as follows
• Construct a complete graph G'=(V,E')
• Define the cost function c for G'
o c(i,j) = 0 if (i,j) is an edge in E
o c(i,j) = 1 if (i,j) is not an edge in E
▪ The instance of TSP is then (G', c, 0).
▪ The graph G has a Hamiltonian cycle iff the graph G' has a tour of cost at most 0.
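▪ A minimal Python sketch of this reduction (illustrative representation; undirected edges as frozensets):

    from itertools import combinations

    def hamcycle_to_tsp(V, E):
        # Complete graph G' on V with cost 0 for original edges and cost 1 otherwise.
        cost = {frozenset((u, v)): (0 if frozenset((u, v)) in E else 1)
                for u, v in combinations(V, 2)}
        budget = 0        # G has a Hamiltonian cycle iff G' has a tour of cost at most 0
        return cost, budget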
o Examples
1. Consider the following algorithm to determine whether or not an undirected graph has a
clique of size k. First, generate all subsets of the vertices containing exactly k vertices.
Next, check whether any of the sub-graphs induced by these subsets is complete (i.e.
forms a clique). Why is this not a polynomial-time algorithm for the clique problem,
thereby implying that P = NP?
• Approximation Algorithm
o Approximate Solution: A feasible solution whose value is close to the value of an optimal solution
is called an approximate solution.
o Approximation Algorithm: An algorithm that returns a near-optimal solution is called an
approximation algorithm.
o Approximation algorithms have two main properties:
▪ They run in polynomial time
▪ They produce solutions close to the optimal solutions
o Approximation algorithms are useful to give approximate solutions to NP complete
optimization problems.
o It is also useful to give fast approximations to problems that run in polynomial time.
o Approximation Ratio / Approximation Factor
▪ For a given problem, let C be the result obtained by the algorithm and C* be the optimal result.
▪ The approximation ratio of an algorithm is the ratio between the result obtained by the
algorithm and the optimal result.
▪ For maximization problem, 0 < C ≤ C*, Approximation Ratio = C*/C
▪ For minimization problem, 0 < C* ≤ C, Approximation Ratio = C/C*
▪ The approximation ratio of an approximation algorithm is never less than 1.
▪ In general, achieving a smaller approximation ratio requires more computational time.
▪ A smaller approximation ratio also means a better quality result.
o k-Approximation Algorithm: An algorithm with approximation ratio k is called a k-
approximation algorithm.
▪ 1-approximation algorithm produces an optimal solution
▪ An approximation algorithm with a large approximation ratio may return a solution
that is much worse than optimal.
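▪ As a small illustration (hypothetical values, not from the notes), the ratio can be computed directly from the definitions above:

    def approximation_ratio(C, C_star, maximization=False):
        # C: value returned by the algorithm, C_star: optimal value; the ratio is always >= 1
        return C_star / C if maximization else C / C_star

    # e.g. a minimization algorithm that returns cost 6 when the optimum is 4
    # has ratio approximation_ratio(6, 4) = 1.5 on that instance (a 1.5-approximation there)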
o Different Types of Approximation Algorithms
▪ Absolute Approximation Algorithm : An algorithm is Absolute Approximation
Algorithm iff |C*-C| ≤ k, for some constant k
▪ f(n)-Approximation Algorithm: An algorithm is f(n)-Approximation Algorithm iff |C*-
C|/|C*| ≤ f(n), for C* > 0
▪ є- Approximation Algorithm: An є-Approximation Algorithm is an f(n)-Approximation
Algorithm for which f(n) ≤ є for some constant є
o Bin Packing Problem: Given n items with given sizes and bins of equal capacity, pack all the
items using the minimum number of bins.
▪ Applications
• Loading of containers like trucks.
• Placing data on multiple disks.
• Job scheduling.
• Packing advertisements in fixed length radio/TV station breaks.
• Storing a large collection of music onto tapes/CDs, etc.
▪ Next Fit Algorithm
• If the current item fits in the same bin as the last item, then insert it in that bin.
• Otherwise use a new bin. (A Python sketch of these packing heuristics is given after this list.)
• Time Complexity
o Best case Time Complexity = θ(n)
o Average case Time Complexity = θ(n)
o Worst case Time Complexity = θ(n)
▪ First Fit Algorithm
• Scan the bins used so far, in order, and find the first bin in which the item fits.
• If such a bin exists, place the item in that bin.
• Otherwise use a new bin.
• Time Complexity
o Best case Time Complexity = θ(n log n)
o Average case Time Complexity = θ(n2)
o Worst case Time Complexity = θ(n2)
▪ Best Fit Algorithm
• Scan the bins used so far and find the bin with the minimum remaining capacity that can
accommodate the item.
• If such bin exists, place the item in that bin
• Otherwise use a new bin
• Time Complexity
o Best case Time Complexity = θ(n log n)
o Average case Time Complexity = θ(n2)
o Worst case Time Complexity = θ(n2)
▪ Worst Fit Algorithm
• Scan the bins used so far and find the bin with the maximum remaining capacity that can
accommodate the item.
• If such bin exists, place the item in that bin
• Otherwise use a new bin
• Time Complexity
o Best case Time Complexity = θ(n log n)
o Average case Time Complexity = θ(n2)
o Worst case Time Complexity = θ(n2)
▪ First Fit Decreasing Algorithm
• Sort the items in the descending order of their size
• Apply First fit algorithm
• Time Complexity
o Best case Time Complexity = θ(n log n)
o Average case Time Complexity = θ(n2)
o Worst case Time Complexity = θ(n2)
▪ Best Fit Decreasing Algorithm
• Sort the items in the descending order of their size
• Apply Best fit algorithm
• Time Complexity
o Best case Time Complexity = θ(n log n)
o Average case Time Complexity = θ(n2)
o Worst case Time Complexity = θ(n2)
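▪ A Python sketch of some of these heuristics (an illustrative implementation using simple linear scans, which gives O(n^2) behaviour rather than the θ(n log n) best cases quoted above); it reproduces the bin counts of the worked example below:

    def next_fit(items, capacity):
        # keep only the most recently opened bin
        bins = []
        for item in items:
            if bins and sum(bins[-1]) + item <= capacity:
                bins[-1].append(item)
            else:
                bins.append([item])            # open a new bin
        return bins

    def first_fit(items, capacity):
        # place each item in the first bin with enough room
        bins = []
        for item in items:
            for b in bins:
                if sum(b) + item <= capacity:
                    b.append(item)
                    break
            else:
                bins.append([item])            # no existing bin fits, open a new one
        return bins

    def first_fit_decreasing(items, capacity):
        # sort items in descending order of size, then apply First Fit
        return first_fit(sorted(items, reverse=True), capacity)

    items = [5, 7, 5, 2, 4, 2, 5, 1, 6]          # the example instance below, bin capacity 10
    print(len(next_fit(items, 10)))              # 6 bins
    print(len(first_fit(items, 10)))             # 5 bins
    print(len(first_fit_decreasing(items, 10)))  # 4 bins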
▪ Example: Apply different Bin packing approximation algorithms on the following items
with bin capacity=10. Assuming the sizes of the items be {5, 7, 5, 2, 4, 2, 5, 1, 6}.
▪ Solution
• Minimum number of bins >= Ceil ((Total Weight) / (Bin Capacity))
= Ceil (37 / 10) = 4
• Next Fit
Number of bins required = 6
• First Fit
Number of bins required = 5
• Best Fit
Number of bins required = 5
• Worst Fit
Number of bins required = 5
• First Fit Decreasing
o Arrange the items in the decreasing order of the weight
{7, 6, 5, 5, 5, 4, 2, 2, 1}
Number of bins required = 4
• Best Fit Decreasing
Number of bins required = 4
o Graph Coloring
▪ Different Graph coloring problems
• Vertex coloring
• Edge coloring
• Face coloring
▪ Vertex coloring
• Assignment of colors to vertices in a graph such that no two adjacent vertices share
the same color
• A graph is 0-colorable iff V = ∅
• A graph is 1-colorable iff E = ∅
[Figures: a 2-colorable graph with χ(G) = 2 and a 3-colorable graph with χ(G) = 3.]
• Chromatic Number: It is the minimum number of colours with which a graph can be
coloured.
o χ(G) = 1 , if G is a null graph. A null graph is a graph that contains vertices but no
edges.
o All other graphs χ (G) >= 2.
o Four Color Theorem: For Every Planar graph, the chromatic number is less than
or equal to 4.
• A graph is k-colorable if its vertices can be properly colored using k colors.
• A graph whose chromatic number is k is called a k-chromatic graph.
• A subset of vertices assigned the same color is called a color class.
▪ Edge coloring
• Given a graph G=(V,E), assign a color to each edge so that no two adjacent edges
share the same color
▪ Face Coloring
• For a planar graph, assign a color to each face/region so that no two faces that share a
boundary have the same color.
▪ Approximate (greedy) graph coloring algorithm
Algorithm Approximate_Graph_Coloring(G, n)
{
    for i = 1 to n do                       // consider the vertices v1 ... vn in order
    {
        for c = 1 to n do                   // try colors 1, 2, ...
        {
            if no vertex adjacent to vi has color c
            {
                color vi with c             // give vi the smallest usable color
                break
            }
        }
    }
}
• Time Complexity = O(n3)
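▪ A brief Python sketch of the same greedy strategy (the adjacency-set representation and function name are illustrative):

    def approximate_graph_coloring(adj):
        # adj maps each vertex to the set of its neighbours; colours are numbered from 1
        colour = {}
        for v in adj:                                        # for i = 1 to n
            used = {colour[u] for u in adj[v] if u in colour}
            c = 1
            while c in used:                                 # smallest colour unused by neighbours
                c += 1
            colour[v] = c
        return colour

    # e.g. a triangle needs 3 colours:
    # approximate_graph_coloring({'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}})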
▪ Applications of graph coloring
• Prepare time table
• Scheduling
• Register allocation
• Mobile radio frequency assignment
• Map coloring
• Randomized Algorithm
o Deterministic Algorithm: The output as well as the running time are functions of the input
only.
o Randomized Algorithm: The output or the running time are functions of the input and
random bits chosen
▪ An algorithm that uses random numbers to decide what to do next anywhere in its logic
is called Randomized Algorithm
▪ Typically, this randomness is used to reduce time complexity or space complexity in
other standard algorithms
▪ The computer is not capable of generating truly random numbers
• The computer can only generate pseudorandom numbers-numbers that are generated
by a formula
• Pseudorandom numbers look random, but are perfectly predictable if you know the
formula
• Because of this predictability, ordinary pseudorandom numbers are not suitable for security applications
• Devices for generating truly random numbers do exist. They are based on
radioactive decay, or on lava lamps
▪ A randomized algorithm hopes to achieve good performance in the "average case" over all
possible choices of the random bits.
o Types of Randomized Algorithms: Las Vegas algorithms and Monte Carlo algorithms
▪ Las Vegas algorithm
• A Las Vegas algorithm always produces a correct result; only the amount of time it takes is random.
• Example: given an array A of n elements in which half the elements are 'a', find the index of an 'a'.
Algorithm findingA_LV(A, n)
{
    repeat
    {
        Randomly choose one element out of n elements
    } until ('a' is found)
}
▪ This algorithm succeeds with probability 1. The number of iterations varies and can
be arbitrarily large, but the expected number of iterations is
lim_{n→∞} Σ_{i=1}^{n} i / 2^i = 2
▪ The expected number of trials before success is 2.
▪ Therefore the expected time complexity = O(1)
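▪ A Python sketch of this Las Vegas search (illustrative; it assumes the array really does contain an 'a'):

    import random

    def finding_a_las_vegas(A):
        # keeps sampling until an 'a' is found: always correct, running time is random
        while True:
            i = random.randrange(len(A))
            if A[i] == 'a':
                return i
    # With half of the elements equal to 'a', the expected number of iterations is 2.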
▪ Monte Carlo algorithm
Algorithm findingA_MC(A, n, k )
k – number of trials
{
i=0;
repeat
{
Randomly select one element out of n elements
i=i+1;
}until(i=k or ‘a’ is found);
}
• If an 'a' is found, the algorithm succeeds; otherwise it fails. After k iterations,
the probability of finding an 'a' is Pr[find a] = 1 - (1/2)^k
• This algorithm does not guarantee success, but the run time is bounded. The number
of iterations is always less than or equal to k.
• Therefore the time complexity = O(k)
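• A matching Python sketch of the Monte Carlo version (illustrative names):

    import random

    def finding_a_monte_carlo(A, k):
        # at most k random probes: running time is bounded, but the search may fail
        for _ in range(k):
            i = random.randrange(len(A))
            if A[i] == 'a':
                return i                  # success
        return None                       # failure; Pr[success] = 1 - (1/2)**k when half are 'a'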
▪ Randomized Quick Sort (a Las Vegas algorithm)
randQuickSort(A, low, high)
1. If low >= high, then return.
2. Repeat the following until a central pivot is found
2.1. Randomly pick an element x from A[low..high].
2.2. Count the elements in A[low..high] that are smaller than x; call the count sc.
2.3. Count the elements in A[low..high] that are greater than x; call the count gc.
2.4. Let n = (high-low+1). If sc >= n/4 and gc >= n/4, then x is a central pivot.
3. Partition A[low..high] into two subarrays. The first subarray has all the elements of A
that are less than x and the second subarray has all those that are greater than x. Now
the index of x be pos.
4. randQuickSort(A, low, pos-1)
5. randQuickSort(A, pos+1, high)
• How many times does the while loop run before finding a central pivot?
o The probability that a randomly chosen element is a central pivot is 1/2.
o Therefore, the expected number of times the while loop runs is 2.
o Since each iteration of the loop takes O(n) time, the expected time complexity of step 2 is O(n).
• What is overall Time Complexity in Worst Case?
o In the worst case, each partition divides the array so that one side has n/4 elements
and the other side has 3n/4 elements. The worst case height of the recursion tree is then
log_{4/3} n, which is O(log n).
o T(n) < T(n/4) + T(3n/4) + O(n)
o T(n) < 2T(3n/4) + O(n)
o Solution of above recurrence is O(n log n)
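o A Python sketch of this scheme (illustrative; it assumes distinct keys, as the analysis above does, and sorts very small subarrays directly since a central pivot need not exist for them):

    import random

    def rand_quicksort(A, low=0, high=None):
        if high is None:
            high = len(A) - 1
        if low >= high:
            return
        n = high - low + 1
        if n < 4:                          # tiny subarray: sort directly
            A[low:high + 1] = sorted(A[low:high + 1])
            return
        while True:                        # step 2: repeat until a central pivot is found
            x = A[random.randint(low, high)]
            sc = sum(1 for i in range(low, high + 1) if A[i] < x)   # smaller count
            gc = sum(1 for i in range(low, high + 1) if A[i] > x)   # greater count
            if sc >= n / 4 and gc >= n / 4:                         # step 2.4: central pivot
                break
        segment = A[low:high + 1]          # step 3: partition around x
        left = [y for y in segment if y < x]
        right = [y for y in segment if y > x]
        A[low:high + 1] = left + [x] + right
        pos = low + len(left)              # index of the pivot
        rand_quicksort(A, low, pos - 1)    # step 4
        rand_quicksort(A, pos + 1, high)   # step 5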
o Advantages:
▪ For many problems, a randomized algorithm is the simplest and the fastest known approach.
▪ Randomization also gives simple, practical (approximate or heuristic) approaches to many
NP-Hard/NP-Complete problems.