DAA Module 2 Notes 2025
Module-2: BCS401-Analysis
and Design of Algorithms
Contents
General methods, Merge sort, Quick sort, Binary Tree Traversals, Multiplication of Large
Integers and Strassen’s Matrix Multiplication.
1. General method
Divide and Conquer is one of the best-known general algorithm design techniques. It works
according to the following general plan:
Given a function to compute on n inputs, the divide-and-conquer strategy suggests
splitting the inputs into k distinct subsets, 1 < k ≤ n, yielding k subproblems.
These subproblems must be solved, and then a method must be found to combine the
subsolutions into a solution of the whole.
If the subproblems are still relatively large, then the divide-and-conquer strategy can
possibly be reapplied.
Often the subproblems resulting from a divide-and-conquer design are of the same
type as the original problem. In those cases the reapplication of the divide-and-conquer
principle is naturally expressed by a recursive algorithm.
[Figure: a problem of size n is divided into k subproblems, each subproblem is solved, and the subsolutions are combined into a solution of the whole.]
The computing time of divide and conquer is described by the recurrence

T(n) = g(n) when n is small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n) otherwise

Where,
T(n) is the time for divide and conquer method on any input of size n,
g(n) is the time to compute the answer directly for small inputs, and
f(n) is the time for dividing the problem P and combining the solutions to the subproblems.
For divide and conquer based algorithms that produce sub problems of the same type as the
original problem, it is very natural to first describe them by using recursion.
More generally, an instance of size n can be divided into b instances of size n/b, with a of
them needing to be solved. (Here, a and b are constants; a ≥ 1 and b > 1.) Assuming that
size n is a power of b (i.e., n = b^k) to simplify our analysis, we get the following general
recurrence for the running time T(n):

T(n) = aT(n/b) + f(n)

where f(n) is a function that accounts for the time spent on dividing the problem into smaller
ones and on combining their solutions.
The recurrence relation can be solved by i) the substitution method or ii) the master
theorem.
1. Substitution Method - This method repeatedly makes substitutions for each
occurrence of the function T in the right-hand side until all such occurrences
disappear.
2. Master Theorem - The efficiency analysis of many divide-and-conquer algorithms is
greatly simplified by the master theorem. It states that, in the recurrence equation
T(n) = aT(n/b) + f(n), if f(n) ∈ Θ(n^d) where d ≥ 0, then

T(n) ∈ Θ(n^d) if a < b^d
T(n) ∈ Θ(n^d log n) if a = b^d
T(n) ∈ Θ(n^(log_b a)) if a > b^d
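For example (a standard worked application, added here for illustration): in T(n) = 2T(n/2) + n we have a = 2, b = 2, and f(n) ∈ Θ(n^1), so d = 1. Since a = b^d (2 = 2^1), the middle case applies and T(n) ∈ Θ(n^1 log n) = Θ(n log n), which agrees with the merge sort analysis below.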
3. Merge Sort
Merge sort is a perfect example of a successful application of the divide-and-conquer
technique. It sorts a given array A[0..n − 1] by dividing it into two halves A[0..⌊n/2⌋ − 1]
and A[⌊n/2⌋..n − 1], sorting each of them recursively, and then merging the two smaller
sorted arrays into a single sorted one.
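Since the textbook pseudocode for Mergesort and Merge is not reproduced in these notes, the short Python sketch below (an illustrative rendering, not the book's exact algorithm) shows the divide, recursive-sort, and merge steps:

def merge_sort(a):
    # Sort list a by merge sort; returns a new sorted list.
    if len(a) <= 1:                 # an array of 0 or 1 elements is already sorted
        return a
    mid = len(a) // 2               # divide the array into two halves
    left = merge_sort(a[:mid])      # sort the first half recursively
    right = merge_sort(a[mid:])     # sort the second half recursively
    return merge(left, right)       # combine the two sorted halves

def merge(b, c):
    # Merge two sorted lists b and c into a single sorted list.
    result, i, j = [], 0, 0
    while i < len(b) and j < len(c):    # one key comparison per step
        if b[i] <= c[j]:
            result.append(b[i]); i += 1
        else:
            result.append(c[j]); j += 1
    result.extend(b[i:])            # copy whatever remains of the other list
    result.extend(c[j:])
    return result

print(merge_sort([8, 3, 2, 9, 7, 1, 5, 4]))   # prints [1, 2, 3, 4, 5, 7, 8, 9]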
Example:
The operation of the algorithm on the list 8, 3, 2, 9, 7, 1, 5, 4 is illustrated in the figure
Analysis
Here the basic operation is key comparison. Merge sort's execution does not significantly
depend on the order of the data, so its best-case and average-case running times are of the
same Θ(n log n) order as its worst-case running time.
Worst case: The worst case of merge sort arises when, during merging, neither of the two
arrays becomes empty before the other one contains just one element. Assuming for
simplicity that the total number of elements n is a power of 2, the recurrence relation for the
number of key comparisons C(n) is

C(n) = 2C(n/2) + Cmerge(n) for n > 1, C(1) = 0,

where Cmerge(n) is the number of key comparisons made during the merging stage.
Let us analyze Cmerge(n), the number of key comparisons performed during the merging stage.
At each step, exactly one comparison is made, after which the total number of elements in the
two arrays still needing to be processed is reduced by 1. In the worst case, neither of the two
arrays becomes empty before the other one contains just one element (e.g., when smaller
elements come from the two arrays alternately). Therefore, for the worst case, Cmerge(n) = n − 1.
Now,

Cworst(n) = 2Cworst(n/2) + n − 1 for n > 1, Cworst(1) = 0.

By the master theorem (a = 2, b = 2, d = 1), Cworst(n) ∈ Θ(n log n); solving the recurrence exactly for n = 2^k yields Cworst(n) = n log2 n − n + 1.
Advantages:
Number of comparisons performed is nearly optimal.
For large n, the number of comparisons made by this algorithm in the average case
turns out to be about 0.25n less (than in the worst case) and hence is also in Θ(n log n).
Mergesort will never degrade to O(n^2).
Another advantage of mergesort over quicksort and heapsort is its stability. (A
sorting algorithm is said to be stable if two objects with equal keys appear in the same
order in sorted output as they appear in the input array to be sorted. )
Limitations:
The principal shortcoming of mergesort is the linear amount [O(n) ] of extra storage
the algorithm requires. Though merging can be done in-place, the resulting algorithm
is quite complicated and of theoretical interest only.
4. Quick sort:
Quicksort is the other important sorting algorithm that is based on the divide-and-conquer
approach. Unlike mergesort, which divides its input elements according to their position in the
array, quicksort divides (or partitions) them according to their value.
A partition is an arrangement of the array’s elements so that all the elements to the left of
some element A[s] are less than or equal to A[s], and all the elements to the right of A[s] are
greater than or equal to it:
Obviously, after a partition is achieved, A[s] will be in its final position in the sorted array,
and we can continue sorting the two subarrays to the left and to the right of A[s]
independently (e.g., by the same method).
In quick sort, the entire work happens in the division stage, with no work required to combine
the solutions to the subproblems.
Partitioning
We start by selecting a pivot—an element with respect to whose value we are going to divide
the subarray. There are several different strategies for selecting a pivot. We use
the sophisticated method suggested by C.A.R. Hoare, the prominent British computer
scientist who invented quicksort.
Select the subarray’s first element: p = A[l]. Now scan the subarray from both ends,
comparing the subarray’s elements to the pivot.
The left-to-right scan, denoted below by index pointer i, starts with the second
element. Since we want elements smaller than the pivot to be in the left part of the
subarray, this scan skips over elements that are smaller than the pivot and stops upon
encountering the first element greater than or equal to the pivot.
The right-to-left scan, denoted below by index pointer j, starts with the last element of
the subarray. Since we want elements larger than the pivot to be in the right part of the
subarray, this scan skips over elements that are larger than the pivot and stops on encountering
the first element smaller than or equal to the pivot.
After both scans stop, three situations may arise, depending on whether or not the scanning
indices have crossed.
1. If the scanning indices i and j have not crossed, i.e., i < j, we simply exchange A[i] and
A[j] and resume the scans by incrementing i and decrementing j, respectively:
2. If the scanning indices have crossed over, i.e., i> j, we will have partitioned the
subarray after exchanging the pivot with A[j]:
3. If the scanning indices stop while pointing to the same element, i.e., i = j, the value
they are pointing to must be equal to p. Thus, we have the subarray partitioned, with
the split position s = i = j :
We can combine this with case 2 by exchanging the pivot with A[j] whenever i ≥ j.
ALGORITHM HoarePartition(A[l..r])
//Partitions a subarray by Hoare’s algorithm, using the first element as a pivot
//Input: Subarray of array A[0..n − 1], defined by its left and right indices l and r (l < r)
//Output: Partition of A[l..r], with the split position returned as this function’s value
p ← A[l]
i ← l; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p
    repeat j ← j − 1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j]) //undo last swap when i ≥ j
swap(A[l], A[j])
return j
Note that index i can go out of the subarray’s bounds in this pseudocode.
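A runnable Python rendering of Hoare's scheme may help (a sketch of the pseudocode above, not an exact transcription; the i < r check replaces the out-of-bounds behaviour just noted):

def hoare_partition(a, l, r):
    # Partition a[l..r] using a[l] as the pivot; return the split position.
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and a[i] < p:   # left-to-right scan; i < r guards the bounds
            i += 1
        j -= 1
        while a[j] > p:             # right-to-left scan; stops at a[l] = p at the latest
            j -= 1
        if i >= j:                  # scanning indices crossed (or met): partition done
            break
        a[i], a[j] = a[j], a[i]     # exchange a[i] and a[j], then resume the scans
    a[l], a[j] = a[j], a[l]         # place the pivot at its final position s = j
    return j

def quicksort(a, l=0, r=None):
    # Sort list a in place by recursive partitioning.
    if r is None:
        r = len(a) - 1
    if l < r:
        s = hoare_partition(a, l, r)  # a[s] is now in its final sorted position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)

nums = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(nums)
print(nums)   # prints [1, 2, 3, 4, 5, 7, 8, 9]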
Example: Quicksort in operation. (a) The array’s transformations, with pivots shown in
bold. (b) The tree of recursive calls to Quicksort, with input values l and r of the subarray
bounds and split position s of the partition obtained.
Analysis
Best Case - Here the basic operation is key comparison. The number of key comparisons made
before a partition is achieved is n + 1 if the scanning indices cross over and n if they coincide.
If all the splits happen in the middle of the corresponding subarrays, we have the best case.
The number of key comparisons in the best case satisfies the recurrence

Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.

According to the Master Theorem, Cbest(n) ∈ Θ(n log2 n); solving it exactly for n = 2^k yields
Cbest(n) = n log2 n.
Worst Case – In the worst case, all the splits will be skewed to the extreme: one of the two
subarrays will be empty, and the size of the other will be just 1 less than the size of the
subarray being partitioned. This unfortunate situation will happen, in particular, for increasing
arrays. Indeed, if A[0..n − 1] is a strictly increasing array and we use A[0] as the pivot, the
left-to-right scan will stop on A[1] while the right-to-left scan will go all the way to reach
A[0], indicating the split at position 0. So, after making n + 1 comparisons to get to this
partition and exchanging the pivot A[0] with itself, the algorithm will be left with the strictly
increasing array A[1..n − 1] to sort. This sorting of strictly increasing arrays of diminishing
sizes will continue until the last one A[n−2..n−1] has been processed. The total number of
key comparisons made will be equal to

Cworst(n) = (n + 1) + n + ... + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n^2).
Average Case - Let Cavg(n) be the average number of key comparisons made by quicksort on
a randomly ordered array of size n. A partition can happen in any position s (0 ≤ s ≤n−1) after
n+1comparisons are made to achieve the partition. After the partition, the left and right
subarrays will have s and n − 1−s elements, respectively. Assuming that the partition split can
happen in each position s with the same probability 1/n, we get the following recurrence
relation:

Cavg(n) = (1/n) Σ (s = 0 to n−1) [(n + 1) + Cavg(s) + Cavg(n − 1 − s)] for n > 1,
Cavg(0) = 0, Cavg(1) = 0.

Its solution, which is much trickier than the worst- and best-case analyses, turns out to be

Cavg(n) ≈ 2n ln n ≈ 1.39n log2 n.

Thus, on the average, quicksort makes only about 39% more comparisons than in the best case.
Moreover, its innermost loop is so efficient that it usually runs faster than mergesort on
randomly ordered arrays of nontrivial sizes. This certainly justifies the name given to the
algorithm by its inventor.
Variations: Because of quicksort’s importance, there have been persistent efforts over
the years to refine the basic algorithm. Among several improvements discovered by
researchers are:
Better pivot selection methods, such as randomized quicksort that uses a random
element, or the median-of-three method that uses the median of the leftmost,
rightmost, and middle elements of the array (see the sketch after this list)
Switching to insertion sort on very small subarrays (between 5 and 15 elements for
most computer systems) or not sorting small subarrays at all and finishing the
algorithm with insertion sort applied to the entire nearly sorted array
Modifications of the partitioning algorithm such as the three-way partition into
segments smaller than, equal to, and larger than the pivot
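To illustrate the first of these improvements, here is a minimal median-of-three sketch in Python (the helper name and its pairing with the earlier hoare_partition sketch are illustrative assumptions, not part of these notes):

def median_of_three(a, l, r):
    # Pick the median of a[l], a[m], a[r] and move it to position l,
    # so that a partition routine using a[l] as the pivot gets a better split.
    m = (l + r) // 2
    if a[m] < a[l]:
        a[l], a[m] = a[m], a[l]
    if a[r] < a[l]:
        a[l], a[r] = a[r], a[l]
    if a[r] < a[m]:
        a[m], a[r] = a[r], a[m]
    # now a[l] <= a[m] <= a[r]; place the median at l to serve as the pivot
    a[l], a[m] = a[m], a[l]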
Limitations: 1. It is not stable. 2. It requires a stack to store parameters of subarrays that are
yet to be sorted. 3. Its performance on randomly ordered arrays is known to be sensitive
not only to the implementation details of the algorithm but also to both computer architecture
and data type.
5. Strassen’s Matrix Multiplication
Divide and Conquer method: Using divide and conquer, multiplication of two n × n matrices
A and B is achieved by partitioning each of them into four n/2 × n/2 submatrices and
multiplying the submatrices recursively:

C00 = A00*B00 + A01*B10
C01 = A00*B01 + A01*B11
C10 = A10*B00 + A11*B10
C11 = A10*B01 + A11*B11

In the above method, we do 8 multiplications of matrices of size n/2 × n/2 and 4 additions.
Addition of two matrices takes O(n^2) time. So the time complexity can be written as
T(n) = 8T(n/2) + O(n^2), which happens to be O(n^3); the same as the direct method.
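Applying the master theorem stated earlier to this recurrence (a worked check added for illustration): a = 8, b = 2, and f(n) ∈ Θ(n^2), so d = 2. Since a = 8 > b^d = 4, T(n) ∈ Θ(n^(log_2 8)) = Θ(n^3), confirming that straightforward divide and conquer gains nothing over the brute-force method.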
Divide and Conquer through Strassen’s Method: By using the divide-and-conquer approach
proposed by Strassen in 1969, we can reduce the number of multiplications.
Multiplication of 2×2 matrices: The principal insight of the algorithm lies in the discovery
that we can find the product C of two 2 × 2 matrices A and B with just 7 multiplications as
opposed to the eight required by the brute-force algorithm. This is accomplished by using the
following formulas:

C00 = M1 + M4 − M5 + M7
C01 = M3 + M5
C10 = M2 + M4
C11 = M1 − M2 + M3 + M6

where

M1 = (A00 + A11)(B00 + B11)
M2 = (A10 + A11)B00
M3 = A00(B01 − B11)
M4 = A11(B10 − B00)
M5 = (A00 + A01)B11
M6 = (A10 − A00)(B00 + B01)
M7 = (A01 − A11)(B10 + B11)
Thus, to multiply two 2×2 matrices, Strassen’s algorithm makes seven multiplications and 18
additions/subtractions, whereas the brute-force algorithm requires eight multiplications and
four additions.
It is not difficult to verify that one can treat these submatrices as numbers to get the correct
product. For example, C00 can be computed as M1 + M4 − M5 + M7 where M1, M4, M5, and M7
are found by Strassen’s formulas, with the numbers replaced by the corresponding
submatrices. If the seven products of n/2 × n/2 matrices are computed recursively by the
same method, we have Strassen’s algorithm for matrix multiplication. The number of
multiplications M(n) then satisfies the recurrence

M(n) = 7M(n/2) for n > 1, M(1) = 1,

whose solution is M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807. This implies M(n) ∈ Θ(n^2.807),
which is smaller than the n^3 required by the brute-force algorithm.
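A compact Python sketch of the whole recursion (an illustrative implementation for n a power of 2, using plain nested lists; all helper names are my own, not the textbook's):

def mat_op(X, Y, op):
    # Elementwise add/subtract two square matrices stored as lists of lists.
    return [[op(x, y) for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def split(X):
    # Split X into its four quadrants X00, X01, X10, X11.
    n = len(X) // 2
    return ([r[:n] for r in X[:n]], [r[n:] for r in X[:n]],
            [r[:n] for r in X[n:]], [r[n:] for r in X[n:]])

def strassen(A, B):
    # Multiply square matrices A and B; len(A) must be a power of 2.
    n = len(A)
    if n == 1:                                   # base case: scalar product
        return [[A[0][0] * B[0][0]]]
    add = lambda X, Y: mat_op(X, Y, lambda a, b: a + b)
    sub = lambda X, Y: mat_op(X, Y, lambda a, b: a - b)
    A00, A01, A10, A11 = split(A)
    B00, B01, B10, B11 = split(B)
    M1 = strassen(add(A00, A11), add(B00, B11))  # the seven recursive products
    M2 = strassen(add(A10, A11), B00)
    M3 = strassen(A00, sub(B01, B11))
    M4 = strassen(A11, sub(B10, B00))
    M5 = strassen(add(A00, A01), B11)
    M6 = strassen(sub(A10, A00), add(B00, B01))
    M7 = strassen(sub(A01, A11), add(B10, B11))
    C00 = add(sub(add(M1, M4), M5), M7)          # combine per Strassen's formulas
    C01 = add(M3, M5)
    C10 = add(M2, M4)
    C11 = add(sub(add(M1, M3), M2), M6)
    top = [r0 + r1 for r0, r1 in zip(C00, C01)]  # reassemble the four quadrants
    bot = [r0 + r1 for r0, r1 in zip(C10, C11)]
    return top + bot

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # prints [[19, 22], [43, 50]]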
Advantages
Divide and conquer tends to make efficient use of memory caches, improving
performance. Once a sub-problem fits in the cache, the standard recursive solution
reuses the cached data until the sub-problem has been completely solved.
It allows solving difficult and often impossible-looking problems like the Tower of
Hanoi. It reduces the degree of difficulty since it divides the problem into subproblems
that are easily solvable, and usually runs faster than other algorithms would.
Another advantage of this paradigm is that it often plays a part in finding other
efficient algorithms; in fact, it played the central role in finding the quick sort and
merge sort algorithms.
Disadvantages
One of the most common issues with this sort of algorithm is the fact that the
recursion is slow, which in some cases outweighs any advantages of this divide and
conquer process.
Another concern with it is the fact that sometimes it can become more complicated
than a basic iterative approach, especially in cases with a large n. In other words, if
someone wanted to add a large amount of numbers together, a simple loop would be
a much simpler approach than dividing the numbers up into two groups, adding these
groups recursively, and then adding the sums of the two groups together.
Another disadvantage is that when a problem is broken down into subproblems, the
same subproblem may occur multiple times, leading to redundant computation.
Insertion Sort
Example: As shown below, starting with A[1] and ending with A[n − 1], A[i] is inserted in its
appropriate place among the first i elements of the array, which have already been sorted.
Iteration of insertion sort: A[i] is inserted in its proper position among the preceding,
previously sorted elements.
Insertion sort example: a vertical bar separates the sorted part of the array from the remaining
elements; the element being inserted is in bold.
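The textbook pseudocode for insertion sort is not reproduced here; the following Python sketch (an illustrative rendering) uses the same variable names v and j that the analysis below refers to:

def insertion_sort(a):
    # Sort list a in nondecreasing order by straight insertion.
    for i in range(1, len(a)):
        v = a[i]                      # the element to be inserted
        j = i - 1
        while j >= 0 and a[j] > v:    # the basic operation: key comparison A[j] > v
            a[j + 1] = a[j]           # shift larger elements one position right
            j -= 1
        a[j + 1] = v                  # insert v in its proper place

nums = [89, 45, 68, 90, 29, 34, 17]
insertion_sort(nums)
print(nums)   # prints [17, 29, 34, 45, 68, 89, 90]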
The basic operation of the algorithm is the key comparison A[j] > v. The number of key
comparisons in this algorithm obviously depends on the nature of the input.
In the worst case, A[j] > v is executed the largest number of times, i.e., for every j = i − 1, ..., 0.
Since v = A[i], it happens if and only if A[j] > A[i] for j = i − 1, ..., 0. (Note that we are using
the fact that on the ith iteration of insertion sort all the elements preceding A[i] are the first i
elements in the input, albeit in sorted order.) Thus, for the worst-case input, we get A[0] >
A[1] (for i = 1), A[1] > A[2] (for i = 2), ..., A[n − 2] > A[n − 1] (for i = n − 1). In other words,
the worst-case input is an array of strictly decreasing values. The number of key comparisons
for such an input is

Cworst(n) = 1 + 2 + ... + (n − 1) = n(n − 1)/2 ∈ Θ(n^2).
In the best case, the comparison A[j] > v is executed only once on every iteration of the
outer loop. It happens if and only if A[i − 1] ≤ A[i] for every i = 1, ..., n − 1, i.e., if the
input array is already sorted in nondecreasing order. Thus, for sorted arrays, the number of
key comparisons is

Cbest(n) = n − 1 ∈ Θ(n).
Depth-first search and breadth-first search are the principal traversal algorithms for traversing
digraphs as well, but the structure of the corresponding forests can be more complex than for
undirected graphs. Thus, even for the simple example in the figure, the depth-first search forest
(part b of the figure) exhibits all four types of edges possible in a DFS forest of a directed graph:
Tree edges (ab, bc, de),
back edges (ba) from vertices to their ancestors,
forward edges (ac) from vertices to their descendants in the tree other than their
children, and
cross edges (dc), which are none of the aforementioned types.
Note that a back edge in a DFS forest of a directed graph can connect a vertex to its parent.
Whether or not it is the case, the presence of a back edge indicates that the digraph has a
directed cycle. A directed cycle in a digraph is a sequence of three or more of its vertices that
starts and ends with the same vertex and in which every vertex is connected to its immediate
predecessor by an edge directed from the predecessor to the successor. For example, a, b, a
is a directed cycle in the digraph in the figure given above. Conversely, if a DFS forest of a
digraph has no back edges, the digraph is a dag, an acronym for directed acyclic graph.
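As an illustration of how back edges reveal directed cycles, here is a small Python sketch (the adjacency lists approximate the figure's digraph; function and variable names are illustrative):

def has_directed_cycle(adj):
    # Detect a back edge (hence a directed cycle) via DFS.
    # adj maps each vertex to the list of vertices its edges point to.
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on the current DFS path / finished
    color = {v: WHITE for v in adj}

    def dfs(u):
        color[u] = GRAY
        for w in adj[u]:
            if color[w] == GRAY:      # edge to an ancestor on the path: a back edge
                return True
            if color[w] == WHITE and dfs(w):
                return True
        color[u] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in adj)

# tree edges a->b, b->c, d->e; back edge b->a; forward edge a->c; cross edge d->c
g = {'a': ['b', 'c'], 'b': ['c', 'a'], 'c': [], 'd': ['e', 'c'], 'e': []}
print(has_directed_cycle(g))   # prints True: a, b, a is a directed cycle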
Note: The solution obtained by the source-removal algorithm is different from the one
obtained by the DFS-based algorithm. Both of them are correct, of course; the topological
sorting problem may have several alternative solutions. (A sketch of the source-removal
approach appears after the applications list below.)
Applications of Topological Sorting
Instruction scheduling in program compilation
Cell evaluation ordering in spreadsheet formulas,
Resolving symbol dependencies in linkers.
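Since neither topological sorting algorithm is reproduced in these notes, here is a minimal Python sketch of the source-removal approach (repeatedly delete a vertex with no incoming edges; the names and the example digraph are illustrative):

from collections import deque

def topological_sort(adj):
    # Source-removal topological sort: repeatedly remove a vertex with
    # in-degree 0 and delete its outgoing edges.
    indeg = {v: 0 for v in adj}
    for v in adj:
        for w in adj[v]:
            indeg[w] += 1
    sources = deque(v for v in adj if indeg[v] == 0)
    order = []
    while sources:
        v = sources.popleft()
        order.append(v)
        for w in adj[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                sources.append(w)
    if len(order) != len(adj):        # leftover vertices lie on a directed cycle
        raise ValueError("digraph has a cycle; no topological order exists")
    return order

# e.g., course prerequisites: C1 and C2 before C3; C3 before C4 and C5
print(topological_sort({'C1': ['C3'], 'C2': ['C3'],
                        'C3': ['C4', 'C5'], 'C4': [], 'C5': []}))
# prints ['C1', 'C2', 'C3', 'C4', 'C5']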
*******************************************************