DAA (Design and Analysis of Algorithms)
BY: GUNJAN SAXENA
Syllabus: Unit 1
Introduction: Algorithms
Analysing Algorithms
Complexity of Algorithms
Growth of Functions, Performance Measurements,
Sorting and Order Statistics –
Shell Sort,
Quick Sort,
Merge Sort,
Heap Sort,
Comparison of Sorting Algorithms, Sorting in Linear Time
Introduction: Algorithm
In general, an algorithm is a set of steps to complete a task. These steps tell us how to do some task.
For example,
• When you cook, you follow a recipe to prepare a dish.
• If you play a game, you are devising strategies to help you win.
In computers, by contrast, "an algorithm is a set of instructions used to solve a particular problem or to perform a task, described precisely enough that a computer can run it".
For example,
Given Task: to make a cup of tea.
Algorithm:
• add water and milk to the kettle,
• boil it, add tea leaves,
• add sugar, and then serve it in a cup.
Algorithm: Characteristics
Every algorithm must satisfy the following criteria:
• Input: there are zero/more quantities, externally supplied;
• Output: at least one quantity is produced;
• Finiteness: if we trace out the instructions of an algorithm, then for all cases the
algorithm will terminate after a finite number of steps;
• Definiteness: each instruction must be clear and unambiguous;
• Effectiveness: every instruction must be sufficiently basic, and every statement in the algorithm should perform some operation. It is not enough that each operation be definite; it must also be feasible.
• Correctness: the algorithm must produce the correct result.
So, we can say that:
• An Algorithm is a finite sequence of instructions, each of which has a
clear meaning and can be performed with a finite amount of effort in a
finite length of time.
• No matter what the input values may be, an algorithm terminates after
executing a finite number of instructions.
Steps Required to Construct an Algorithm for Solving a Problem:
1. Problem Definition (understanding the problem clearly)
2. Choose the approach for designing the algorithm (Divide and Conquer, Greedy Technique, Dynamic Programming, ...)
3. Create Flowchart/Pseudocode
4. Verification
5. Analyzing the Algorithm
6. Implementation/Coding of the chosen algorithm
Analysis of Algorithms
To solve a particular problem, if more than one algorithm is possible, the best one is decided by analyzing/comparing all the candidate algorithms on the basis of the factors below:
1. Time (CPU time): the amount of time required by the algorithm is called its time complexity.
2. Space (main memory space): the amount of space/memory required by the algorithm is called its space complexity.
Thus, analysis of algorithms is the determination of the amount of time and space resources required to execute them.
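For instance, two algorithms for the same task can differ sharply in time complexity. Here is a minimal sketch (the task, summing the first n natural numbers, and the function names are illustrative assumptions, not from the slides):

#include <stdio.h>

/* O(n) time: loop over all n numbers. */
long sum_loop(long n) {
    long sum = 0;
    for (long i = 1; i <= n; i++)
        sum += i;
    return sum;
}

/* O(1) time: closed-form formula n(n+1)/2. */
long sum_formula(long n) {
    return n * (n + 1) / 2;
}

int main(void) {
    long n = 1000;
    printf("%ld %ld\n", sum_loop(n), sum_formula(n)); /* both print 500500 */
    return 0;
}

Both functions return the same answer, but analysis tells us the formula version is the better choice as n grows.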
Growth of Functions
The growth of functions is directly related to the running-time complexity of algorithms. When studying the running-time complexity of an algorithm, we are concerned with determining the growth in the running time of the algorithm (the number of operations it requires) as the problem size (denoted n) increases.
Types of running time Analysis
1) Worst Case Analysis (usually done)
In worst-case analysis, we calculate an upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed.
2) Average Case Analysis (sometimes done)
In average-case analysis, we take all possible inputs, calculate the computing time for each of them, sum all the calculated values, and divide the sum by the total number of inputs.
3) Best Case Analysis (rare)
In best-case analysis, we calculate a lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed.
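To make the three cases concrete, here is a minimal linear-search sketch in C (hypothetical code, not from the original slides):

#include <stdio.h>

/* Linear search: return the index of key in arr[0..n-1], or -1 if absent. */
int linear_search(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (arr[i] == key)   /* one comparison per iteration */
            return i;
    return -1;
}

int main(void) {
    int arr[] = {4, 8, 1, 9, 3};
    printf("%d\n", linear_search(arr, 5, 4)); /* best case: key at index 0 -> O(1) */
    printf("%d\n", linear_search(arr, 5, 7)); /* worst case: key absent -> O(n) */
    return 0;
}

Averaged over all possible positions of the key, about (n+1)/2 comparisons are needed, which is still O(n).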
Asymptotic notations
Asymptotic notations are used to express the fastest and slowest possible asymptotic running times of an algorithm, and they are defined in terms of functions whose domains are the set of natural numbers.
Why is Asymptotic Notation Important?
• They give a simple characterization of an algorithm's efficiency.
• They allow the performances of various algorithms to be compared.
• They focus on the important terms by abstracting away low-order terms and constant factors.
• They describe the rate of growth of functions.
Types of Asymptotic Notations
1. O-notation
O-notation pronounced "big-oh notation", is used to describe the
asymptotic upper bound of an algorithm.
Formally, it is defined as:
For a given function, f(n) and g(n), we denote f(n)=O(g(n)) , if there
exist positive constants c and n0 such that 0≤f(n)≤cg(n) for all n≥ n0
.
Less formally, this means that for all sufficiently big n, the running
time(growth rate) of the algorithm or f(n) is less than g(n) multiplied
by some constant.
Even less formally, if f(n)=O( n2 ) then we can say that f(n) runs no
slower than n2 .
QUESTION # 1
⦿ f(n) = 2n + 5; find g(n), C, and n₀.
By the definition of Big-O: f(n) = O(g(n)) iff f(n) ≤ C·g(n) for all n ≥ n₀.
Take g(n) = n (the highest power of n in f(n)) and try C = 3, so we need 2n + 5 ≤ 3n:
For n = 1: 2(1) + 5 ≤ 3(1), i.e., 7 ≤ 3: False!
For n = 2: 2(2) + 5 ≤ 3(2), i.e., 9 ≤ 6: False!
For n = 3: 2(3) + 5 ≤ 3(3), i.e., 11 ≤ 9: False!
For n = 4: 2(4) + 5 ≤ 3(4), i.e., 13 ≤ 12: False!
For n = 5: 2(5) + 5 ≤ 3(5), i.e., 15 ≤ 15: TRUE!
So we can say f(n) = O(n) for C = 3 and n₀ = 5 (the inequality holds for all n ≥ 5).
2. Ω-notation
Ω-notation, pronounced "big-omega notation", is used to describe
the asymptotic lower bound of an algorithm. Formally, it is defined
as:
For given functions f(n) and g(n), we denote f(n)= Ω(g(n)) ,if there
exist positive constants c and n0 such that 0≤cg(n)≤f(n) for all n≥ n0
.
Less formally, this means that for all sufficiently big n, the running
time (growth rate) of the algorithm or f(n) is greater than g(n)
multiplied by some constant.
Even less formally, if f(n)=Ω( n2 ) then we can say that f(n) runs no
faster than n2
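As a sketch mirroring Question #1 above: take f(n) = 2n + 5 and g(n) = n. With c = 2 we have 2n ≤ 2n + 5 for all n ≥ 1, so f(n) = Ω(n) with c = 2 and n₀ = 1.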
3. Θ-notation
Θ-notation, pronounced "big-theta notation", indicates an asymptotically tight bound on an algorithm's running time.
It says that the growth rate of f(n) equals that of g(n), i.e., it bounds the function both from above and from below.
f(n) is said to be asymptotically tightly bounded by g(n).
For given functions f(n) and g(n), we write f(n) = Θ(g(n)) if there exist positive constants c₁, c₂, and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀.
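Combining the two sketches above: for f(n) = 2n + 5 we have 2n ≤ 2n + 5 ≤ 3n for all n ≥ 5, so f(n) = Θ(n) with c₁ = 2, c₂ = 3, and n₀ = 5.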
4. Little-oh (o) Notation
o-notation denotes an upper bound that is not asymptotically tight: f(n) = o(g(n)) if for every positive constant c there exists n₀ > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n₀.
5. Little-omega (ω) Notation
ω-notation denotes a lower bound that is not asymptotically tight: f(n) = ω(g(n)) if for every positive constant c there exists n₀ > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n₀.
Calculating Time Complexity
In order to analyze (calculate the running time of) iterative programs, we have to count the number of times each loop is going to get executed.
For analyzing recursive algorithms, we can use the Iteration method, Recurrence Tree method, Master's method, or Substitution method.
If an algorithm contains neither iteration nor recursion, then the running time does not depend on the input size: whatever the input size, the running time is constant, i.e., O(1).
Examples for calculating Time Complexity of
Iterative programs
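The following sketches (hypothetical functions, with assumed loop bounds) show the usual counting argument:

#include <stdio.h>

/* Example A: the loop body executes n times -> O(n). */
void example_a(int n) {
    for (int i = 0; i < n; i++)
        printf("%d ", i);
}

/* Example B: nested loops execute the body n * n times -> O(n^2). */
void example_b(int n) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            printf("%d ", i + j);
}

/* Example C: no loop or recursion -> constant time, O(1). */
int example_c(int a, int b) {
    return a + b;
}

Counting how many times the innermost statement runs gives the time complexity directly.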
RECURSION
A function that calls itself to solve a problem is said to be recursive. Recursion is used when a bigger problem can be solved in terms of smaller instances of the same problem.
Example: calculating the factorial of 6.
f(6) = 6 · f(5) = 6 · 5 · f(4) = 30 · 4 · f(3) = 120 · 3 · f(2) = 360 · 2 · f(1) = 720 · 1 = 720
To execute a recursive program, a stack data structure is used.
Every recursive program must have a termination condition; otherwise it will recurse forever and eventually fail with a stack overflow error.
In recursion, when a function calls itself,
the number of parameters does not change,
the parameter names do not change,
the code does not change,
but the parameter values do change.
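A minimal C sketch of the factorial example above (the function name fact is illustrative):

#include <stdio.h>

/* Recursive factorial: f(n) = n * f(n-1), with f(1) = 1 as the termination condition. */
long fact(int n) {
    if (n <= 1)              /* termination condition prevents infinite recursion */
        return 1;
    return n * fact(n - 1);  /* same code and parameter name, new parameter value */
}

int main(void) {
    printf("%ld\n", fact(6)); /* prints 720 */
    return 0;
}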
Example 1: Write a Recursive C Program & Recurrence Relation to multiply 2
positive numbers m & n where m, n > 0 .
Methods for Solving Recurrence Relations
1. Iteration Method
2. Substitution Method
3. Master’s Method
4. Recurrence Tree Method
1. Iteration Method
In this method, we first convert the recurrence into a summation.
We do so by iterating the recurrence until the initial condition is reached.
For example, suppose we have a recurrence in which T(n) is defined in terms of T(n/2).
To convert the recurrence into a summation, we break T(n) into T(n/2), then into T(n/4), and so on, until the base case is reached.
T(1) denotes the time taken in the base case (initial condition).
After that, we back-substitute the value of k to express the equation in terms of n and the initial condition, as in the worked sketch below.
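As a worked sketch, assume the recurrence T(n) = T(n/2) + c with base case T(1) = c (the exact form and the constant c are assumptions consistent with the halving pattern above):

T(n) = T(n/2) + c
     = T(n/4) + 2c
     = T(n/8) + 3c
     ...
     = T(n/2^k) + k·c

The base case is reached when n/2^k = 1, i.e., k = log₂ n. Back-substituting this value of k gives T(n) = T(1) + c·log₂ n = O(log n).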