
Algorithm Analysis and Design
Algorithm
• An algorithm is a set of instructions to be followed to
solve a problem.
– There can be more than one solution (more than one
algorithm) to solve a given problem.
– An algorithm can be implemented using different
programming languages on different platforms.
• An algorithm must be correct. It should correctly solve
the problem.
– e.g., a sorting algorithm must produce correct output even if (1) the input
is already sorted, or (2) it contains repeated elements.
• Once we have a correct algorithm for a problem, we
have to determine the efficiency of that algorithm.
Algorithmic Performance
There are two aspects of algorithmic performance:
• Time
• Instructions take time.
• How fast does the algorithm perform?
• What affects its runtime?
• Space
• Data structures take space
• What kind of data structures can be used?
• How does choice of data structure affect the runtime?
➢ We will focus on time:
– How to estimate the time required for an algorithm
– How to reduce the time required

Analysis of Algorithms
• Analysis of Algorithms is the area of computer science that
provides tools to analyze the efficiency of different methods of
solutions.
• How do we compare the time efficiency of two algorithms that
solve the same problem?
Naïve approach: implement the algorithms in a programming
language (e.g., C++) and run them to compare their time
requirements. Comparing the programs (instead of the algorithms)
raises several difficulties.
– How are the algorithms coded?
• Comparing running times means comparing the implementations.
• We should not compare implementations, because they are sensitive to programming
style that may cloud the issue of which algorithm is inherently more efficient.
– What computer should we use?
• We should compare the efficiency of the algorithms independently of a particular
computer.
– What data should the program use?
• Any analysis must be independent of specific data.
Analysis of Algorithms
• When we analyze algorithms, we should employ
mathematical techniques that analyze algorithms
independently of specific implementations,
computers, or data.

• To analyze algorithms:
– First, we start to count the number of significant
operations in a particular solution to assess its
efficiency.
– Then, we will express the efficiency of algorithms using
growth functions.

The Execution Time of Algorithms
• Each operation in an algorithm (or a program) has a cost.
➔ Each operation takes a certain amount of time.

count = count + 1;
➔ takes a certain amount of time, but that time is constant
A sequence of operations:
count = count + 1;
➔ Cost: c1
sum = sum + count;
➔ Cost: c2
➔ Total Cost = c1 + c2

The Execution Time of Algorithms (cont.)
Example: Simple If-Statement

Cost Times
if (n < 0) c1 1
absval = -n c2 1
else
absval = n; c3 1
Total Cost <= c1 + max(c2,c3)

The Execution Time of Algorithms (cont.)
Example: Simple Loop
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
i = i + 1; c4 n
sum = sum + i; c5 n
}

Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5


➔ The time required for this algorithm is proportional to n

The Execution Time of Algorithms (cont.)
Example: Nested Loop
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
j=1; c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i; c6 n*n
j = j + 1; c7 n*n
}
i = i + 1; c8 n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
➔ The time required for this algorithm is proportional to n²
General Rules for Estimation
• Loops: The running time of a loop is at most the running time
of the statements inside of that loop times the number of
iterations.
• Nested Loops: The running time of a statement inside nested loops is
the running time of the statement multiplied by the product of the
sizes of all the loops.
• Consecutive Statements: Just add the running times of those
consecutive statements.
• If/Else: Never more than the running time of the test plus the
larger of the running times of S1 and S2 (see the sketch below).
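
As a minimal sketch of these rules (hypothetical C++ code, not from the original slides; the function name and the input array are illustrative):

#include <vector>
using std::vector;

// Hypothetical fragment combining the estimation rules above.
int applyRules(const vector<int>& a) {
    int n = (int)a.size();
    int total = 0;                       // consecutive statements: O(1)
    // Single loop: n iterations * O(1) body = O(n).
    for (int i = 0; i < n; i++)
        total += a[i];
    // Nested loops: n * n iterations * O(1) body = O(n^2).
    int pairs = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            // If/Else: charge the test plus the larger branch (O(1) here).
            if (a[i] + a[j] == total)
                pairs++;
    // Overall: O(1) + O(n) + O(n^2) = O(n^2).
    return pairs;
}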

Algorithm Growth Rates
• We measure an algorithm’s time requirement as a function of the
problem size.
– Problem size depends on the application: e.g., the number of elements in a
list for a sorting algorithm, or the number of users for a social-network search.
• So, for instance, we say that (if the problem size is n)
– Algorithm A requires 5*n² time units to solve a problem of size n.
– Algorithm B requires 7*n time units to solve a problem of size n.
• The most important thing to learn is how quickly the algorithm’s
time requirement grows as a function of the problem size.
– Algorithm A requires time proportional to n².
– Algorithm B requires time proportional to n.
• An algorithm’s proportional time requirement is known as growth
rate.
• We can compare the efficiency of two algorithms by comparing
their growth rates.
Algorithm Growth Rates (cont.)

[Figure: time requirements as a function of the problem size n]
Common Growth Rates
Function    Growth-Rate Name
c           Constant
log N       Logarithmic
log² N      Log-squared
N           Linear
N log N     Log-linear
N²          Quadratic
N³          Cubic
2^N         Exponential
Running Times for Small Inputs

[Figure: running time vs. input size n for the common growth-rate functions, at small n]
Running Times for Large Inputs

[Figure: running time vs. input size n for the common growth-rate functions, at large n]
Popular Notations in Complexity Analysis of
Algorithms
1. Big-O Notation (Mostly used)
• We define an algorithm’s worst-case time complexity using Big-O
notation, which describes the set of functions that grow slower than, or at
the same rate as, the given expression. It captures the maximum amount of
time an algorithm can require over all input values.
2. Omega Notation (Very Rarely used)
• Omega notation defines the best case of an algorithm’s time complexity:
the set of functions that grow faster than, or at the same rate as, the given
expression. It captures the minimum amount of time an algorithm
requires over all input values.
3. Theta Notation (Rarely used)
• Theta notation is used when the set of functions lies in both O(expression)
and Omega(expression), giving a tight bound. This is how we define the
average-case time complexity of an algorithm.
Worst, Average, and Best Case Analysis of Algorithms
1. Worst Case Analysis (Mostly used)
• In the worst-case analysis, we calculate the upper bound
on the running time of an algorithm. We must know the
case that causes a maximum number of operations to be
executed.
• Example: For Linear Search, the worst case happens
when the element to be searched (x) is not present in the
array. When x is not present, the search() function
compares it with all the elements of arr[] one by one.
Therefore, the worst-case time complexity of the linear
search would be O(n).
Worst, Average, and Best Case Analysis of Algorithms
2. Best Case Analysis (Very Rarely used)
• In the best-case analysis, we calculate the lower bound
on the running time of an algorithm. We must know the
case that causes a minimum number of operations to be
executed.
• Example: In the linear search problem, the best case
occurs when x is present at the first location. The
number of operations in the best case is constant (not
dependent on n). So time complexity in the best case
would be Ω(1).

Worst, Average, and Best Case Analysis of Algorithms
3. Average Case Analysis (Rarely used)
• In average case analysis, we take all possible inputs and
calculate the computing time for all of the inputs. “Sum
all the calculated values and divide the sum by the total number
of inputs”. We must know (or predict) the distribution of
cases.
• Example: For the linear search problem, let us assume that all
cases are uniformly distributed (including the case of x not being
present in the array). So we sum all the cases and divide the sum
by (n+1). Following is the value of average-case time complexity.
Average-case cost = (Σ_{i=1}^{n+1} θ(i)) / (n+1) = θ((n+1)*(n+2)/2) / (n+1) = θ(n)
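
A small sketch (illustrative, not from the slides) that checks this average empirically for linear search; the helper name and the setup are assumptions:

#include <cstdio>

// Comparisons made by linear search over n elements when the key is
// at 0-based index pos, or n comparisons when the key is absent.
static int comparisons(int n, int pos) {
    return (pos < n) ? pos + 1 : n;
}

int main() {
    int n = 1000;
    long long total = 0;
    // n+1 equally likely cases: key at index 0..n-1, or not present.
    for (int pos = 0; pos <= n; pos++)
        total += comparisons(n, pos);
    // The average comes out to roughly n/2 comparisons, i.e., θ(n).
    printf("average comparisons = %.2f\n", (double)total / (n + 1));
    return 0;
}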
Order-of-Magnitude Analysis and Big O
Notation
• Calculate the upper bound on the running time of an algorithm
• If Algorithm A requires time proportional to g(n), Algorithm A is
said to be order g(n), and it is denoted as O(g(n)).
• The function g(n) is called the algorithm’s growth-rate
function.
• Since the capital O is used in the notation, this notation is called
the Big O notation.
• If Algorithm A requires time proportional to n², it is O(n²).
• If Algorithm A requires time proportional to n, it is O(n).

Definition of the Order of an Algorithm
Definition:
Algorithm A is order g(n) – denoted as O(g(n)) –
if constants k and n0 exist such that A requires
no more than k*g(n) time units to solve a problem
of size n ≥ n0. ➔ f(n) ≤ k*g(n) for all n ≥ n0

• The requirement of n ≥ n0 in the definition of O(f(n)) formalizes
the notion of sufficiently large problems.
– In general, many values of k and n0 can satisfy this definition.

Definition of the Order of an Algorithm
Calculate the lower bound on the running time of an algorithm
Definition:
Algorithm A is omega g(n) – denoted as Ω(g(n)) –
if constants k and n0 exist such that A requires
at least k*g(n) time units to solve a problem
of size n ≥ n0. ➔ f(n) ≥ k*g(n) for all n ≥ n0

Definition of the Order of an Algorithm
Definition:
Algorithm A is theta g(n) – denoted as 𝚹(g(n)) –
if constants k1, k2 and n0 exist such that
k1*g(n) ≤ f(n) ≤ k2*g(n) for all n ≥ n0
(Drop the equalities and you get little-o and little-omega (ω) notations, respectively; e.g., f(n) is o(g(n)) if f(n) < k*g(n) for every constant k > 0.)

Interesting information about asymptotic notations:

A) For some algorithms, all the cases (worst, best, average)


are asymptotically the same. i.e., there are no worst and
best cases.
Example: Merge Sort does Θ(n log(n)) operations in all cases.
B) Whereas most of the other sorting algorithms have
worst and best cases.
Example 1: In the typical implementation of Quick Sort (where the pivot
is chosen as a corner element), the worst case occurs when the input
array is already sorted, and the best case occurs when the pivot
elements always divide the array into two halves.
Example 2: For insertion sort, the worst case occurs when the array
is reverse-sorted, and the best case occurs when the array is already
sorted in the same order as the output.
Order of an Algorithm
• Suppose an algorithm requires f(n) = n²–3*n+10 seconds to solve a
problem of size n. If constants k and n0 exist such that
k*n² > n²–3*n+10 for all n ≥ n0,
then the algorithm is order n² (in fact, k is 3 and n0 is 2):
3*n² > n²–3*n+10 for all n ≥ 2.
Thus, the algorithm requires no more than k*n² time units for n ≥ n0,
so it is O(n²).

Order of an Algorithm
• Show 2^x + 17 is O(2^x)

• 2^x + 17 ≤ 2^x + 2^x = 2*2^x for x ≥ 5 (since 17 ≤ 2^5 = 32)

• Hence k = 2 and n0 = 5

Order of an Algorithm
• Show 2^x + 17 is O(3^x)

• We need 2^x + 17 ≤ k*3^x
• It is easy to see that the right-hand side grows faster than the left → k = 1
• However, when x is small, the 17 still dominates → skip over the
smaller values of x by using n0 = 3 (at x = 3: 2³ + 17 = 25 ≤ 27 = 3³)
• Hence k = 1 and n0 = 3

Order of an Algorithm
• Show f(n) = 7n² + 1 is Θ(n²)

• You need to show f(n) is O(n²) and f(n) is Ω(n²)

• f(n) is O(n²) because 7n² + 1 ≤ 7n² + n² ∀n ≥ 1 ➔ k1 = 8, n0 = 1
• f(n) is Ω(n²) because 7n² + 1 ≥ 7n² ∀n ≥ 0 ➔ k2 = 7, n0 = 0
• Pick the larger n0 to satisfy both conditions ➔ k1 = 8,
k2 = 7, n0 = 1

A Comparison of Growth-Rate Functions

[Figure: a comparison of the common growth-rate functions]
Growth-Rate Functions
O(1) Time requirement is constant, and it is independent of the problem’s size.
O(log2n) Time requirement for a logarithmic algorithm increases slowly
as the problem size increases.
O(n) Time requirement for a linear algorithm increases directly with the size
of the problem.
O(n*log2n) Time requirement for a n*log2n algorithm increases more rapidly than
a linear algorithm.
O(n²) Time requirement for a quadratic algorithm increases rapidly with the
size of the problem.
O(n³) Time requirement for a cubic algorithm increases more rapidly with the
size of the problem than the time requirement for a quadratic algorithm.
O(2^n) As the size of the problem increases, the time requirement for an
exponential algorithm increases too rapidly to be practical.

Growth-Rate Functions
• If an algorithm takes 1 second to run with problem size 8,
what is the (approximate) time requirement for that algorithm
with problem size 16?
• If its order is:
O(1)       ➔ T(n) = 1 second
O(log2n)   ➔ T(n) = (1*log216) / log28 = 4/3 seconds
O(n)       ➔ T(n) = (1*16) / 8 = 2 seconds
O(n*log2n) ➔ T(n) = (1*16*log216) / (8*log28) = 8/3 seconds
O(n²)      ➔ T(n) = (1*16²) / 8² = 4 seconds
O(n³)      ➔ T(n) = (1*16³) / 8³ = 8 seconds
O(2^n)     ➔ T(n) = (1*2^16) / 2^8 = 2^8 seconds = 256 seconds
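
A small sketch (illustrative, not from the slides) of the scaling rule used above: the predicted time is T2 = T1 * f(n2) / f(n1) for a growth-rate function f. The function names here are assumptions:

#include <cstdio>

// Predict the running time at size n2, given a measured time t1 at
// size n1, for an algorithm whose cost is proportional to f(n).
double predict(double t1, double n1, double n2, double (*f)(double)) {
    return t1 * f(n2) / f(n1);
}

double quadratic(double n) { return n * n; }

int main() {
    // 1 second at n = 8 for an O(n²) algorithm ➔ 4 seconds at n = 16.
    printf("%.1f seconds\n", predict(1.0, 8, 16, quadratic));
    return 0;
}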

Properties of Growth-Rate Functions
1. We can ignore low-order terms in an algorithm’s growth-rate
function.
– If an algorithm is O(n³+4n²+3n), it is also O(n³).
– We use only the highest-order term as the algorithm’s growth-rate function.

2. We can ignore a multiplicative constant in the highest-order term
of an algorithm’s growth-rate function.
– If an algorithm is O(5n³), it is also O(n³).

3. O(f(n)) + O(g(n)) = O(f(n)+g(n))
– We can combine growth-rate functions.
– If an algorithm is O(n³) + O(4n²), it is also O(n³+4n²) ➔ So, it is O(n³).
– Similar rules hold for multiplication.

How to Analyse Loops for Complexity Analysis
of Algorithms
The general steps to analyze loops for complexity analysis:
1. Determine the number of iterations of the loop. This is usually done by
analyzing the loop control variables and the loop termination condition.
2. Determine the number of operations performed in each iteration of the
loop. This can include both arithmetic operations and data access
operations, such as array accesses or memory accesses.
3. Express the total number of operations performed by the loop as a
function of the input size. This may involve using mathematical
expressions or finding a closed-form expression for the number of
operations performed by the loop.
4. Determine the order of growth of the expression for the number of
operations performed by the loop. This can be done by using
techniques such as big O notation or by finding the dominant term and
ignoring lower-order terms.
How to Analyse Loops for Complexity Analysis
of Algorithms
Constant Time Complexity O(1):
The time complexity of a function (or set of statements) is
considered as O(1) if it doesn’t contain a loop, recursion, and call to
any other non-constant time function.
i.e. set of non-recursive and non-loop statements
In computer science, O(1) refers to constant time complexity, which
means that the running time of an algorithm remains constant and
does not depend on the size of the input. This means that the
execution time of an O(1) algorithm will always take the same
amount of time regardless of the input size.
An example of an O(1) algorithm is accessing an element in an array
using an index.

How to Analyse Loops for Complexity Analysis
of Algorithms
Constant Time Complexity O(1):

// Here c is a constant
for (int i = 1; i <= c; i++) {
    // some O(1) expressions
}

# Here c is a constant
for i in range(1, c+1):
    # some O(1) expressions

How to Analyse Loops for Complexity Analysis
of Algorithms
Linear Time Complexity O(n):
The time complexity of a loop is considered O(n) if the loop
variables are incremented/decremented by a constant amount.
For example, the following functions have O(n) time complexity.
Linear time complexity, denoted as O(n), is a measure of the
growth of the running time of an algorithm proportional to the size
of the input.
In an O(n) algorithm, the running time increases linearly with the
size of the input.
For example, searching for an element in an unsorted array or
iterating through an array and performing a constant amount of
work for each element would be O(n) operations. In simple words,
for an input of size n, the algorithm takes n steps to complete the
operation.
How to Analyse Loops for Complexity Analysis
of Algorithms
Linear Time Complexity O(n):

// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i -= c) {
    // some O(1) expressions
}

# Here c is a positive integer constant
for i in range(1, n+1, c):
    # some O(1) expressions
for i in range(n, 0, -c):
    # some O(1) expressions

How to Analyse Loops for Complexity Analysis
of Algorithms
Polynomial Time Complexity O(n^c):
An algorithm has this time complexity when its performance is
directly proportional to a power of the size of the input data; for nested
loops, it is equal to the number of times the innermost statement is
executed. For example, the following sample loops have O(n²) time
complexity.
Quadratic time complexity, denoted O(n²), refers to an algorithm
whose running time increases proportionally to the square of the size of
the input.
In other words, for an input of size n, the algorithm takes n * n steps to
complete the operation.
An example of an O(n²) algorithm is a nested loop that iterates over the
entire input for each element, performing a constant amount of work for
each iteration. This results in a total of n * n iterations, making the
running time quadratic in the size of the input.
How to Analyse Loops for Complexity Analysis
of Algorithms
Polynomial Time Complexity O(n^c):

// C++
for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some O(1) expressions
    }
}
for (int i = n; i > 0; i -= c) {
    for (int j = i + 1; j <= n; j += c) {
        // some O(1) expressions
    }
}

# Python
for i in range(1, n+1, c):
    for j in range(1, n+1, c):
        # some O(1) expressions
for i in range(n, 0, -c):
    for j in range(i+1, n+1, c):
        # some O(1) expressions

Example: Selection Sort and Insertion Sort have O(n²) time complexity.

How to Analyse Loops for Complexity Analysis
of Algorithms
Logarithmic Time Complexity O(log n):

The time complexity of a loop is considered O(log n) if the loop
variables are divided/multiplied by a constant amount. Likewise, a
recursive function whose parameter is reduced by a constant factor
on each call has O(log n) time complexity.

How to Analyse Loops for Complexity Analysis
of Algorithms
Logarithmic Time Complexity O(log n):

// C++
for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}
// Recursive function: n shrinks by a constant factor each call,
// so there are O(log n) calls
void recurse(int n)
{
    if (n <= 1)
        return;
    // some O(1) expressions
    recurse(n / c);
}

# Python
i = 1
while i <= n:
    # some O(1) expressions
    i = i * c
i = n
while i > 0:
    # some O(1) expressions
    i = i // c
# Recursive function: n shrinks by a constant factor each call
def recurse(n):
    if n <= 1:
        return
    # some O(1) expressions
    recurse(n // c)
How to Analyse Loops for Complexity Analysis
of Algorithms
Double-Logarithmic Time Complexity O(log log n):

The time complexity of a loop is considered O(log log n) if the
loop variables are reduced/increased exponentially by a constant
amount.

How to Analyse Loops for Complexity Analysis
of Algorithms
Double-Logarithmic Time Complexity O(log log n):

// C++
// Here c is a constant greater than 1
for (int i = 2; i <= n; i = pow(i, c)) {
    // some O(1) expressions
}
// Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 1; i = fun(i)) {
    // some O(1) expressions
}

# Python
# Here c is a constant greater than 1
i = 2
while i <= n:
    # some O(1) expressions
    i = i ** c
# Here fun is sqrt or cuberoot or any other constant root
i = n
while i > 1:
    # some O(1) expressions
    i = fun(i)

How to Analyse Loops for Complexity Analysis
of Algorithms
Combine the time complexities of consecutive loops

When there are consecutive loops, we calculate the time complexity as the
sum of the time complexities of the individual loops.

To combine the time complexities of loops:

1. Consider the number of iterations performed by each
loop and the amount of work performed in each iteration.
2. For each loop, multiply its number of iterations by the cost of one
iteration; sum these costs across consecutive loops and keep the
dominant term as the overall order of growth.

How to Analyse Loops for Complexity Analysis
of Algorithms
Combine the time complexities of nested loops

Here, the outer loop performs n iterations, and the inner loop
performs m iterations for each iteration of the outer loop. So, the
total number of iterations performed by the inner loop is n * m, and
the total time complexity is O(n * m).

for i in range(n):
    for j in range(m):
        # some constant time operation

How to Analyse Loops for Complexity Analysis
of Algorithms
Combine the time complexities of nested loops

Here, the outer loop performs n iterations, and the inner loop
performs i iterations for each iteration of the outer loop, where i is
the current iteration count of the outer loop. The total number of
iterations performed by the inner loop is the sum of the iterations
performed across all iterations of the outer loop, i.e., sum(i) for
i = 1 to n, which equals n * (n + 1) / 2. Hence, the total time
complexity is O(n²).

for i in range(n):
    for j in range(i):
        # some constant time operation

How to Analyse Loops for Complexity Analysis
of Algorithms
Combine the time complexities of consecutive loops

for i in range(1, m+1, c):
    # some O(1) expressions

for i in range(1, n+1, c):
    # some O(1) expressions

# Time complexity of the above code is O(m) + O(n), which is O(m+n).
# If m == n, the time complexity becomes O(2n), which is O(n).

How to Analyse Loops for Complexity Analysis
of Algorithms
Calculate time complexity when there are many if, else
statements inside loops?

• The worst-case time complexity is the most useful among best,


average and worst. Therefore we need to consider the worst case.
We evaluate the situation when values in if-else conditions cause
a maximum number of statements to be executed.
• For example, consider the linear search function where we
consider the case when an element is present at the end or not
present at all.
• When the code is too complex to consider all if-else cases, we can
get an upper bound by ignoring if-else and other complex control
statements.
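
As a sketch of this rule (hypothetical code, not from the slides): in the loop below, the else branch costs O(n), so the worst case charges every iteration the larger branch, giving n iterations * O(n) = O(n²):

// a is an array of n ints; count and total are assumed accumulators
for (int i = 0; i < n; i++) {
    if (a[i] % 2 == 0) {
        count++;                     // O(1) branch
    } else {
        for (int j = 0; j < n; j++)  // O(n) branch
            total += a[j];
    }
}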
How to Analyse Loops for Complexity Analysis
of Algorithms
Calculate the time complexity of recursive functions

The time complexity of a recursive function can be written as a


mathematical recurrence relation.
To calculate time complexity, we must know how to solve
recurrences.
We will soon be discussing recurrence-solving techniques.

Growth-Rate Functions – Example1
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
i = i + 1; c4 n
sum = sum + i; c5 n
}

T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*c5


= (c3+c4+c5)*n + (c1+c2+c3)
= a*n + b
➔ So, the growth-rate function for this algorithm is O(n)

Growth-Rate Functions – Example2
Cost Times
i=1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
j=1; c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i; c6 n*n
j = j + 1; c7 n*n
}
i = i +1; c8 n
}
T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
= (c5+c6+c7)*n² + (c3+c4+c5+c8)*n + (c1+c2+c3)
= a*n² + b*n + c
➔ So, the growth-rate function for this algorithm is O(n²)
Growth-Rate Functions – Example3
for (i=1; i<=n; i++)
    for (j=1; j<=i; j++)
        for (k=1; k<=j; k++)
            x = x + 1;

T(n) = Σ_{i=1..n} Σ_{j=1..i} Σ_{k=1..j} 1 = n(n+1)(n+2)/6

➔ So, the growth-rate function for this algorithm is O(n³)
Growth-Rate Functions – Examples 4-5
• For n×n matrices, the time complexity is O(n³).

• How about this one (code not preserved in this extraction)? O(log n)
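
A minimal sketch of the n×n case (assuming the slide referred to the standard triple-loop matrix multiplication; names are illustrative):

#include <vector>
using Matrix = std::vector<std::vector<double>>;

// Classic triple-loop multiplication of two n x n matrices:
// three nested loops of n iterations each ➔ O(n³).
Matrix multiply(const Matrix& A, const Matrix& B) {
    int n = (int)A.size();
    Matrix C(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}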
Examples with their complexity analysis:
1. Linear search algorithm:
JAVA CODE
static int search(int arr[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++) {
        if (arr[i] == x) {
            return i;
        }
    }
    return -1;
}

PYTHON CODE
def search(arr, x):
    for index, value in enumerate(arr):
        if value == x:
            return index
    return -1

Examples with their complexity analysis: Cont.
1. Linear search algorithm:
Time Complexity Analysis: (In Big-O notation)
Best Case: O(1). This occurs if the element to be searched
is at the first index of the given list, so the number of
comparisons in this case is 1.
Average Case: O(n). This occurs if the element to be
searched is at the middle index of the given list.
Worst Case: O(n). This occurs if:
- The element to be searched is at the last index
- The element to be searched is not present in the list

Examples with their complexity analysis:
2. In this example, we take an array of length (n) and deal
with the following cases:
• If (n) is even then our output will be 0
• If (n) is odd then our output will be the sum of the elements of
the array.

JAVA CODE
static int getsum(int arr[], int n)
{
    if (n % 2 == 0) // if (n) is even
    {
        return 0;
    }
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += arr[i];
    }
    return sum; // if (n) is odd
}

PYTHON CODE
def getsum(arr, n):
    if n % 2 == 0:  # if (n) is even
        return 0
    Sum = 0
    for i in range(n):
        Sum += arr[i]
    return Sum  # if (n) is odd

Examples with their complexity analysis: Cont.
2. Previous Example :
Time Complexity Analysis:
Best Case: The order of growth will be constant because in the best
case we are assuming that (n) is even.
Average Case: In this case, we will assume that even and odd are
equally likely, therefore Order of growth will be linear
Worst Case: The order of growth will be linear because in this
case, we are assuming that (n) is always odd.

Notes
• The worst case analysis of an algorithm provides an upper bound on the
running time of the algorithm for any input size.
• The average case analysis of an algorithm provides an estimate of the running
time of the algorithm for a random input.
• The best case analysis of an algorithm provides a lower bound on the running
time of the algorithm for any input size.
• The big O notation is commonly used to express the worst case running time
of an algorithm.
• Different algorithms may have different best, average, and worst case running
times.
Notes
Worst, Average, and Best Case Analysis of Algorithms is a technique
used to analyze the performance of algorithms under different
conditions.
Advantages:
• This technique allows developers to understand the performance
of algorithms under different scenarios, which can help in making
informed decisions about which algorithm to use for a specific
task.
• Worst case analysis provides a guarantee on the upper bound of
the running time of an algorithm, which can help in designing
reliable and efficient algorithms.
• Average case analysis provides a more realistic estimate of the
running time of an algorithm, which can be useful in real-world
scenarios.
Notes
Disadvantages:
• This technique can be time-consuming and requires a good
understanding of the algorithm being analyzed.
• Worst case analysis does not provide any information about the
typical running time of an algorithm, which can be a
disadvantage in real-world scenarios.
• Average case analysis requires knowledge of the probability
distribution of input data, which may not always be available.

Algorithms Cheat Sheet:
Algorithm        Best Case     Average Case   Worst Case

Selection Sort   O(n²)         O(n²)          O(n²)
Bubble Sort      O(n)          O(n²)          O(n²)
Insertion Sort   O(n)          O(n²)          O(n²)
Tree Sort        O(n log n)    O(n log n)     O(n²)
Radix Sort       O(dn)         O(dn)          O(dn)
Merge Sort       O(n log n)    O(n log n)     O(n log n)
Heap Sort        O(n log n)    O(n log n)     O(n log n)
Quick Sort       O(n log n)    O(n log n)     O(n²)
Bucket Sort      O(n + k)      O(n + k)       O(n²)
Counting Sort    O(n + k)      O(n + k)       O(n + k)
Recursion
Growth-Rate Functions – Recursive Algorithms
Tower of Hanoi
void hanoi(int n, char source, char dest, char spare) { //Cost
if (n > 0) { //c1
hanoi(n-1, source, spare, dest); //c2
cout << "Move top disk from pole " << source //c3
<< " to pole " << dest << endl;
hanoi(n-1, spare, dest, source); //c4
}
}

• The time-complexity function T(n) of a recursive algorithm is
defined in terms of itself; this is known as a recurrence equation
for T(n).
• To find the growth-rate function for a recursive algorithm, we have
to solve its recurrence relation.
Growth-Rate Functions – Hanoi Towers
• What is the cost of hanoi(n,’A’,’B’,’C’)?

when n=0
T(0) = c1

when n>0
T(n) = c1 + c2 + T(n-1) + c3 + c4 + T(n-1)
= 2*T(n-1) + (c1+c2+c3+c4)
T(n) = 2*T(n-1) + c
Recurrence equation for the growth-rate function of hanoi-towers algorithm

• Now, we have to solve this recurrence equation to find the growth-rate function
of hanoi-towers algorithm

Growth-Rate Functions – Hanoi Towers (cont.)
• There are many methods to solve recurrence equations, but we will use a simple
method known as repeated substitutions.

T(n) = 2*T(n-1) + c
     = 2 * (2*T(n-2) + c) + c
     = 2 * (2 * (2*T(n-3) + c) + c) + c
     = 2^3 * T(n-3) + (2^2 + 2^1 + 2^0)*c      (assuming n > 2)
when the substitution is repeated i-1 times
     = 2^i * T(n-i) + (2^(i-1) + ... + 2^1 + 2^0)*c
when i = n
     = 2^n * T(0) + (2^(n-1) + ... + 2^1 + 2^0)*c
     = 2^n * c1 + (2^n - 1)*c = 2^n*(c1+c) - c

(Geometric series: if a = 1 + 2 + 4 + ... + 2^(n-1), then
2a = 2 + 4 + ... + 2^n, so a = 2a - a = 2^n - 1.)

➔ So, the growth-rate function is O(2^n)


More Recursive Algorithms
int fact(int n) {
if (n ==0)
return (1);
else
return (n * fact(n-1));
}

T(n) = T(n-1) + c
= (T(n-2) + c) + c
= (T(n-3) + c) + c + c
= (T(n-i) + i*c)
when i=n → T(0) + n*c ➔ T(n) = O(n)
Tracing the call fact(3)

Calls (the stack grows):
  fact(3): (n==0) false ➔ return 3 * fact(2)
  fact(2): (n==0) false ➔ return 2 * fact(1)
  fact(1): (n==0) false ➔ return 1 * fact(0)
  fact(0): (n==0) true  ➔ return 1

Returns (the stack unwinds):
  fact(1) returns 1 * 1 = 1
  fact(2) returns 2 * 1 = 2
  fact(3) returns 3 * 2 = 6 ➔ the result is 6
More Recursive Algorithms
bool isPalindrome (string s) {
if (s.length() <= 1)
return true;
return (s[0]==s[s.length()-1])&&
isPalindrome(s.substr(1,s.length()-2));
}

T(n) = T(n-2) + c
= (T(n-4) + c) + c
= (T(n-6) + c) + c + c
= (T(n-i) + i/2*c)
when i=n → T(0) + n/2*c ➔ T(n) = O(n)
More Recursive Algorithms

int fib(int n) { // 1 1 2 3 5 8 13 21 34 ...
    if (n == 1) return 1;
    if (n == 2) return 1;
    else
        return fib(n-1) + fib(n-2);
}
T(n) = ?
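(For reference, this answer is not on the original slide: the recurrence is
T(n) = T(n-1) + T(n-2) + c, which grows exponentially; O(2^n) is a simple
upper bound.)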
More Recursive Algorithms

Count positive integers in an array recursively?


int countPos(int* A, int j) {

}
More Recursive Algorithms

Count positive integers in an array recursively?


int countPos(int* A, int j) { //1st call countPos(A, n-1)
if (j < 0) return 0;
if (A[j] > 0) return 1 + countPos(A, j-1);
else return 0 + countPos(A, j-1);
}
T(n) = T(n-1) + c ➔ T(n) = O(n), the same recurrence we solved for fact.
The iterative version is also O(n).
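
A minimal sketch of that iterative version (illustrative, not from the slides):

// Iterative count of positive integers: one O(1) check per element,
// n iterations ➔ O(n).
int countPosIter(int* A, int n) {
    int count = 0;
    for (int i = 0; i < n; i++)
        if (A[i] > 0)
            count++;
    return count;
}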
What to Analyze
• An algorithm can require different times to solve different
problems of the same size.
– E.g., searching for an item in a list of n elements using sequential search. ➔ Cost:
1, 2, ..., n
• Worst-Case Analysis – The maximum amount of time that an
algorithm requires to solve a problem of size n.
– This gives an upper bound for the time complexity of an algorithm.
– Normally, we try to find the worst-case behavior of an algorithm.
• Best-Case Analysis – The minimum amount of time that an
algorithm requires to solve a problem of size n.
– The best-case behavior of an algorithm is NOT very useful.
• Average-Case Analysis – The average amount of time that an
algorithm requires to solve a problem of size n.
– Sometimes it is difficult to find the average-case behavior of an algorithm.
– We have to look at all possible data organizations of a given size n, and the
distribution probabilities of these organizations.
– Worst-case analysis is more common than average-case analysis.
Sequential Search
int sequentialSearch(const int a[], int item, int n){
    int i;  // declared outside the loop so it can be tested afterwards
    for (i = 0; i < n && a[i] != item; i++);
    if (i == n)
        return -1;
    return i;
}
Unsuccessful Search: ➔ O(n)

Successful Search:
Best-Case: item is in the first location of the array ➔O(1)
Worst-Case: item is in the last location of the array ➔O(n)
Average-Case: The number of key comparisons is 1, 2, ..., n
(Σ_{i=1..n} i) / n = ((n² + n)/2) / n = (n + 1)/2 ➔ O(n)

Binary Search
We can do binary search if the array is sorted:

int binarySearch(int a[], int size, int x) {
    int low = 0;
    int high = size - 1;
    int mid;  // mid will be the index of the target when it's found
    while (low <= high) {
        mid = (low + high)/2;
        if (a[mid] < x)
            low = mid + 1;
        else if (a[mid] > x)
            high = mid - 1;
        else
            return mid;
    }
    return -1;
}
Binary Search – Analysis
• For an unsuccessful search:
– The number of iterations in the loop is log2n + 1
➔ O(log2n)
• For a successful search:
– Best-Case: The number of iterations is 1. ➔ O(1)
– Worst-Case: The number of iterations is log2n +1 ➔ O(log2n)
– Average-Case: The avg. # of iterations < log2n ➔ O(log2n)

index:            0 1 2 3 4 5 6   (an array of size 7)
# of iterations:  3 2 3 1 3 2 3
The average # of iterations = 17/7 = 2.4285 < log27

For a proof of the general case see:


http://raider.mountunion.edu/~cindricbb/sp06/340/avgbinsearch.html
How much better is O(log2n)?

n O(log2n)
16 4
64 6
256 8
1024 10
16,384 14
131,072 17
262,144 18
524,288 19
1,048,576 20
1,073,741,824 30

