Algorithm Analysis and Design
Lecture 1 - 2
Algorithm
• An algorithm is a set of instructions to be followed to
solve a problem.
– There can be more than one solution (more than one
algorithm) to solve a given problem.
– An algorithm can be implemented using different
programming languages on different platforms.
• An algorithm must be correct: it should solve the
problem correctly.
– e.g., a sorting algorithm must produce correct output even if (1) the input is already
sorted, or (2) it contains repeated elements.
• Once we have a correct algorithm for a problem, we
have to determine the efficiency of that algorithm.
Algorithmic Performance
There are two aspects of algorithmic performance:
• Time
– Instructions take time.
– How fast does the algorithm perform?
– What affects its runtime?
• Space
– Data structures take space.
– What kind of data structures can be used?
– How does the choice of data structure affect the runtime?
➢ We will focus on time:
– How to estimate the time required for an algorithm
– How to reduce the time required
Analysis of Algorithms
• Analysis of Algorithms is the area of computer science that
provides tools to analyze the efficiency of different methods of
solutions.
• How do we compare the time efficiency of two algorithms that
solve the same problem?
Naïve Approach: implement these algorithms in a programming
language (e.g., C++) and run them to compare their time
requirements. Comparing the programs (instead of the algorithms)
has difficulties:
– How are the algorithms coded?
• Comparing running times means comparing the implementations.
• We should not compare implementations, because they are sensitive to programming
style that may cloud the issue of which algorithm is inherently more efficient.
– What computer should we use?
• We should compare the efficiency of the algorithms independently of a particular
computer.
– What data should the program use?
• Any analysis must be independent of specific data.
Analysis of Algorithms
• When we analyze algorithms, we should employ
mathematical techniques that analyze algorithms
independently of specific implementations,
computers, or data.
• To analyze algorithms:
– First, we count the number of significant
operations in a particular solution to assess its
efficiency.
– Then, we express the efficiency of algorithms using
growth functions.
The Execution Time of Algorithms
• Each operation in an algorithm (or a program) has a cost.
➔ Each operation takes a certain amount of time.
count = count + 1;
➔ takes a certain amount of time, and that time is constant.
A sequence of operations:
count = count + 1; ➔ Cost: c1
sum = sum + count; ➔ Cost: c2
➔ Total Cost = c1 + c2
The Execution Time of Algorithms (cont.)
Example: Simple If-Statement
                     Cost   Times
if (n < 0)           c1     1
  absval = -n;       c2     1
else
  absval = n;        c3     1
Total Cost <= c1 + max(c2, c3)
The Execution Time of Algorithms (cont.)
Example: Simple Loop
                     Cost   Times
i = 1;               c1     1
sum = 0;             c2     1
while (i <= n) {     c3     n+1
  i = i + 1;         c4     n
  sum = sum + i;     c5     n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
➔ The time required for this algorithm is proportional to n
The Execution Time of Algorithms (cont.)
Example: Nested Loop
                     Cost   Times
i = 1;               c1     1
sum = 0;             c2     1
while (i <= n) {     c3     n+1
  j = 1;             c4     n
  while (j <= n) {   c5     n*(n+1)
    sum = sum + i;   c6     n*n
    j = j + 1;       c7     n*n
  }
  i = i + 1;         c8     n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
➔ The time required for this algorithm is proportional to n²
General Rules for Estimation
• Loops: The running time of a loop is at most the running time
of the statements inside that loop times the number of
iterations.
• Nested Loops: The running time of a nested loop containing a
statement in the innermost loop is the running time of the statement
multiplied by the product of the sizes of all the loops.
• Consecutive Statements: Just add the running times of those
consecutive statements.
• If/Else: Never more than the running time of the test plus the
larger of the running times of S1 and S2 (see the C++ sketch below).
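Below is a short C++ sketch of these rules in action (our own illustrative example; the function and variable names are made up, not from the slides):

int rulesDemo(int n) {
    int sum = 0;                      // consecutive statements: add the costs ➔ O(1)
    for (int i = 0; i < n; i++)
        sum += i;                     // loop: O(1) body * n iterations ➔ O(n)
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            sum += i * j;             // nested loops: O(1) body * n * n ➔ O(n*n)
    if (sum % 2 == 0)
        sum++;                        // if/else: test + larger branch ➔ O(n)
    else
        for (int i = 0; i < n; i++)
            sum--;
    return sum;                       // total: O(1) + O(n) + O(n*n) + O(n) = O(n²)
}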
Algorithm Growth Rates
• We measure an algorithm’s time requirement as a function of the
problem size.
– Problem size depends on the application: e.g., the number of elements in a list for a
sorting algorithm, or the number of users for a social network search.
• So, for instance, we say that (if the problem size is n)
– Algorithm A requires 5*n² time units to solve a problem of size n.
– Algorithm B requires 7*n time units to solve a problem of size n.
• The most important thing to learn is how quickly the algorithm’s
time requirement grows as a function of the problem size.
– Algorithm A requires time proportional to n².
– Algorithm B requires time proportional to n.
• An algorithm’s proportional time requirement is known as its growth
rate.
• We can compare the efficiency of two algorithms by comparing
their growth rates.
Common Growth Rates
Function   Growth Rate Name
c          Constant
log N      Logarithmic
log² N     Log-squared
N          Linear
N log N    Log-linear
N²         Quadratic
N³         Cubic
2^N        Exponential
Running Times for Small Inputs
[figure: running time vs. input size (x = n) for the common growth-rate functions]
Running Times for Large Inputs
[figure: running time vs. input size (x = n) for the common growth-rate functions]
Popular Notations in Complexity Analysis of
Algorithms
1. Big-O Notation (Mostly used)
• Big-O notation describes an algorithm’s worst-case time complexity:
O(expression) is the set of functions that grow no faster than the
expression. It gives the maximum amount of time an algorithm can
require over all input values.
2. Omega Notation (Very Rarely used)
• Omega notation describes the best case of an algorithm’s time complexity:
Ω(expression) is the set of functions that grow at least as fast as the
expression. It gives the minimum amount of time an algorithm
requires over all input values.
3. Theta Notation (Rarely used)
• Theta notation is used when a function lies in both O(expression) and
Ω(expression): the set of functions that grow at the same rate as the
expression. This is how we describe an algorithm’s average-case time
complexity.
Popular Notations in Complexity Analysis of
Algorithms
1. Worst Case Analysis (Mostly used)
• In the worst-case analysis, we calculate the upper bound
on the running time of an algorithm. We must know the
case that causes a maximum number of operations to be
executed.
• Example: For Linear Search, the worst case happens
when the element to be searched (x) is not present in the
array. When x is not present, the search() function
compares it with all the elements of arr[] one by one.
Therefore, the worst-case time complexity of the linear
search would be O(n).
Popular Notations in Complexity Analysis of
Algorithms
2. Best Case Analysis (Very Rarely used)
• In the best-case analysis, we calculate the lower bound
on the running time of an algorithm. We must know the
case that causes a minimum number of operations to be
executed.
• Example: In the linear search problem, the best case
occurs when x is present at the first location. The
number of operations in the best case is constant (not
dependent on n). So time complexity in the best case
would be Ω(1).
Popular Notations in Complexity Analysis of
Algorithms
3. Average Case Analysis (Rarely used)
• In average case analysis, we take all possible inputs and
calculate the computing time for all of the inputs. “Sum
all the calculated values and divide the sum by the total number
of inputs”. We must know (or predict) the distribution of
cases.
• Example: For the linear search problem, let us assume that all
cases are uniformly distributed (including the case of x not being
present in the array). So we sum all the cases and divide the sum
by (n+1). Following is the value of average-case time complexity.
Average case = ( Σ_{i=1}^{n+1} θ(i) ) / (n+1) = θ( (n+1)*(n+2)/2 ) / (n+1) = θ(n)
Order-of-Magnitude Analysis and Big O
Notation
• Calculates the upper bound on the running time of an algorithm.
• If Algorithm A requires time proportional to g(n), Algorithm A is
said to be order g(n), and it is denoted as O(g(n)).
• The function g(n) is called the algorithm’s growth-rate
function.
• Since the capital O is used in the notation, this notation is called
the Big O notation.
• If Algorithm A requires time proportional to n², it is O(n²).
• If Algorithm A requires time proportional to n, it is O(n).
Definition of the Order of an Algorithm
Definition:
Algorithm A is order g(n) – denoted as O(g(n)) –
if constants k and n₀ exist such that A requires
no more than k*g(n) time units to solve a problem
of size n ≥ n₀. ➔ f(n) ≤ k*g(n) for all n ≥ n₀
Definition of the Order of an Algorithm
Calculates the lower bound on the running time of an algorithm.
Definition:
Algorithm A is omega g(n) – denoted as Ω(g(n)) –
if constants k and n₀ exist such that A requires
at least k*g(n) time units to solve a problem
of size n ≥ n₀. ➔ f(n) ≥ k*g(n) for all n ≥ n₀
Definition of the Order of an Algorithm
Definition:
Algorithm A is theta g(n) – denoted as Θ(g(n)) –
if constants k1, k2 and n₀ exist such that
k1*g(n) ≤ f(n) ≤ k2*g(n) for all n ≥ n₀
(Drop the equalities and you get the little-o and little-omega (ω) notations, respectively; e.g., f(n) is o(g(n)) if f(n) < k*g(n).)
Order of an Algorithm
• Show 2^x + 17 is O(2^x)
• 2^x + 17 ≤ k*2^x: for x ≥ 5 we have 17 ≤ 2^x, so 2^x + 17 ≤ 2*2^x
• Hence k = 2 and n₀ = 5
Order of an Algorithm
• Show 2^x + 17 is O(3^x)
• 2^x + 17 ≤ k*3^x
• Easy to see that the RHS grows faster than the LHS over time ➔ k = 1
• However, when x is small, the 17 still dominates ➔ skip over some
smaller values of x; at x = 3, 2³ + 17 = 25 ≤ 27 = 3³
• Hence k = 1 and n₀ = 3
Order of an Algorithm
• Show f(n) = 7n² + 1 is Θ(n²)
• For n ≥ 1: 7n² ≤ 7n² + 1 ≤ 7n² + n² = 8n²
• Hence k1 = 7, k2 = 8 and n₀ = 1
Growth-Rate Functions
O(1)        Time requirement is constant; it is independent of the problem’s size.
O(log₂n)    Time requirement for a logarithmic algorithm increases slowly
as the problem size increases.
O(n)        Time requirement for a linear algorithm increases directly with the size
of the problem.
O(n*log₂n)  Time requirement for an n*log₂n algorithm increases more rapidly than
that of a linear algorithm.
O(n²)       Time requirement for a quadratic algorithm increases rapidly with the
size of the problem.
O(n³)       Time requirement for a cubic algorithm increases more rapidly with the
size of the problem than the time requirement for a quadratic algorithm.
O(2^n)      As the size of the problem increases, the time requirement for an
exponential algorithm increases too rapidly to be practical.
Growth-Rate Functions
• If an algorithm takes 1 second to run with problem size 8,
what is the time requirement (approximately) for that algorithm
with problem size 16?
• If its order is:
O(1)        ➔ T(n) = 1 second
O(log₂n)    ➔ T(n) = (1*log₂16) / log₂8 = 4/3 seconds
O(n)        ➔ T(n) = (1*16) / 8 = 2 seconds
O(n*log₂n)  ➔ T(n) = (1*16*log₂16) / (8*log₂8) = 8/3 seconds
O(n²)       ➔ T(n) = (1*16²) / 8² = 4 seconds
O(n³)       ➔ T(n) = (1*16³) / 8³ = 8 seconds
O(2^n)      ➔ T(n) = (1*2^16) / 2^8 = 2^8 seconds = 256 seconds
Properties of Growth-Rate Functions
1. We can ignore low-order terms in an algorithm’s growth-rate
function.
– If an algorithm is O(n³+4n²+3n), it is also O(n³).
– We use only the highest-order term as the algorithm’s growth-rate function.
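As a quick check against the Big-O definition (our own worked numbers, not from the slides): for n ≥ 1,
n³ + 4n² + 3n ≤ n³ + 4n³ + 3n³ = 8n³,
so k = 8 and n₀ = 1 witness that n³ + 4n² + 3n is O(n³).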
How to Analyse Loops for Complexity Analysis
of Algorithms
The general steps to analyze loops for complexity analysis (a worked sketch follows the list):
1. Determine the number of iterations of the loop. This is usually done by
analyzing the loop control variables and the loop termination condition.
2. Determine the number of operations performed in each iteration of the
loop. This can include both arithmetic operations and data access
operations, such as array accesses or memory accesses.
3. Express the total number of operations performed by the loop as a
function of the input size. This may involve using mathematical
expressions or finding a closed-form expression for the number of
operations performed by the loop.
4. Determine the order of growth of the expression for the number of
operations performed by the loop. This can be done by using
techniques such as big O notation or by finding the dominant term and
ignoring lower-order terms.
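As a worked sketch of these four steps (our own example; the function name is illustrative):

long sumArray(const int a[], int n) {
    // Step 1: i runs 0, 1, ..., n-1 ➔ n iterations.
    // Step 2: each iteration does one array access and one addition ➔ O(1) work.
    long total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    // Step 3: total operations ≈ c*n for some constant c.
    // Step 4: the dominant term is n ➔ O(n).
    return total;
}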
How to Analyse Loops for Complexity Analysis
of Algorithms
Constant Time Complexity O(1):
The time complexity of a function (or set of statements) is
considered as O(1) if it doesn’t contain a loop, recursion, and call to
any other non-constant time function.
i.e. set of non-recursive and non-loop statements
In computer science, O(1) refers to constant time complexity, which
means that the running time of an algorithm remains constant and
does not depend on the size of the input. This means that the
execution time of an O(1) algorithm will always take the same
amount of time regardless of the input size.
An example of an O(1) algorithm is accessing an element in an array
using an index.
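A minimal sketch of the array-access example mentioned above (our own code; assumes a non-empty array):

// One indexed access costs the same no matter how large the array is ➔ O(1).
int firstElement(const int a[]) {
    return a[0];
}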
How to Analyse Loops for Complexity Analysis
of Algorithms
Constant Time Complexity O(1):
// Here c is a constant
for (int i = 1; i <= c; i++) {
    // some O(1) expressions
}

# Here c is a constant
for i in range(1, c+1):
    # some O(1) expressions
How to Analyse Loops for Complexity Analysis
of Algorithms
Linear Time Complexity O(n):
The time complexity of a loop is considered O(n) if the loop
variables are incremented/decremented by a constant amount.
For example, the loops on the next slide have O(n) time complexity.
Linear time complexity, denoted as O(n), means that the running
time of an algorithm grows proportionally to the size of the input:
in an O(n) algorithm, the running time increases linearly with the
size of the input.
For example, searching for an element in an unsorted array, or
iterating through an array and performing a constant amount of
work for each element, would be an O(n) operation. In simple words,
for an input of size n, the algorithm takes n steps to complete the
operation.
How to Analyse Loops for Complexity Analysis
of Algorithms
Linear Time Complexity O(n):
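A minimal sketch of such loops (our own standard examples, written in the same style as the other slides):

// n is the input size, c is a positive constant step
void linearDemo(int n, int c) {
    for (int i = 1; i <= n; i += c) {   // incremented by a constant ➔ O(n)
        // some O(1) expressions
    }
    for (int i = n; i > 0; i -= c) {    // decremented by a constant ➔ O(n)
        // some O(1) expressions
    }
}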
How to Analyse Loops for Complexity Analysis
of Algorithms
Polynomial Time Complexity O(n^c):
For nested loops, the time complexity is proportional to the number of
times the innermost statement is executed; an algorithm whose running
time is directly proportional to the square of the input size is O(n²).
For example, the sample loops on the next slide have O(n²) time
complexity.
Quadratic time complexity, denoted as O(n²), refers to an algorithm
whose running time increases proportionally to the square of the size of the
input: for an input of size n, the algorithm takes n * n steps to
complete the operation.
An example of an O(n²) algorithm is a nested loop that iterates over the
entire input for each element, performing a constant amount of work for
each iteration. This results in a total of n * n iterations, making the
running time quadratic in the size of the input.
How to Analyse Loops for Complexity Analysis
of Algorithms
Polynomial Time Complexity O(n^c):
for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some O(1) expressions
    }
}

for (int i = n; i > 0; i -= c) {
    for (int j = i + 1; j <= n; j += c) {
        // some O(1) expressions
    }
}

for i in range(1, n+1, c):
    for j in range(1, n+1, c):
        # some O(1) expressions

for i in range(n, 0, -c):
    for j in range(i+1, n+1, c):
        # some O(1) expressions

Example: Selection Sort and Insertion Sort have O(n²) time complexity.
How to Analyse Loops for Complexity Analysis
of Algorithms
Logarithmic Time Complexity O(log n):
The time complexity of a loop is considered O(log n) if the loop
variables are divided/multiplied by a constant amount in each iteration.
How to Analyse Loops for Complexity Analysis
of Algorithms
Logarithmic Time Complexity O(log n):
for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}

for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}

i = 1
while i <= n:
    # some O(1) expressions
    i = i * c

i = n
while i > 0:
    # some O(1) expressions
    i = i // c

// Recursive function (note: n decreases by a constant here,
// so this one is O(n), not O(log n))
void recurse(int n)
{
    if (n == 0)
        return;
    else {
        // some O(1) expressions
    }
    recurse(n - 1);
}

# Recursive function (O(n): n decreases by a constant)
def recurse(n):
    if n == 0:
        return
    else:
        # some O(1) expressions
        recurse(n - 1)
How to Analyse Loops for Complexity Analysis
of Algorithms
Double-Logarithmic Time Complexity O(log log n):
The time complexity of a loop is considered O(log log n) if the loop
variables are increased/decreased exponentially by a constant amount in each iteration.
How to Analyse Loops for Complexity Analysis
of Algorithms
Double-Logarithmic Time Complexity O(log log n):
// Here c is a constant greater than 1
for (int i = 2; i <= n; i = pow(i, c)) {
    // some O(1) expressions
}

// Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 1; i = fun(i)) {
    // some O(1) expressions
}

# Here c is a constant greater than 1
i = 2
while i <= n:
    # some O(1) expressions
    i = i**c

# Here fun is sqrt or cuberoot or any other constant root
i = n
while i > 1:
    # some O(1) expressions
    i = fun(i)
How to Analyse Loops for Complexity Analysis
of Algorithms
Combine the time complexities of consecutive loops
When loops run one after another, we add their time complexities: two
consecutive loops of n and m iterations take O(n) + O(m) = O(n + m) time.
How to Analyse Loops for Complexity Analysis
of Algorithms
Combine the time complexities of nested loops
Here, the outer loop performs n iterations, and the inner loop
performs m iterations for each iteration of the outer loop. So, the
total number of iterations performed by the inner loop is n * m, and
the total time complexity is O(n * m).
for i in range(n):
    for j in range(m):
        # some constant time operation
How to Analyse Loops for Complexity Analysis
of Algorithms
Combine the time complexities of nested loops
Here, the outer loop performs n iterations, and the inner loop
performs i iterations for each iteration of the outer loop, where i is
the current iteration count of the outer loop. The total number of
iterations performed by the inner loop can be calculated by summing
the number of iterations performed in each iteration of the outer
loop, which is given by the sum of i from i = 1 to n, which is
equal to n * (n + 1) / 2. Hence, the total time complexity is O(n²).
for i in range(n):
    for j in range(i):
        # some constant time operation
How to Analyse Loops for Complexity Analysis
of Algorithms
Calculate time complexity when there are many if, else
statements inside loops?
As with the if/else rule earlier: for a worst-case bound, assume each test
takes the branch with the larger running time (the cost of the test plus the
larger of the two branches). A short sketch follows.
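A minimal C++ sketch of this rule (our own illustrative example; the array and variable names are made up):

void branchDemo(const int a[], int n) {
    long work = 0;
    for (int i = 0; i < n; i++) {
        if (a[i] % 2 == 0)
            work += a[i];          // cheap branch: O(1)
        else
            for (int j = 0; j < n; j++)
                work += j;         // expensive branch: O(n)
    }
    // Worst case: every test takes the expensive branch ➔ n * O(n) = O(n²)
}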
Growth-Rate Functions – Example 1
                     Cost   Times
i = 1;               c1     1
sum = 0;             c2     1
while (i <= n) {     c3     n+1
  i = i + 1;         c4     n
  sum = sum + i;     c5     n
}
T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
     = (c3+c4+c5)*n + (c1+c2+c3)
➔ So, the growth-rate function for this algorithm is O(n)
Growth-Rate Functions – Example 2
                     Cost   Times
i = 1;               c1     1
sum = 0;             c2     1
while (i <= n) {     c3     n+1
  j = 1;             c4     n
  while (j <= n) {   c5     n*(n+1)
    sum = sum + i;   c6     n*n
    j = j + 1;       c7     n*n
  }
  i = i + 1;         c8     n
}
T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
     = (c5+c6+c7)*n² + (c3+c4+c5+c8)*n + (c1+c2+c3)
     = a*n² + b*n + c
➔ So, the growth-rate function for this algorithm is O(n²)
Growth-Rate Functions – Example 3
for (i=1; i<=n; i++)
  for (j=1; j<=i; j++)
    for (k=1; k<=j; k++)
      x = x + 1;
T(n) = Σ_{i=1}^{n} Σ_{j=1}^{i} Σ_{k=1}^{j} 1 = n*(n+1)*(n+2)/6 ➔ O(n³)
Examples with their complexity analysis:
1. Linear search algorithm:
JAVA CODE
static int search(int arr[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++) {
        if (arr[i] == x) {
            return i;
        }
    }
    return -1;
}
PYTHON CODE
def search(arr, x):
    for index, value in enumerate(arr):
        if value == x:
            return index
    return -1
Examples with their complexity analysis: Cont.
1. Linear search algorithm:
Time Complexity Analysis (in Big-O notation):
Best Case: O(1). This occurs when the element to be searched
is at the first index of the given list; the number of
comparisons in this case is 1.
Average Case: O(n). On average (for example, when the element
is around the middle of the list), about n/2 comparisons are made.
Worst Case: O(n). This occurs when:
– The element to be searched is at the last index
– The element to be searched is not present in the list
Examples with their complexity analysis:
2. In this example, we take an array of length (n) and deal
with the following cases:
• If (n) is even, the output will be 0
• If (n) is odd, the output will be the sum of the elements of
the array.
JAVA CODE
static int getsum(int arr[], int n)
{
    if (n % 2 == 0) // if (n) is even
    {
        return 0;
    }
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += arr[i];
    }
    return sum; // if (n) is odd
}
PYTHON CODE
def getsum(arr, n):
    if n % 2 == 0:  # if (n) is even
        return 0
    Sum = 0
    for i in range(n):
        Sum += arr[i]
    return Sum  # if (n) is odd
Examples with their complexity analysis: Cont.
2. Previous Example:
Time Complexity Analysis:
Best Case: The order of growth will be constant, because in the best
case we assume that (n) is even.
Average Case: In this case, we assume that even and odd are
equally likely, therefore the order of growth will be linear.
Worst Case: The order of growth will be linear, because in this
case we assume that (n) is always odd.
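To make the average case concrete (our own arithmetic, using the uniform even/odd assumption stated above):
T_avg(n) = (1/2)*Θ(1) + (1/2)*Θ(n) = Θ(n), i.e., linear.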
Notes
• The worst case analysis of an algorithm provides an upper bound on the
running time of the algorithm for any input size.
• The average case analysis of an algorithm provides an estimate of the running
time of the algorithm for a random input.
• The best case analysis of an algorithm provides a lower bound on the running
time of the algorithm for any input size.
• The big O notation is commonly used to express the worst case running time
of an algorithm.
• Different algorithms may have different best, average, and worst case
running times.
Notes
Worst, Average, and Best Case Analysis of Algorithms is a technique
used to analyze the performance of algorithms under different
conditions.
Advantages:
• This technique allows developers to understand the performance
of algorithms under different scenarios, which can help in making
informed decisions about which algorithm to use for a specific
task.
• Worst case analysis provides a guarantee on the upper bound of
the running time of an algorithm, which can help in designing
reliable and efficient algorithms.
• Average case analysis provides a more realistic estimate of the
running time of an algorithm, which can be useful in real-world
scenarios.
Notes
Disadvantages:
• This technique can be time-consuming and requires a good
understanding of the algorithm being analyzed.
• Worst case analysis does not provide any information about the
typical running time of an algorithm, which can be a
disadvantage in real-world scenarios.
• Average case analysis requires knowledge of the probability
distribution of input data, which may not always be available.
Algorithms Cheat Sheet:
Algorithm        Best Case    Average Case   Worst Case
Selection Sort   O(n²)        O(n²)          O(n²)
Bubble Sort      O(n)         O(n²)          O(n²)
Insertion Sort   O(n)         O(n²)          O(n²)
Tree Sort        O(n log n)   O(n log n)     O(n²)
Radix Sort       O(dn)        O(dn)          O(dn)
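Growth-Rate Functions – Hanoi Towers
The recurrence below describes the standard recursive Towers of Hanoi algorithm; a minimal sketch of that solution (the function and parameter names are ours):

// Move n disks from peg 'from' to peg 'to', using 'aux' as a spare.
void hanoi(int n, char from, char to, char aux) {
    if (n == 0) return;            // base case ➔ constant cost
    hanoi(n - 1, from, aux, to);   // T(n-1)
    // move disk n from 'from' to 'to' ➔ constant cost
    hanoi(n - 1, aux, to, from);   // T(n-1)
}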
when n = 0:
T(0) = c1
when n > 0:
T(n) = c1 + c2 + T(n-1) + c3 + c4 + T(n-1)
     = 2*T(n-1) + (c1+c2+c3+c4)
T(n) = 2*T(n-1) + c
Recurrence equation for the growth-rate function of the Hanoi Towers algorithm.
• Now, we have to solve this recurrence equation to find the growth-rate function
of the Hanoi Towers algorithm.
Growth-Rate Functions – Hanoi Towers (cont.)
• There are many methods to solve recurrence equations, but we will use a simple
method known as repeated substitutions.
T(n) = 2*T(n-1) + c
     = 2 * (2*T(n-2) + c) + c
     = 2 * (2 * (2*T(n-3) + c) + c) + c
     = 2³*T(n-3) + (2² + 2¹ + 2⁰)*c      (assuming n > 2)
when the substitution is repeated i-1 times:
     = 2^i * T(n-i) + (2^(i-1) + ... + 2¹ + 2⁰)*c
when i = n:
     = 2^n * T(0) + (2^(n-1) + ... + 2¹ + 2⁰)*c
     = 2^n * c1 + (Σ_{i=0}^{n-1} 2^i)*c
     = 2^n * c1 + (2^n − 1)*c ➔ T(n) = O(2^n)
(Geometric sum: if a = 1 + 2 + 4 + ... + 2^(n-1), then 2a = 2 + 4 + ... + 2^n,
so 2a − a = 2^n − 1.)
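The next recurrence describes a recursive factorial function; a minimal sketch consistent with the trace that follows (our own reconstruction):

int fact(int N) {
    if (N == 0)                  // base case
        return 1;
    else
        return N * fact(N - 1);  // one multiplication + recursive call ➔ T(n) = T(n-1) + c
}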
T(n) = T(n-1) + c
     = (T(n-2) + c) + c
     = (T(n-3) + c) + c + c
     = T(n-i) + i*c
when i = n ➔ T(0) + n*c ➔ T(n) = O(n)
Tracing the call fact(3)
After the original call, the call stack holds one frame:
  N=3: if (N==0) false ➔ return (3*fact(2))
After the 1st recursive call:
  N=2: if (N==0) false ➔ return (2*fact(1))
  N=3: if (N==0) false ➔ return (3*fact(2))
After the 2nd recursive call:
  N=1: if (N==0) false ➔ return (1*fact(0))
  N=2: if (N==0) false ➔ return (2*fact(1))
  N=3: if (N==0) false ➔ return (3*fact(2))
After the 3rd recursive call:
  N=0: if (N==0) true ➔ return (1)
  N=1: if (N==0) false ➔ return (1*fact(0))
  N=2: if (N==0) false ➔ return (2*fact(1))
  N=3: if (N==0) false ➔ return (3*fact(2))
Tracing the call fact(3) (cont.)
After the return from the 3rd call:
  N=1: return (1*1)
  N=2: return (2*fact(1))
  N=3: return (3*fact(2))
After the return from the 2nd call:
  N=2: return (2*1)
  N=3: return (3*fact(2))
After the return from the 1st call:
  N=3: return (3*2)
➔ The original call returns 6.
More Recursive Algorithms
bool isPalindrome(string s) {
    if (s.length() <= 1)
        return true;
    return (s[0] == s[s.length()-1]) &&
           isPalindrome(s.substr(1, s.length()-2));
}
T(n) = T(n-2) + c
     = (T(n-4) + c) + c
     = (T(n-6) + c) + c + c
     = T(n-i) + (i/2)*c
when i = n ➔ T(0) + (n/2)*c ➔ T(n) = O(n)
More Recursive Algorithms
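The analysis below concerns sequential (linear) search; a minimal recursive sketch in the spirit of this section (our own code):

// Returns the index of x in a[lo..n-1], or -1 if x is absent.
int seqSearch(const int a[], int lo, int n, int x) {
    if (lo >= n) return -1;                // unsuccessful search: n comparisons made
    if (a[lo] == x) return lo;             // successful search: lo+1 comparisons
    return seqSearch(a, lo + 1, n, x);
}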
Successful Search:
Best-Case: item is in the first location of the array ➔ O(1)
Worst-Case: item is in the last location of the array ➔ O(n)
Average-Case: the number of key comparisons is 1, 2, ..., n
( Σ_{i=1}^{n} i ) / n = ( (n² + n)/2 ) / n = (n + 1)/2 ➔ O(n)
Binary Search
We can do binary search if the array is sorted:
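A minimal sketch of the algorithm (iterative version; our own code in the style of the earlier examples):

// Returns the index of x in the sorted array a[0..n-1], or -1 if absent.
// The search range halves on every step ➔ O(log₂n).
int binarySearch(const int a[], int n, int x) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   // written this way to avoid overflow
        if (a[mid] == x) return mid;
        else if (a[mid] < x) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}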
n              O(log₂n)
16             4
64             6
256            8
1,024          10
16,384         14
131,072        17
262,144        18
524,288        19
1,048,576      20
1,073,741,824  30