
Chapter 1

Asymptotic Notation

1.1 Introduction
The running time of an algorithm may depend on many factors. On one machine it may run
in 2 s whereas on another machine it may take 3 s, even though the algorithm was implemented
the same way; the difference comes down to the machines' capabilities. But it is totally
impractical to account for every type and kind of machine's capability when calculating the
running time of an algorithm, and calculating the running time of every single instruction is
not an easy task either. For example, suppose the running time of an algorithm is T(n)
(where n is the number of inputs):

T(n) = 100n² + 323n + 89

It is clearly difficult to calculate the running time of an algorithm so precisely. What this
function does tell us is that the algorithm runs in "quadratic time", meaning that as n
increases, the running time grows proportionally to n².

To make all this easy and understandable, asymptotic notation was introduced. It ignores
all the hardware and language details and concentrates only on the actual algorithm. For
example, suppose we have an n-element array arr and we have to search for a number t in
the array arr. To find the number we have to look at every element in the array and compare
it to the target t until we find it. Written as pseudo code, it may look something like this:
search (arr, t):
| for i in arr:
| | if i == t:
| | | return index(i)
| return -1
(Note: i is not an index; it is an actual element of arr. On each iteration, i is assigned
the next element of arr. In pseudo code we won't follow any specific language's syntax; it
will be kept as simple as possible, so that anyone can implement it in any programming
language.)

Explanation

If we find t, we simply return the index where we found it. If we don't find t, we simply
return −1, which is not a valid index.
Notice the indentation of the code. Indentation tells you which piece of code belongs to
which code block, and the vertical bars before each line indicate the block that line belongs
to: the if block belongs to the for loop's block, while the last return statement belongs to
the search function's block, meaning it does not belong to the for loop. From here on, we
will not explain which line belongs to which code block, and the vertical bars will not be
shown either, so please try to follow the indentation level on your own.
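
For concreteness, here is the same search written as a minimal Python sketch (the Python
form, the enumerate-based loop and the usage line are our own illustration; following the
convention above, it reports 1-based indices):

    def search(arr, t):
        # Scan the elements of arr in order and return the 1-based
        # index of the first element that equals the target t.
        for pos, elem in enumerate(arr, start=1):
            if elem == t:
                return pos
        return -1  # t was not found; -1 is not a valid index

    print(search([1, 2, 3, 4, 5], 3))  # prints 3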

Calculation of Running Time


Suppose we have a 5-element array, arr = {1, 2, 3, 4, 5}, and our target is t = 5. To find
t, we have to search the whole array, meaning we will have to search 5 times, because 5 is
at index 5 (not 4; we are not using C, and we will consider array indices to start from 1).
If we consider each iteration to take 1 unit of time, then this search operation takes 5
units of time. If we stretch this further and think that the array arr contains n elements
and the target t is its last element, the search will take n units of time. Meaning, to
search for an element in an n-element array, its running time T(n) is at most n, or O(n).
(worst case)

Now suppose 5 was not at the 5th index but rather at the 1st index. Then the search
operation takes only 1 unit of time, meaning a single comparison operation is enough to
find the element, so the search operation's running time is just 1. Again we can say that,
to search for an element t in an array of n elements, its running time T(n) is at least 1,
i.e., Ω(1) (constant time). (best case)

Now again, suppose 5 was in the middle of the array, i.e., at the (5 + 1)/2 th = 3rd
index; then the search operation takes 3 units of time. In other words, to search for an
element t in an array of n elements, its running time satisfies 1 ≤ T(n) ≤ n. (average case)
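
To see these unit counts concretely, here is a small Python sketch (our own illustration,
not part of the pseudo code above) that counts the comparisons the search performs:

    def search_count(arr, t):
        # Count how many element comparisons happen before t is found.
        comparisons = 0
        for elem in arr:
            comparisons += 1
            if elem == t:
                break
        return comparisons

    arr = [1, 2, 3, 4, 5]
    print(search_count(arr, 5))  # worst case: 5 comparisons
    print(search_count(arr, 1))  # best case: 1 comparison
    print(search_count(arr, 3))  # middle element: 3 comparisons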

From the above example and explanation, we should now have a better understanding of the
running time of an algorithm. There are three sorts of cases to consider when calculating
the running time of an algorithm:

• best case

• average case

• worst case

For these three cases, we have three types of asymptotic notation,

• Big Omega Notation Ω(n)

• Big Theta Notation Θ(n)

• Big Oh Notation O(n)

1.2 Big Oh Notation O(n)


Simple definition:

f(n) = O(g(n)) means there exist positive constants c and n₀ for which
f(n) ≤ c·g(n) for all large enough n (n ≥ n₀).

Set definition:

O(g(n)) = {f(n) : there exist positive constants c and n₀ such that
0 ≤ f(n) ≤ c·g(n) for all large enough n (n ≥ n₀)}

In words, a function f(n) belongs to the set O(g(n)) if there exist a positive constant c
(c > 0) and a threshold n₀ such that f(n) ≤ c·g(n) for all sufficiently large n (n ≥ n₀).
As O(g(n)) is a set, we can write f(n) ∈ O(g(n)).

To understand this better, keep in mind that g(n) is merely a function of n. For example,
g(n) = 23n + 0.2 log₂ n + 23 is a function of n, and for each value of n there is a distinct
value of g(n). Now think of O(g(n)) as a set whose members are functions, not numbers: it
contains every function f(n) that can be kept below c·g(n), for some fixed positive constant
c, once n is large enough. For instance, if g(n) = n and f(n) = 2n + 3, then f(n) ≤ 3·g(n)
for every n ≥ 3 (here the positive constant c is 3 and the threshold n₀ is 3), so f(n) is a
member of the set and, mathematically speaking, we can say f(n) ∈ O(g(n)). But we will see
that everywhere the "belongs to" symbol (∈) is written as "=", i.e., written as
f(n) = O(g(n)).

Now, let's discuss the significance of saying "sufficiently large n (n ≥ n₀)". There will
always be some range of input sizes, from 0 up to n₀, over which the algorithm's running
times are almost the same, meaning the differences between them are very small. But the
running times start to differ greatly once the input size grows larger and larger than n₀.
Asymptotic notation is meant for practical usage, and in practice we won't work with only
100 or 1000 inputs; we will be working with tens of thousands of inputs. So asymptotic
notation is applied to input sizes greater than or equal to n₀.

In asymptotic notation we will always ignore the constants, coefficients and lower-degree
terms to get a very concise and easy-to-understand notation. For example, 1000n², 0.0001n²
and 1000n² + 100n + 20 are all considered O(n²). It may seem to make little sense that
1000n² and 0.0001n² are the same in asymptotic notation. But to make the notation
universally applicable to all algorithms and functions, and to ignore the details of machine
and language specifications, those two functions are given the same asymptotic notation.

Worked Out Exercise 1


Show that f(n) = 3n² − 100n + 6 = O(n²).
Proof:
f(n) = 3n² − 100n + 6 = O(n²) if and only if there exist constants c and n₀ for which
0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀.
Let g(n) = n² and c = 3, so c·g(n) = 3·g(n) = 3n², which is always > 3n² − 100n + 6
for any n ≥ 1. And for any n ≥ 34, f(n) ≥ 0. So finally we can say that for the constants
c = 3 and n₀ = 34, 0 ≤ f(n) ≤ c·g(n) whenever n ≥ n₀. Meaning f(n) = O(n²).
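
As a quick numeric sanity check of this proof (a finite-range sketch in Python, not a
substitute for the algebra), we can test the inequality directly:

    def f(n):
        return 3 * n**2 - 100 * n + 6

    # Check 0 <= f(n) <= c*g(n) with c = 3 and g(n) = n**2, for n0 = 34.
    assert all(0 <= f(n) <= 3 * n**2 for n in range(34, 10000))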

O(n) gives us the upper bound of a function. In the informal sense used here, it describes
the worst case running time complexity of an algorithm. From here on, we won't discuss the
other notations in as much detail, because you already have the idea.

1.3 Big Omega Notation Ω(n)


Simple definition:

f(n) = Ω(g(n)) means there exist positive constants c and n₀ for which
f(n) ≥ c·g(n) for all large enough n (n ≥ n₀).

Set definition:

Ω(g(n)) = {f(n) : there exist positive constants c and n₀ such that
0 ≤ c·g(n) ≤ f(n) for all large enough n (n ≥ n₀)}

In words, a function f(n) belongs to the set Ω(g(n)) if there exist a positive constant c
(c > 0) and a threshold n₀ such that c·g(n) ≤ f(n) for all sufficiently large n (n ≥ n₀).
As Ω(g(n)) is a set, we can write f(n) ∈ Ω(g(n)).

Ω(n) gives the lower bound of a function. In the informal sense used here, it describes
the best case running time complexity of an algorithm.

Worked Out Exercise 2


Show that f(n) = 3n² − 100n + 6 = Ω(n²).
Proof:
f(n) = 3n² − 100n + 6 = Ω(n²) if and only if there exist constants c and n₀ for which
0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀.
Let g(n) = n² and c = 1, so c·g(n) = 1·g(n) = n². Now n² ≤ 3n² − 100n + 6 holds
exactly when 2n² − 100n + 6 ≥ 0, which is true for any n ≥ 50. So finally we can say that
for the constants c = 1 and n₀ = 50, 0 ≤ c·g(n) ≤ f(n) whenever n ≥ n₀. Meaning
f(n) = Ω(n²).
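
The same style of finite numeric check (again, a sketch only) applies here:

    def f(n):
        return 3 * n**2 - 100 * n + 6

    # Check 0 <= c*g(n) <= f(n) with c = 1 and g(n) = n**2, for n0 = 50.
    assert all(0 <= n**2 <= f(n) for n in range(50, 10000))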

1.4 Big Theta Notation Θ(n)


Simple definition:

f(n) = Θ(g(n)) means there exist positive constants c₁, c₂ and n₀ for which
c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all large enough n (n ≥ n₀).

Set definition:

Θ(g(n)) = {f(n) : there exist positive constants c₁, c₂ and n₀ such that
0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all large enough n (n ≥ n₀)}

In words, a function f(n) belongs to the set Θ(g(n)) if there exist positive constants c₁
and c₂ (c₁ > 0, c₂ > 0) and a threshold n₀ such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all
sufficiently large n (n ≥ n₀). As Θ(g(n)) is a set, we can write f(n) ∈ Θ(g(n)).

Θ(n) gives a tight bound, both upper and lower, on a function. In the informal sense used
here, it describes the average case running time complexity of an algorithm.

Worked Out Exercise 3


Show that f(n) = 3n² − 100n + 6 = Θ(n²).
Proof 1:
From the previous worked out exercises we can see that f(n) = O(n²) and f(n) = Ω(n²).
Meaning f(n) = Θ(n²). (The proof finishes here.)
This works because O(n²) gives an upper bound for f(n) and Ω(n²) gives us the lower
bound, and for Θ(n²) we need both an upper bound and a lower bound. That's why we can
conclude that f(n) = Θ(n²).

We can also prove the above problem this way.

Proof 2:
f(n) = 3n² − 100n + 6 = Θ(n²) if and only if there exist constants c₁, c₂ and n₀ for
which 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀.
Let g(n) = n², c₁ = 1 and c₂ = 3. Then c₁·g(n) = 1·g(n) = n², which is
≤ 3n² − 100n + 6 for any n ≥ 50, and c₂·g(n) = 3·g(n) = 3n², which is always
> 3n² − 100n + 6 for any n ≥ 1. So finally we can say that for the constants c₁ = 1,
c₂ = 3 and n₀ = 50, 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) whenever n ≥ n₀. Meaning f(n) = Θ(n²).
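
Combining both inequalities, a finite numeric check of Proof 2 (a sketch only) looks like
this:

    def f(n):
        return 3 * n**2 - 100 * n + 6

    # Check 0 <= c1*g(n) <= f(n) <= c2*g(n) with c1 = 1, c2 = 3, for n0 = 50.
    assert all(0 <= n**2 <= f(n) <= 3 * n**2 for n in range(50, 10000))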

There is much more to discuss that we won't cover here, so please read books and explore
the web to gather more insight into asymptotic notation.

Exercises
1. Show that f(n) = 3n² − 100n + 6 = O(n³).

2. Show that f(n) = 3n² − 100n + 6 = Ω(n).
