
Asymptotic Notations

Paper Name: Discrete Structures


Lesson: Asymptotic Notations
Lesson Developer: Jyotsna Talreja
College/Department: AND College, University of Delhi

Institute of Lifelong learning, University of Delhi



Table of Contents

1. An Introduction to Asymptotic Notations
   1.1: Introduction
   1.2: What is an Algorithm?
   1.3: Asymptotic Analysis: Why do we need it?
        1.3.1 Asymptotic Analysis: Role in Algorithm Efficiency Analysis
        1.3.2 Asymptotic Analysis: Role in Algorithm Comparison
   1.4: Asymptotic Growth Rates
   1.5: A Simple Example: Analysis of Algorithms

2. Asymptotic Tight Bounds
   2.1: Introduction
   2.2: Asymptotic Tight Bounds: Mathematical Views
        2.2.1 O-notation
        2.2.2 Ω-notation
        2.2.3 Θ-notation

3. Asymptotic Loose Bounds & Program Complexities
   3.1: Introduction
   3.2: Asymptotic Non-tight Bounds: Mathematical Views
        3.2.1 o-notation
        3.2.2 ω-notation
   3.3: Properties of famous asymptotic Notations
   3.4: Programming Constructs and related Complexities

4. Searching and Sorting Complexities
   4.1: Introduction
   4.2: Complexities of some fundamental Algorithms
        4.2.1 Search Algorithms
              4.2.1.1 Linear Search Algorithm
              4.2.1.2 Binary Search Algorithm
        4.2.2 Sorting Algorithms
              4.2.2.1 Bubble Sort Algorithm
              4.2.2.2 Insertion Sort Algorithm

Summary
Exercises
Glossary
References


1. An Introduction to Asymptotic Notations

1.1 Introduction
Asymptotic notations are a mathematical way of describing the order of growth and limiting
behavior of a function as its argument tends towards a particular value or infinity.
Asymptotic notation simplifies functions in order to concentrate on their growth rates. We
can map the time or space taken by an algorithm to a mathematical function of the input
size n, for large n. Asymptotic notations can be used to measure the running time of an
algorithm as well as its storage requirements, i.e. the time complexity and space complexity
of an algorithm. The notation works well for comparing algorithm efficiencies because the
growth rate of a given algorithm approximates that of a standard function.

Value addition: Historical/Intellectual Context

Heading text: The idea behind asymptotic notation


Body text: Asymptotic relates to an asymptote, which is defined as "a line whose
distance to a given curve tends toward zero". So something asymptotic refers to a limiting
behavior based on a variable and a desired measure. We use the number of steps that it
takes for an algorithm that works with N items to complete, and derive an asymptotic bound
on the time complexity by increasing N toward infinity. We are finding out how much longer
the algorithm takes as we keep adding elements. We could actually test this by writing the
algorithm, profiling it to see how long it takes for N, then profiling it again after doubling N.
The time difference is a rough estimate of the growth. This is called an empirical test.
However, we can also do a theoretical test by counting the steps that depend on the size of
N and get a reasonably useful measure of how the time complexity grows. Because the
steps that don't depend on N won't grow, we can remove them from the measure: at a
certain point they become so small as to be worthless. In other words, we pretend that they
don't matter in all cases. This is the idea behind asymptotic notation.
Source: http://www.eternallyconfuzzled.com/arts/jsw_art_bigo.aspx

We just used a term called algorithm. Let's first see and understand what an algorithm is in
the next section. Then we go into the details of how asymptotic notations relate to the
notion of an algorithm.

1.2 What is an algorithm?


An algorithm is a step-by-step procedure for solving a given problem in a finite amount of
time. An algorithm takes the input, solves the problem (computes the function) and
transforms the input into the output, which is the solution of that problem.

Figure 1.1 Algorithms: Visualization


INPUT → ALGORITHM (Method) → OUTPUT
Value addition: Historical/Intellectual Context

Heading text: Evolution of word Algorithm

Body text:
The word algorithm comes from the name of the 9th century Persian Muslim
mathematician Abu Abdullah Muhammad ibn Musa Al-Khwarizmi. The word
algorism originally referred only to the rules of performing arithmetic using Hindu-Arabic
numerals but evolved via European Latin translation of Al-Khwarizmi's name
into algorithm by the 18th century. The use of the word evolved to include all definite
procedures for solving problems or performing tasks.
Source: http://www.scriptol.com/programming/algorithm-history.php

Value addition: Historical/Intellectual Context


Heading text: Algorithms by the ancients

Body text:
1. Euclid created an algorithm that bears his name.
The algorithm computes the greatest common divisor (gcd); here it is:

- divide the number a by b; the remainder is r
- replace a by b
- replace b by r
- continue until b becomes 0, i.e. a can no longer be divided. In this case, a is the gcd.

2. The algorithm of Archimedes gives an approximation of the number Pi.

3. Eratosthenes defined an algorithm for finding prime numbers.

4. Averroès (1126-1198) was using algorithmic methods for calculations.

Source: http://www.scriptol.com/programming/algorithm-history.php
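
To make the steps above concrete, here is a minimal Python sketch of Euclid's method as just described (the function name euclid_gcd is our own illustrative choice):

def euclid_gcd(a, b):
    # Greatest common divisor by Euclid's method, as described above.
    while b != 0:
        r = a % b   # divide a by b; the remainder is r
        a = b       # replace a by b
        b = r       # replace b by r
    return a        # when b reaches 0, a is the gcd

print(euclid_gcd(48, 36))  # prints 12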


An algorithm must:

• Be composed of a series of finite and concrete steps that solves the problem.
• Have no ambiguity as to which step will be performed next.
• Terminate.

A computer program is an instance, or concrete representation, of an algorithm in some
programming language.

A given problem can have many algorithms (methods) to solve it, but it is our responsibility
to choose the best method. Merely writing a working program is not good enough: the
program may be inefficient. If the program is run on a large data set, the running time
becomes an issue. For this we need to do analysis of algorithms.

Value addition: Concept Maps

Heading text: Keep in Mind the characteristics of algorithms


Body text:
Properties of algorithms are as follows:
• Input from a specified set,
• Output from a specified set (solution),
• Definiteness of every step in the computation,
• Correctness of output for every possible input,
• Finiteness of the number of calculation steps,
• Effectiveness of each calculation step and
• Generality for a class of problems.

Source: Kenneth Rosen, Discrete Mathematics and Its Applications, Sixth Edition.

Value addition: Activity


Heading text: Let’s develop algorithms
Body text: We will now see some practical examples of algorithms. We know
that algorithms are steps designed for solving a problem. The set of integers plays a
fundamental role in discrete mathematics. Let’s take simple examples of integer sets and
try to develop an algorithm for them.
Problem: Finding whether a given integer is even or odd.


Input: Number N
Steps:
If the remainder of the input number N divided by two (i.e. N % 2) is 1,
then the input number is odd.
Otherwise it is even.
Output: N is even or odd.

Problem: Finding Maximum/Minimum Value in an Array (List of numbers).


We have an array with a list of numbers, and our goal is to find the maximum/minimum value in
that list of numbers. (Let's focus only on the maximum case; the minimum case is simply the
opposite.)

Following is a more detailed form of our array, with index positions added:

A = {11, 5, 73, -1, 400, 80, 101}
      0  1   2   3    4   5    6   ← indexes

So now, our job is to start off at the beginning and find out what the maximum value is.
We set the number in the zero-th position as a temporary maximum.
In other words, the 11 at position 0 is the current maximum number. We then move to
the next number, at position 1, and ask, "Is 5 greater than the current maximum, 11?" The
answer is no: 5 is not greater than 11, so 11 remains our current, local maximum.
Then we move to index position 2. At position 2, the value in our array is 73. Again we
ask a similar question, "Is 73 greater than the current maximum, 11?" The answer is
yes: 73 is greater than 11, so now our new maximum is 73.
We keep progressing through each index position, asking whether the number at
this position is greater than the current maximum. If the number is greater, we
replace our old maximum value with the new number we are at. If the number is not
greater, we proceed to the next number and ask the same question there. We keep
going until we reach the end of the array.
When we reach the final value in the array, we declare that whatever maximum value we
have after the last value is the maximum value of our entire array.

Algorithm for finding the Maximum Value in an Array:

Input: List of n integers (a1, a2, …, an)

Steps:
Current Maximum := a1
For i := 2 to n
    If Current Maximum < ai then Current Maximum := ai
Current Maximum is the largest element
Output: Largest element

For finding the minimum, instead of asking, "Is this number greater than the current
maximum?” we would ask, "Is this number less than the current minimum?"
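
Here is a minimal Python sketch of the maximum-finding algorithm above (the function name find_max is our own illustrative choice; it assumes a non-empty list):

def find_max(arr):
    # Set the number in the zero-th position as a temporary maximum.
    current_max = arr[0]
    # Progress through each remaining position.
    for value in arr[1:]:
        # "Is this number greater than the current maximum?"
        if value > current_max:
            current_max = value
    return current_max

print(find_max([11, 5, 73, -1, 400, 80, 101]))  # 400

For the minimum, the comparison simply flips to value < current_min.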

The algorithms described above are in a form called a high-level description, i.e.
pseudocode. It is more structured than English prose but less detailed than a program
written in a programming language. It is the preferred notation for describing algorithms,
as it hides program design issues.


1.3 Asymptotic Analysis: Why do we need it?

Asymptotic analysis is a measure of the execution cost of an algorithm, usually the time or
memory needed, given the problem size n, which is usually the number of items. Informally,
saying f(n) ~ g(n) means that f(n) grows at the same rate as g(n). More formally,
it means lim_{x→∞} f(x)/g(x) = 1.

It gives a simple characterization of an algorithm's performance and allows the
comparison of the performances of various algorithms.

1.3.1 Asymptotic Analysis: Role in Algorithm Efficiency Analysis

As discussed, algorithm analysis is a process of determining the amount of time, memory
resources, etc. required when executing an algorithm. For example, if an algorithm takes n
seconds, it can be thought of as being "of the order of n", since the unit is a constant. For
the analysis of algorithms, asymptotic analysis is a method of describing this limiting
behavior.

The simplest example is when we have a function f(n) and wish to describe its properties
when n becomes very large. Thus, if f(n) = n² + 7n, the term 7n becomes insignificant
compared to n² when n is very large. We say that "f(n) is asymptotically equivalent to n² as
n → ∞" and write f(n) ~ n².

Asymptotic analysis gives us a measure of algorithm efficiency. The measures of efficiency
are as follows:

Space complexity: represents the amount of storage space needed to solve the given
problem.

Time complexity: represents the amount of time needed to solve the given problem i.e.
the number of steps/basic operations needed as a function of the problem size.

The input size depends on the problem being studied, but in most cases it is the number of
items in the input.

Examples of input items:

• The number of elements in the input (e.g. sorting)
• The number of bits needed to represent the input (e.g. integer multiplication)
• The number of nodes in a graph

The performance of an algorithm is evaluated by totaling the number of occurrences of
each operation when running the algorithm. The running time of an algorithm on a
particular input is the number of primitive operations executed. We can determine the
maximum number of primitive operations executed by an algorithm as a function of the
input size. This is the basis of calculating complexity.


Let's consider an algorithm with a functional equation:

y = m*x + c

There are 3 primitive operations: *, +, =.

If the cost (time taken by a primitive operation) of each operation is c1, then the total cost
is 3·c1.

Each simple operation takes constant time, for example: arithmetic (add, subtract,
multiply, divide), assignment, data movement (load, store) and control (conditional and
unconditional branch). Loops and subroutine calls do not take constant time.

Many times we do not aim at obtaining an exact step count; instead, we try only to get
asymptotic bounds on the step count. Asymptotic analysis makes use of the O (Big Oh)
notation. Other notational constructs used in the analysis of algorithms are the Θ (Big
Theta) notation, the Ω (Big Omega) notation, etc. We will soon see these notations in
upcoming lectures.

The complexity of an algorithm is expressed in terms of a function g(n) that gives an
upper/lower bound on the number of operations (or running time) performed by the
algorithm when the input size is n.

There are following interpretations of asymptotic bounds with respect to algorithms.

Worst-case Complexity: The worst-case complexity (in asymptotic notation) measures


the resources (e.g. running time, memory) required by an algorithm in the worst-case. It
basically gives an upper bound on the resources required by the algorithm. The worst-case
time-complexity indicates the longest running time performed by an algorithm given any
input of size n. The order of growth of the worst-case complexity is used to compare the
efficiency of two algorithms.

Average-case Complexity: The average-case complexity measures the average running


time for any given size input. It basically depicts the average number of operations over all
problem instances for a given size.

Best-case Complexity: The best case for an algorithm is the arrangement of input data for
which the algorithm performs best. It is the case where the least resources (e.g. running
time, memory) are required by the algorithm.

1.3.2 Asymptotic Analysis: Role in Algorithm Comparison

To compare the efficiency of algorithms, the most common way of qualifying an algorithm is
via asymptotic notations.


Value addition: Activity


Heading text: Let’s develop different algorithms for a given
problem.

Body text: Let’s consider another problem of finding kth largest element in list of
given N numbers. We try to generate sequence of steps to solve the given problem.

Algorithm I

Input: Given N numbers.

Steps:

(1) Read N numbers into an array.

(2) Sort the given array in decreasing order.

(3) Return the element in position k.

Output: kth largest element.

Is there any other way by which the given problem can be solved? We can definitely
explore more ways. Let’s see another way of solving the same problem.

Algorithm II

Input: Given N numbers.

Steps:

(1) Read the first k elements into an array and sort them in decreasing order.

(2) Then each remaining element is considered and read one by one.

If the element is smaller than the kth element, then it is ignored.

Else, it is placed in its correct place in the array, moving one element out of the array.

(3) Element in the kth position is returned.

Output: kth largest element.

Thus we observe that we can have different algorithms for the same problem.
Which one is more efficient will depend on the values of N and k; see the sketch below.
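
Here is a minimal Python sketch of both algorithms, assuming 1 ≤ k ≤ N (the function names are our own illustrative choices):

def kth_largest_sort(nums, k):
    # Algorithm I: sort in decreasing order, return the element in position k.
    return sorted(nums, reverse=True)[k - 1]

def kth_largest_window(nums, k):
    # Algorithm II: keep the k largest elements seen so far, sorted decreasingly.
    window = sorted(nums[:k], reverse=True)
    for x in nums[k:]:
        if x > window[-1]:        # larger than the current kth element?
            window.pop()          # move the smallest of the k out of the array
            i = 0                 # find the correct place for x
            while i < len(window) and window[i] >= x:
                i += 1
            window.insert(i, x)
    return window[-1]             # the element in the kth position

data = [11, 5, 73, -1, 400, 80, 101]
print(kth_largest_sort(data, 2), kth_largest_window(data, 2))  # 101 101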

Consider two algorithms, A1 and A2, for solving a given problem. Let the running times of
the algorithms be TA1(n) and TA2(n) respectively, where n is the input size. Using
asymptotic notations, it is simple to compare the two functions TA1(n) and TA2(n) to
determine which algorithm is more efficient.


If the input size is n, and algorithm A1 takes 100n steps while A2 takes n² steps, which one
is better? We will see that for the smaller range [1, 100], n² is the better running time, but
for the larger range [101, ∞), 100n is the better option.

Figure 1.2 Comparative Studies of Algorithms

Input n | Algorithm A1: 100n | Algorithm A2: n²
10      | 1000               | 100
70      | 7000               | 4900
90      | 9000               | 8100
100     | 10000              | 10000
110     | 11000              | 12100
1000    | 100000             | 1000000
10000   | 1000000            | 100000000

The efficiency of an algorithm becomes an important practical consideration when the input
size is really large.

Thus, in order to compare the efficiency of two algorithms, we need to compare their time
complexities for asymptotically large values of n. Hence, an algorithm with running time
TA1(n) is more efficient than another algorithm with running time TA2(n) if and only if,
beyond some value of n, TA2(n) exceeds TA1(n) and the ratio TA2(n)/TA1(n) increases with
the value of n.

1.4 Asymptotic Growth Rates

We aim at categorizing algorithms based on their asymptotic growth rate. We estimate an
upper bound on the growth rate of the time complexity function, ignoring small constants of
proportionality and small inputs. Let's say the time complexity of an algorithm is
44n² + 47n + 89. Constant coefficients like 44, 47 and 89 are determined by the
implementation of the algorithm at a particular level of abstraction; they are not universal
constants of the algorithm's time complexity, so they are not important. We can say that an
algorithm with running time n² is asymptotically no more efficient than an algorithm with
running time 44n² + 47n + 89, as n² is the dominant term of both. Naming conventions for
different growth-rate functions are shown in figure 1.3.

Figure 1.3 Different Growth Rates and Naming Conventions.

Function | Name
c        | Constant
log n    | Logarithmic
n        | Linear
n²       | Quadratic
n³       | Cubic
2ⁿ       | Exponential

Value addition: Concept map

Heading text: Comparing Growth Rates: A depiction with functions of n

Body text:

n    | 1 | log n | n    | n²      | n³         | 2ⁿ
1    | 1 | 0.00  | 1    | 1       | 1          | 2
10   | 1 | 3.32  | 10   | 100     | 1000       | 1024
100  | 1 | 6.64  | 100  | 10000   | 1000000    | 1.2 × 10³⁰
1000 | 1 | 9.97  | 1000 | 1000000 | 1000000000 | 1.1 × 10³⁰¹

1.5 A Simple Example for Analysis of Algorithms


Let's consider the following algorithm:

Input: a, b
Output: c, i.e. sum of input variables

Start
a = 100 I
b = 200 II
c = a+b III
Print c IV
End

Approximately how long does this algorithm take to execute? Consider a typical line of
code, say an assignment statement such as a = 1, an addition operation, or printing a
value; the time such a line takes to execute is some constant, C. Then the above algorithm
will take approximately C + C + 2C + C = 5C.
How did we come up with this number? It's simple: there are 4 lines of code. Every
line of code in the above algorithm takes some constant C, except line III, which takes 2C
because it performs two primitive operations (addition and assignment); therefore the
execution times add up to 5C.
But there may be cases where we don't know exactly how many lines of code get executed,
for example in algorithms having looping constructs like for loops, while loops, etc.

Input: a, b
Output: c, i.e. sum of input variables displayed n number of times
Start
x=0 I
While x < n, do the following steps: II
a = 100 III
b = 200 IV
c=a+b V
Print c VI
x = x+1 VII
End the while VIII
End the program

In this case, we don't know how many times the while loop will execute, because we don't
know the value of n. We can estimate the loop execution time as 7C·n: of the nested lines
of code, III, IV and VI take C each per iteration, and V and VII take 2C each, giving 7C per
iteration. Lines I and VIII take C each. Overall algorithm execution time: 2C + 7Cn.
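
The "empirical test" mentioned earlier can be tried directly on this algorithm; a minimal Python sketch (we skip the actual print inside the loop so that the timing is not dominated by I/O):

import time

def sum_n_times(a, b, n):
    # The while-loop algorithm above: compute c = a + b, n times.
    for _ in range(n):
        c = a + b
    return c

for n in (1_000_000, 2_000_000, 4_000_000):
    start = time.perf_counter()
    sum_n_times(100, 200, n)
    print(f"n = {n}: {time.perf_counter() - start:.4f} s")

Doubling n roughly doubles the measured time, which is consistent with the 2C + 7Cn estimate.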

Asymptotic notations can conveniently be used for describing running time and analysis of
algorithms. We will introduce notations in upcoming lectures.

Value addition: Activity


Heading text: Mind Boggling: Input size and Time factors for an algorithm

Body text:
Suppose an algorithm performs f(n) = 2n operations. In T seconds, you can process k
inputs. If you get a computer 2⁶ times faster, how many inputs can you process in T
seconds? Let's see a mathematical solution.
We are given 2k/T operations per second.
Now (2k × 2⁶)/T = 2⁷k/T operations per second can be performed. Let the input size
(number of inputs) that can now be processed in T seconds be S. Processing S inputs
takes 2S operations, so:
2S / (2⁷k/T operations per second) = T seconds
Therefore 2S = 2⁷k ⟹ S = 2⁶k ⟹ S = 64k.
Thus the input size S that can be processed in T seconds is 64k, expressed in terms of k.

2. Asymptotic Tight Bounds

2.1 Introduction
Asymptotic analysis of algorithms makes use of different asymptotic notations which
represent asymptotic upper and lower bounds that may or may not be asymptotically tight.
The growth of functions is often described using these special notations. Now we will
introduce several types of asymptotic notations which are used to compare the performance
and efficiency of algorithms. These notations are as follows:

Θ-Notation: asymptotic tight bound

O-Notation: asymptotic upper bound

Ω-Notation: asymptotic lower bound

o-Notation: upper bound that is not asymptotically tight

ω-Notation: lower bound that is not asymptotically tight.

In this chapter we will learn about asymptotic tight bound notations and their usage.


2.2 Asymptotic Tight Bounds: Mathematical Views

2.2.1 O-notation

Let us consider two functions g(n) and f(n) mapped from the set of real numbers to the
real numbers. We define O(g(n)), big-Oh notation, as the set:

O(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n)
for all n ≥ n₀ }

It is the set of all functions whose rate of growth is the same as or lower than that of g(n).
It is also called an asymptotic upper bound.

What does it mean?

If f(n) = O(n²), then:

we can choose some constant c and some value n₀ such that for every value of n larger
than n₀, f(n) ≤ c·n². That is, for values larger than n₀, f(n) is never more than a constant
multiple of n², which implies that f(n) does not grow more than a constant factor faster
than n².

If f(n) is O(n²), is it also O(n³)?

The answer is "Yes": n³ grows faster than n², so n³ also grows faster than f(n).

Therefore, we always try to find the smallest simple function g(x) for which f(x) is
O(g(x)).

Value addition: Historical/Intellectual Context


Heading text: Who introduced Big-O?

Body Text: Big-O notation was introduced in P. Bachmann's 1892 book Analytische
Zahlentheorie. He used it to say things like "x is O(n/2)" instead of "x ~ n/2." The
notation works well to compare algorithm efficiencies because we want to say that the
growth of effort of a given algorithm approximates the shape of a standard function.
The big-O symbol is sometimes called a Landau symbol after the German mathematician
Edmund Landau, who used this notation throughout his work. The use of big-O notation
in computer science was popularized by Donald Knuth, who later also introduced the
big-Ω and big-Θ notations.

Source: http://en.wikipedia.org/wiki/Big_O_notation


Figure 2.1 Graphical display of O notation

Value addition: Interesting fact

Heading text: Big Oh of Linear function

Body Text: Any linear function an + b is in O (n).

Value addition: Common Misconceptions


Heading text: f(n) is O(g(n)) implies that f(n) belongs to O(g(n))
Body Text:

f(n) is O(g(n)) is sometimes written as f(n) = O(g(n)). This use of the equals sign is
an abuse of notation, as the above statement is not an equation. However, it is
acceptable to write f(n) ∈ O(g(n)), i.e. f(n) belongs to O(g(n)).

It is improper to conclude from g(n) = O(f(n)) and h(n) = O(f(n)) that g(n) and h(n)
are equal. One way to think of this is to consider "= O" as one symbol here. This notation
tells us that an inequality holds relating the values of the functions f and g for sufficiently
large numbers.

Source: http://www.statemaster.com/encyclopedia/Big-O-notation#Equals_or_member-of_and_other_notational_anomalies


Examples:

1. The function 3n + 7 is O(n).

3n + 7 ≤ c·n
⟺ (c − 3)·n ≥ 7
⟺ n ≥ 7/(c − 3)

Therefore c = 4 and n₀ = 7 work.

Thus we verified that the function 3n + 7 is O(n).

2. The function 7n² is not O(n).

7n² ≤ c·n
⟹ n ≤ c/7

The above inequality cannot be satisfied for all large n, since c/7 must be a fixed constant
but n is unbounded.

Thus we verified that the function 7n² is not O(n). (Recall that asymptotic notation is
about large values of n, where f(n) is being bounded on the upper side.)

3. The function n² + 2n + 1 is O(n²).

For n ≥ 1 we have:

n² + 2n + 1 ≤ n² + 2n² + n²
⟹ n² + 2n + 1 ≤ 4n²

Therefore c = 4 and n₀ = 1.

Thus we verified that the function n² + 2n + 1 is O(n²).

4. The function lg(n!) is O(n lg n).

n! = n(n−1)(n−2)···3·2·1 ≤ n·n·n···n = nⁿ

Taking logs on both sides:

lg(n!) ≤ lg(nⁿ) = n lg n

Hence lg(n!) = O(n lg n). (In fact this bound is tight: n! ≥ (n/2)^(n/2), so
lg(n!) ≥ (n/2)·lg(n/2) = (n/2)(lg n − 1), which shows lg(n!) is also Ω(n lg n).)

5. The function 3ⁿ⁺¹ is O(3ⁿ).

3ⁿ⁺¹ = 3·3ⁿ ≤ c·3ⁿ

holds for c = 3 and any n₀ > 0.

Thus 3ⁿ⁺¹ = O(3ⁿ).
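
The definition suggests a quick numerical sanity check: given candidate witnesses c and n₀, we can test f(n) ≤ c·g(n) over a range of n. A minimal Python sketch (the function name and the sampled range are our own choices; a finite check can refute bad witnesses but cannot by itself prove the bound):

def check_big_oh(f, g, c, n0, n_max=10_000):
    # Check 0 <= f(n) <= c*g(n) for all n0 <= n <= n_max.
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max + 1))

# Example 1 above: 3n + 7 is O(n), with witnesses c = 4 and n0 = 7.
print(check_big_oh(lambda n: 3*n + 7, lambda n: n, c=4, n0=7))   # True
# Example 2 above: 7n^2 is not O(n); e.g. c = 100 fails for large n.
print(check_big_oh(lambda n: 7*n*n, lambda n: n, c=100, n0=1))   # False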

Value addition: Interesting fact

Heading text: Prove that for a polynomial f(n) = aₘnᵐ + aₘ₋₁nᵐ⁻¹ + … + a₀, where
a₀, a₁, …, aₘ are real numbers and aₘ > 0, f(n) is O(nᵐ).

Body Text: We examine the asymptotic behavior of polynomials in n. In particular,
we will see that as n gets large, the term involving the highest power of n dominates
all the others; the asymptotic behavior is therefore determined by that term.

Consider a polynomial in n of the form f(n) = aₘnᵐ + aₘ₋₁nᵐ⁻¹ + … + a₀.
We prove that f(n) = O(nᵐ).

Proof: Each of the terms in the sum is of the form aᵢnⁱ. Since n is non-negative,
a particular term is negative only if aᵢ < 0; hence each term in the sum is at most |aᵢ|nⁱ.
Recall too that we have stipulated that the coefficient of the largest power of n is
positive, i.e. aₘ > 0. Hence

f(n) ≤ |aₘ|nᵐ + |aₘ₋₁|nᵐ⁻¹ + … + |a₁|n¹ + |a₀|n⁰

= nᵐ · (|aₘ| + |aₘ₋₁|/n + … + |a₁|/nᵐ⁻¹ + |a₀|/nᵐ)

≤ nᵐ · (|aₘ| + |aₘ₋₁| + … + |a₁| + |a₀|) for n ≥ 1.

So, for n ≥ 1,

f(n) ≤ c·nᵐ for all n ≥ n₀, where n₀ = 1 and c = |aₘ| + |aₘ₋₁| + … + |a₁| + |a₀|.

Hence we verified f(n) = O(nᵐ).


This property of the asymptotic behavior of polynomials is used extensively. In fact,
whenever we have a function which is a polynomial in n, f(n) = aₘnᵐ + aₘ₋₁nᵐ⁻¹ + … +
a₀, we immediately ignore the less significant terms (i.e., terms involving powers of
n less than m), as well as the leading coefficient aₘ, and write f(n) = O(nᵐ).

Value addition: Interesting fact

Heading text: Let’s order the growth rates

Body Text: Big-O notation is used to estimate the number of operations needed to
solve a problem using a specified procedure or algorithm. The following ordering holds
for some of the most common functions used in the analysis of algorithms:

O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ)

The following general rules apply:

1. Logarithmic is better than polynomial.

2. A lower-degree polynomial is better than a higher-degree polynomial.

3. Polynomial is better than exponential.


Source :
http://media.photobucket.com/image/big%20O%20graph/amit_9b/BIGO_GRAPH.jpg

2.2.2 Ω-notation

Let us consider two functions g(n) and f(n). We define Ω(g(n)), big-Omega notation, as
the set:

Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n)
for all n ≥ n₀ }

What does it mean?

We consider a function f(n) which is non-negative for all integers n ≥ 0. We say that f(n)
is Omega of g(n) if there exist an integer n₀ and a constant c > 0 such that for all integers
n ≥ n₀, c·g(n) ≤ f(n).

The definition of Omega is almost identical to that of big-Oh. The only difference is in the
comparison: for big-Oh it is f(n) ≤ c·g(n), but for Omega it is c·g(n) ≤ f(n).

If f = Ω(g), then f is at least as big as g (or g is a lower bound for f).


Figure 2.2 Graphical display of Ω notation.

Examples:

1. The function f(n) = n² + 2n + 1 is Ω(n²),

i.e. c·n² ≤ n² + 2n + 1 (definition of Ω(n²)).

Let's say we choose c = 1, a positive constant.

n² ≤ n² + 2n + 1
⟺ 0 ≤ 2n + 1

This holds true for n ≥ 0. Therefore n₀ = 0.

Thus we verified that f(n) = n² + 2n + 1 is Ω(n²).

2. The function f(n) = n² is Ω(n),

i.e. c·n ≤ n².

Let's say we choose c = 1, a positive constant.

0 ≤ n² − n
⟺ 0 ≤ n(n − 1)
⟺ 1 ≤ n

This holds true for n ≥ 1. Therefore n₀ = 1.

Thus we verified that f(n) = n² is Ω(n).


Value addition: FAQ's

Heading text: Let's explore the relation between O and Ω

Body Text:
a) If f(n) ∈ O(g(n)), is g(n) ∈ Ω(f(n))?
b) If f(n) ∈ Ω(g(n)), is g(n) ∈ O(f(n))?

Answers:

a) The answer is yes. If f(n) ∈ O(g(n)), then there exist a constant c and an n₀ such that,
for all n ≥ n₀,

f(n) ≤ c·g(n), which is equivalent to

g(n) ≥ (1/c)·f(n).

Hence constants n₀ and c′ = 1/c exist such that c′·f(n) ≤ g(n), and therefore
g(n) is Ω(f(n)).

b) The answer is yes again. If f(n) ∈ Ω(g(n)), then there exist a constant c and an n₀
such that, for all n ≥ n₀,

f(n) ≥ c·g(n), or equivalently

g(n) ≤ (1/c)·f(n).

Hence constants n₀ and c′ = 1/c exist such that c′·f(n) ≥ g(n), and therefore
g(n) is O(f(n)).

2.2.3 Θ-notation

Again we consider two functions g(n) and f(n). We define Θ(g(n)), big-Theta, as the set:

Θ(g(n)) = { f(n) : there exist positive constants c₁, c₂ and n₀ such that
0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ }

The function f(n) is sandwiched between c₁·g(n) and c₂·g(n) for sufficiently large n.
Big-Theta means the bound is the tightest bound possible. We write

f(n) = Θ(g(n)) for all n ≥ n₀, where f(n) and g(n) are asymptotically positive.


Figure 2.3 Graphical display of Θ notation

Examples:

1. The function f(n) = 4n + 3 is Θ(n).

0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n)
⟹ c₁·n ≤ 4n + 3 ≤ c₂·n for all n ≥ n₀

Let's say we choose c₁ = 1, c₂ = 5, positive constants:

n ≤ 4n + 3 ≤ 5n

Considering the two inequalities above, we get:
3n + 3 ≥ 0 and 3 ≤ n
⟹ n ≥ −1 and n ≥ 3. Therefore n₀ = 3.

Thus we verified that f(n) = 4n + 3 is Θ(n).

2. The function f(n) = 4n² is not Θ(n).

0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n)
⟹ c₁·n ≤ 4n² ≤ c₂·n for all n ≥ n₀
⟹ c₁ ≤ 4n ≤ c₂ for all n ≥ n₀

Considering the two inequalities above, we get:
c₁/4 ≤ n ≤ c₂/4,
which cannot possibly hold for arbitrarily large n.

Thus we verified that f(n) = 4n² is not Θ(n).


Value addition: Analogy

Heading text: Comparison of functions and real numbers

Body Text: Asymptotic comparisons of functions behave like comparisons of two real
numbers a and b:

f(n) = O(g(n))  ≈  a ≤ b
f(n) = Ω(g(n))  ≈  a ≥ b
f(n) = Θ(g(n))  ≈  a = b

A Big-O bound is only an upper bound, so an algorithm that is O(n³) might not ever take
that much time; it may run in O(n) time.

A Big-Omega bound is only a lower bound. A Big-Θ bound is precise: if an algorithm is
Θ(n²), it runs in quadratic time.

Value addition: An Interesting Relation

Heading text: Relations between Θ, Ω, O

Body Text: We will prove, for any two functions g(n) and f(n), that
f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).

In practice, asymptotically tight bounds are obtained from asymptotic upper and lower
bounds, i.e. Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).

Considering the R.H.S. first,

O(g(n)) = { f(n) : there exist positive constants c₁ and n₀ such that 0 ≤ f(n) ≤ c₁·g(n)
for all n ≥ n₀ }

and

Ω(g(n)) = { f(n) : there exist positive constants c₂ and n₀ such that 0 ≤ c₂·g(n) ≤ f(n)
for all n ≥ n₀ }

Taking the intersection of the two sets, we get

O(g(n)) ∩ Ω(g(n)) = { f(n) : there exist positive constants c₁, c₂ and n₀ such that
0 ≤ c₂·g(n) ≤ f(n) ≤ c₁·g(n) for all n ≥ n₀ }, which is exactly Θ(g(n)) by definition.

Now considering the L.H.S.,

Θ(g(n)) = { f(n) : there exist positive constants c₁, c₂ and n₀ such that
0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ } [from the definition]

From the definition we get:

i) f(n) ≤ c₂·g(n) for all n ≥ n₀, for some positive constants c₂ and n₀. This is, by
definition, O(g(n)).

ii) c₁·g(n) ≤ f(n) for all n ≥ n₀, for some positive constants c₁ and n₀. This is, by
definition, Ω(g(n)).

Thus O(g(n)) and Ω(g(n)) can be inferred from Θ(g(n)).

From the above we see that L.H.S. = R.H.S. Hence we proved that f(n) = Θ(g(n)) iff
f(n) = O(g(n)) and f(n) = Ω(g(n)).


Value addition: FAQ's

Heading text: Can big-O, Ω, Θ notations be used to estimate the sum of the
first n positive integers?

Body Text: Because each of the integers in the sum of the first n positive integers
does not exceed n, it follows that

1 + 2 + … + n ≤ n + n + … + n = n²

From this inequality it follows that f(n) = 1 + 2 + 3 + … + n is O(n²),
taking C = 1 and n₀ = 1.

Also,

1 + 2 + … + n ≥ ⌈n/2⌉ + (⌈n/2⌉ + 1) + … + n

≥ ⌈n/2⌉ + ⌈n/2⌉ + … + ⌈n/2⌉

= (n − ⌈n/2⌉ + 1)·⌈n/2⌉

≥ (n/2)·(n/2)

= n²/4.

This shows that f(n) is Ω(n²). We conclude that f(n) is of order n², or in symbols,
f(n) is Θ(n²).

Source: Kenneth Rosen, Discrete Mathematics and Its Applications, Sixth Edition

3. Asymptotic Loose Bounds and Program Complexities

3.1 Introduction

In this chapter we will introduce loose asymptotic bounds. We will also see some properties
of asymptotic bounds. In the latter half we will see different programming constructs and
how to associate them with asymptotic notations for calculating their complexities. Thus this
chapter explores more relatives of the Big-Oh notation, some interesting properties, and the
application of asymptotic notation in the programming world.

3.2 Asymptotic Non-Tight Bounds


3.2.1 Little o


Let's consider two functions f(n) and g(n). The set little-o can be defined as:

o(g(n)) = { f(n) : for every constant c > 0, there exists an n₀ > 0 such that
0 ≤ f(n) < c·g(n) for all n ≥ n₀ }

The function f(n) becomes insignificant relative to g(n) as n approaches infinity. The
function g(n) is an upper bound for f(n) that is not asymptotically tight.

Note that for little-oh notation we need to show that for all c there exists an n₀ such that
0 ≤ f(n) < c·g(n); finding one specific c and n₀ isn't enough to prove this, as it was for
Big-Oh notation.

Since f(n) becomes insignificant relative to g(n) as n approaches infinity, we have:

lim_{n→∞} f(n)/g(n) = 0  ⟹  f(n) ∈ o(g(n))

We can use this as a second definition of little-o to prove the mathematical examples: if
the limit as x → ∞ of the quotient |f(x)/g(x)| is 0, then f(x) = o(g(x)).


Example: Prove that 4x³ + 5x² − 6 = o(x⁴). We compute as follows:

lim_{x→∞} (4x³ + 5x² − 6)/x⁴ = lim_{x→∞} (4/x + 5/x² − 6/x⁴) = 0

The limit exists and equals 0; hence the little-o relationship is proved.

3.2.2 Little omega (ω)

Let's consider two functions f(n) and g(n). The set little-omega can be defined as:

ω(g(n)) = { f(n) : for every constant c > 0, there exists an n₀ > 0 such that
0 ≤ c·g(n) < f(n) for all n ≥ n₀ }

The function g(n) is a lower bound for f(n) that is not asymptotically tight.

Here too, according to the definition, we need to show that for all c there exists an n₀ such
that 0 ≤ c·g(n) < f(n). Therefore we again introduce a limit definition:

lim_{n→∞} f(n)/g(n) = ∞  ⟹  f(n) ∈ ω(g(n))

Example: To prove 4x³ + 5x² − 6 = ω(x²), we verify as follows:

lim_{x→∞} (4x³ + 5x² − 6)/x² = lim_{x→∞} (4x + 5 − 6/x²) = ∞

The limit is ∞; hence the little-omega relationship is proved.

You must be wondering whether we could apply similar limit formulas to the asymptotically
tight bounds. What do you think? Can we express them as well with a mathematical formula
using limits?

The answer is definitely yes, as they all relate to the growth of functions. Look at the
following:

lim_{n→∞} f(n)/g(n) < ∞  ⟹  f(n) ∈ O(g(n))

0 < lim_{n→∞} f(n)/g(n) < ∞  ⟹  f(n) ∈ Θ(g(n))

lim_{n→∞} f(n)/g(n) > 0  ⟹  f(n) ∈ Ω(g(n))
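
These limit tests can be illustrated numerically by tabulating f(n)/g(n) for growing n; a minimal Python sketch (the function name and sample points are our own choices):

def ratio_table(f, g, ns=(10, 100, 1_000, 10_000, 100_000)):
    # Print f(n)/g(n) for growing n; the trend hints at o, Theta or omega.
    for n in ns:
        print(f"n = {n}: f(n)/g(n) = {f(n) / g(n):.6f}")

# 4x^3 + 5x^2 - 6 versus x^4: the ratio tends to 0 (little-o, as proved above).
ratio_table(lambda x: 4*x**3 + 5*x**2 - 6, lambda x: x**4)
# 4x^3 + 5x^2 - 6 versus x^3: the ratio tends to the constant 4 (Theta).
ratio_table(lambda x: 4*x**3 + 5*x**2 - 6, lambda x: x**3)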

Value addition: Growth as Factorial

Heading text: Factorial and Loose Asymptotic Notations

Body text: The factorial function f(n) = n! is defined by n! = 1·2·3·…·n

whenever n is a positive integer, and 0! = 1. For example,

1! = 1, 2! = 1·2 = 2, 3! = 1·2·3 = 6, 4! = 1·2·3·4 = 24, and so on.

n! = n(n−1)···3·2·1 = nⁿ · (1 − 1/n) ··· (3/n)·(2/n)·(1/n)

lim_{n→∞} n!/nⁿ = lim_{n→∞} (1 − 1/n) ··· (3/n)·(2/n)·(1/n) = 0

Therefore the factorial is o(nⁿ).

lim_{n→∞} n!/2ⁿ = ∞

Therefore the factorial is ω(2ⁿ).

Value Addition: ANALOGY

Heading text: Comparison of functions and real numbers

Body text: Continuing the analogy with real numbers a and b:

f(n) = o(g(n))  ≈  a < b
f(n) = ω(g(n))  ≈  a > b

Value addition: An Interesting Relation Yet Again!

Heading text: How are little-o and little-omega related to the tight bounds?

Body text:

ω(g(n)) = Ω(g(n)) − Θ(g(n))

o(g(n)) = O(g(n)) − Θ(g(n))

Value addition: Concept map

Heading text: A quick recap

Body text:

Big-Oh: f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n)

Big-Omega: f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)

Big-Theta: f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)

Little-oh: f(n) is o(g(n)) if f(n) is asymptotically strictly less than g(n)

Little-omega: f(n) is ω(g(n)) if f(n) is asymptotically strictly greater than g(n)

3.3 Famous Properties of Asymptotic Notations

Discrete mathematics often deals with numbers. Some famous properties of real numbers
can also be applied to asymptotic notations. Let's see them.

Transitivity

f(n) = Θ(g(n)) & g(n) = Θ(h(n))  ⟹  f(n) = Θ(h(n))

f(n) = O(g(n)) & g(n) = O(h(n))  ⟹  f(n) = O(h(n))

f(n) = Ω(g(n)) & g(n) = Ω(h(n))  ⟹  f(n) = Ω(h(n))


f(n) = o(g(n)) & g(n) = o(h(n))  ⟹  f(n) = o(h(n))

f(n) = ω(g(n)) & g(n) = ω(h(n))  ⟹  f(n) = ω(h(n))

Reflexivity

f(n) = Θ(f(n))

f(n) = O(f(n))

f(n) = Ω(f(n))

Symmetry

f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))

Complementarity

f(n) = O(g(n)) if and only if g(n) = Ω(f(n))

f(n) = o(g(n)) if and only if g(n) = ω(f(n))

Monotonicity of Functions

A function f(n) is

• Monotonically increasing if m ≤ n ⟹ f(m) ≤ f(n).

• Monotonically decreasing if m ≤ n ⟹ f(m) ≥ f(n).

• Strictly increasing if m < n ⟹ f(m) < f(n).

• Strictly decreasing if m < n ⟹ f(m) > f(n).

Growth of Combinations of Functions

• If f1(x) is O(g1(x)) and f2(x) is O(g2(x)), then (f1 + f2)(x) is O(max(g1(x), g2(x))).

• If f1(x) is O(g(x)) and f2(x) is O(g(x)), then (f1 + f2)(x) is O(g(x)).

• If f1(x) is O(g1(x)) and f2(x) is O(g2(x)), then (f1·f2)(x) is O(g1(x)·g2(x)).

Value addition: Concept map

Heading text: Maximum of functions

Body text:

Let f(x) and g(x) be functions from a set X to a set Y, both sets of real numbers. Then
max(f(x), g(x)) is the function from X to Y that takes as its value at each point x the
larger of f(x) and g(x).

(Figure: max(f(x), g(x)) traces whichever of f(x) and g(x) is larger at each point.)

Value addition: Interesting facts

Heading text: Some important exponential and logarithmic properties

Body Text:

a = b^(log_b a)
a⁻¹ = 1/a
(aᵐ)ⁿ = aᵐⁿ
aᵐ · aⁿ = aᵐ⁺ⁿ
log_c(ab) = log_c a + log_c b
log_b(aⁿ) = n · log_b a
log_b a = (log_c a) / (log_c b)
log_b(1/a) = −log_b a
log_b a = 1 / (log_a b)
a^(log_b c) = c^(log_b a)
lim_{n→∞} nᵇ / aⁿ = 0 (for constants a > 1 and b), hence nᵇ = o(aⁿ)

3.4 Programming Constructs and related Complexities

The time required by an algorithm (written in the form of programming code) is proportional
to the number of "primitive basic operations" that it performs. Here are some examples of
basic operations:


• Arithmetic operation (e.g. +, *)
• Assignment
• Equality test (e.g. a == 0)
• Read
• Write (of a primitive type), etc.

a) Sequence of Programming instructions (statements)

Code Statement 1;
Code Statement 2;
...
Code Statement n;

Total time Taken = time (statement 1) + time (statement 2) + ... + time (statement n)

If each statement only involves basic operations as discussed above, then the time for each
statement is constant and the total time is also constant: O (1).

b) If- then- else statements

if (condition)
{
Sequence of coding statements
}
else
{
Sequence of coding statements
}

Either the sequence in the if branch will execute, or the sequence in the else branch will
execute. Therefore, the worst-case time is the slower of the two possibilities:
max(time(sequence 1), time(sequence 2)). For example, if sequence 1 is O(1) and
sequence 2 is O(n), the worst-case time for the whole if-then-else statement would be
O(n).

c) For loops

for (x = 0; x < n; x++)


{
Sequence of coding statements
}

The loop executes n times, so the sequence of statements also executes n times. Assuming
the statements are O(1), the total time for the for loop is n · O(1), which is O(n).


d) Nested loops

Case 1: Number of iterations of the inner loop is independent of the value of the outer
loop's control variable.

for (x = 0; x < N; x++)


{
for (y = 0; y < M; y++)
{
Sequence of coding statements
}
}

The outer for loop executes N times. For each execution of the outer loop, the inner loop
executes M times. Thus, the statements in the inner loop execute a total of N * M times.
Thus, the complexity is O (N * M).

Case 2: The number of iterations of the inner loop depends on the value of the outer loop's
control variable. For example:

for (x = 0; x < N; x++)
{
    for (y = x; y < N; y++)
    {
        Sequence of coding statements
    }
}
Let's analyze how many iterations the inner loop performs:

x's value | iterations of inner loop
0         | N
1         | N-1
2         | N-2
...       | ...
N-2       | 2
N-1       | 1

So we can see that the total number of times the sequence of statements executes is:
N + (N-1) + (N-2) + ... + 3 + 2 + 1
= N(N+1)/2
= O(N²).
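
A quick way to convince yourself of the N(N+1)/2 count is simply to count the inner-loop executions; a minimal Python sketch:

def count_inner_iterations(N):
    # Count how often the innermost statement runs in the nested loop above.
    count = 0
    for x in range(N):
        for y in range(x, N):   # mirrors: for (y = x; y < N; y++)
            count += 1
    return count

N = 100
print(count_inner_iterations(N))   # 5050
print(N * (N + 1) // 2)            # 5050, matching the closed form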


e) While loops

i = 0
while (i < n)
{
    Sequence of coding statements
    i++;
}

The loop executes n times, so the sequence of statements also executes n times. Since we
assume the statements are O(1), the total time for the while loop is n · O(1), which is
O(n) overall.

Value addition: Understanding Complexities Practically

Heading text: Interesting coding examples and complexities
Body text: Let's explore some interesting examples of for loops.
1. for (int i = 1; i < n; i *= 2)
{
    Basic operation statement; // takes O(1)
}

In the above example, the control variable of the for loop increases by a multiplicative
factor of 2 each time.
Here, i = 1, 2, 2², …, 2ᵏ⁻¹, where k is the number of iterations.
So 2ᵏ⁻¹ < n
⟹ log(2ᵏ⁻¹) < log(n)
⟹ k − 1 < log(n)/log(2)
⟹ k < log(n)/log(2) + 1
Therefore the running time is O(log(n)).

2. for (int i = 0; i < n; i += 2)
{
    Basic operation statement; // takes O(1)
}

In the above example, the control variable of the for loop increases by 2 each time, but
not by a multiplicative factor of 2, so it is not log(n).
Here, i = 0, 2, 4, 6, 8, …, so the given loop will run for n/2 iterations. Therefore the
running time is O(n).

3. for (i = 0; i < N; i++)
{
    for (j = N; j > i; j--)
    {
        Basic operation statement; // takes O(1)
    }
}

The number of iterations of the inner loop depends on the value of the index of the outer
loop. In this example the inner-loop index counts down from N to i+1. The inner loop
executes N times, then N-1 times, then N-2, and so on, so the total number of times the
innermost statement executes is O(N²).

After so much exposure to asymptotic notations, we can now use them to calculate the
complexity of an algorithm for a given problem.

Value addition: Which is Efficient?

Heading text: One Problem: Different Algorithms and Complexities

Body text:

Problem definition: Compute the maximum difference between any two numbers in the
input list of items.

Algorithm 1

procedure maximumdifference(a1, a2, …, an: integers)
    diff := 0
    for x := 1 to n-1
        for y := x + 1 to n
            if |ax − ay| > diff
                then diff := |ax − ay|

Total number of comparisons: (n−1) + (n−2) + (n−3) + … + 1 = (n − 1)·n/2 = n²/2 − n/2

Time complexity is O(n²).


Algorithm 2

procedure maximumdifference(a1, a2, …, an: integers)
    min := a1
    max := a1
    for x := 2 to n
        if ax < min then min := ax
        else if ax > max then max := ax
    diff := max − min

Total number of comparisons: 2n − 2

Time complexity is O(n).

Which one is efficient?

Obviously Algorithm 2 is more efficient, as it has linear complexity!
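
Here is a minimal Python sketch of both algorithms for a non-empty list (the function names are our own illustrative choices):

def max_difference_quadratic(nums):
    # Algorithm 1: compare every pair -- O(n^2) comparisons.
    diff = 0
    for x in range(len(nums) - 1):
        for y in range(x + 1, len(nums)):
            if abs(nums[x] - nums[y]) > diff:
                diff = abs(nums[x] - nums[y])
    return diff

def max_difference_linear(nums):
    # Algorithm 2: one pass tracking min and max -- O(n) comparisons.
    lo = hi = nums[0]
    for v in nums[1:]:
        if v < lo:
            lo = v
        elif v > hi:
            hi = v
    return hi - lo

data = [11, 5, 73, -1, 400, 80, 101]
print(max_difference_quadratic(data), max_difference_linear(data))  # 401 401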

4. Searching and Sorting Complexities

4.1 Introduction
In this chapter we explore applying asymptotic notations to measure the efficiency of
some famous algorithms. "How efficient is an algorithm?" is an important question.
Efficiency concerns resources like CPU (time), memory, network and disk usage; our main
concern is measuring the resource requirements. The performance of an algorithm, i.e. how
many actual resources it uses, depends on the machine and compiler as well as on the
coding of the algorithm. How these resource requirements scale with increasing
problem/input size is the complexity of the algorithm, i.e. how the number of operations
relates to the problem size.

4.2 Complexities of some fundamental Algorithms


Here in this chapter we look at some simple algorithms for searching and sorting data items
in discrete mathematics, and we analyze them.
 Searching is the process of finding an item with specified properties among a
collection of items.
 Sorting is the process of arranging a list of items in a well-defined order, for
example arranging a list numerically or alphabetically.


4.2.1 Search Algorithms

Problem: Given a list of items a0, a1, …, an-1 and an item K that is to be searched for in the
list, return an index value i such that ai == K; if K is not in the list, return "not found".

4.2.1.1 Linear Search Algorithm

This algorithm scans the input list and compares the item to be searched for with every
item present in the list. If a match is found it returns the item's position, else it reports
failure.

Pseudo-Code:
Linear Search (List, ItemValue)
Begin
    Loc = 0                                           // initialize Loc
    While (Loc < List.Length) AND (List[Loc] != ItemValue)
        Loc = Loc + 1
    If Loc < List.Length, return Loc                  // found at index Loc
    Else return -1                                    // not found
End

Value addition: Animation

Heading Text: How does Linear Search work?
Body Text: We start with the input list:

5 3 11 10 9 15

We try to search for 11 in the list.

Compare 11 with 5 (position 0): does not match!

Compare 11 with 3 (position 1): does not match!

Compare 11 with 11 (position 2): MATCH! Element found.

Assume the list length is N.

• Best case: Occurs when the ItemValue to be searched for is found at the first position
examined in the given list. The number of comparisons is 1, thus O(1).
• Worst case: Occurs when the ItemValue to be searched for is found at the last position or
is not found at all in the list. The number of comparisons is N, thus O(N).
• Average case: We assume there is a 50% probability that the ItemValue is in the list. If it
is in the list, it has an equally likely chance of being at any of the N locations, i.e. a 1/N
chance of being at any particular location. Around 1 comparison is needed when the
ItemValue is close to the beginning, and around N comparisons when it is near the end; on
average, there are around N/2 comparisons. The total number of comparisons done by
linear search on average can thus be estimated as follows:
Total comparisons = ½ · (comparisons when ItemValue is present) + ½ · (comparisons
when ItemValue is not present)
⟹ Total comparisons = ½ · (N/2) + ½ · N
⟹ Total comparisons = N/4 + N/2 = 3N/4 = O(N)

Asymptotically, therefore, the worst-case cost and the expected cost of linear search are
both O(N).


4.2.1.2 Binary Search Algorithm

This algorithm compares the ItemValue to be searched for with the middle item of the
(sorted) input list; if they are equal, the search is over. If the ItemValue is greater than the
middle item, we search the right half recursively, otherwise we search the left half
recursively, until the middle item equals the input item.

Pseudo-Code:
Binary Search (List[], int ItemValue) {
    int low = 0;
    int mid;
    int high = List.length - 1;
    while (low <= high) {
        mid = (low + high) / 2;
        if (ItemValue < List[mid])
            high = mid - 1;       // search in lower half
        else if (List[mid] < ItemValue)
            low = mid + 1;        // search in upper half
        else return mid;          // success
    }
    return -1;                    // ItemValue is not in the array
}
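
A direct Python transcription of the pseudocode above (remember that the list must already be sorted):

def binary_search(lst, item):
    # Iterative binary search on a list sorted in increasing order.
    low, high = 0, len(lst) - 1
    while low <= high:
        mid = (low + high) // 2
        if item < lst[mid]:
            high = mid - 1        # search in lower half
        elif lst[mid] < item:
            low = mid + 1         # search in upper half
        else:
            return mid            # success
    return -1                     # item is not in the list

print(binary_search(list(range(1, 26)), 24))  # 23, the index of the value 24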

Value addition: Animation

Heading Text: How does Binary Search work?
Body Text: We are given the sorted list 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
20 21 22 23 24 25, and we try to find 24 in the list.

Search interval: 1 … 25. Middle element: 13. (13 != 24, but 13 < 24, so search the right
half.)

Search interval: 14 … 25. Middle element: 19. (19 != 24, but 19 < 24, so search the right
half.)

Search interval: 20 … 25. Middle element: 22. (22 != 24, but 22 < 24, so search the right
half.)

Search interval: 23 … 25. Middle element: 24. FOUND 24!

We assume List.length is N.

• Best case: Occurs when the ItemValue to be searched for is found at the first middle
position examined in the given list. The number of comparisons is 1, thus O(1).
• Worst case: Occurs when the ItemValue to be searched for is not in the list. The while
loop executes while low <= high, and the size of the search interval is halved in each
iteration, i.e.
N, N/2, N/4, …, N/2ᵏ, …, 1, which gives the following estimate:
⟹ N/2ᵏ = 1, i.e. 2ᵏ = N
⟹ k = log₂N steps; at each step the number of comparisons is constant. This gives a total
of O(log₂N) comparisons.
• Average case: We analyze iteration by iteration and count the comparisons as follows:

1 item can be found with 1 comparison, 2 items can be found with 2 comparisons, 4 items
can be found with 3 comparisons, and so on; in general 2ⁱ⁻¹ items need i comparisons, for
i up to about log₂N. Averaging over all N (equally likely) positions:

(1/N) · Σᵢ i·2ⁱ⁻¹ (summing i from 1 to log₂N) = O(log₂N)

Asymptotically, therefore, the worst-case cost and the expected cost of binary search are
both O(log₂N).

4.2.2 Sorting Algorithms

4.2.2.1 Bubble Sort Algorithm

Bubble sort is a simple sorting algorithm. It works by repeatedly scanning through the list to
be sorted, comparing each pair of adjacent items and swapping them if they are not in the
right order. The pass through the list is repeated until no swaps are needed, which indicates
that the list is sorted. The algorithm is named after the way smaller items "bubble" to
the top of the list.

Pseudo-Code:
Bubble Sort (List[]) {
    for i = 1 to List.length - 1 {
        for j = 1 to List.length - i {
            if List[j] > List[j+1]
                swap(List[j], List[j+1]) } } }
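
The description above says the passes repeat until no swaps are needed, while the pseudocode always runs all the passes. A runnable Python sketch that adds that early-exit check:

def bubble_sort(lst):
    # Bubble sort with an early exit once a pass makes no swaps.
    n = len(lst)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):       # the last i items are already in place
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                swapped = True
        if not swapped:                  # no swaps: the list is sorted
            break
    return lst

print(bubble_sort([3, 5, 11, 10, 2, 1]))  # [1, 2, 3, 5, 10, 11]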

Value addition: Animation

Heading Text: How does Bubble Sort work?
Body Text: We are given the list 3 5 11 10 2 1 and try to sort it. The outer loop counts
the passes; the inner loop decides which adjacent pairs are swapped in each pass.

PASS 1

3 5 11 10 2 1 — compare 3, 5: NO SWAP required
3 5 11 10 2 1 — compare 5, 11: NO SWAP required
3 5 10 11 2 1 — compare 11, 10: SWAP required
3 5 10 2 11 1 — compare 11, 2: SWAP required
3 5 10 2 1 11 — compare 11, 1: SWAP required

PASS 2

3 5 10 2 1 11 — compare 3, 5: NO SWAP required
3 5 10 2 1 11 — compare 5, 10: NO SWAP required
3 5 2 10 1 11 — compare 10, 2: SWAP required
3 5 2 1 10 11 — compare 10, 1: SWAP required

PASS 3

3 5 2 1 10 11 — compare 3, 5: NO SWAP required
3 2 5 1 10 11 — compare 5, 2: SWAP required
3 2 1 5 10 11 — compare 5, 1: SWAP required

PASS 4

2 3 1 5 10 11 — compare 3, 2: SWAP required
2 1 3 5 10 11 — compare 3, 1: SWAP required

PASS 5

1 2 3 5 10 11 — compare 2, 1: SWAP required

Finally, after 5 passes, we get the sorted list.

We assume List.length is n.

Considering the above pseudocode, the analysis for the worst case, best case and average
case is the same.

We can see that the outer loop runs n − 1 times. The only complexity in this analysis is in
the inner loop: it can never iterate more than n times, and since the outer loop makes the
inner loop run n − 1 times, the number of comparisons is not more than O(n²).

The first time the inner loop runs, it makes n − 1 comparisons; the next time it makes
n − 2 comparisons; and so on. Therefore, the actual number of comparisons is

(n − 1) + (n − 2) + … + 1 = (n − 1)·n/2, which is also O(n²).

Asymptotically, therefore, the worst-case cost and the expected cost of bubble sort are both
O(n²).

4.2.2.2 Insertion Sort Algorithm

Insertion sort works by taking elements from the list one by one and inserting them at their
correct position in the sorted part of the list.

Pseudo-Code:

Insertion Sort (List[])
    for j ← 2 to n
        key ← List[j]
        // Insert List[j] into the sorted sequence List[1 .. j-1]
        i ← j - 1
        while i > 0 and List[i] > key
            List[i + 1] ← List[i]
            i ← i - 1
        List[i + 1] ← key
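
A runnable Python version of the same pseudocode, shifted to 0-based indexing:

def insertion_sort(lst):
    # In-place insertion sort, mirroring the pseudocode above.
    for j in range(1, len(lst)):
        key = lst[j]
        i = j - 1
        while i >= 0 and lst[i] > key:   # shift larger elements one step right
            lst[i + 1] = lst[i]
            i -= 1
        lst[i + 1] = key                 # insert key at its correct place
    return lst

print(insertion_sort([5, 3, 11, 10, 9, 15]))  # [3, 5, 9, 10, 11, 15]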
Value addition: Animation
Heading Text: How Insertion Sort works???
Body Text: It is like sorting a hand of playing cards. We start with an empty left hand and the cards face down on the table. We remove one card at a time from the table and insert it into its correct position in the left hand, comparing it with each of the cards already in the hand, from right to left. The cards held in the left hand are always sorted. The following figure should be animated.

[Figure: the sorted hand 5 10 14 20 with a new card 7 being inserted; the correct place for 7 is between 5 and 10.]

Similarly, consider the following array, to which we apply insertion sort.

5 3 11 10 9 15

3 5 11 10 9 15

3 5 11 10 9 15

3 5 10 11 9 15

3 5 9 10 11 15

3 5 9 10 11 15

We assume List.length is n.

Algorithm Step                                               Cost   Times

for j = 2 to n                                               c1     n
key ← List[j]                                                c2     n − 1
// Insert List[j] into the sorted sequence List[1 .. j − 1]  0      n − 1
i ← j − 1                                                    c4     n − 1
while i > 0 and List[i] > key                                c5     Σⱼ₌₂ⁿ tⱼ
List[i + 1] ← List[i]                                        c6     Σⱼ₌₂ⁿ (tⱼ − 1)
i ← i − 1                                                    c7     Σⱼ₌₂ⁿ (tⱼ − 1)
List[i + 1] ← key                                            c8     n − 1

Here tⱼ denotes the number of times the while-loop test is executed for a given value of j.


Time taken is:

T(n) = c1·n + c2(n − 1) + c4(n − 1) + c5·Σⱼ₌₂ⁿ tⱼ + c6·Σⱼ₌₂ⁿ (tⱼ − 1) + c7·Σⱼ₌₂ⁿ (tⱼ − 1) + c8(n − 1)

Best case: Occurs when the input list is already sorted. Insertion sort iterates once through the list and, finding no items out of order, does not shift any of the data. Thus it is O(n). For an already sorted array, tⱼ = 1, so

T(n) = c1n + c2(n − 1) + c4(n − 1) + c5(n − 1) + c8(n − 1)
     = (c1 + c2 + c4 + c5 + c8)n − (c2 + c4 + c5 + c8)
     = an + b = Θ(n)

Worst case: Occurs when the list is sorted in reverse order, i.e. the smallest item sits at the far end. Then tⱼ = j, and since Σⱼ₌₂ⁿ j = n(n + 1)/2 − 1 and Σⱼ₌₂ⁿ (j − 1) = n(n − 1)/2, we get

T(n) = c1n + c2(n − 1) + c4(n − 1) + c5(n(n + 1)/2 − 1) + c6·n(n − 1)/2 + c7·n(n − 1)/2 + c8(n − 1)
     = an² + bn + c
     = Θ(n²)

Average case: The average case is also quadratic, O(N²), which makes insertion sort impractical for sorting large arrays.

Asymptotically, therefore, the worst-case cost and the expected cost of insertion sort are both O(N²).
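
One can observe the Θ(n) versus Θ(n²) gap empirically by counting the while-loop shifts (the tⱼ − 1 terms) on sorted, reversed and random inputs. A small sketch of our own follows; count_shifts is a hypothetical helper, not part of the lesson:

import random

def count_shifts(data):
    # Run insertion sort on a copy of data, returning total element shifts.
    a, shifts = list(data), 0
    for j in range(1, len(a)):
        key, i = a[j], j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
            shifts += 1
        a[i + 1] = key
    return shifts

n = 1000
print("sorted:  ", count_shifts(range(n)))                    # 0 shifts
print("reversed:", count_shifts(range(n, 0, -1)))             # n(n-1)/2 = 499500
print("random:  ", count_shifts(random.sample(range(n), n)))  # about n*n/4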

We have just discussed some famous sorting algorithms. There are more sorting algorithms, such as quick sort and merge sort; the reader is encouraged to study and analyze them as well.


Summary
• Asymptotic notation mathematically describes the limiting behavior of a function.
• An algorithm is a step-wise procedure for solving a given problem.
• Asymptotic notations are important in the analysis of algorithms, i.e. measuring efficiency
and comparing the efficiencies of different algorithms.
• Categorization of algorithms is based on their asymptotic growth rate.
• Asymptotic analysis is a measure of the execution of an algorithm, usually the time
or memory needed, given the problem size n, which is usually the number of items.
• Asymptotic upper and lower bounds can be asymptotically tight or non-tight.
• Tight bounds are Big oh, Big omega, Big Theta.
• Big oh: O(g(n)) = { f(n) : there exist constants c > 0 and n₀ > 0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }.
• Big Omega: Ω(g(n)) = { f(n) : there exist constants c > 0 and n₀ > 0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }.
• Big Theta: Θ(g(n)) = { f(n) : there exist constants c₁ > 0, c₂ > 0 and n₀ > 0 such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ }.
• We have to find constants c > 0 (or c₁ > 0, c₂ > 0) and a threshold n₀ such that the above
inequalities hold in order to establish the notations (a numerical sanity check of this appears after this summary).
• f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

• Asymptotic loose bounds are little o and little omega.

• Little o: o(g(n)) = { f(n) : for every c > 0 there exists n₀ > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n₀ }.

• Little omega: ω(g(n)) = { f(n) : for every c > 0 there exists n₀ > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n₀ }.

• Asymptotic notations satisfy transitivity; O, Ω and Θ are reflexive, and Θ is additionally symmetric.

• Asymptotic notations are used for measuring the complexities of algorithms by
determining the cost of each operation or looping construct present in the program
code of the algorithm.

• Asymptotic notations can be used in measuring the efficiency of algorithms.

• Searching and Sorting are two famous problems from an algorithmic perspective.

• Searching: Linear Search, Binary Search.

• Sorting: Insertion Sort, Bubble Sort, etc.

• Comparison of complexities:

                 Worst case    Average case    Best case

Linear Search    O(N)          O(N)            O(1)

Binary Search    O(log N)      O(log N)        O(1)

Bubble Sort      O(N²)         O(N²)           O(N)

Insertion Sort   O(N²)         O(N²)           O(N)
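
Before turning to the exercises, here is the numerical sanity check promised above: a quick (non-proof) Python test of witness constants for the claim 5n² + 7n + 20 = O(n²), using the assumed witnesses c = 6 and n₀ = 10 (both chosen by us for this sketch):

def f(n): return 5 * n**2 + 7 * n + 20
def g(n): return n**2

c, n0 = 6, 10
# Check 0 <= f(n) <= c*g(n) on a finite range; suggestive, not a proof.
assert all(0 <= f(n) <= c * g(n) for n in range(n0, 100000))
print("0 <= f(n) <= c*g(n) holds for all tested n >= n0")

A pencil-and-paper proof would show 5n² + 7n + 20 ≤ 6n² whenever n² ≥ 7n + 20, which holds for all n ≥ 10.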

Exercises
1. Define asymptotic notation.
2. Describe an algorithm that interchanges the values of the variables x and y, using
only assignments. What is the minimum number of assignment statements needed
to do this? Give analysis in terms of primitive operations performed.
3. A computer performs 10⁹ operations per second. What is the largest problem you can solve in an hour if your algorithm takes 3ⁿ operations on a problem of size n?
4. What is importance of asymptotic analysis?
5. Describe an algorithm that takes as input a list of n integers in non-decreasing order and produces the list of all values that occur more than once.
6. If the input size is n, an algorithm A takes 1000n steps and another algorithm B takes n³ steps. Which one is better? Justify for different values of n.
7. List the properties of an algorithm. Take an example of an algorithm and justify
whether these properties are satisfied.
8. What are the factors which affect the running time of an algorithm?
9. What is the smallest value of n such that an algorithm whose running time is 100n² runs faster than an algorithm whose running time is 2ⁿ on the same machine?
10. Describe an algorithm for finding both the largest and the smallest integers in a finite
sequence of integers.
11. Devise an algorithm that finds the sum of all the integers in a list.
12. Arrange the following in order of their growth rates for large values of n:
a) n³
b) log n
c) 2ⁿ
d) n log n
e) 2²ⁿ
f) 100n
g) n
13. Using definitions of asymptotic notations, prove or disprove the following:


a) 5n² + 7n + 20 = O(n²)

b) n^2.1 = O(n²)

c) x⁴/1000 + 3x³ + 7x² − 9 = Θ(x⁴)

d) 200n² = O(n²) = Ω(n²)

e) 5n² is Ω(n)

14. Let f(n) be a polynomial of degree m. Verify that f(n) is O(nᵐ). Prove that f(n) is Θ(nᵐ).


15. Prove or disprove: f(n) = O(g(n)) implies g(n) = Ω(f(n)).
16. Let f(n) = (n + 8)^1.7 + 7n + 5. Prove that f(n) is O(n^1.7).
17. Prove or disprove: f(n) ∈ Ω(g(n)) implies g(n) ∈ O(f(n)).
18. Given two algorithms, decide which is better in the Big-Oh sense, and find a problem size n₀ such that for any larger size n > n₀ the chosen algorithm outperforms the other. If your problems are of size n ≤ 10⁹, which algorithm would you recommend?
19. Is n²/4 Θ(n) or Θ(n²)?
20. To compute xⁿ, write an algorithm and calculate its running time; improve the running time by proposing a new algorithm if possible.
21. Give the complexity of the following code segments. Assume sum is initialized to 0.

a. for i = 1 to n
       { sum = sum + i; }

   for i = 1 to n²
       { for j = 1 to i
           { sum = sum + i; } }

b. for i = 1 to n
       { for j = 1 to i*i
           { for k = 1 to j
               { sum = sum + i; } } }

c. double x, y;
   x = 3.5; y = 4.0;

   for (int i = 0; i < n; i++) {
       list[i] = x * y;
       x = 3.5 * x;
       y = y + list[i];
   }


d. for (int i = k; i < n; i = i * m) {
       sum = sum + i;
   }

23. If f belongs to O(g) and g belongs to O(h), then f belongs to O(h); that is, O is transitive. Prove that o, ω, Ω and Θ are also transitive.
24. When does bubble sort perform best? Explain with an example.

25. Which is more efficient, linear search or binary search, and why? Explain with an example.
26. Give asymptotic analysis of Insertion sort and binary search algorithms.
27. Prove or disprove that the best case for insertion sort can be Ω(n). Can it be Θ(n)?
28. How many comparisons are needed in best case of bubble sort?
29. Would it be efficient to use insertion sort for data of, say, 100,000 entries? What about 100 entries?
30. Trace insertion sort and bubble sort on following data and compare the two algorithms:

25 80 45 20 60 45

Glossary
1. Algorithm: A computable set of steps to achieve a desired result.
2. Array: An assemblage of items that is randomly accessible by integers, the index.
3. Asymptote: A line whose distance to a given curve tends toward zero.
4. Asymptotic bound: A curve representing the limit of a function. That is, the distance
between a function and the curve tends to zero. The function may or may not
intersect the bounding curve.
5. Asymptotically tight bound: When the asymptotic complexity of an algorithm exactly
matches the theoretically proved asymptotic complexity of the corresponding
problem. Informally, when an algorithm solves a problem at the theoretical
minimum.
6. Asymptotic lower bound: An asymptotic bound, as a function of the size of the input, on the best (fastest, least amount of space used, etc.) an algorithm can possibly achieve to solve a problem. That is, no algorithm can use fewer resources than the bound.
7. Asymptotic Upper bound: An asymptotic bound, as a function of the size of the input, on the worst (slowest, most amount of space used, etc.) an algorithm will do to solve a problem.
8. Big-Oh: A theoretical measure of the execution of an algorithm, usually the time or
memory needed, given the problem size n, which is usually the number of items.


Informally, saying some equation f(n) = O(g(n)) means it is less than some
constant multiple of g(n). The notation is read, "f of n is big oh of g of n".
9. Binary search: A technique for quickly locating an item in a sorted sequential list. The desired
key is compared to the data in the middle of a sequential index or in the middle of a
sequential file. The half that contains the data is then compared in the middle, and
so on, either until the key is located or a small enough group is isolated to be
sequentially searched.
10. Bubble Sort: A sorting technique that is typically used for sequencing small lists. It
starts by comparing the first item to the second, the second to the third and so on
until it finds one item out of order. It then swaps the two items and starts over. The
sort may alternate from the top of the list to the bottom and then from the bottom
to the top. The name comes from the notion that items are raised or "bubbled up"
to the top.
11. Complexity: The intrinsic minimum amount of resources, for instance, memory,
time, messages, etc., needed to solve a problem or execute an algorithm.
12. Function: A computation which takes some arguments or inputs and yields an
output. Any particular input yields the same output every time. More formally, a
mapping from each element in the domain to an element in the range. (2) A
subroutine which returns a value.
13. Insertion sort: A simple sorting technique that scans the sorted list, starting at the
beginning, for the correct insertion point for each of the items from the unsorted
list. Similar to the way people manually sort items, an insertion sort is not efficient
for large lists, but is relatively fast for adding a small number of new items
periodically.
14. Linear: Sequential or having a graph that is a straight line.
15. Little-o: A theoretical measure of the execution of an algorithm, usually the time
or memory needed, given the problem size n, which is usually the number of items.
Informally, saying some equation f(n) = o(g(n)) means f(n) becomes insignificant
relative to g(n) as n approaches infinity. The notation is read, "f of n is little oh of g
of n".
16. Little-omega: A theoretical measure of the execution of an algorithm, usually the
time or memory needed, given the problem size n, which is usually the number of
items. Informally, saying some equation f(n) = ω (g(n)) means g(n) becomes
insignificant relative to f(n) as n goes to infinity.
17. Monotonicity: In mathematics, designating sequences the successive members of which
either consistently increase or decrease but do not oscillate in relative value. Each
member of a monotone increasing sequence is greater than or equal to the
preceding member; each member of a monotone decreasing sequence is less than
or equal to the preceding member.
18. Notation: Any series of signs or symbols used to represent quantities or elements in
a specialized system, such as music or mathematics.
19. Omega: The Greek letter written as ω (n) or Ω (n). A theoretical measure of the
execution of an algorithm, usually the time or memory needed, given the problem
size n, which is usually the number of items. Informally, saying some equation f(n)
= Ω (g(n)) means it is more than some constant multiple of g(n).
20. Reflexive: Logic (of a relation) always holding between a term and itself.
21. Running Time: The execution time of an algorithm.

Institute of Lifelong learning, University of Delhi


Asymptotic Notations

22. Theta: A theoretical measure of the execution of an algorithm, usually the time or
memory needed, given the problem size n, which is usually the number of items.
Informally, saying some equation f(n) = Θ (g(n)) means it is within a constant
multiple of g(n). The equation is read, "f of n is theta g of n".
23. Time Complexity: The limiting behavior of the execution time of an algorithm when
the size of the problem goes to infinity.
24. Transitive: A binary relation R for which a R b and b R c implies a R c.
25. Search: To look for specific data in a file or an occurrence of text in a file. A search
implies sequential scanning of content or indexes in order to find the results rather
than a direct lookup. A search on the Web yields a list of Web pages that contain all
the words in the search criteria.
26. Space Complexity: The limiting behavior of the use of memory space of an
algorithm when the size of the problem goes to infinity.

References

Suggested Readings
1. Kenneth Rosen, Discrete Mathematics and Its Applications, Sixth Edition

2. T.H. Cormen, C.E. Leiserson, R.L. Rivest, Introduction to Algorithms, Prentice Hall of India (Second Edition)

Web Links

http://en.wikipedia.org/wiki/

http://www.itl.nist.gov/div897/sqg/dads/#A

http://www.pcmag.com/encyclopedia/

http://www.thefreedictionary.com/notation
