2nd Level - Cyber Security and Networks
Data Structures and Algorithms
Lecture 2
Dr. Taghreed Abdullah
Content
Time Complexity
Worst case, Best case and Average case analysis
Asymptotic Notation
Big oh notation, Omega notation, Theta notation
Array and its Operations
Time Complexity of Algorithms
Algorithm 1 (test every i from 2 to N-1):
    For i = 2 to N-1
        if N is divisible by i
            N is not prime
    End
    Iterations: N - 2

Algorithm 2 (test every i from 2 to √N):
    For i = 2 to √N
        if N is divisible by i
            N is not prime
    End
    Iterations: √N - 1

Running time (about 1 ms per iteration):
    N = 11            Algorithm 1: (11-2) = 9 ms           Algorithm 2: (3-1) = 2 ms
    N = 101           Algorithm 1: 99 ms                   Algorithm 2: 9 ms
    N = 10^6 + 3      Algorithm 1: 10^6 ms ≈ 16.6 min      Algorithm 2: 10^3 ms = 1 sec
    N = 10^10 + 19    Algorithm 1: 10^10 ms ≈ 115 days     Algorithm 2: 10^5 ms ≈ 1.7 min
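As an illustration (not from the slides), Algorithm 2 could be written in C++ as follows; the function name isPrime is an assumption, and the condition i * i <= N plays the role of i ≤ √N:

// Trial division up to the square root of N (Algorithm 2).
bool isPrime(long long N) {
    if (N < 2) return false;
    for (long long i = 2; i * i <= N; i++) {   // i goes from 2 up to sqrt(N)
        if (N % i == 0)
            return false;    // a divisor was found, so N is not prime
    }
    return true;             // no divisor found, so N is prime
}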
Time Complexity of Algorithms
Time complexity is the amount of time taken by an algorithm to run.
It measures the time taken to execute each statement of the code in an algorithm.
Time complexity can be computed with the step-count (frequency-count) method and expressed with asymptotic notation.
Time Complexity of Algorithms
[Diagram: the same algorithm running on Machine 1 and Machine 2]
Does the algorithm take the same time on the two machines?
Not necessarily: each machine has its own specific features.
Time Complexity of Algorithms
Running time depends on several factors:
• Computer speed and processor
• Programming language
• The compiler
• Input
Time Complexity of Algorithms
How do we represent the quality of an algorithm?
• "How much time does the code take on my computer?" (machine-dependent)
• A more general way: a mathematical equation, i.e., a relation between input size and time.
Time Complexity of Algorithms
Frequency count method
• Arithmetic operators (+, -, *, /)    1 unit of time
• Assignment operator (=)              1 unit of time
• Comparison (x < y)                   1 unit of time
• Return statement                     1 unit of time
Time Complexity of Algorithms
1. Constant Time

Function sum(X, Y)
    result = X + Y        2 units
    return result         1 unit
End Function

F(n) = 2 + 1 = 3  →  O(1)

O(1): constant. The operation does not depend on the size of its input; the values of X and Y have no effect on the count.
Time Complexity of Algorithms
2. Linear Time

Function SumArray(arr, n)
    sum = 0                      1
    for (i = 0; i < n; i++)      2n + 2
    {
        sum = sum + arr[i]       2n
    }
    return sum                   1
End Function

F(n) = 1 + (2n + 2) + 2n + 1 = 4n + 4  →  O(n)

O(n): linear. The running time is proportional to the size n of the input.

Note: 2n + 2 can be simplified to n, as in the next example.
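A runnable C++ version of this pseudocode (a sketch; the parameter types are an assumption):

int sumArray(const int arr[], int n) {
    int sum = 0;                    // 1 unit
    for (int i = 0; i < n; i++) {   // 2n + 2 units
        sum = sum + arr[i];         // 2n units
    }
    return sum;                     // 1 unit
}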
Time Complexity of Algorithms
3. Quadratic Time

Function 2DArray(arr, n)
    for (i = 0; i < n; i++)          n
    {
        for (j = 0; j < n; j++)      n × n
        {
            print arr[i][j]          n × n
        }
    }
    return                           1
End Function

F(n) = n + n² + n² + 1 = 2n² + n + 1  →  O(n²)

O(n²): quadratic. The running time scales quadratically with the input size.
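A C++ sketch of the same pattern (the size N and the function name are assumptions):

#include <iostream>
using namespace std;

const int N = 3;    // example size

// Prints every element of an N x N array; the inner statement
// executes N * N times, so the running time is O(N^2).
void print2DArray(int arr[N][N]) {
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            cout << arr[i][j] << " ";
        }
        cout << endl;    // new line after each row
    }
}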
Asymptotic Notations
Big O Notation
• Time taken in terms of the input size
• Rate of growth: how much does the time increase when the input size increases?
Asymptotic Notations
• The asymptotic notation of an algorithm is a mathematical representation of its complexity.
• Asymptotic analysis of an algorithm refers to defining the mathematical foundation/framing of its run-time performance.
• Using asymptotic analysis, we can conclude the best-case, average-case, and worst-case scenarios of an algorithm.
• Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation.
Asymptotic Notations
For example, the running time of one operation may be computed as:
• f(n): the running time increases linearly as n increases.
• g(n) = 2ⁿ: the running time increases exponentially as n increases.
Asymptotic Notations
The time required by an algorithm falls under three types:

Best Case − minimum time required for program execution.
Example: searching for 4 in the array 4 7 6 8 9. It is found at the first position after a single comparison: O(1).

Worst Case − maximum time required for program execution.
Example: searching for 9 in the array 4 7 6 8 9. Every element has to be examined before 9 is found at the last position: O(n).
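A simple linear search in C++ makes both cases concrete (a sketch; the function name is an assumption). Searching for 4 in the array above stops after one comparison (best case), while searching for 9 requires examining all five elements (worst case):

// Returns the index of key in a[0..length-1], or -1 if not found.
int linearSearch(const int a[], int length, int key) {
    for (int i = 0; i < length; i++) {
        if (a[i] == key)
            return i;     // found: stop immediately
    }
    return -1;            // not found
}

// int data[] = {4, 7, 6, 8, 9};
// linearSearch(data, 5, 4);   // best case: 1 comparison   -> O(1)
// linearSearch(data, 5, 9);   // worst case: 5 comparisons  -> O(n)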
Asymptotic Notations
• Average Case − average time required for program execution.

Following are the commonly used asymptotic notations to calculate the running time complexity of an algorithm:
• Ο notation (Big O): used to define the upper bound of an algorithm in terms of time complexity. It describes the worst case.
• Ω notation (Omega): used to define the lower bound of an algorithm in terms of time complexity. It describes the best case.
• θ notation (Theta): it describes the average case.
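A worked example (not on the slide): from the SumArray analysis, F(n) = 4n + 4 is O(n), because for every n ≥ 4 we have 4n + 4 ≤ 4n + n = 5n, so F(n) never exceeds a constant (5) times n.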
Asymptotic Notations
In the theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense: estimate the complexity function for arbitrarily large input.
Big O notation (O), Omega notation (Ω), and Theta notation (θ) are often used to this end.
Usually, asymptotic estimates are used because different implementations of the same algorithm may differ in complexity.
Asymptotic Notation
What is Big O?
A simplified analysis of an algorithm's efficiency:
1. Complexity in terms of the input size, N
2. Machine independent
3. Counts basic computer steps
ARRAY
Array
Why Array?
• You want to store 5 numbers in a program
o No problem. You define five int variables: int num1, num2, num3, num4, num5;
o Easy enough, right?
o But what if you want to store 1000 numbers?
Are you really going to make 1000 separate variables?
int num1, num2,..., num998, num999, num1000;
That would be CRAZY!
• So what is the solution?
o A data structure! Specifically, an array!
An array is one of the most common data structures.
Array
Features of Array
An array is one of the most common data structures
An array is a collection of elements of the same data type, stored at contiguous memory locations.
An array is a data structure used to store multiple values of similar data types in contiguous memory locations under one name.
Elements of an array can be accessed using their indices.
Once an array is declared, its size remains constant throughout the program.
Array
Array_name[index]
For example, in C++:
• cout << data[4];    will display 0
• data[3] = 99;       will replace -3 with 99
• data[-1]            illegal (negative index)
• data[10]            illegal (10 > upper bound, which is 9)
• data[1.5]           illegal (an index must be an integer)
• data[0]             OK
• data[9]             OK
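These statements assume an array such as the following (a hypothetical declaration consistent with the slide: 10 int elements, with data[3] equal to -3 and data[4] equal to 0; the other values are placeholders):

#include <iostream>
using namespace std;

int main() {
    int data[10] = {12, 5, 7, -3, 0, 8, 21, 4, 9, 17};   // indices 0 to 9

    cout << data[4] << endl;    // prints 0
    data[3] = 99;               // replaces -3 with 99
    cout << data[3] << endl;    // prints 99
    return 0;
}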
Array
Array Dimensionality
One dimensional (just a linear list)
5 10 18 30 45 50 60 65 70 80
• Only one subscript is required to access an individual element
Two dimensional (matrix or table)
          Col 0   Col 1   Col 2   Col 3
Row 0       20      25      60      40
Row 1       30      15      70      90
Array
2D Array
• Given the following array (whose name is ‘M’)
20 25 60 40
30 15 70 90
Two indices/subscripts are now required to reference a
given cell
• M[0][0] ?
• M[1][2] ?
• M[3][4] ?
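In C++, the matrix above and the answers to these questions look as follows (a sketch; M[0][0] is 20, M[1][2] is 70, and M[3][4] is illegal because M has only 2 rows and 4 columns):

#include <iostream>
using namespace std;

int main() {
    int M[2][4] = { {20, 25, 60, 40},
                    {30, 15, 70, 90} };   // 2 rows x 4 columns

    cout << M[0][0] << endl;   // 20
    cout << M[1][2] << endl;   // 70
    // M[3][4] would be illegal: row 3 and column 4 do not exist
    return 0;
}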
Array
Array Representation
• The representation of an array can be defined by
its declaration.
• A declaration means allocating memory for an
array of a given size.
Array
Remember: “Location of next index depends on the data type we use”.
• For example, in the next figure the data type of the array is char, which means each element occupies only one byte.
• We notice that the memory location of the first element is 200, and the memory location of the next element is 201.
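A small C++ example makes the contiguity visible (a sketch; the actual addresses printed depend on the machine, they will not literally be 200 and 201):

#include <iostream>
using namespace std;

int main() {
    char letters[4] = {'a', 'b', 'c', 'd'};

    // Each char occupies one byte, so consecutive elements
    // sit at consecutive memory addresses.
    for (int i = 0; i < 4; i++) {
        cout << (void*)&letters[i] << "  " << letters[i] << endl;
    }
    return 0;
}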
Array
Memory and Initialization
• When the array is created, memory is reserved for its contents
• You can also specify the values for the array directly when it is declared, instead of allocating it with the new operator.
• Example:
  int grade[] = {95, 93, 88};   // array of 3 ints
• To print the element at index 1:
  cout << grade[1];   // prints 93
ARRAY
As a Data Structure
What is an Array
• An array is a data structure
• An array is a collection of items of the same data type stored at
contiguous memory locations.
• It is a collection of variables of the same type
• Examples:
• An array of student grades
• An array of student names
• An array of objects (OOP perspective!)
What is an Array
Homogeneity
All elements in an array must have the same data type
Contiguous Memory
Array elements (or their references) are stored in contiguous/consecutive
memory locations
Direct access to an element
Index reference is used to access it directly
Static data structure
An array cannot grow or shrink during program execution…the size is fixed
ARRAY
Operations
Array Operation
1. Accessing/indexing an element using its index
• Performance is very fast
• We can access an index immediately without searching
• myArray [1250] = 55;
• we immediately access array spot 1250 of myArray
void access(double a[], int index) {
    a[index] = a[index] + a[index] * 0.20;    // increase the element by 20%
    cout << "Updated value " << a[index];     // print the updated value
}
Array Operation
2. Traversing an array
• Display all contents of an array
All elements will be displayed
Each element will be displayed exactly once
void display(int a[], int length)
{
    // every element is displayed exactly once
    for (int i = 0; i < length; i++)
        cout << a[i] << " ";
}
Array Operation
3. Insertion: add an element at a certain index
• What if we want to add an element at the beginning?
This would be a very slow operation! Why?
• Because we would have to shift ALL other elements over one position
• What if we add an element at the end?
• It would be FAST. Why? No need to shift.
Array Operation
(a) Insert operation in an unsorted array at the end:
• In an unsorted array, the insert operation is faster as compared
to a sorted array because we don’t have to care about the
position at which the element is to be placed.
Array Operation
(b) Insert an element at any position
• The insert operation at any position in an array is performed by shifting to the right all elements that are on the right side of the required position, as in the sketch below.
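A minimal C++ sketch of insertion at a given position (the function name is an assumption; n is the current number of elements, and the array is assumed to have room for one more):

// Inserts value at index pos in a[0..n-1]; returns the new number of elements.
int insertAt(int a[], int n, int pos, int value) {
    for (int i = n; i > pos; i--)    // shift elements to the right of pos
        a[i] = a[i - 1];
    a[pos] = value;                  // place the new element
    return n + 1;
}

// int a[10] = {5, 10, 18, 30, 45};
// int n = 5;
// n = insertAt(a, n, 2, 99);        // a becomes 5 10 99 18 30 45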
Array Operation
4. Deletion: remove an element at a certain index
• Remove an element at the beginning of the array
• Performance is again very slow, because ALL elements need to shift one position backwards
• Remove an element at the end of an array
• Very fast because of no shifting needed
Array Operation
Delete Operation
• In the delete operation, the element to be deleted is (1) searched for using linear search, then (2) the delete operation is performed, followed by (3) shifting the remaining elements.
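A minimal C++ sketch of deletion by value (the function name is an assumption; it follows the three steps above):

// Deletes the first occurrence of key from a[0..n-1]; returns the new number of elements.
int deleteElement(int a[], int n, int key) {
    int pos = -1;
    for (int i = 0; i < n; i++) {        // (1) linear search for key
        if (a[i] == key) { pos = i; break; }
    }
    if (pos == -1) return n;             // key not found: nothing to delete

    for (int i = pos; i < n - 1; i++)    // (2) + (3) delete by shifting the rest left
        a[i] = a[i + 1];
    return n - 1;
}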
Array Operation
5. Merging: combining the data of two or more arrays into one (a sketch is shown below).
6. Searching through the array:
   Performance depends on the algorithm; some algorithms are faster than others.
   More details about searching (Linear Search and Binary Search) will be given in the next lecture.
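A minimal C++ sketch of merging two arrays by concatenation (the function name is an assumption, and result is assumed to be large enough to hold both arrays):

// Copies a[0..n1-1] followed by b[0..n2-1] into result; returns the merged length.
int mergeArrays(const int a[], int n1, const int b[], int n2, int result[]) {
    for (int i = 0; i < n1; i++)
        result[i] = a[i];           // copy the first array
    for (int j = 0; j < n2; j++)
        result[n1 + j] = b[j];      // append the second array
    return n1 + n2;
}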