
UNIT - I

Overview of Data Structures - Arrays, Stacks, Queues, Linked Lists, Linked Stacks and Linked
Queues, Applications

Algorithm Analysis - Efficiency of Algorithms, Asymptotic Notations, Time Complexity of an
Algorithm using O Notation, Polynomial vs Exponential Algorithms, Average, Best, and Worst
Case Complexities, Analyzing Recursive Programs.

Overview of Data Structures


Introduction to Data Structures
Data Structure is a way of collecting and organizing data in such a way that we can perform
operations on this data effectively. Data Structures are about rendering data elements in
terms of some relationship, for better organization and storage. For example, we have a
player's name "Virat" and age 26. Here "Virat" is of String data type and 26 is of integer data
type.
We can organize this data as a record, like a Player record. Now we can collect and store players'
records in a file or database as a data structure. For example: "Dhoni" 30, "Gambhir" 31,
"Sehwag" 33.
In simple language, Data Structures are structures programmed to store ordered data, so that
various operations can be performed on it easily. It represents the knowledge of data to be
organized in memory. It should be designed and implemented in such a way that it reduces the
complexity and increases the efficiency.

Basic types of Data Structures


Anything that can store data can be called a data structure, hence Integer, Float, Boolean, Char
etc., are all data structures. They are known as Primitive Data Structures.
Then we also have some complex Data Structures, which are used to store large and connected
data. Some examples of Abstract Data Structures are:

 Linked List
 Tree
 Graph
 Stack, Queue etc.
All these data structures allow us to perform different operations on data. We select these data
structures based on which type of operation is required. We will look into these data structures in
more details in our later lessons.
The data structures can also be classified on the basis of the following characteristics:

Characteristic    Description

Linear            In linear data structures, the data items are arranged in a linear sequence.
                  Example: Array

Non-Linear        In non-linear data structures, the data items are not in sequence.
                  Example: Tree, Graph

Homogeneous       In homogeneous data structures, all the elements are of the same type.
                  Example: Array

Non-Homogeneous   In non-homogeneous data structures, the elements may or may not be of the
                  same type. Example: Structures

Static            Static data structures are those whose sizes and associated memory locations
                  are fixed at compile time. Example: Array

Dynamic           Dynamic data structures are those which expand or shrink depending upon the
                  program's need during its execution. Also, their associated memory locations
                  can change. Example: Linked list created using pointers

What is an Algorithm?
An algorithm is a finite set of instructions or logic, written in order, to accomplish a certain
predefined task. An algorithm is not the complete code or program; it is just the core logic (solution)
of a problem, which can be expressed either as an informal high-level description, as pseudocode,
or using a flowchart.
Every Algorithm must satisfy the following properties:

1. Input- There should be 0 or more inputs supplied externally to the algorithm.


2. Output- There should be at least 1 output obtained.
3. Definiteness- Every step of the algorithm should be clear and well defined.
4. Finiteness- The algorithm should have a finite number of steps.
5. Correctness- Every step of the algorithm must generate a correct output.
An algorithm is said to be efficient and fast if it takes less time to execute and consumes less
memory space. The performance of an algorithm is measured on the basis of the following properties:

1. Time Complexity
2. Space Complexity

Space Complexity
It is the amount of memory space required by the algorithm during the course of its execution.
Space complexity must be taken seriously for multi-user systems and in situations where limited
memory is available.
An algorithm generally requires space for the following components:

 Instruction Space: It is the space required to store the executable version of the program. This
space is fixed for a given program, but varies with the number of lines of code in the program.
 Data Space: It is the space required to store all constant and variable values.
 Environment Space: It is the space required to store the environment information needed to
resume a suspended function.

Time Complexity
Time Complexity is a way to represent the amount of time needed by the program to run till its
completion. We will study this in details in later sections.

Time Complexity of Algorithms


Time complexity of an algorithm signifies the total time required by the program to run till its
completion. The time complexity of algorithms is most commonly expressed using the big O
notation.
Time Complexity is most commonly estimated by counting the number of elementary operations
performed by the algorithm. Since an algorithm's performance may vary with different types
of input data, we usually use the worst-case time complexity of an algorithm, because that is the
maximum time taken for any input of a given size.
Calculating Time Complexity
Now let's move on to the next big topic related to Time Complexity, which is How to Calculate Time
Complexity. It can become confusing at times, but we will try to explain it in the simplest
way.
The most common metric for calculating time complexity is Big O notation. This removes all
constant factors so that the running time can be estimated in relation to N, as N approaches
infinity. In general you can think of it like this:
statement;
Above we have a single statement. Its Time Complexity will be Constant. The running time of
the statement will not change in relation to N.
for(i=0; i < N; i++)
{
statement;
}
The time complexity for the above algorithm will be Linear. The running time of the loop is
directly proportional to N. When N doubles, so does the running time.

for(i=0; i < N; i++)


{
for(j=0; j < N;j++)
{
statement;
}
}
This time, the time complexity for the above code will be Quadratic. The running time of the
two nested loops is proportional to the square of N. When N doubles, the running time increases
four-fold.

while(low <= high)


{
mid = (low + high) / 2;
if (target < list[mid])
high = mid - 1;
else if (target > list[mid])
low = mid + 1;
else break;
}
This is an algorithm to break a set of numbers into halves, to search for a particular value (we will
study this in detail later). This algorithm has a Logarithmic Time Complexity. The
running time of the algorithm is proportional to the number of times N can be divided by 2 (N is
high - low here). This is because the algorithm divides the working area in half with each iteration.

void quicksort(int list[], int left, int right)
{
    if (left < right)   /* base case: stop when the sub-list has one element or none */
    {
        int pivot = partition(list, left, right);
        quicksort(list, left, pivot - 1);
        quicksort(list, pivot + 1, right);
    }
}
Taking the previous algorithm forward, above we have the core logic of Quick Sort (we will study
this in detail later). In Quick Sort, we divide the list into halves every time, but we repeat the
iteration N times (where N is the size of the list). Hence the time complexity will be N*log(N). The
running time consists of N loops (iterative or recursive) that are logarithmic, thus the algorithm is
a combination of linear and logarithmic.
NOTE : In general, doing something with every item in one dimension is linear, doing something
with every item in two dimensions is quadratic, and dividing the working area in half is
logarithmic.

Types of Notations for Time Complexity


Now we will discuss and understand the various notations used for Time Complexity.

1. Big Oh denotes "fewer than or the same as" <expression> iterations.


2. Big Omega denotes "more than or the same as" <expression> iterations.
3. Big Theta denotes "the same as" <expression> iterations.
4. Little Oh denotes "fewer than" <expression> iterations.
5. Little Omega denotes "more than" <expression> iterations.

…………………………………………………………………………………………………...

Arrays:

Whenever we want to work with a large number of data values, we need to use that many
different variables. As the number of variables increases, the complexity of the program also
increases whenever we need to work with a large number of similar data values. To make this
work easier, the C programming language provides a concept called "Array".

An array is a variable which can store multiple values of the same data type at a time.

An array can also be defined as follows...

"Collection of similar data items stored in continuous memory locations with single
name".

To understand the concept of arrays, consider the following example declaration.

int a, b, c;

Here, the compiler allocates 2 bytes of memory (assuming a 2-byte int) with the name 'a', another 2
bytes of memory with the name 'b' and 2 more bytes with the name 'c'. These three memory locations
may or may not be in sequence. Each of these individual variables can store only one value at a time.

Now consider the following declaration...

int a[3];

Here, the compiler allocates a total of 6 bytes of contiguous memory locations with the single name 'a',
but it allows us to store three different integer values (each in 2 bytes of memory) at a time. The
memory is organized as follows...

That means all these three memory locations are named 'a'. But "how can we refer to individual
elements?" is the big question. The answer is that the compiler not only allocates memory,
but also assigns a numerical value to each individual element of the array. This numerical value is
called the "Index". Index values for the above example are as follows...

The individual elements of an array are identified using the combination of 'name' and 'index' as
follows...

arrayName[indexValue]

For the above example the individual elements can be referred to as follows...

If we want to assign a value to any of these memory locations (array elements), we can do so as
follows...

a[1] = 100;

The result will be as follows...
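As a small illustration in C (a hedged sketch; the 3-element array and the printed output are just for this example, not taken from the figures above):

#include <stdio.h>

int main(void)
{
    int a[3] = {0, 0, 0};   /* three integers in contiguous memory, all under the single name 'a' */
    int i;

    a[1] = 100;             /* assign a value to the element at index 1 */

    for (i = 0; i < 3; i++)
        printf("a[%d] = %d\n", i, a[i]);   /* each element is referred to as arrayName[indexValue] */

    return 0;
}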

Sparse Matrix
In computer programming, a matrix can be defined with a 2-dimensional array. Any array with
'm' rows and 'n' columns represents an m X n matrix. There may be a situation in which a matrix
contains more ZERO values than NON-ZERO values. Such a matrix is known as a sparse
matrix.

Sparse matrix is a matrix which contains very few non-zero elements.

When a sparse matrix is represented with a 2-dimensional array, we waste a lot of space to represent
that matrix. For example, consider a matrix of size 100 X 100 containing only 10 non-zero
elements. In this matrix, only 10 spaces are filled with non-zero values and the remaining spaces of
the matrix are filled with zeros. That means, in total we allocate 100 X 100 X 2 = 20000 bytes of space
to store this integer matrix (with 2-byte integers). And to access these 10 non-zero elements we have to
scan through all 10000 positions.

A sparse matrix can be represented by using TWO representations, those are as follows...

1. Triplet Representation
2. Linked Representation

Triplet Representation

In this representation, we consider only non-zero values along with their row and column index
values. In this representation, the 0th row stores total rows, total columns and total non-zero values
in the matrix.

For example, consider a matrix of size 5 X 6 containing 6 non-zero values. This matrix
can be represented as shown in the image...
In the above example matrix, there are only 6 non-zero elements (those are 9, 8, 4, 2, 5 & 2) and the
matrix size is 5 X 6. We represent this matrix as shown in the above image. Here the first row in
the right side table is filled with the values 5, 6 & 6, which indicates that it is a sparse matrix with 5
rows, 6 columns & 6 non-zero values. The second row is filled with 0, 4 & 9, which indicates that the
value in the matrix at the 0th row, 4th column is 9. In the same way, the remaining non-zero values
follow the same pattern.
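The following C sketch builds such a triplet table from a 5 X 6 matrix. It is a hedged illustration: apart from the 9 at row 0, column 4, the placement of the non-zero values is assumed here, since only that entry is spelled out above.

#include <stdio.h>

#define ROWS 5
#define COLS 6

int main(void)
{
    /* example 5 x 6 sparse matrix; most value positions are assumptions for illustration */
    int m[ROWS][COLS] = {
        {0, 0, 0, 0, 9, 0},
        {0, 8, 0, 0, 0, 0},
        {4, 0, 0, 2, 0, 0},
        {0, 0, 0, 0, 0, 5},
        {0, 0, 2, 0, 0, 0}
    };
    int triplet[ROWS * COLS + 1][3];   /* row 0 holds: total rows, total columns, non-zero count */
    int i, j, k = 1;

    for (i = 0; i < ROWS; i++)
        for (j = 0; j < COLS; j++)
            if (m[i][j] != 0) {
                triplet[k][0] = i;        /* row index    */
                triplet[k][1] = j;        /* column index */
                triplet[k][2] = m[i][j];  /* value        */
                k++;
            }

    triplet[0][0] = ROWS;
    triplet[0][1] = COLS;
    triplet[0][2] = k - 1;

    for (i = 0; i < k; i++)
        printf("%d %d %d\n", triplet[i][0], triplet[i][1], triplet[i][2]);

    return 0;
}

The first printed row is 5 6 6 (rows, columns, non-zero count), and each following row is one (row, column, value) triplet.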

Linked Representation

In linked representation, we use a linked list data structure to represent a sparse matrix. In this
linked list, we use two different nodes, namely header node and element node. Header node
consists of three fields and element node consists of five fields as shown in the image...

Consider the above same sparse matrix used in the Triplet representation. This sparse matrix can
be represented using linked representation as shown in the below image...

In above representation, H0, H1,...,H5 indicates the header nodes which are used to represent
indexes. Remaining nodes are used to represent non-zero elements in the matrix, except the very
first node which is used to represent abstract information of the sparse matrix (i.e., It is a matrix
of 5 X 6 with 6 non-zero elements).

In this representation, in each row and column, the right field of the last node points to its
respective header node.
Stack

Stack is a linear data structure in which the insertion and deletion operations are performed at
only one end. In a stack, adding and removing of elements are performed at a single position
which is known as "top". That means, a new element is added at the top of the stack and an
element is removed from the top of the stack. In a stack, the insertion and deletion operations are
performed based on the LIFO (Last In First Out) principle.

In a stack, the insertion operation is performed using a function called "push" and deletion
operation is performed using a function called "pop".

In the figure, PUSH and POP operations are performed at top position in the stack. That means,
both the insertion and deletion operations are performed at one end (i.e., at Top)

A stack data structure can be defined as follows...

Stack is a linear data structure in which the operations are performed based on LIFO
principle.

Stack can also be defined as

"A Collection of similar data items in which both insertion and deletion operations are
performed based on LIFO principle".

Example

If we create a stack by inserting 10, 45, 12, 16, 35 and 50, then 10 becomes the bottom-most
element and 50 is the top-most element. Top is at 50 as shown in the image below...
The following operations are performed on the stack...

1. Push (To insert an element on to the stack)


2. Pop (To delete an element from the stack)
3. Display (To display elements of the stack)

Stack data structure can be implemented in two ways. They are as follows...

1. Using Array
2. Using Linked List

When a stack is implemented using an array, that stack can organize only a limited number of elements.
When a stack is implemented using a linked list, it can organize an unlimited number of
elements.

Stack Using Array

A stack data structure can be implemented using a one dimensional array. But a stack implemented
using an array can store only a fixed number of data values. This implementation is very simple: just
define a one dimensional array of a specific size and insert or delete values into that array by
using the LIFO principle with the help of a variable 'top'. Initially top is set to -1. Whenever we
want to insert a value into the stack, increment the top value by one and then insert. Whenever we
want to delete a value from the stack, delete the top value and decrement the top value by
one.

A stack can be implemented using array as follows...

Before implementing actual operations, first follow the below steps to create an empty stack.

 Step 1: Include all the header files which are used in the program and define a
constant 'SIZE' with specific value.
 Step 2: Declare all the functions used in stack implementation.
 Step 3: Create a one dimensional array with fixed size (int stack[SIZE])
 Step 4: Define a integer variable 'top' and initialize with '-1'. (int top = -1)
 Step 5: In main method display menu with list of operations and make suitable function
calls to perform operation selected by the user on the stack.

push(value) - Inserting value into the stack

In a stack, push() is a function used to insert an element into the stack. In a stack, the new element
is always inserted at the top position. The push function takes one integer value as a parameter and
inserts that value into the stack. We can use the following steps to push an element on to the stack...

 Step 1: Check whether stack is FULL. (top == SIZE-1)


 Step 2: If it is FULL, then display "Stack is FULL!!! Insertion is not possible!!!" and
terminate the function.
 Step 3: If it is NOT FULL, then increment top value by one (top++) and set stack[top] to
value (stack[top] = value).

pop() - Delete a value from the Stack

In a stack, pop() is a function used to delete an element from the stack. In a stack, the element is
always deleted from the top position. The pop function does not take any value as a parameter. We can use
the following steps to pop an element from the stack...

 Step 1: Check whether stack is EMPTY. (top == -1)


 Step 2: If it is EMPTY, then display "Stack is EMPTY!!! Deletion is not
possible!!!" and terminate the function.
 Step 3: If it is NOT EMPTY, then delete stack[top] and decrement top value by one
(top--).

display() - Displays the elements of a Stack

We can use the following steps to display the elements of a stack...

 Step 1: Check whether stack is EMPTY. (top == -1)


 Step 2: If it is EMPTY, then display "Stack is EMPTY!!!" and terminate the function.
 Step 3: If it is NOT EMPTY, then define a variable 'i' and initialize it with top.
Display the stack[i] value and decrement the i value by one (i--).
 Step 4: Repeat the above step until i becomes '-1', i.e., until all elements are displayed.
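Putting the push, pop and display steps together, a hedged C sketch of the array implementation could look like the following (SIZE and the small main driver are assumptions for illustration; a real program would add the menu described in the setup steps):

#include <stdio.h>

#define SIZE 10

int stack[SIZE];
int top = -1;                        /* empty stack condition */

void push(int value)
{
    if (top == SIZE - 1)             /* stack is FULL */
        printf("Stack is FULL!!! Insertion is not possible!!!\n");
    else
        stack[++top] = value;        /* increment top, then store the value */
}

void pop(void)
{
    if (top == -1)                   /* stack is EMPTY */
        printf("Stack is EMPTY!!! Deletion is not possible!!!\n");
    else
        printf("Deleted: %d\n", stack[top--]);   /* remove the value at top */
}

void display(void)
{
    int i;
    if (top == -1)
        printf("Stack is EMPTY!!!\n");
    else
        for (i = top; i >= 0; i--)   /* print from top down to the bottom element */
            printf("%d\n", stack[i]);
}

int main(void)
{
    push(10); push(45); push(12);
    display();
    pop();                           /* removes 12, the last pushed value (LIFO) */
    display();
    return 0;
}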

Queue ADT

Queue is a linear data structure in which the insertion and deletion operations are performed at
two different ends. In a queue data structure, adding and removing of elements are performed at
two different positions. The insertion is performed at one end and deletion is performed at the other
end. In a queue data structure, the insertion operation is performed at a position which is known
as 'rear' and the deletion operation is performed at a position which is known as 'front'. In queue
data structure, the insertion and deletion operations are performed based on FIFO (First In First
Out) principle.

In a queue data structure, the insertion operation is performed using a function called
"enQueue()" and deletion operation is performed using a function called "deQueue()".

Queue data structure can be defined as follows...

Queue data structure is a linear data structure in which the operations are performed
based on FIFO principle.

A queue can also be defined as

"Queue data structure is a collection of similar data items in which insertion and deletion
operations are performed based on FIFO principle".

Example

Queue after inserting 25, 30, 51, 60 and 85.

The following operations are performed on a queue data structure...

1. enQueue(value) - (To insert an element into the queue)


2. deQueue() - (To delete an element from the queue)
3. display() - (To display the elements of the queue)

Queue data structure can be implemented in two ways. They are as follows...
1. Using Array
2. Using Linked List

When a queue is implemented using an array, that queue can organize only a limited number of
elements. When a queue is implemented using a linked list, it can organize an unlimited
number of elements.
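Although the array version of the queue is not worked out step by step here, a minimal, hedged sketch of a simple (non-circular) array queue in C might look like this; SIZE, the function names and the messages are assumptions for illustration:

#include <stdio.h>

#define SIZE 10

int queue[SIZE];
int front = -1, rear = -1;           /* empty queue condition */

void enQueue(int value)
{
    if (rear == SIZE - 1)
        printf("Queue is FULL!!! Insertion is not possible!!!\n");
    else {
        if (front == -1)
            front = 0;               /* first element ever inserted */
        queue[++rear] = value;       /* insert at the rear end */
    }
}

void deQueue(void)
{
    if (front == -1 || front > rear)
        printf("Queue is EMPTY!!! Deletion is not possible!!!\n");
    else
        printf("Deleted: %d\n", queue[front++]);   /* delete from the front end */
}

int main(void)
{
    enQueue(25); enQueue(30); enQueue(51);
    deQueue();                       /* removes 25 first: FIFO order */
    return 0;
}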

Single Linked List

When we want to work with an unknown number of data values, we use a linked list data structure
to organize that data. A linked list is a linear data structure that contains a sequence of elements
such that each element links to its next element in the sequence. Each element in a linked list is
called a "Node".

Simply, a list is a sequence of data, and a linked list is a sequence of data linked with each other.

The formal definition of a single linked list is as follows...

Single linked list is a sequence of elements in which every element has link to its next
element in the sequence.

In any single linked list, the individual element is called a "Node". Every "Node" contains two
fields, data and next. The data field is used to store the actual value of that node and the next field
is used to store the address of the next node in the sequence.

The graphical representation of a node in a single linked list is as follows...

NOTE:

☀ In a single linked list, the address of the first node is always stored in a reference pointer known
as "front" (sometimes it is also known as "head").
☀ The next part (reference part) of the last node must always be NULL.
Example
In a single linked list we perform the following operations...

1. Insertion
2. Deletion
3. Display

Before we implement the actual operations, we first need to set up an empty list. Perform the
following steps before implementing the actual operations.

 Step 1: Include all the header files which are used in the program.
 Step 2: Declare all the user defined functions.
 Step 3: Define a Node structure with two members data and next.
 Step 4: Define a Node pointer 'head' and set it to NULL.
 Step 5: Implement the main method by displaying the operations menu and make suitable
function calls in the main method to perform the user selected operation.

Insertion

In a single linked list, the insertion operation can be performed in three ways. They are as
follows...

1. Inserting At Beginning of the list


2. Inserting At End of the list
3. Inserting At Specific location in the list

Inserting At Beginning of the list

We can use the following steps to insert a new node at beginning of the single linked list...

 Step 1: Create a newNode with given value.


 Step 2: Check whether list is Empty (head == NULL)
 Step 3: If it is Empty then, set newNode→next = NULL and head = newNode.
 Step 4: If it is Not Empty then, set newNode→next = head and head = newNode.
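The steps above translate into roughly the following C sketch. The Node structure follows the data/next layout described earlier; the function name insertAtBeginning is an assumption for this example:

#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

struct Node *head = NULL;            /* empty list */

void insertAtBeginning(int value)
{
    struct Node *newNode = (struct Node *) malloc(sizeof(struct Node));
    newNode->data = value;

    if (head == NULL) {              /* list is Empty */
        newNode->next = NULL;
        head = newNode;
    } else {                         /* list is Not Empty */
        newNode->next = head;
        head = newNode;
    }
}

int main(void)
{
    insertAtBeginning(10);
    insertAtBeginning(20);           /* list is now 20 ---> 10 ---> NULL */
    return 0;
}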

Inserting At End of the list

We can use the following steps to insert a new node at end of the single linked list...

 Step 1: Create a newNode with given value and newNode → next as NULL.
 Step 2: Check whether list is Empty (head == NULL).
 Step 3: If it is Empty then, set head = newNode.
 Step 4: If it is Not Empty then, define a node pointer temp and initialize with head.
 Step 5: Keep moving the temp to its next node until it reaches to the last node in the list
(until temp → next is equal to NULL).
 Step 6: Set temp → next = newNode.

Inserting At Specific location in the list (After a Node)

We can use the following steps to insert a new node after a node in the single linked list...

 Step 1: Create a newNode with given value.


 Step 2: Check whether list is Empty (head == NULL)
 Step 3: If it is Empty then, set newNode → next = NULL and head = newNode.
 Step 4: If it is Not Empty then, define a node pointer temp and initialize with head.
 Step 5: Keep moving temp to its next node until it reaches the node after which we
want to insert the newNode (until temp → data is equal to location, where location is the
node value after which we want to insert the newNode).
 Step 6: Each time, check whether temp has reached the last node or not. If it has reached
the last node, then display 'Given node is not found in the list!!! Insertion not
possible!!!' and terminate the function. Otherwise move temp to the next node.

 Step 7: Finally, Set 'newNode → next = temp → next' and 'temp → next = newNode'

Deletion

In a single linked list, the deletion operation can be performed in three ways. They are as
follows...

1. Deleting from Beginning of the list


2. Deleting from End of the list
3. Deleting a Specific Node

Deleting from Beginning of the list

We can use the following steps to delete a node from beginning of the single linked list...

 Step 1: Check whether list is Empty (head == NULL)


 Step 2: If it is Empty then, display 'List is Empty!!! Deletion is not possible' and
terminate the function.
 Step 3: If it is Not Empty then, define a Node pointer 'temp' and initialize with head.
 Step 4: Check whether list is having only one node (temp → next == NULL)
 Step 5: If it is TRUE then set head = NULL and delete temp (Setting Empty list
conditions)
 Step 6: If it is FALSE then set head = temp → next, and delete temp.
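A corresponding C sketch of deletion at the beginning, assuming the same Node structure, global head pointer and headers (<stdio.h>, <stdlib.h>) as in the insertion sketch above:

/* assumes: struct Node { int data; struct Node *next; };  struct Node *head; */

void deleteFromBeginning(void)
{
    struct Node *temp;

    if (head == NULL) {                      /* list is Empty */
        printf("List is Empty!!! Deletion is not possible\n");
        return;
    }
    temp = head;
    if (temp->next == NULL)                  /* list has only one node */
        head = NULL;                         /* setting the empty list condition */
    else
        head = temp->next;                   /* move head to the second node */
    free(temp);                              /* delete the old first node */
}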

Deleting from End of the list

We can use the following steps to delete a node from end of the single linked list...

 Step 1: Check whether list is Empty (head == NULL)


 Step 2: If it is Empty then, display 'List is Empty!!! Deletion is not possible' and
terminate the function.
 Step 3: If it is Not Empty then, define two Node pointers 'temp1' and 'temp2' and
initialize 'temp1' with head.
 Step 4: Check whether list has only one Node (temp1 → next == NULL)
 Step 5: If it is TRUE. Then, set head = NULL and delete temp1. And terminate the
function. (Setting Empty list condition)
 Step 6: If it is FALSE. Then, set 'temp2 = temp1 ' and move temp1 to its next node.
Repeat the same until it reaches to the last node in the list. (until temp1 →
next == NULL)
 Step 7: Finally, Set temp2 → next = NULL and delete temp1.

Deleting a Specific Node from the list

We can use the following steps to delete a specific node from the single linked list...

 Step 1: Check whether list is Empty (head == NULL)


 Step 2: If it is Empty then, display 'List is Empty!!! Deletion is not possible' and
terminate the function.
 Step 3: If it is Not Empty then, define two Node pointers 'temp1' and 'temp2' and
initialize 'temp1' with head.
 Step 4: Keep moving the temp1 until it reaches to the exact node to be deleted or to the
last node. And every time set 'temp2 = temp1' before moving the 'temp1' to its next node.
 Step 5: If it is reached to the last node then display 'Given node not found in the list!
Deletion not possible!!!'. And terminate the function.
 Step 6: If it is reached to the exact node which we want to delete, then check whether list
is having only one node or not
 Step 7: If list has only one node and that is the node to be deleted, then
set head = NULL and delete temp1 (free(temp1)).
 Step 8: If list contains multiple nodes, then check whether temp1 is the first node in the
list (temp1 == head).
 Step 9: If temp1 is the first node then move the head to the next node (head = head →
next) and delete temp1.
 Step 10: If temp1 is not first node then check whether it is last node in the list (temp1 →
next == NULL).
 Step 11: If temp1 is last node then set temp2 → next = NULL and
delete temp1 (free(temp1)).
 Step 12: If temp1 is not first node and not last node then set temp2 → next = temp1 →
next and delete temp1(free(temp1)).

Displaying a Single Linked List

We can use the following steps to display the elements of a single linked list...

 Step 1: Check whether list is Empty (head == NULL)


 Step 2: If it is Empty then, display 'List is Empty!!!' and terminate the function.
 Step 3: If it is Not Empty then, define a Node pointer 'temp' and initialize with head.
 Step 4: Keep displaying temp → data with an arrow (--->) until temp reaches to the last
node
 Step 5: Finally display temp → data with arrow pointing to NULL (temp → data --->
NULL).
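The display steps can be sketched in C as follows, again assuming the Node structure, head pointer and <stdio.h> from the insertion sketch:

/* assumes: struct Node { int data; struct Node *next; };  struct Node *head; */

void display(void)
{
    struct Node *temp;

    if (head == NULL) {
        printf("List is Empty!!!\n");
        return;
    }
    temp = head;
    while (temp->next != NULL) {             /* until temp reaches the last node */
        printf("%d ---> ", temp->data);
        temp = temp->next;
    }
    printf("%d ---> NULL\n", temp->data);    /* last node's next is NULL */
}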

Circular Linked List

In a single linked list, every node points to its next node in the sequence and the last node points
to NULL. But in a circular linked list, every node points to its next node in the sequence and the
last node points to the first node in the list.

Circular linked list is a sequence of elements in which every element has link to its next
element in the sequence and the last element has a link to the first element in the sequence.

That means circular linked list is similar to the single linked list except that the last node points to
the first node in the list

Example
In a circular linked list, we perform the following operations...

1. Insertion
2. Deletion
3. Display

Before we implement the actual operations, we first need to set up an empty list. Perform the
following steps before implementing the actual operations.

 Step 1: Include all the header files which are used in the program.
 Step 2: Declare all the user defined functions.
 Step 3: Define a Node structure with two members data and next.
 Step 4: Define a Node pointer 'head' and set it to NULL.
 Step 5: Implement the main method by displaying the operations menu and make suitable
function calls in the main method to perform the user selected operation.

Insertion

In a circular linked list, the insertion operation can be performed in three ways. They are as
follows...

1. Inserting At Beginning of the list


2. Inserting At End of the list
3. Inserting At Specific location in the list

Inserting At Beginning of the list

We can use the following steps to insert a new node at beginning of the circular linked list...

 Step 1: Create a newNode with given value.


 Step 2: Check whether list is Empty (head == NULL)
 Step 3: If it is Empty then, set head = newNode and newNode→next = head .
 Step 4: If it is Not Empty then, define a Node pointer 'temp' and initialize with 'head'.
 Step 5: Keep moving the 'temp' to its next node until it reaches to the last node (until
'temp → next == head').
 Step 6: Set 'newNode → next =head', 'head = newNode' and 'temp → next = head'.
Inserting At End of the list

We can use the following steps to insert a new node at end of the circular linked list...

 Step 1: Create a newNode with given value.


 Step 2: Check whether list is Empty (head == NULL).
 Step 3: If it is Empty then, set head = newNode and newNode → next = head.
 Step 4: If it is Not Empty then, define a node pointer temp and initialize with head.
 Step 5: Keep moving the temp to its next node until it reaches to the last node in the list
(until temp → next == head).
 Step 6: Set temp → next = newNode and newNode → next = head.
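A hedged C sketch of inserting at the end of a circular linked list (same Node layout and head pointer as before, with <stdlib.h> assumed; the function name is an assumption):

/* assumes: struct Node { int data; struct Node *next; };  struct Node *head; */

void insertAtEnd(int value)
{
    struct Node *newNode = (struct Node *) malloc(sizeof(struct Node));
    struct Node *temp;

    newNode->data = value;
    if (head == NULL) {                      /* list is Empty */
        head = newNode;
        newNode->next = head;                /* a single node points back to itself */
    } else {
        temp = head;
        while (temp->next != head)           /* move temp to the last node */
            temp = temp->next;
        temp->next = newNode;
        newNode->next = head;                /* new last node links back to the first node */
    }
}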

Inserting At Specific location in the list (After a Node)

We can use the following steps to insert a new node after a node in the circular linked list...

 Step 1: Create a newNode with given value.


 Step 2: Check whether list is Empty (head == NULL)
 Step 3: If it is Empty then, set head = newNode and newNode → next = head.
 Step 4: If it is Not Empty then, define a node pointer temp and initialize with head.
 Step 5: Keep moving temp to its next node until it reaches the node after which we
want to insert the newNode (until temp → data is equal to location, where location is the
node value after which we want to insert the newNode).
 Step 6: Each time, check whether temp has reached the last node or not. If it has reached
the last node, then display 'Given node is not found in the list!!! Insertion not
possible!!!' and terminate the function. Otherwise move temp to the next node.
 Step 7: If temp is reached to the exact node after which we want to insert the newNode
then check whether it is last node (temp → next == head).
 Step 8: If temp is the last node then set temp → next = newNode and newNode →
next = head.
 Step 9: If temp is not the last node then set newNode → next = temp → next and temp →
next = newNode.
Deletion

In a circular linked list, the deletion operation can be performed in three ways those are as
follows...

1. Deleting from Beginning of the list


2. Deleting from End of the list
3. Deleting a Specific Node

Deleting from Beginning of the list

We can use the following steps to delete a node from beginning of the circular linked list...
Step 1: Check whether list is Empty (head == NULL)
 Step 2: If it is Empty then, display 'List is Empty!!! Deletion is not possible' and
terminate the function.
 Step 3: If it is Not Empty then, define two Node pointers 'temp1' and 'temp2' and
initialize both 'temp1' and 'temp2' with head.
 Step 4: Check whether list is having only one node (temp1 → next == head)
 Step 5: If it is TRUE then set head = NULL and delete temp1 (Setting Empty list
conditions)
 Step 6: If it is FALSE move the temp1 until it reaches to the last node. (until temp1 →
next == head )
 Step 7: Then set head = temp2 → next, temp1 → next = head and delete temp2.

Deleting from End of the list

We can use the following steps to delete a node from end of the circular linked list...

 Step 1: Check whether list is Empty (head == NULL)


 Step 2: If it is Empty then, display 'List is Empty!!! Deletion is not possible' and
terminate the function.
 Step 3: If it is Not Empty then, define two Node pointers 'temp1' and 'temp2' and
initialize 'temp1' with head.
 Step 4: Check whether list has only one Node (temp1 → next == head)
 Step 5: If it is TRUE. Then, set head = NULL and delete temp1. And terminate from the
function. (Setting Empty list condition)
 Step 6: If it is FALSE. Then, set 'temp2 = temp1 ' and move temp1 to its next node.
Repeat the same until temp1 reaches to the last node in the list. (until temp1 →
next == head)
 Step 7: Set temp2 → next = head and delete temp1.

Deleting a Specific Node from the list

We can use the following steps to delete a specific node from the circular linked list...

 Step 1: Check whether list is Empty (head == NULL)


 Step 2: If it is Empty then, display 'List is Empty!!! Deletion is not possible' and
terminate the function.
 Step 3: If it is Not Empty then, define two Node pointers 'temp1' and 'temp2' and
initialize 'temp1' with head.
 Step 4: Keep moving the temp1 until it reaches to the exact node to be deleted or to the
last node. And every time set 'temp2 = temp1' before moving the 'temp1' to its next node
 Step 5: If it is reached to the last node then display 'Given node not found in the list!
Deletion not possible!!!'. And terminate the function.
 Step 6: If it is reached to the exact node which we want to delete, then check whether list
is having only one node (temp1 → next == head)
 Step 7: If list has only one node and that is the node to be deleted then
set head = NULL and delete temp1 (free(temp1)).
 Step 8: If list contains multiple nodes then check whether temp1 is the first node in the
list (temp1 == head).
 Step 9: If temp1 is the first node then set temp2 = head and keep moving temp2 to its
next node until temp2 reaches to the last node. Then set head = head → next, temp2 →
next = head and delete temp1.
 Step 10: If temp1 is not first node then check whether it is last node in the list (temp1 →
next == head).
 Step 11: If temp1 is last node then set temp2 → next = head and
delete temp1 (free(temp1)).
 Step 12: If temp1 is not first node and not last node then set temp2 → next = temp1 →
next and delete temp1(free(temp1)).

Displaying a circular Linked List

We can use the following steps to display the elements of a circular linked list...

 Step 1: Check whether list is Empty (head == NULL)


 Step 2: If it is Empty, then display 'List is Empty!!!' and terminate the function.
 Step 3: If it is Not Empty then, define a Node pointer 'temp' and initialize with head.
 Step 4: Keep displaying temp → data with an arrow (--->) until temp reaches to the last
node
 Step 5: Finally display temp → data with arrow pointing to head → data.

Double Linked List

In a single linked list, every node has a link to its next node in the sequence. So, we can traverse
from one node to the other node only in one direction and we cannot traverse back. We can solve
this kind of problem by using a double linked list. Double linked list can be defined as follows...

Double linked list is a sequence of elements in which every element has links to its previous
element and next element in the sequence.

In double linked list, every node has link to its previous node and next node. So, we can traverse
forward by using next field and can traverse backward by using previous field. Every node in a
double linked list contains three fields and they are shown in the following figure...

Here, 'link1' field is used to store the address of the previous node in the sequence, 'link2' field is
used to store the address of the next node in the sequence and 'data' field is used to store the
actual value of that node.

Example

☀ In double linked list, the first node must be always pointed by head.
☀ Always the previous field of the first node must be NULL.
☀ Always the next field of the last node must be NULL.
Operations

In a double linked list, we perform the following operations...

1. Insertion
2. Deletion
3. Display

Insertion

In a double linked list, the insertion operation can be performed in three ways as follows...

1. Inserting At Beginning of the list


2. Inserting At End of the list
3. Inserting At Specific location in the list

Inserting At Beginning of the list

We can use the following steps to insert a new node at beginning of the double linked list...

 Step 1: Create a newNode with given value and newNode → previous as NULL.
 Step 2: Check whether list is Empty (head == NULL)
 Step 3: If it is Empty then, assign NULL to newNode → next and newNode to head.
 Step 4: If it is not Empty then, assign head to newNode → next and newNode to head.
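In C, the three-field node and insertion at the beginning might be sketched as below. The field names previous and next follow the description above (link1/link2 in the figure correspond to previous/next); the extra assignment to head → previous is not spelled out in the steps, but is added here to keep the backward links consistent:

#include <stdio.h>
#include <stdlib.h>

struct Node {
    struct Node *previous;           /* link to the previous node */
    int data;
    struct Node *next;               /* link to the next node */
};

struct Node *head = NULL;

void insertAtBeginning(int value)
{
    struct Node *newNode = (struct Node *) malloc(sizeof(struct Node));
    newNode->data = value;
    newNode->previous = NULL;        /* first node's previous is always NULL */

    if (head == NULL) {              /* list is Empty */
        newNode->next = NULL;
        head = newNode;
    } else {                         /* list is Not Empty */
        newNode->next = head;
        head->previous = newNode;    /* back link from the old first node (added for consistency) */
        head = newNode;
    }
}

int main(void)
{
    insertAtBeginning(10);
    insertAtBeginning(20);           /* list: NULL <--- 20 <===> 10 ---> NULL */
    return 0;
}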

Inserting At End of the list

We can use the following steps to insert a new node at end of the double linked list...

 Step 1: Create a newNode with given value and newNode → next as NULL.
 Step 2: Check whether list is Empty (head == NULL)
 Step 3: If it is Empty, then assign NULL to newNode →
previous and newNode to head.
 Step 4: If it is not Empty, then, define a node pointer temp and initialize with head.
 Step 5: Keep moving the temp to its next node until it reaches to the last node in the list
(until temp → next is equal to NULL).
 Step 6: Assign newNode to temp → next and temp to newNode → previous.
Inserting At Specific location in the list (After a Node)

We can use the following steps to insert a new node after a node in the double linked list...

 Step 1: Create a newNode with given value.


 Step 2: Check whether list is Empty (head == NULL)
 Step 3: If it is Empty then, assign NULL to newNode → previous & newNode →
next and newNode to head.
 Step 4: If it is not Empty then, define two node pointers temp1 & temp2 and
initialize temp1 with head.
 Step 5: Keep moving the temp1 to its next node until it reaches to the node after which
we want to insert the newNode (until temp1 → data is equal to location, here location is
the node value after which we want to insert the newNode).

 Step 6: Every time check whether temp1 is reached to the last node. If it is reached to the
last node then display 'Given node is not found in the list!!! Insertion not
possible!!!' and terminate the function. Otherwise move the temp1 to next node.
 Step 7: Assign temp1 → next to temp2, newNode to temp1 → next, temp1 to newNode
→ previous, temp2 to newNode → next and newNode to temp2 → previous.

Deletion

In a double linked list, the deletion operation can be performed in three ways as follows...

1. Deleting from Beginning of the list


2. Deleting from End of the list
3. Deleting a Specific Node

Deleting from Beginning of the list

We can use the following steps to delete a node from beginning of the double linked list...

 Step 1: Check whether list is Empty (head == NULL)


 Step 2: If it is Empty then, display 'List is Empty!!! Deletion is not possible' and
terminate the function.
 Step 3: If it is not Empty then, define a Node pointer 'temp' and initialize with head.
 Step 4: Check whether list is having only one node (temp → previous is equal to temp
→ next)
 Step 5: If it is TRUE, then set head to NULL and delete temp (Setting Empty list
conditions)
 Step 6: If it is FALSE, then assign temp → next to head, NULL to head →
previous and delete temp.

Deleting from End of the list

We can use the following steps to delete a node from end of the double linked list...

 Step 1: Check whether list is Empty (head == NULL)


 Step 2: If it is Empty, then display 'List is Empty!!! Deletion is not possible' and
terminate the function.
 Step 3: If it is not Empty then, define a Node pointer 'temp' and initialize with head.
 Step 4: Check whether list has only one Node (temp → previous and temp → next both
are NULL)
 Step 5: If it is TRUE, then assign NULL to head and delete temp. And terminate from
the function. (Setting Empty list condition)

 Step 6: If it is FALSE, then keep moving temp until it reaches to the last node in the list.
(until temp → next is equal to NULL)
 Step 7: Assign NULL to temp → previous → next and delete temp.

Deleting a Specific Node from the list

We can use the following steps to delete a specific node from the double linked list...

 Step 1: Check whether list is Empty (head == NULL)


 Step 2: If it is Empty then, display 'List is Empty!!! Deletion is not possible' and
terminate the function.
 Step 3: If it is not Empty, then define a Node pointer 'temp' and initialize with head.
 Step 4: Keep moving the temp until it reaches to the exact node to be deleted or to the last
node.
 Step 5: If it has reached the last node, then display 'Given node not found in the list!
Deletion not possible!!!' and terminate the function.
 Step 6: If it is reached to the exact node which we want to delete, then check whether list
is having only one node or not
 Step 7: If list has only one node and that is the node which is to be deleted then
set head to NULL and delete temp(free(temp)).
 Step 8: If list contains multiple nodes, then check whether temp is the first node in the list
(temp == head).
 Step 9: If temp is the first node, then move head to the next node (head = head →
next), set head → previous to NULL and delete temp.
 Step 10: If temp is not the first node, then check whether it is the last node in the list
(temp → next == NULL).
 Step 11: If temp is the last node, then set temp → previous → next to NULL and
delete temp (free(temp)).
 Step 12: If temp is not the first node and not the last node, then set temp → previous →
next to temp → next, set temp → next → previous to temp → previous and delete
temp (free(temp)).

Displaying a Double Linked List

We can use the following steps to display the elements of a double linked list...

 Step 1: Check whether list is Empty (head == NULL)


 Step 2: If it is Empty, then display 'List is Empty!!!' and terminate the function.
 Step 3: If it is not Empty, then define a Node pointer 'temp' and initialize with head.
 Step 4: Display 'NULL <--- '.
 Step 5: Keep displaying temp → data with an arrow (<===>) until temp reaches to the
last node
 Step 6: Finally, display temp → data with arrow pointing to NULL (temp → data --->
NULL).

Stack using Linked List

The major problem with the stack implemented using an array is that it works only for a fixed
number of data values. That means the amount of data must be specified at the beginning of the
implementation itself. A stack implemented using an array is not suitable when we don't know the
size of the data which we are going to use. A stack data structure can instead be implemented using a
linked list data structure. The stack implemented using a linked list can work for an unlimited number
of values, i.e., for a variable size of data, so there is no need to fix the size at the beginning of the
implementation. The stack implemented using a linked list can organize as many data values as we want.
In the linked list implementation of a stack, every new element is inserted as the 'top' element. That
means every newly inserted element is pointed to by 'top'. Whenever we want to remove an element
from the stack, we simply remove the node which is pointed to by 'top' by moving 'top' to its next node
in the list. The next field of the first inserted element (the bottom-most node) must always be NULL.

Example

In above example, the last inserted node is 99 and the first inserted node is 25. The order of
elements inserted is 25, 32,50 and 99.

Operations

To implement stack using linked list, we need to set the following things before implementing
actual operations.

 Step 1: Include all the header files which are used in the program. And declare all
the user defined functions.
 Step 2: Define a 'Node' structure with two members data and next.
 Step 3: Define a Node pointer 'top' and set it to NULL.
 Step 4: Implement the main method by displaying a menu with the list of operations and make
suitable function calls in the main method.

push(value) - Inserting an element into the Stack

We can use the following steps to insert a new node into the stack...

 Step 1: Create a newNode with given value.


 Step 2: Check whether stack is Empty (top == NULL)
 Step 3: If it is Empty, then set newNode → next = NULL.
 Step 4: If it is Not Empty, then set newNode → next = top.
 Step 5: Finally, set top = newNode.

pop() - Deleting an Element from a Stack

We can use the following steps to delete a node from the stack...

 Step 1: Check whether stack is Empty (top == NULL).


 Step 2: If it is Empty, then display "Stack is Empty!!! Deletion is not possible!!!" and
terminate the function
 Step 3: If it is Not Empty, then define a Node pointer 'temp' and set it to 'top'.
 Step 4: Then set 'top = top → next'.
 Step 5: Finally, delete 'temp' (free(temp)).
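A hedged C sketch of the linked-list stack covering push and pop (the Node layout and names follow the steps above; the small main driver is an assumption):

#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

struct Node *top = NULL;             /* empty stack */

void push(int value)
{
    struct Node *newNode = (struct Node *) malloc(sizeof(struct Node));
    newNode->data = value;
    newNode->next = top;             /* NULL when the stack is empty, the old top otherwise */
    top = newNode;
}

void pop(void)
{
    struct Node *temp;
    if (top == NULL) {
        printf("Stack is Empty!!! Deletion is not possible!!!\n");
        return;
    }
    temp = top;
    top = top->next;                 /* move top to the next node */
    printf("Deleted: %d\n", temp->data);
    free(temp);
}

int main(void)
{
    push(25); push(32); push(50); push(99);
    pop();                           /* removes 99, the last inserted element (LIFO) */
    return 0;
}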

display() - Displaying stack of elements

We can use the following steps to display the elements (nodes) of a stack...

 Step 1: Check whether stack is Empty (top == NULL).


 Step 2: If it is Empty, then display 'Stack is Empty!!!' and terminate the function.
 Step 3: If it is Not Empty, then define a Node pointer 'temp' and initialize with top.
 Step 4: Display 'temp → data --->' and move it to the next node. Repeat the same
until temp reaches the first node in the stack (temp → next != NULL).
 Step 5: Finally, display 'temp → data ---> NULL'.

Queue using Linked List

The major problem with the queue implemented using an array is that it works for only a fixed number
of data values. That means, the amount of data must be specified at the beginning itself. A queue using
an array is not suitable when we don't know the size of the data which we are going to use. A queue data
structure can be implemented using a linked list data structure. The queue which is implemented
using a linked list can work for an unlimited number of values, i.e., for a variable size of data (there is
no need to fix the size at the beginning of the implementation). The queue implemented using a linked
list can organize as many data values as we want.

In linked list implementation of a queue, the last inserted node is always pointed by 'rear' and the
first node is always pointed by 'front'.
Example

In above example, the last inserted node is 50 and it is pointed by 'rear' and the first inserted node
is 10 and it is pointed by 'front'. The order of elements inserted is 10, 15, 22 and 50.

Operations

To implement queue using linked list, we need to set the following things before implementing
actual operations.

 Step 1: Include all the header files which are used in the program. And declare all
the user defined functions.
 Step 2: Define a 'Node' structure with two members data and next.
 Step 3: Define two Node pointers 'front' and 'rear' and set both to NULL.
 Step 4: Implement the main method by displaying a menu with the list of operations and make
suitable function calls in the main method to perform the user selected operation.

enQueue(value) - Inserting an element into the Queue

We can use the following steps to insert a new node into the queue...

 Step 1: Create a newNode with given value and set 'newNode → next' to NULL.
 Step 2: Check whether queue is Empty (rear == NULL)
 Step 3: If it is Empty then, set front = newNode and rear = newNode.
 Step 4: If it is Not Empty then, set rear → next = newNode and rear = newNode.

deQueue() - Deleting an Element from Queue

We can use the following steps to delete a node from the queue...

 Step 1: Check whether queue is Empty (front == NULL).


 Step 2: If it is Empty, then display "Queue is Empty!!! Deletion is not possible!!!" and
terminate from the function
 Step 3: If it is Not Empty then, define a Node pointer 'temp' and set it to 'front'.
 Step 4: Then set 'front = front → next' and delete 'temp' (free(temp)).
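A hedged C sketch of these two operations (same Node layout; the example values follow the figure described above, and the reset of rear when the queue becomes empty is added here for safety even though the steps do not mention it):

#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

struct Node *front = NULL, *rear = NULL;

void enQueue(int value)
{
    struct Node *newNode = (struct Node *) malloc(sizeof(struct Node));
    newNode->data = value;
    newNode->next = NULL;
    if (rear == NULL)                /* queue is Empty */
        front = rear = newNode;
    else {
        rear->next = newNode;        /* link the new node after the current rear */
        rear = newNode;
    }
}

void deQueue(void)
{
    struct Node *temp;
    if (front == NULL) {
        printf("Queue is Empty!!! Deletion is not possible!!!\n");
        return;
    }
    temp = front;
    front = front->next;
    if (front == NULL)               /* queue became empty: reset rear as well */
        rear = NULL;
    printf("Deleted: %d\n", temp->data);
    free(temp);
}

int main(void)
{
    enQueue(10); enQueue(15); enQueue(22); enQueue(50);
    deQueue();                       /* removes 10, the first inserted element (FIFO) */
    return 0;
}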

display() - Displaying the elements of Queue

We can use the following steps to display the elements (nodes) of a queue...
 Step 1: Check whether queue is Empty (front == NULL).

 Step 2: If it is Empty then, display 'Queue is Empty!!!' and terminate the function.
 Step 3: If it is Not Empty then, define a Node pointer 'temp' and initialize with front.
 Step 4: Display 'temp → data --->' and move it to the next node. Repeat the same until
'temp' reaches 'rear' (temp → next != NULL).
 Step 5: Finally, display 'temp → data ---> NULL'.

APPLICATIONS

a) To store a set of programs which are to be given access to a hard disk according to
their priority.
b) For representing a city region telephone network.
c) To store a set of fixed key words which are referenced very frequently.
d) To represent an image in the form of a bitmap.
e) To implement back functionality in the internet browser.
f) To store dynamically growing data which is accessed very frequently, based upon a
key value.
g) To implement printer spooler so that jobs can be printed in the order of their
arrival.
h) To record the sequence of all the pages browsed in one session.
i) To implement the undo function in a text editor.
j) To store information about the directories and files in a system.

Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different stages, before implementation and
after implementation. They are the following −
 A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of an
algorithm is measured by assuming that all other factors, for example, processor speed,
are constant and have no effect on the implementation.
 A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected
algorithm is implemented using a programming language and then executed on the target
machine. In this analysis, actual statistics like running time and space required are
collected.
We shall learn about a priori algorithm analysis. Algorithm analysis deals with the execution or
running time of various operations involved. The running time of an operation can be defined as
the number of computer instructions executed per operation.

Algorithm Complexity
Suppose X is an algorithm and n is the size of input data, the time and space used by the
algorithm X are the two main factors, which decide the efficiency of X.
 Time Factor − Time is measured by counting the number of key operations such as
comparisons in the sorting algorithm.
 Space Factor − Space is measured by counting the maximum memory space required by
the algorithm.
The complexity of an algorithm f(n) gives the running time and/or the storage space required by
the algorithm in terms of n as the size of input data.
Space Complexity
Space complexity of an algorithm represents the amount of memory space required by the
algorithm in its life cycle. The space required by an algorithm is equal to the sum of the
following two components −
 A fixed part that is a space required to store certain data and variables, that are
independent of the size of the problem. For example, simple variables and constants used,
program size, etc.
 A variable part is a space required by variables, whose size depends on the size of the
problem. For example, dynamic memory allocation, recursion stack space, etc.
Space complexity S(P) of any algorithm P is S(P) = C + Sp(I), where C is the fixed part and Sp(I)
is the variable part of the algorithm, which depends on the instance characteristic I. Following is a
simple example that tries to explain the concept −

Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - Stop
Here we have three variables A, B, and C and one constant. Hence S(P) = 1 + 3. Now, space
depends on data types of given variables and constant types and it will be multiplied accordingly.

Time Complexity
Time complexity of an algorithm represents the amount of time required by the algorithm to run
to completion. Time requirements can be defined as a numerical function T(n), where T(n) can
be measured as the number of steps, provided each step consumes constant time.
For example, addition of two n-bit integers takes n steps. Consequently, the total computational
time is T(n) = c ∗ n, where c is the time taken for the addition of two bits. Here, we observe that
T(n) grows linearly as the input size increases.

Asymptotic analysis of an algorithm refers to defining the mathematical bounding/framing of its
run-time performance. Using asymptotic analysis, we can very well conclude the best case,
average case, and worst case scenarios of an algorithm.

Asymptotic analysis is input bound i.e., if there's no input to the algorithm, it is concluded to
work in a constant time. Other than the "input" all other factors are considered constant.

Asymptotic analysis refers to computing the running time of any operation in mathematical units
of computation. For example, the running time of one operation may be computed as f(n) and for
another operation it may be computed as g(n^2). This means the running time of the first operation
will increase linearly with the increase in n, while the running time of the second operation will
increase quadratically as n increases. Similarly, the running times of both operations will be nearly
the same if n is small.

Usually, the time required by an algorithm falls under three types −

Best Case − Minimum time required for program execution.

Average Case − Average time required for program execution.

Worst Case − Maximum time required for program execution.


Asymptotic Notations

Following are the commonly used asymptotic notations to calculate the running time complexity
of an algorithm.

Ο Notation

Ω Notation

θ Notation

Big Oh Notation, Ο

The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time. It
measures the worst case time complexity or the longest amount of time an algorithm can possibly
take to complete.

For example, for a function f(n)

Ο(f(n)) = { g(n) : there exist positive constants c and n0 such that g(n) ≤ c.f(n) for all n > n0 }

Omega Notation, Ω

The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It
measures the best case time complexity or the best amount of time an algorithm can possibly take
to complete.
For example, for a function f(n)
Ω(f(n)) = { g(n) : there exist positive constants c and n0 such that g(n) ≥ c.f(n) for all n > n0 }

Theta Notation, θ

The notation θ(n) is the formal way to express both the lower bound and the upper bound of an
algorithm's running time. It is represented as follows −

θ(f(n)) = { g(n) : g(n) = Ο(f(n)) and g(n) = Ω(f(n)) }

Common Asymptotic Notations

Following is a list of some common asymptotic notations –



constant − Ο(1)

logarithmic − Ο(log n)

linear − Ο(n)

n log n − Ο(n log n)

quadratic − Ο(n^2)

cubic − Ο(n^3)

polynomial − n^Ο(1)

exponential − 2^Ο(n)

Time complexity :

Big O notation
f(n) = O(g(n)) means
There are positive constants c and k such that:
0<= f(n) <= c*g(n) for all n >= k.

For large problem sizes the dominant term (the one with the highest exponent) almost completely
determines the value of the complexity expression. So abstract complexity is expressed in terms
of the dominant term for large N. Multiplicative constants are also ignored.

N^2 + 3N + 4 is O(N^2)
since for N>4, N^2 + 3N + 4 < 2N^2 (c=2 & k=4)

O(1)- constant time


This means that the algorithm requires the same fixed number of steps regardless of the size of the
task.
Example:
1) a statement involving basic operations
Here are some examples of basic operations.
-one arithmetic operation (eg., +, *)
-one assignment
-one test (eg., x==0)
-one read(accessing an element from an array)

2) A sequence of statements involving basic operations:
statement 1;
statement 2;
..........
statement k;
Time for each statement is constant and the total time is also constant: O(1)

O(n)- linear time


This means that the algorithm requires a number of steps proportional to the size of the task.

Examples:
1. Traversing an array.
2. Sequential/Linear search in an array.
3. Best case time complexity of Bubble sort (i.e when the elements of array are in sorted order).

The basic structure is:
for (i = 0; i < N; i++) {
sequence of statements of O(1)
}

The loop executes N times, so the total time is N*O(1) which is O(N).
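As a concrete illustration of O(n) behaviour, a sequential search could be written in Java as in the sketch below (the method and parameter names are chosen only for this example):

// Returns the index of key in arr, or -1 if it is not present.
// The loop body does O(1) work and runs at most N times, so the
// running time is O(N) in the worst case.
static int linearSearch(int[] arr, int key) {
    for (int i = 0; i < arr.length; i++) {
        if (arr[i] == key) {
            return i;   // best case: key found at index 0, O(1)
        }
    }
    return -1;          // worst case: key not present, N comparisons
}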

O(n^2) - quadratic time


The number of operations is proportional to the size of the task squared.
Examples:
1) Worst case time complexity of Bubble, Selection and Insertion sort.

Nested loops:
1.
for (i = 0; i < N; i++) {
    for (j = 0; j < M; j++) {
        sequence of statements of O(1)
    }
}
The outer loop executes N times and the inner loop executes M times, so the time complexity is
O(N*M).
2.
for (i = 0; i < N; i++) {
    for (j = 0; j < N; j++) {
        sequence of statements of O(1)
    }
}
Now the time complexity is O(N^2).

3. Let's consider nested loops where the number of iterations of the inner loop depends on the
value of the outer loop's index.

for (i = 0; i < N; i++) {
    for (j = i+1; j < N; j++) {
        sequence of statements of O(1)
    }
}

Let us see how many iterations the inner loop has:


Value of i Number of iterations of inner loop
0 N-1
1 N-2
..... .......
N-3 2
N-2 1
N-1 0
So the total number of times the “sequence of statements” within the two loops executes is :
(N-1)+(N-2)+.....2+1+0 which is N*(N-1)/2 or (1/2)*(N^2) - (1/2)* N
and we can say that it is O(N^2) (we can ignore multiplicative constant and for large problem
size the dominant term determines the time complexity)

O(log n) - logarithmic time


Examples:
1. Binary search in a sorted array of n elements.
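For reference, a typical iterative binary search looks like the sketch below (written in Java for this illustration); each iteration halves the remaining range, so at most about log2(n) iterations are needed:

// Searches a sorted array; returns the index of key, or -1 if absent.
static int binarySearch(int[] sorted, int key) {
    int low = 0, high = sorted.length - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   // avoids overflow of (low + high)
        if (sorted[mid] == key) return mid;
        if (sorted[mid] < key) low = mid + 1;   // discard the left half
        else high = mid - 1;                    // discard the right half
    }
    return -1;   // O(log n) iterations in the worst case
}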

O(n log n) - "n log n " time

Examples:
1. Merge Sort, Quick Sort (average case), etc.

O(a^n)(a>1)- exponential time


Examples:
1. Recursive Fibonacci implementation
2. Towers of Hanoi
The best time in the above list is obviously constant time, and the worst is exponential time
which, as we have seen, quickly overwhelms even the fastest computers even for relatively small
n. Polynomial growth (linear, quadratic, cubic, etc.) is considered manageable as compared to
exponential growth.

Using the "<" sign informally, we can say that the order of growth is
O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(a^n) where a>1

A word about Big-O when a function is the sum of several terms


If a function (which describes the order of growth of an algorithm) is a sum of several terms, its
order of growth is determined by the fastest growing term. In particular, if we have a
polynomial
p(n) = a_k·n^k + a_(k-1)·n^(k-1) + ... + a_1·n + a_0

its growth is of the order n^k:

p(n) = O(n^k)

Example:

{perform any statement S1}                  → O(1)
for (i=0; i < n; i++) {
    {perform any statement(s) S2}           → O(n)
    {run through another loop n times}      → O(n^2)
}

Total execution time: O(1) + O(n) + O(n^2), which is O(n^2)
Statements with method calls:
When a statement involves a method call, the complexity of the statement includes the complexity
of the method call. Assume that you know that method f takes constant time, and that method g
takes time proportional to (linear in) the value of its parameter k. Then the statements below have
the time complexities indicated.
f(k); // O(1)
g(k); // O(k)

When a loop is involved, the same rule applies. For example:


for (j = 0; j < N; j++) g(N);

has complexity O(N^2). The loop executes N times and each method call g(N) is of complexity O(N).

Other Examples

1. for (j = 0; j < N; j++) f(j);


This is O(N) since f(j) is O(1) and it is executed N times.

2. for (j = 0; j < N; j++) g(j);


The first time the loop executes, j is 0 and g(0) takes no operations. The next time, j is 1
and g(1) takes 1 operation. The last time the loop executes, j is N-1 and g(N-1) takes N-1
operations. The total time is the sum of the first N-1 numbers, which is O(N^2).
3. for (j = 0; j < N; j++) g(k);
Each time through the loop g(k) takes k operations and the loop executes N times. Since
you don't know the relative size of k and N, the overall complexity is O(N * k).
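To make these loop examples concrete, the following Java sketch (the methods f and g are hypothetical and written only to match the assumptions stated above: f takes constant time, g(k) does work proportional to k):

static void f(int k) {              // O(1): a fixed number of operations
    int x = k + 1;
}

static void g(int k) {              // O(k): loop proportional to the argument
    for (int i = 0; i < k; i++) {
        int x = i;                  // O(1) body
    }
}

// for (j = 0; j < N; j++) f(j);   ->  N * O(1)            = O(N)
// for (j = 0; j < N; j++) g(j);   ->  0 + 1 + ... + (N-1) = O(N^2)
// for (j = 0; j < N; j++) g(k);   ->  N * O(k)            = O(N*k)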

So why should we bother about time complexity?


Suppose the time taken by one operation = 1 microsecond and the problem size N = 100.

Complexity   Time
N            0.0001 sec.
N^2          0.01 sec.
N^4          100 sec.
2^N          about 4x10^16 years

Let's look at the problem of computing Fibonacci numbers:

A recursive solution:
public long Fib1(long n){
if ((n == 1 )||(n==2)) return 1;
return Fib1(n-1) + Fib1(n-2);
}

Running Time Analysis:


Let T(n) be the number of steps needed to compute F(n) (the nth Fibonacci number). We can see that
T(n) = T(n-1) + T(n-2) + 1 .......... (i), with T(1) = T(2) = 1
T(n) is exponential in n. It takes approximately 2^(0.7n) steps to compute F(n) (the proof is beyond the
scope of this course). Computing F(200) would take about 2^140 steps, which would take far longer than
the age of the universe on any real computer!

What takes so long?


The same subproblems get solved over and over again!
F(5)
↙ ↘
F(4) F(3)
↙↘ ↙↘
F(3) F(2) F(2) F(1)
↙↘
F(2) F(1)

A better solution:
Solve F1, F2, ..., Fn. Solve them in order and save their values!

Function Fib2(n){
Create an array fib[1....n]
fib[1] = 1
fib[2] = 1
for i = 3 to n:
fib[i] = fib[i-1] + fib[i-2]
return fib[n]
}

The time complexity of this algorithm is O(n). The number of steps required is proportional to n.
F(200) is now reasonable so is F(2000) and F(20000).
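A direct Java translation of Fib2 might look like the sketch below (it keeps the 1-based indexing of the pseudocode, so index 0 of the array is unused, assumes n ≥ 1, and uses long, which overflows past F(92)):

// Bottom-up Fibonacci: each value is computed exactly once, so the time is O(n).
static long fib2(int n) {
    long[] fib = new long[n + 1];
    fib[1] = 1;
    if (n >= 2) fib[2] = 1;
    for (int i = 3; i <= n; i++) {
        fib[i] = fib[i - 1] + fib[i - 2];   // reuse the saved values
    }
    return fib[n];
}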

Moral: The right algorithm makes all the difference

Some Important Recurrence Relations :


Recurrence Algorithm Big-Oh Solution
T(n) = T(n/2) + O(1) Binary Search O(log n)
T(n) = T(n-1) + O(1) Sequential Search O(n)
T(n) = T(n-1) + O(n) Selection Sort (and other n^2 sorts in the worst case) O(n^2)
T(n) = 2*T(n/2) + O(n) Merge Sort & Quicksort (average case) O(n log n)

Tower of Hanoi
The Tower of Hanoi is a mathematical puzzle. It consists of three rods, and a number of disks of
different sizes which can slide onto any rod. The puzzle starts with the disks neatly stacked in
order of size on one rod, the smallest at the top, thus making a conical shape.
The objective of the puzzle is to move the entire stack to another rod, obeying the following rules:
• Only one disk may be moved at a time.
• Each move consists of taking the upper disk from one of the rods and sliding it onto another rod, on top of the other disks that may already be present on that rod.
• No disk may be placed on top of a smaller disk.

We want to write a recursive method, THanoi(n,A,B,C) which moves n disks from peg A to peg C
using peg B for intermediate transfers.

Observation: THanoi(n, A, B, C) is equivalent to performing following tasks:


THanoi(n-1, A, C, B) (which means moving n-1 disks from A to B using C)
Transferring the largest disk from peg A to peg C
THanoi(n-1, B, A, C) (which means moving n-1 disks from B to C using A)

Stopping condition: n = 1

The time complexity of the above algorithm can be determined using the following recurrence relation.
Let T(n) be the number of steps required to solve the puzzle for n disks. From the observation above,
the solution for n disks is equivalent to solving the puzzle twice for n-1 disks plus a single step
that transfers the largest disk from the starting peg to the final peg, which takes constant time.
Thus,
T(n) = T(n-1) + T(n-1) + O(1) = 2*T(n-1) + O(1)
The solution to this recurrence relation is exponential in n and so T(n) is of exponential order. The
proof is out of scope of this course.

This is an example where recursion is much easier to formulate than a loop based solution.
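A recursive implementation following the observation above could look like this Java sketch (peg names are passed as characters, and printing the move stands in for actually transferring a disk):

// Moves n disks from peg 'from' to peg 'to', using peg 'via' as the spare.
// T(n) = 2*T(n-1) + 1, so the number of moves printed is 2^n - 1.
static void tHanoi(int n, char from, char via, char to) {
    if (n == 1) {                               // stopping condition
        System.out.println("move disk 1 from " + from + " to " + to);
        return;
    }
    tHanoi(n - 1, from, to, via);               // n-1 disks: from -> via
    System.out.println("move disk " + n + " from " + from + " to " + to);
    tHanoi(n - 1, via, from, to);               // n-1 disks: via -> to
}

Calling tHanoi(3, 'A', 'B', 'C') prints the 7 moves that transfer three disks from A to C.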
Polynomial vs. Exponential Running Time:

The time complexity (generally referred to as running time) of an algorithm is expressed as the amount of
time taken by the algorithm as a function of the size of the input to the problem. Big O notation is commonly
used to express time complexity because it suppresses lower order terms and constants and describes growth
asymptotically. Time complexity is estimated by counting the operations (provided as instructions in a
program) performed by the algorithm, where each operation takes a fixed amount of time to execute.
Generally, time complexities are classified as constant, linear, logarithmic, polynomial, exponential, etc.
Among these, the polynomial and exponential classes are the most prominently considered when characterizing
the complexity of an algorithm, and both are determined by the size of the input.

Polynomial Running Time

An algorithm is said to be solvable in polynomial time if the number of steps required to complete the
algorithm for a given input is O(n^k) for some non-negative integer k, where n is the size of the
input. Polynomial-time algorithms are said to be "fast." Most familiar mathematical operations such as
addition, subtraction, multiplication, and division, as well as computing square roots, powers, and
logarithms, can be performed in polynomial time. Computing the digits of most interesting mathematical
constants, including pi and e, can also be done in polynomial time.
All basic arithmetic operations (i.e., addition, subtraction, multiplication, division), comparison
operations, and sorting operations are considered polynomial time algorithms.

Exponential Running Time:

Exponential time characterizes the set of problems which can be solved by exponential time algorithms, but
for which no polynomial time algorithm is known. An algorithm is said to be exponential time if T(n) is
upper bounded by 2^poly(n), where poly(n) is some polynomial in n. More formally, an algorithm is
exponential time if T(n) is bounded by O(2^(n^k)) for some constant k.

Algorithms which have exponential time complexity grow much faster than polynomial algorithms. The
difference lies in where the variable appears in the expression for the running time. Expressions with
polynomial time complexity have the variable in the base of their terms, for example n^3 + 2n^2 + 1;
notice n is in the base, NOT the exponent. In exponential expressions, the variable is in the exponent,
for example 2^n. As said before, exponential time grows much faster. If n is equal to 1000 (a reasonable
input for an algorithm), then 1000^3 is 1 billion, while 2^1000 is astronomically large; for comparison,
it far exceeds the number of atoms in the observable universe (roughly 10^80).
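The gap can be seen directly with exact arithmetic, as in the small Java sketch below (class and variable names are chosen only for this illustration):

import java.math.BigInteger;

public class GrowthDemo {
    public static void main(String[] args) {
        int n = 1000;
        BigInteger poly = BigInteger.valueOf(n).pow(3);            // n^3 = 1,000,000,000 (10 digits)
        BigInteger expo = BigInteger.valueOf(2).pow(n);            // 2^1000 (302 digits)
        System.out.println("n^3 has " + poly.toString().length() + " decimal digits");
        System.out.println("2^n has " + expo.toString().length() + " decimal digits");
    }
}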
Average, Best, and Worst Case Complexities:

We can have three cases to analyze an algorithm:


1) Worst Case
2) Average Case
3) Best Case

Worst Case:
In the worst case analysis, we calculate an upper bound on the running time of an algorithm. We must know
the case that causes the maximum number of operations to be executed. For linear search, the worst case
happens when the element to be searched (call it x) is not present in the array. When x is not present, the
search function compares it with all the elements of arr[] one by one. Therefore, the worst case time
complexity of linear search is Θ(n).

Average Case:
In average case analysis, we take all possible inputs, calculate the computing time for each of them,
sum all the calculated values, and divide the sum by the total number of inputs. We must know (or predict)
the distribution of cases. For the linear search problem, let us assume that all cases are uniformly
distributed (including the case of x not being present in the array). So we sum all the cases and divide
the sum by (n+1). The average case time complexity is therefore:

Average case time = ( Σ (i = 1 to n+1) θ(i) ) / (n+1)
                  = θ( (n+1)(n+2)/2 ) / (n+1)
                  = θ(n)

Best Case:

In the best case analysis, we calculate a lower bound on the running time of an algorithm. We must know the
case that causes the minimum number of operations to be executed. In the linear search problem, the best
case occurs when x is present at the first location. The number of operations in the best case is constant
(not dependent on n), so the time complexity in the best case is Θ(1).

Most of the time, we do worst case analysis to analyze algorithms, because worst case analysis guarantees
an upper bound on the running time, which is useful information.
Average case analysis is not easy to do in most practical cases and is rarely done; it requires knowing
(or predicting) the mathematical distribution of all possible inputs.
Best case analysis is of limited value: guaranteeing a lower bound on an algorithm provides no real
guarantee, since in the worst case the algorithm may still take years to run.
For some algorithms, all the cases are asymptotically the same, i.e., there are no separate worst and best
cases. For example, Merge Sort performs Θ(n log n) operations in all cases. Most other sorting algorithms
have distinct worst and best cases. For example, in the typical implementation of Quick Sort (where the
pivot is chosen as a corner element), the worst case occurs when the input array is already sorted and the
best case occurs when the pivot always divides the array into two halves. For insertion sort, the worst
case occurs when the array is reverse sorted and the best case occurs when the array is already sorted in
the same order as the output.

Analysis of Recursive Algorithms

Example: Factorial
n! = 1•2•3...n and 0! = 1 (the initial case)
So the recursive definition is n! = n•(n-1)!

Algorithm F(n)
if n = 0 then return 1 // base case
else return F(n-1)•n // recursive call
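In Java the same recursive definition might be written as follows (a sketch; long overflows beyond 20!, which is fine for illustrating the count of multiplications):

// M(n) = M(n-1) + 1 multiplications with M(0) = 0, so M(n) = n.
static long factorial(int n) {
    if (n == 0) return 1;               // base case: no multiplication
    return factorial(n - 1) * n;        // one multiplication per call
}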

Basic operation? multiplication during the recursive call


Formula for multiplication
M(n) = M(n-1) + 1
is a recursive formula too. This is typical.

We need the initial case which corresponds to the base case


M(0) = 0
There are no multiplications

Solve by the method of backward substitutions


M(n) = M(n-1) + 1
= [M(n-2) + 1] + 1 = M(n-2) + 2 substituted M(n-2) for M(n-1)
= [M(n-3) + 1] + 2 = M(n-3) + 3 substituted M(n-3) for M(n-2)
... a pattern evolves
= M(0) + n
=n
Not surprising!
Therefore M(n) ε Θ(n)
Procedure for Recursive Algorithm
1. Specify problem size
2. Identify basic operation
3. Worst, best, average case
4. Write a recurrence relation for the number of basic operations. Don't forget the initial conditions (IC).
5. Solve the recurrence relation and determine the order of growth.

Example: Tower of Hanoi


Explain the problem using figure

Demo and show recursion

1. Problem size is n, the number of discs


2. The basic operation is moving a disc from one rod to another
3. There is no worst or best case
4. Recursive relation for moving n discs
M(n) = M(n-1) + 1 + M(n-1) = 2M(n-1) + 1
IC: M(1) = 1
5. Solve using backward substitution
M(n) = 2M(n-1) + 1
     = 2[2M(n-2) + 1] + 1 = 2^2·M(n-2) + 2 + 1
     = 2^2[2M(n-3) + 1] + 2 + 1 = 2^3·M(n-3) + 2^2 + 2 + 1
...
M(n) = 2^i·M(n-i) + Σ (j = 0 to i-1) 2^j = 2^i·M(n-i) + 2^i - 1
...
M(n) = 2^(n-1)·M(n-(n-1)) + 2^(n-1) - 1 = 2^(n-1)·M(1) + 2^(n-1) - 1 = 2^(n-1) + 2^(n-1) - 1 = 2^n - 1

M(n) ε Θ(2^n)

Terrible. Can we do better?

Where did the exponential term come from? Because two recursive calls are made. Suppose three
recursive calls were made; what would the order of growth be?

Lesson learned: Be careful with recursive algorithms; they can grow exponentially, especially if the
problem size is measured by the depth of the recursion tree while the operation count is the total
number of nodes.
Example: Binary Representation
Algorithm BinRec(n)
if n = 1 then return 1
else return BinRec(floor(n/2)) + 1
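In Java, BinRec can be written as below (a sketch for n ≥ 1; it returns the number of binary digits of n, i.e., floor(lg n) + 1):

// A(n) = A(floor(n/2)) + 1 additions with A(1) = 0, so A(n) = floor(lg n).
static int binRec(int n) {
    if (n == 1) return 1;               // one binary digit for n = 1
    return binRec(n / 2) + 1;           // integer division acts as floor(n/2)
}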

1. Problem size is n
2. Basic operation is the addition in the recursive call
3. There is no difference between worst and best case
4. Recursive relation including initial conditions
A(n) = A(floor(n/2)) + 1
IC A(1) = 0
5. Solve recursive relation
The division and floor function in the argument of the recursive call make the analysis difficult.
We could make the variable substitution n = 2^k to get rid of the floor function,
but the substitution skips a lot of values of n.
The smoothness rule (see appendix B) says that this is ok.

Smoothness rule
Let T(n) be eventually non-decreasing and f(n) be smooth (eventually non-decreasing and f(2n) ε Θ(f(n))).
If T(n) ε Θ(f(n)) for n that are powers of b, then T(n) ε Θ(f(n)) for all n.

Works for O and Ω.

Substitute n = 2^k (so k = lg n):

A(2^k) = A(2^(k-1)) + 1 with IC A(2^0) = 0

A(2^k) = [A(2^(k-2)) + 1] + 1 = A(2^(k-2)) + 2
       = [A(2^(k-3)) + 1] + 2 = A(2^(k-3)) + 3
       ...
       = A(2^(k-i)) + i
       ...
       = A(2^(k-k)) + k
A(2^k) = k

Substitute back k = lg(n)


A(n) = lg(n) ε Θ(lg n)
Example: Fibonacci Numbers Sequence
A classic example for more elaborate recursion relations.

Fibonacci Sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...


Fibonacci (1202) proposed it as a model for the growth of a rabbit population.
It can be defined by the simple recurrence
F(n) = F(n-1) + F(n-2) for n > 1
IC: F(0) = 0 and F(1) = 1 (why do we need two ICs?)

Homogenous second-order linear recurrence with constant coefficients


a·x(n) + b·x(n-1) + c·x(n-2) = 0
It is homogeneous because the recurrence equals zero.
Why second-order linear? Substitute the proposed solution x(n) = r^n:
a·r^n + b·r^(n-1) + c·r^(n-2) = 0
Divide by r^(n-2):
a·r^2 + b·r + c = 0, the characteristic equation, which is a second order polynomial.
The real roots r1 and r2 are solutions, and the general solution is
x(n) = α·r1^n + β·r2^n
where α and β are determined from the initial conditions.

Apply to Fibonacci Recursion


F(n) - F(n-1) - F(n-2) = 0, a homogeneous second order recurrence with constant coefficients
r^2 - r - 1 = 0, the characteristic equation
r1,2 = (1 ± √5)/2 = φ or φ'   { φ = (1+√5)/2 and φ' = (1-√5)/2 }
The general form of the solution is
F(n) = α·φ^n + β·φ'^n where α and β are unknowns
Using the ICs:
α + β = 0
φ·α + φ'·β = 1
Solving by substituting the first equation into the second gives α = 1/√5 and β = -1/√5.
So
F(n) = (1/√5)(φ^n - φ'^n) = φ^n/√5 rounded to the nearest integer
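The closed form can be checked numerically against the recurrence, as in the Java sketch below (Math.round of φ^n/√5 is exact only for moderate n, since double arithmetic eventually loses precision):

// Binet's formula: F(n) is phi^n / sqrt(5) rounded to the nearest integer.
static long fibClosedForm(int n) {
    double sqrt5 = Math.sqrt(5.0);
    double phi = (1.0 + sqrt5) / 2.0;
    return Math.round(Math.pow(phi, n) / sqrt5);
}
// fibClosedForm(10) == 55, matching F(10) computed from the recurrence.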

Example: Recursive Algorithm for Fibonacci Numbers


Algorithm F(n)
if n ≤ 1 then return n
else return F(n-1) + F(n-2)
1. Problem size is n, the sequence number for the Fibonacci number
2. Basic operation is the sum in recursive call
3. No difference between worst and best case
4. Recurrence relation
A(n) = A(n-1) + A(n-2) + 1
IC: A(0) = A(1) = 0
or
A(n) - A(n-1) - A(n-2) = 1, an inhomogeneous recurrence because of the 1

In general, the solution to the inhomogeneous problem is equal to the sum of the solution to the
homogeneous problem plus a particular solution to the inhomogeneous part. The undetermined coefficients
of the homogeneous solution are used to satisfy the ICs.

In this case A(n) = B(n) + I(n) where


A(n) is solution to complete inhomogeneous problem
B(n) is solution to homogeneous problem
I(n) solution to only the inhomogeneous part of the problem

We guess at I(n) and then determine the new IC for the homogenous problem for B(n)

For this problem the correct guess is I(n) = -1 (substituting a constant c for A(n) in the recurrence
gives c - c - c = 1, so c = -1).

Substituting A(n) = B(n) - 1 into the recursion, we get

B(n) - B(n-1) - B(n-2) = 0 with IC B(0) = B(1) = 1

This is the same recurrence as for F(n), with different ICs (B(0) = B(1) = 1, so B(n) = F(n+1)).

We do not really need the exact solution; we can conclude
A(n) = B(n) - 1 = F(n+1) - 1 ε Θ(φ^n), i.e., exponential.

There is also the Master Theorem, which gives the asymptotic order of growth for many common recurrences.

Iterative algorithm

Algorithm Fib(n)
F[0] ← 0; F[1] ← 1
for i ← 2 to n do
F[i] ← F[i-1] + F[i-2]
return F[n]

Order of growth is Θ(n).


Why is it so much better than the recursive algorithm? Draw the recursion tree.
