The document provides an overview of data structures, focusing on Abstract Data Types (ADTs) and the List ADT, including their implementations through arrays and linked lists. It discusses the characteristics of primitive and non-primitive data structures, along with the advantages of modular programming. Additionally, it details operations for managing lists, including insertion, deletion, and searching, while highlighting the limitations of array-based implementations compared to linked lists.


UNIT - I

LIST

Abstract Data Types (ADTs) – List ADT – Array- based implementation – Linked list
implementation – Singly linked lists – Circularly linked lists – Doubly-linked lists –
Applications of lists – Polynomial ADT – Radix Sort – Multilists.

Introduction to Data Structures


A data structure is a particular way of organizing data in a computer so that it can
be used effectively; the idea is to reduce the space and time complexity of different tasks.
More formally, a data structure is a specialized format for organizing, processing, retrieving
and storing data. There are several basic and advanced types of data structures, all
designed to arrange data to suit a specific purpose.

Types of Data Structure


The data structure can be divided into two basic types. They are,
 Primitive data structure
 Non-primitive data structure

 Primitive data structure is a fundamental type of data structure that stores data of
only one type.
 Non-primitive data structure is a user-defined type of data structure that stores data
of different types in a single entity. It is further classified into two types.

 They are:
1. Linear Data Structures
2. Non-Linear Data Structures

 Linear Data Structure: a data structure in which the elements have a linear
arrangement in memory.
 Non-Linear Data Structure: a data structure which represents a hierarchical
arrangement of elements.

1.1 ABSTRACT DATA TYPES (ADTs)

An abstract data type is defined only by the operations that may be performed on it
and by mathematical pre-conditions and constraints on the effects (and possibly cost) of
those operations.

For example, an abstract stack, which is a last-in-first-out structure, could be defined


by three operations: push, that inserts some data item onto the structure, pop, that extracts
an item from it, and peek or top, that allows data on top of the structure to be examined
without removal.

An abstract queue data structure, which is a first-in-first-out structure, would also have
three operations: enqueue, to join the queue; dequeue, to remove the first element from the
queue; and front, to access and serve the first element in the queue.

An ADT may be implemented by specific data types or data structures, in many ways
and in many programming languages; or described in a formal specification language. ADTs
are often implemented as modules: the module's interface declares procedures that
correspond to the ADT operations, sometimes with comments that describe the constraints.
This information hiding strategy allows the implementation of the module to be changed
without disturbing the client programs.

Defining an abstract data type (ADT)

An abstract data type is defined as a mathematical model of the data objects that
make up a data type as well as the functions that operate on these objects. There are no
standard conventions for defining them. A broad division may be drawn between "imperative"
and "functional" definition styles.

Modules:

Programs can be split into smaller subprograms called modules. Each module is
a logical unit and does a specific job or process. The size of each module is kept small by
calling other modules.

Modularity has several advantages:


 Modules can be compiled separately which makes debugging process easier.
 Several modules can be implemented and executed simultaneously.
 Modules can be easily enhanced.
An ADT is a set of operations and is an extension of modular design. The abstract data
type is a useful tool for specifying the logical properties of a data type: ADT refers to the
basic mathematical concept that defines the data type. E.g., objects such as list, set and
graph, along with their operations, can be viewed as ADTs. In other words, an Abstract Data
Type is some data associated with a set of operations and mathematical abstractions, just
as integer, Boolean etc. are.
 Basic idea
1. Write once, use many.
2. Any changes are transparent to the other modules.

DESIRABLE PROPERTIES OF AN EFFECTIVE ALGORITHM:


 It should be clear / complete and definite.
 It should be efficient.
 It should be concise and compact.
 It should be less time consuming.
 Effective memory utilization.

APPLICATION OF DATA STRUCTURES:
Some of the applications of data structures are:

 Operating Systems
 Compiler Design
 Statistical And Numerical Analysis
 Database Management Systems
 Expert Systems
 Network Analysis

1.2 LIST ADT:

A list is a dynamic data structure: the number of nodes in a list may vary dramatically
as elements are inserted and removed. The dynamic nature of a list is implemented using a
linked list, and the static nature of a list is implemented using an array.

In general a list has the form A1, A2, …, AN, where N is the size of the list, and is
associated with a set of operations:

 Insert: add an element, e.g. Insert(X,5) inserts the element X after position 5.
 Delete: remove an element, e.g. Delete(X) deletes the element X.
 Find: find the position of an element (search), e.g. Find(X) returns the position of X.
 FindKth: find the kth element, e.g. FindKth(L,6) returns the element present at position
6 of list L.
 PrintList: display all the elements of the list.
 MakeEmpty: make the list empty.

IMPLEMENTATION OF LIST:
There are two methods
1. Array
2. Linked list

1.3 ARRAY IMPLEMENTATION OF LIST:

Array: A set of data elements of the same data type is called an array. An array is a static
data structure, i.e., the memory should be allocated in advance and the size is fixed. This
wastes memory space when the used space is less than the allocated space. An array
implementation allows the following operations.
The basic operations are:

a. Creation of a List.
b. Insertion of a data in the List
c. Deletion of a data from the List
d. Searching of a data in the list

Global Declaration:

int list[25], index=-1;

Note: The initial value of index is -1.

Create Operation:

To create an array, define the data type (like int) and specify the name of the array
followed by square brackets []. To initialize it, use a comma-separated list inside curly
braces: int myNumbers[] = {25, 50, 75, 100}; creates a variable that holds an
array of four integers.

Procedure:

void create()
{
    int n, i;
    printf("\nEnter the no. of elements to be added in the list");
    scanf("%d", &n);
    if (n <= maxsize)
    {
        for (i = 0; i < n; i++)
        {
            scanf("%d", &list[i]);
            index++;
        }
    }
    else
        printf("\nThe size is limited. You cannot add data into the list");
}
Explanation:

 The list is initially created with a set of elements.


 Get the no. of elements (n) to be added in the list.
 Check n is less than or equal to maximum size. If yes, add the elements to the list.
 Otherwise, give an error message.

Insert Operation:

This operation is performed to insert one or more elements into the array. As per the
requirements, an element can be added at the beginning, end, or at any index of the array.

Procedure:

void insert()
{
    int i, data, pos;
    printf("\nEnter the data to be inserted");
    scanf("%d", &data);
    printf("\nEnter the position at which the element is to be inserted");
    scanf("%d", &pos);
    if (index < maxsize)
    {
        for (i = index; i >= pos - 1; i--)
            list[i + 1] = list[i];
        index++;
        list[pos - 1] = data;
    }
    else
        printf("\nThe list is full");
}
Explanation:
 Get the data element to be inserted.
 Get the position at which the element is to be inserted.
 If index is less than maxsize, make that position empty by shifting the elements of
the list one place down.
 Insert the element in the position.
 Otherwise, it implies that the list is full.

Deletion Operation:

To delete a specific element from an array, the user must specify the position from which
the element should be removed. The deletion of an element does not affect the size of the
array.

Procedure:

void del()
{
    int i, pos;
    printf("\nEnter the position of the data to be deleted");
    scanf("%d", &pos);
    printf("\nThe data deleted is %d", list[pos - 1]);
    for (i = pos; i <= index; i++)
        list[i - 1] = list[i];
    index--;
}
Explanation:
 Get the position of the element to be deleted.
 Shift the later elements up by performing the assignment list[i-1] = list[i], where i
ranges from the given position to the last index of the array.

Display Operation:

Procedure:

void display()
{
    int i;
    for (i = 0; i <= index; i++)
        printf("\t%d", list[i]);
}
Explanation:
 Formulate a loop where i ranges from 0 to index (index denotes the index of the
last element in the array).
 Display each element in the array.

Limitation of array implementation


 An array size is fixed at the start of execution, so it can store only a limited number of
elements.
 Insertion and deletion in an array are expensive, since insertion is performed by
pushing the rest of the array one position down and deletion by shifting it one
position up.
A better approach is to use a linked list.

1.4 LINKED DATA STRUCTURE:

Linked data structure is a data structure which consists of a set of data records
(nodes) linked together and organized by references (links or pointers). The link between
data items can also be called a connector.
In linked data structures, the links are usually treated as special data types that can
only be dereferenced or compared for equality. Linked data structures are thus contrasted
with arrays and other data structures that require performing arithmetic operations on
pointers. This distinction holds even when the nodes are actually implemented as elements
of a single array.
Linking can be done in two ways: using dynamic allocation and using array index
linking.

Linked lists
A linked list is a collection of structures ordered not by their physical placement in
memory but by logical links that are stored as part of the data in the structure itself. It is not
necessary that they be stored in adjacent memory locations. Every structure has a data field
and an address field; the address field contains the address of its successor.
A list is a linear structure. Each item except the first (front, head) has a unique
predecessor, and each item except the last (end, tail) has a unique successor. The first item
has no predecessor, and the last item has no successor. An item within a list is specified by
its position in the list.
A linked list can be singly, doubly or multiply linked and can be either linear or
circular.

1.4.1 SINGLY LINKED LIST:

Singly linked list is the most basic linked data structure. In this the elements can be
placed anywhere in the heap memory, unlike an array which uses contiguous locations.
Nodes in a linked list are linked together using a next field: the address of each node is
stored in the next field of the previous node, i.e. each node of the list refers to its successor,
and the last node contains the NULL reference. It has a dynamic size, which can be
determined only at run time.

L -> 10 -> 40 -> 20 -> 30 -> NULL

Basic operations of a singly-linked list are:

1. Insert – Inserts a new element at the end of the list.


2. Delete – Deletes any node from the list.
3. Find – Finds any node in the list.
4. Print – Prints the list.

Algorithm:

1. The node of a linked list is a structure with fields data (which stores the value of
the node) and *next (which is a pointer of type node that stores the address of the next node).
2. Two pointers, *start (which always points to the first node of the linked list) and *temp
(which is used to point to the last node of the linked list), are initialized.
3. Initially temp = start and temp->next = NULL. Here, we take the first node as a
dummy node. The first node does not contain data; it is used to avoid handling
special cases in the insert and delete functions.

Functions –

1. Insert – This function takes the start node and the data to be inserted as arguments.
The new node is inserted at the end, so iterate through the list till we encounter the
last node. Then allocate memory for the new node and put the data in it. Lastly,
store NULL in the next field of the new node.

2. Delete – This function takes the start node (as pointer) and the data to be deleted as
arguments. Firstly, go to the node for which the node next to it has to be deleted. If
that node points to NULL (i.e. pointer->next == NULL), then the element to be deleted
is not present in the list. Else, pointer now points to a node and the node next to it has
to be removed: declare a temporary node (temp) which points to the node to be
removed.
Store the address of the node next to the temporary node in the next field of
the node pointer (pointer->next = temp->next). Thus, by breaking the link, we
removed the node which is next to the pointer (which is also temp). Because we
deleted the node, we no longer require the memory used for it; free() will deallocate
the memory.

3. Find – This function takes the start node (as pointer) and the data value of the node
(key) to be found as arguments. The first node is a dummy node, so start with the
second node. Iterate through the entire linked list and search for the key: until the
next field of the pointer is equal to NULL, check if pointer->data == key. If it is, the
key is found; else, move to the next node and search (pointer = pointer->next). If
the key is not found return 0, else return 1.
4. Print – This function takes the start node (as pointer) as an argument. If pointer ==
NULL, then there is no element in the list. Else, print the data value of the node
(pointer->data) and move to the next node by recursively calling the print function
with pointer->next sent as an argument.

Performance:

1. The advantage of a singly linked list is its ability to expand to accept a virtually
unlimited number of nodes in a fragmented memory environment.

2. The disadvantage is its speed. Operations in a singly-linked list are slow as it uses
sequential search to locate a node.

Declaration of Linked List

typedef struct node *position;
typedef struct node *List;

struct node
{
    int data;
    position next;
};

position L, p, newnode;

void insert(int X, List L, position p);
position find(List L, int X);
void delete(int X, List L);

Routine to insert an element in list

void insert(int X, List L, position p)
{
    position newnode;
    newnode = malloc(sizeof(struct node));
    if (newnode == NULL)
        FatalError("Out of space");
    else
    {
        newnode->data = X;
        newnode->next = p->next;
        p->next = newnode;
    }
}

Routine to Insert an Element in the List:

L -> NULL (empty list with dummy header)

Insert(Start,10) - A new node with data 10 is inserted and its next field is updated to
NULL. The next field of the previous node is updated to store the address of the new node.

L -> 10 -> NULL

Insert(Start,20) - A new node with data 20 is inserted and its next field is updated to
NULL. The next field of the previous node is updated to store the address of the new node.

L -> 10 -> 20 -> NULL

Insert(Start,30) - A new node with data 30 is inserted and its next field is updated to NULL.
The next field of the previous node is updated to store the address of the new node.

L -> 10 -> 20 -> 30 -> NULL (P points to the node 10)

Insert(Start,40) - A new node with data 40 is inserted after the node pointed to by P; its
next field stores the address of the node that followed P, and the next field of P's node is
updated to store the address of the new node.

L -> 10 -> 40 -> 20 -> 30 -> NULL

Routine to check whether a list is Empty

int IsEmpty(List L)
{
    if (L->next == NULL)
        return 1;
    else
        return 0;
}

L -> NULL (empty list)

If the answer is 1 the result is true; if the answer is 0 the result is false.

Routine to check whether the current position is last in the List

int IsLast(List L, position p)
{
    if (p->next == NULL)
        return 1;
    else
        return 0;
}

L -> 10 -> 40 -> 20 -> 30 -> NULL (p points to the node 30)

If the answer is 1 the result is true; if the answer is 0 the result is false.

Routine to Find the Element in the List:

position find(List L, int X)
{
    position p;
    p = L->next;
    while (p != NULL && p->data != X)
        p = p->next;
    return p;
}

Find(start,10) - To find an element, start from the first node of the list and move to the next
with the help of the address stored in the next field.

L -> 10 -> 40 -> 20 -> 30 -> NULL

Find Previous
It returns the position of the predecessor of X.

position findprevious(int X, List L)
{
    position p;
    p = L;
    while (p->next != NULL && p->next->data != X)
        p = p->next;
    return p;
}

Routine to Count the Element in the List:

void count(List L)
{
    position p;
    int count = 0;
    p = L->next;
    while (p != NULL)
    {
        count++;
        p = p->next;
    }
    printf("%d", count);
}

L -> 10 -> 40 -> 20 -> 30 -> NULL (count = 4)
Routine to Delete an Element in the List:
Delete(start,20) - The node with data 20 is found and unlinked. The next field of the
previous node is updated to store the address of the node next to the deleted node.

It deletes the first occurrence of element X from the list L

void delete(int X, List L)
{
    position p, temp;
    p = findprevious(X, L);
    if (!IsLast(p, L))
    {
        temp = p->next;
        p->next = temp->next;
        free(temp);
    }
}

Before deletion:
L -> 10 -> 40 -> 20 -> 30 -> NULL (P points to 40, Temp points to 20)

After deletion:
L -> 10 -> 40 -> 30 -> NULL

Routine to Delete the List

DeleteList(L) - To delete the whole list, start from the first node and, with the help of the
address stored in the next field, move to the next node, freeing each node as it is passed.

void deletelist(List L)
{
    position P, temp;
    P = L->next;
    L->next = NULL;
    while (P != NULL)
    {
        temp = P->next;
        free(P);
        P = temp;
    }
}

Routine to find next Element in the List

position findnext(int X, List L)
{
    position P;
    P = L->next;
    while (P != NULL && P->data != X)
        P = P->next;
    return P->next;
}

It returns the position of the successor of X.
1.4.2 DOUBLY- LINKED LIST:

Doubly-linked list is a more sophisticated form of linked list data structure. Each
node of the list contains two references (or links): one to the previous node and one to the
next node. The previous link of the first node and the next link of the last node point to
NULL. In comparison to a singly-linked list, a doubly-linked list requires handling of more
pointers, but less traversal is required, as one can use the previous links to reach the
preceding element. It has a dynamic size, which can be determined only at run time.

NULL <- 10 <-> 20 <-> 30 <-> 40 -> NULL

Basic operations of a doubly-linked list are:

1. Insert – Inserts a new element at the end of the list.


2. Delete – Deletes any node from the list.
3. Find – Finds any node in the list.
4. Print – Prints the list.

Algorithm:
The node of a linked list is a structure with fields data (which stores the value of the node),
*previous (which is a pointer of type node that stores the address of the previous node) and
*next (which is a pointer of type node that stores the address of the next node).

Two pointers, *start (which always points to the first node of the linked list) and *temp (which
is used to point to the last node of the linked list), are initialized.
Initially temp = start, temp->previous = NULL and temp->next = NULL. Here, we take
the first node as a dummy node. The first node does not contain data; it is used to avoid
handling special cases in the insert and delete functions.
Functions –
1. Insert – This function takes the start node and the data to be inserted as arguments.
The new node is inserted at the end, so iterate through the list till we encounter the
last node. Then allocate memory for the new node and put the data in it. Lastly,
store the address of the previous node in the previous field of the new node and
NULL in the next field of the new node.

2. Delete – This function takes the start node (as pointer) and the data to be deleted as
arguments. Firstly, go to the node for which the node next to it has to be deleted. If that
node points to NULL (i.e. pointer->next == NULL), then the element to be deleted is not
present in the list. Else, pointer now points to a node and the node next to it has to be
removed: declare a temporary node (temp) which points to the node to be removed.

Store the address of the node next to the temporary node in the next field of the
node pointer (pointer->next = temp->next) and also link the pointer and the node next
to the node to be deleted (temp->next->prev = pointer). Thus, by breaking the links we
removed the node which is next to the pointer (which is also temp). Because we deleted
the node, we no longer require the memory used for it; free() will deallocate the memory.
3. Find – This function takes the start node (as pointer) and the data value of the node (key)
to be found as arguments. The first node is a dummy node, so start with the second node.
Iterate through the entire linked list and search for the key:
until the next field of the pointer is equal to NULL, check if pointer->data == key. If it is, the
key is found; else, move to the next node and search (pointer = pointer->next). If the key
is not found return 0, else return 1.

4. Print – This function takes the start node (as pointer) as an argument. If pointer == NULL,
then there is no element in the list. Else, print the data value of the node (pointer->data) and
move to the next node by recursively calling the print function with pointer->next sent as
an argument.

Performance:

1. The advantage of a doubly linked list is that we don't need to keep track of the
previous node for traversal, nor traverse the whole list to find the previous node: the
previous pointer gives it directly.
2. The disadvantage is that more pointers need to be handled and more links need to
be updated.
General declaration routine:

typedef struct node *position;
struct node
{
    int data;
    position prev;
    position next;
};

Initially the list contains only the dummy header node:

NULL <- L -> NULL

Routine for insert an element:

void insert(int x, List L, position p)
{
    position newnode;
    newnode = malloc(sizeof(struct node));
    if (newnode == NULL)
        FatalError("Out of space");
    else
    {
        newnode->data = x;
        newnode->next = p->next;
        if (p->next != NULL)
            p->next->prev = newnode;
        p->next = newnode;
        newnode->prev = p;
    }
}
Insert(Start,10) - A new node with data 10 is inserted; its next field is updated to NULL and
its previous field is updated to store the address of the previous node. The next field of the
previous node is updated to store the address of the new node.

NULL <- L <-> 10 -> NULL

Insert(Start,20) - A new node with data 20 is inserted; its next field is updated to NULL and
its previous field is updated to store the address of the previous node. The next field of the
previous node is updated to store the address of the new node.

NULL <- L <-> 10 <-> 20 -> NULL

Insert(Start,30) - A new node with data 30 is inserted; its next field is updated to NULL and
its previous field is updated to store the address of the previous node. The next field of the
previous node is updated to store the address of the new node.

NULL <- L <-> 10 <-> 20 <-> 30 -> NULL

Insert(Start,40) - A new node with data 40 is inserted after the node pointed to by P; its next
and previous fields are updated to link it between P's node and its successor, and the
neighbouring nodes are updated to point at the new node.

Before: NULL <- L <-> 10 <-> 20 <-> 30 -> NULL (P points to 10)
After:  NULL <- L <-> 10 <-> 40 <-> 20 <-> 30 -> NULL

Routine to displaying an element:

void display(List L)
{
    position p;
    p = L->next;
    while (p != NULL)
    {
        printf("%d ", p->data);
        p = p->next;
    }
    printf("NULL");
}

Print(start) prints 10, 40, 20, 30.

To print, start from the first node of the list and move to the next with the help of the address
stored in the next field.

NULL <- L <-> 10 <-> 40 <-> 20 <-> 30 -> NULL
Routine for deleting an element:
void delete(int x, List L)
{
    position p, temp;
    p = find(x, L);
    if (IsLast(p, L))
    {
        temp = p;
        p->prev->next = NULL;
        free(temp);
    }
    else
    {
        temp = p;
        p->prev->next = p->next;
        p->next->prev = p->prev;
        free(temp);
    }
}

Delete(start,10) - The node with data 10 is found and unlinked. The next field of the
previous node is updated to store the address of the node next to the deleted node, and
the previous field of the node next to the deleted one is updated to store the address of
the node that is before the deleted node.

Before: NULL <- L <-> 10 <-> 40 <-> 20 <-> 30 -> NULL
After:  NULL <- L <-> 40 <-> 20 <-> 30 -> NULL

Find(start,10) now reports 'Element Not Found'.

To find, start from the first node of the list and move to the next with the help of the address
stored in the next field.

NULL <- L <-> 40 <-> 20 <-> 30 -> NULL

1.4.3 CIRCULAR LINKED LIST:

Circular linked list is a more complicated linked data structure. In this the elements can
be placed anywhere in the heap memory, unlike an array which uses contiguous locations.
Nodes in a linked list are linked together using a next field: the address of each node is
stored in the next field of the previous node, i.e. each node of the list refers to its successor,
and, unlike a singly linked list, the last node points back to the first node. It has a dynamic
size, which can be determined only at run time.
Basic operations of a circularly-linked list are:

1. Insert – Inserts a new element at the end of the list.
2. Delete – Deletes any node from the list.
3. Find – Finds any node in the list.
4. Print – Prints the list.

Algorithm:
The node of a linked list is a structure with fields data (which stores the value of the
node) and *next (which is a pointer of type node that stores the address of the next node).
Two pointers, *start (which always points to the first node of the linked list) and *temp
(which is used to point to the last node of the linked list), are initialized.
Initially temp = start and temp->next = start. Here, we take the first node as a dummy
node. The first node does not contain data; it is used to avoid handling special cases in the
insert and delete functions.

Functions –
1. Insert – This function takes the start node and the data to be inserted as arguments.
The new node is inserted at the end, so iterate through the list till we encounter the
last node. Then allocate memory for the new node and put the data in it. Lastly,
store the address of the first node (start) in the next field of the new node.
2. Delete – This function takes the start node (as pointer) and the data to be deleted as
arguments. Firstly, go to the node for which the node next to it has to be deleted. If
that node points back to start (i.e. pointer->next == start), then the element to be
deleted is not present in the list. Else, pointer now points to a node and the node
next to it has to be removed: declare a temporary node (temp) which points to the
node to be removed. Store the address of the node next to the temporary node in
the next field of the node pointer (pointer->next = temp->next).
Thus, by breaking the link, we removed the node which is next to the pointer
(which is also temp). Because we deleted the node, we no longer require the
memory used for it; free() will deallocate the memory.

3. Find – This function takes the start node (as pointer) and the data value of the node
(key) to be found as arguments. A pointer start of type node is declared, which points
to the head node of the list (node *start = pointer). The first node is a dummy node,
so start with the second node. Iterate through the entire linked list and search for the
key: until the next field of the pointer is equal to start, check if pointer->data == key.
If it is, the key is found; else, move to the next node and search (pointer =
pointer->next). If the key is not found return 0, else return 1.
4. Print – This function takes the start node (as start) and the next node (as pointer) as
arguments. If pointer == start, then there is no element in the list. Else, print the
data value of the node (pointer->data) and move to the next node by recursively
calling the print function with pointer->next sent as an argument.

Performance:
 The advantage is that we no longer need both a head and tail variable to keep track of
the list. Even if only a single variable is used, both the first and the last list elements
can be found in constant time. Also, for implementing queues we will only need one
pointer namely tail, to locate both head and tail.
 The disadvantage is that the algorithms have become more complicated.
Example:

Initially:

start -> (back to start)    (empty list: only the dummy header, temp = start)

Insert(Start,2) - A new node with data 2 is inserted and its next field is updated to store
the address of the start node. The next field of the previous node is updated to store the
address of the new node.

start -> 2 -> (back to start)

Insert(Start,6) - A new node with data 6 is inserted and its next field is updated to store
the address of the start node. The next field of the previous node is updated to store the
address of the new node.

start -> 2 -> 6 -> (back to start)

Insert(Start,1) - A new node with data 1 is inserted and its next field is updated to store the
address of the start node. The next field of the previous node is updated to store the address
of the new node.

start -> 2 -> 6 -> 1 -> (back to start)

Insert(Start,10) - A new node with data 10 is inserted and its next field is updated to store
the address of the start node. The next field of the previous node is updated to store the
address of the new node.

start -> 2 -> 6 -> 1 -> 10 -> (back to start)

Insert(Start,8) - A new node with data 8 is inserted and its next field is updated to store the
address of the start node. The next field of the previous node is updated to store the address
of the new node.

start -> 2 -> 6 -> 1 -> 10 -> 8 -> (back to start)

Print(start->next) prints 2, 6, 1, 10, 8.

To print, start from the first node of the list and move to the next with the help of the
address stored in the next field.

start -> 2 -> 6 -> 1 -> 10 -> 8 -> (back to start)

Delete(start,1) - The node with data 1 is found and unlinked. The next field of the previous
node is updated to store the address of the node next to the deleted node.

start -> 2 -> 6 -> 10 -> 8 -> (back to start)

Find(start,10) reports 'Element Found'.

To find, start from the first node of the list and move to the next with the help of the address
stored in the next field.

start -> 2 -> 6 -> 10 -> 8 -> (back to start)
1.5 APPLICATIONS OF LIST

1.5.1 Polynomial Manipulation

Polynomial manipulations such as addition, subtraction and differentiation can be
performed using linked lists.
Declaration for Linked list implementation of Polynomial ADT

struct poly
{
    int coeff;
    int power;
    struct poly *next;
}*list1,*list2,*list3;

Creation of the Polynomial :

poly *create(poly *head1, poly *newnode1)
{
    poly *ptr;
    if (head1 == NULL)
    {
        head1 = newnode1;
        return head1;
    }
    else
    {
        ptr = head1;
        while (ptr->next != NULL)
            ptr = ptr->next;
        ptr->next = newnode1;
    }
    return head1;
}

Addition of two polynomials

void add()
{
    poly *ptr1, *ptr2, *newnode;
    ptr1 = list1;
    ptr2 = list2;
    while (ptr1 != NULL && ptr2 != NULL)
    {
        newnode = malloc(sizeof(struct poly));
        if (ptr1->power == ptr2->power)
        {
            newnode->coeff = ptr1->coeff + ptr2->coeff;
            newnode->power = ptr1->power;
            newnode->next = NULL;
            list3 = create(list3, newnode);
            ptr1 = ptr1->next;
            ptr2 = ptr2->next;
        }
        else if (ptr1->power > ptr2->power)
        {
            newnode->coeff = ptr1->coeff;
            newnode->power = ptr1->power;
            newnode->next = NULL;
            list3 = create(list3, newnode);
            ptr1 = ptr1->next;
        }
        else
        {
            newnode->coeff = ptr2->coeff;
            newnode->power = ptr2->power;
            newnode->next = NULL;
            list3 = create(list3, newnode);
            ptr2 = ptr2->next;
        }
    }
}
Subtraction of two polynomial

void sub()
{
    poly *ptr1,*ptr2,*newnode;
    ptr1=list1;
    ptr2=list2;
    while(ptr1!=NULL && ptr2!=NULL)
    {
        newnode=malloc(sizeof(struct poly));
        if(ptr1->power==ptr2->power)
        {
            newnode->coeff=(ptr1->coeff)-(ptr2->coeff);
            newnode->power=ptr1->power;
            newnode->next=NULL;
            list3=create(list3,newnode);
            ptr1=ptr1->next;
            ptr2=ptr2->next;
        }
        else if(ptr1->power>ptr2->power)
        {
            newnode->coeff=ptr1->coeff;
            newnode->power=ptr1->power;
            newnode->next=NULL;
            list3=create(list3,newnode);
            ptr1=ptr1->next;
        }
        else
        {
            newnode->coeff=-(ptr2->coeff);
            newnode->power=ptr2->power;
            newnode->next=NULL;
            list3=create(list3,newnode);
            ptr2=ptr2->next;
        }
    }
    /* Append any remaining terms; terms left in list2 are negated. */
    while(ptr1!=NULL)
    {
        newnode=malloc(sizeof(struct poly));
        newnode->coeff=ptr1->coeff;
        newnode->power=ptr1->power;
        newnode->next=NULL;
        list3=create(list3,newnode);
        ptr1=ptr1->next;
    }
    while(ptr2!=NULL)
    {
        newnode=malloc(sizeof(struct poly));
        newnode->coeff=-(ptr2->coeff);
        newnode->power=ptr2->power;
        newnode->next=NULL;
        list3=create(list3,newnode);
        ptr2=ptr2->next;
    }
}

1.5.2 RADIX SORT

Radix Sort is a linear sorting algorithm that sorts elements by processing them
digit by digit. It is an efficient sorting algorithm for integers or strings with fixed-size keys.
Rather than comparing elements directly, Radix Sort distributes the elements into buckets
based on each digit's value. By repeatedly sorting the elements by their significant digits,
from the least significant to the most significant, Radix Sort achieves the final sorted order.

Radix Sort Algorithm

The key idea behind Radix Sort is to exploit the concept of place value. It
assumes that sorting numbers digit by digit will eventually result in a fully sorted list. Radix
Sort can be performed using different variations, such as Least Significant Digit (LSD)
Radix Sort or Most Significant Digit (MSD) Radix Sort.

How does Radix Sort Algorithm work?

To perform radix sort on the array [170, 45, 75, 90, 802, 24, 2, 66], we follow these steps:

Step 1: Find the largest element in the array, which is 802. It has three digits, so we will
iterate three times, once for each significant place.

Step 2: Sort the elements based on the unit place digits (X=0). We use a stable sorting
technique, such as counting sort, to sort the digits at each significant place.

Sorting based on the unit place:

 Perform counting sort on the array based on the unit place digits.
 The sorted array based on the unit place is [170, 90, 802, 2, 24, 45, 75, 66].

Step 3: Sort the elements based on the tens place digits.

Sorting based on the tens place:


 Perform counting sort on the array based on the tens place digits.
 The sorted array based on the tens place is [802, 2, 24, 45, 66, 170, 75, 90].

Step 4: Sort the elements based on the hundreds place digits.

Sorting based on the hundreds place:

 Perform counting sort on the array based on the hundreds place digits.
 The sorted array based on the hundreds place is [2, 24, 45, 66, 75, 90, 170, 802].

Step 5: The array is now sorted in ascending order.
The final sorted array using radix sort is [2, 24, 45, 66, 75, 90, 170, 802].

Below is the implementation for the above illustrations:


// C++ implementation of Radix Sort
#include <iostream>
using namespace std;
// A utility function to get maximum
// value in arr[]
int getMax(int arr[], int n)
{
int mx = arr[0];
for (int i = 1; i < n; i++)
if (arr[i] > mx)
mx = arr[i];
return mx;
}
// A function to do counting sort of arr[]
// according to the digit
// represented by exp.
void countSort(int arr[], int n, int exp)
{
// Output array
int output[n];
int i, count[10] = { 0 };
// Store count of occurrences
// in count[]
for (i = 0; i < n; i++)
count[(arr[i] / exp) % 10]++;
// Change count[i] so that count[i]

// now contains actual position


// of this digit in output[]
for (i = 1; i < 10; i++)
count[i] += count[i - 1];
// Build the output array
for (i = n - 1; i >= 0; i--) {
output[count[(arr[i] / exp) % 10] - 1] = arr[i];
count[(arr[i] / exp) % 10]--;
}

// Copy the output array to arr[],


// so that arr[] now contains sorted
// numbers according to current digit
for (i = 0; i < n; i++)
arr[i] = output[i];
}
// The main function that sorts arr[]
// of size n using Radix Sort
void radixsort(int arr[], int n)

{
// Find the maximum number to
// know number of digits
int m = getMax(arr, n);
// Do counting sort for every digit.
// Note that instead of passing digit
// number, exp is passed. exp is 10^i
// where i is current digit number
for (int exp = 1; m / exp > 0; exp *= 10)
countSort(arr, n, exp);
}
// A utility function to print an array
void print(int arr[], int n)
{
for (int i = 0; i < n; i++)
cout << arr[i] << " ";
}
// Driver Code

int main()
{
int arr[] = { 170, 45, 75, 90, 802, 24, 2, 66 };
int n = sizeof(arr) / sizeof(arr[0]);
// Function Call
radixsort(arr, n);
print(arr, n);
return 0;
}

Output
2 24 45 66 75 90 170 802

1.5.3 MULTILISTS

• Basic concept: A multi-list is a data structure in which there are multiple links from
a node.
a. In a general multi-list, each node can have any number of pointers to other
nodes and there may or may not be inverses for each pointer.
b. A multi-list is a technique in which multiple lists are embedded into a single
structure. In a multi-list, pointers are used to make multiple link routes
through the data.

1. Example 1: Multiple orders of one set of elements

Suppose, we have the names and ages of the students as (Monika, 25), (Varsha,
23), (Supriya, 27), (Swati, 24). We can order the above elements alphabetically (By name) or
we can order them by numbers (By age). Here the solid link is for ascending order of names
and the dotted link is
for ascending order of age.

2. Sparse matrix representation using multi-lists

• Definition: A sparse matrix is a matrix which has very few non-zero elements
compared to its size m x n.

• For example, a matrix may be of size 100 x 100 with only 10 non-zero elements. The
concept of the sparse matrix came forward in computer science to investigate a
representation which stores only the non-zero elements of the matrix and still carries out the
operations quite efficiently.

Representations of sparse matrices :

• The representation of a sparse matrix is essentially a list of triplets. Since a sparse
matrix has very few non-zero elements, the remaining positions all hold zero and carry
no useful information. In this efficient representation we therefore store only the
non-zero values along with their positions.

• The 0th row will store the total rows of the matrix, the total columns of the matrix, and the
total number of non-zero elements.
• For example, suppose a matrix is 6 × 7 and the number of non-zero terms is, say, 8. In our
sparse matrix representation the matrix will be stored as

In the above representation, the total number of rows and columns of the matrix (6 × 7) are given
in the 0th row. Also the total number of non-zero elements is given in the value column of the 0th row, i.e. 8.

Here the above array is say named "Sparse". Then

Sparse [0][0] = 6

Sparse [0][1] = 7

Sparse [0][2] = 8

Sparse [1][0] = 0

Sparse [1][1]=6

Sparse [1][2] = -10, and so on.

While representing the sparse matrix using linked list, we will make use of two types of
nodes, header node and element node.

Example 2. For the given sparse matrix, write the diagrammatic linked list representation.

QUESTION BANK
PART A

1. What is an Abstract Data Type (ADT)?


A) A type of data structure that specifies only the type of operations allowed on it and
hides the implementation details.
B) A concrete data structure that defines how data is stored and manipulated.
C) A programming language specific data type.
D) A type of database.
Answer: A

2. Which of the following is a characteristic of the List ADT?


A) Fixed size
B) Elements are stored in a hash table
C) Allows duplicate elements
D) Elements are accessed using keys
Answer: C

3. In an array-based implementation of a list, what is the primary disadvantage?


Fast access time
B) Fixed size
C) Efficient insertion and deletion
D) Dynamic resizing
Answer: B

4. In a singly linked list, what does each node contain?


A) Data and a reference to the previous node
B) Only data
C) Data and a reference to the next node
D) Data, a reference to the next node, and a reference to the previous node
Answer: C

5. How does a circularly linked list differ from a singly linked list?
A) Nodes contain references to both next and previous nodes
B) The last node points to the first node
C) Nodes are organized in a hierarchical structure
D) It uses arrays instead of nodes
Answer: B

6. What is a key advantage of a doubly-linked list over a singly linked list?


A) Requires less memory
B) Simpler implementation
C) Easier to traverse in both directions
D) Faster access time
Answer: C

7. Which of the following is an application of linked lists?


A) Managing dynamic memory
B) Implementing LIFO stacks
C) Implementing FIFO queues
D) All of the above
Answer: D

8. How are polynomials typically represented using linked lists?


A) Using an array
B) Using a tree structure
C) Using a linked list where each node contains a coefficient and exponent
D) Using a hash table
Answer: C

9. What is the primary advantage of using Radix Sort?


A) It is a comparison-based sort
B) It has a worst-case time complexity of O(n log n)
C) It can sort numbers in linear time for a fixed range
D) It is easy to implement
Answer: C

10. What is a multilinked list?
A) A list where each node contains multiple references to other nodes
B) A list where each element can point to multiple lists
C) A list implemented using multiple arrays
D) A hierarchical list structure
Answer: A

11. Which of the following is true about array-based implementation of lists?


A) It allows for constant time insertions and deletions
B) It has a dynamic size that grows as needed
C) It has fast random access times
D) It uses pointers to link elements
Answer: C

12. In the context of linked lists, what is a "sentinel node"?


A) A node that marks the end of the list
B) A dummy node used to simplify list operations
C) A node that contains special data
D) A node used for circular lists only
Answer: B

13. An ADT is a mathematical model for data types where a data type is defined by its behavior,
in terms of its possible values and possible ______, from the point of view of a user of the data.
Answer: Operations

14. A list ADT allows operations such as insertions, deletions, and accesses to elements at any
______.
Answer: Position

15. In an array-based implementation of a list, elements are stored in ______ memory locations.

Answer: Contiguous

16. In a linked list implementation, each element points to the ______ element in the sequence.
Answer: Next

17. A singly linked list contains nodes where each node has data and a pointer to the ______ node.

Answer: Next

18. A doubly linked list allows traversal in both directions because each node has pointers to
both the ______ and ______ nodes.
Answer: Previous, Next

19. Lists can be used to implement other data structures such as stacks and ______.
Answer: Queues

20. In a polynomial ADT, each term is represented as a node containing a coefficient and an
______.
Answer: Exponent

21. Radix sort is a non-comparative sorting algorithm that sorts numbers digit by digit starting
from the ______ significant digit.
Answer: Least

--------------------------------------------------------------------------------------------------------------------
PART B

1. Define ADT. Give any two examples.


2. Distinguish between linear and nonlinear data structures.
3. Compare calloc() and realloc() function and mention its application in linked list.
4. Describe the differences between singly and doubly linked lists.
5. List out the areas in which data structures are applied extensively.
6. Define nonlinear data structure.
7. State the advantage of ADT.
8. List out the advantage of circular linked list.
9. Specify the use of Header node in a linked list.
10. Show the ways in which list ADT can be implemented.
11. Differentiate arrays and linked lists.
12. Analyze and write a find routine in array implementation.
13. Analyze and write the array representation of a polynomial: p(x) = 4x3 +6x2 +7x+9
14. Should arrays or linked lists be used for the following types of applications? Support your
justification.
i. Many search operations in a sorted list.
ii. Many search operations in an unsorted list.
15. Develop an algorithm for insertion operation in a singly linked list.
16. State the differences between singly linked list and doubly linked list. (Nov/Dec 2023)

17. Mention the use of Header node in a linked list. (Nov/Dec 2023)
18. Write the algorithm to implement the Push operation in a Stack ADT using an array.
(Nov/Dec 2024)
19. Identify the purpose of a Multilist in data structures. (Nov/Dec 2024)

PART C

1. Describe the following:


i. Applications of lists.
ii. Polynomial manipulation.
iii. What is a linked list?
2. Describe the suitable routine segments for any four operations.
3. List an algorithm to perform the following operations in a doubly linked list.
i. Insert a node at the end of the list.
ii. Delete the last node in the list.
4. i. Discuss the insertion and deletion procedures for cursor based linked lists.
ii. Give an algorithm for the deletion and reverse operations on doubly linked list.
5. i. Give the algorithm to perform insertion on a doubly linked list.
ii. Give the algorithm to perform deletion on a doubly linked list.
6. Write an algorithm to demonstrate a polynomial using a linked list for
i.Addition and Subtraction.
ii. Multiplication operations.
7. Analyze and write algorithm for Circular Linked list for the following operations using
structure pointer.
i. Create & Insert
ii. Delete & Display.
8. Explain the application of linked list in detail.

i. Radix sort.
ii. Multi list.
9. Consider an array A[1: n] Given a position, write an algorithm to insert an element in the
Array. If the position is empty, the element is inserted easily. If the position is already
occupied the element should be inserted with the minimum number of shifts. (Note: The
elements can shift to the left or to the right to make the minimum number of moves).
10. i. Illustrate the polynomial representation for 6x3 + 9x2 +7x +1 using linked list. Write
procedure to add and multiply two polynomial and explain with suitable example.
ii. What are the ways to insert a node in linked list? Write an algorithm for inserting a
node before a given node in a linked list.
11. Explain the steps involved in the following insertion operations in a Singly linked list.
i.Insert the node in the start and End.
ii. Insert the node in the middle of the List
12. Discuss an algorithm for linked list implementation of list.
15. Create an algorithm to add two polynomials using linked list.
16. Explain an algorithm to merge two sorted linked lists into a single sorted list.
17. Design algorithm for various operations performed on circular linked list. Extend the
algorithm defined in the previous question for the doubly linked circular list.
18. Examine the routines for the following operations
a) Find first element in a linked list
b) Find last element in a linked list
c) Find number of elements in a linked list. (Nov/Dec 2023)
19. Analyze the insert and delete operation of circular linked list using array. (Nov/Dec 2023)
20. Write the functions for a doubly link list to insertDll(), deleteDll() and searchDll() operations.
(Nov/Dec 2023)
20. Explain how to implement list with all operations. (Nov/Dec 2024)
21. Outline the difference between the advantages of linked list with array. (Nov/Dec 2024).
22. (i) Explain how polynomial manipulation can be done with the help of linked list. (Nov/Dec
2024).
23. (ii) Give algorithm to delete a node at front of a doubly linked list with suitable example.
(Nov/Dec 2024).

UNIT II
STACKS AND QUEUES
Stack ADT – Operations – Applications – Balancing Symbols – Evaluating arithmetic
expressions – Infix to Postfix conversion – Function Calls – Queue ADT –
Operations – Circular Queue – DeQueue – Applications of Queues.

2. 1 Stack ADT

Stack is an abstract data type and a linear data structure.

Abstract Data Type : Definition

A high-level conceptual description of a group of data values and the operations that may
be carried out on them is known as an Abstract Data Type (ADT). Without mentioning the
specific data structures or methods used for its implementation, it explains the
characteristics and behaviors of the data. ADTs are made to break down complicated data
into simpler, more understandable parts, putting some distance between the user and the
underlying data.

Characteristics of Abstract Data Types :

1. Encapsulation
2. Data Abstraction
3. Information Hiding

Significance of ADTs in Data Structures:

 Modularity and Reusability


 Ease of Upkeep
 Algorithmic Flexibility
 Understandability

Practical Applications of ADTs :

1. Arrays and Lists

2. Stacks and Queues

3. Graphs and Trees

4. Sets and Maps

5. Complex Data Structures

Common Abstract Data Types:

 Stack

 Queue

 List

 Array

 Linked List

 Tree

 Graph

 Set

 Map (Dictionary)

 Heap

 Priority Queue

 Hash Table

Key Concepts:

 Abstraction: By separating the implementation (how the data is stored and used) from
the interface (what you can do with the data), ADTs offer a certain level of abstraction.
This abstraction enables code modularity and encapsulation, which are crucial in
software engineering.

 Defined Operations: An ADT specifies a group of procedures that can be used to


operate on the data structure. Methods like insert, delete, retrieve, and traverse are
among these operations. ADTs aid in ensuring consistency and predictability by defining
what can be done with the data.

 Flexibility in Implementation: ADTs do not specify how the operations must be carried
out. This enables programmers to select the best implementation based on aspects like
speed, memory utilization, and particular use cases. As an illustration, arrays, linked
lists, or other data structures can be used to construct a list ADT.

 Data Hiding: ADTs frequently conceal from the user the internal workings of data
structures. Information concealing or encapsulation is what this is. Users of an ADT just
need to be aware of how to interact with the data using the prescribed operations; they
are not required to understand how the data is stored.

 Data Integrity: ADTs can impose restrictions on data integrity. For instance, a stack
ADT makes sure that elements are eliminated in LIFO, or last in, first out. This
consistency in behavior makes code design easier and lowers the possibility of
mistakes.

 Reusability: ADTs encourage the reuse of code. An ADT can be used in numerous
programs without modification once it has been defined. This reuse shortens the
development process and encourages precision in data handling.

 Interchangeability: The interchangeability of ADTs belonging to the same type. For


instance, as long as the interface stays the same, you can change from one
implementation of a list ADT (such as arrays) to another (such as linked lists) without
having an impact on the remaining code.

 Standardization: ADTs like lists, sets, and dictionaries (maps) are built into many
programming languages. These common ADTs are well-known and frequently applied
in software development.

 Data modeling: ADTs offer a means of conceptualizing and modeling actual data
structures and their activities. The conversion of actual problems into code is made
easier by this modeling.

 Testing and Debugging : ADTs make it simpler to test and troubleshoot. Without
having to be familiar with the implementation specifics, you may design test cases
based on the anticipated behavior of the abstract data type.

 High-Level Software Design : ADTs are essential to this process. They assist
designers and architects in defining the interfaces required for system components and
in selecting the best data structures for a given task.

2.1.1 STACK OPERATIONS IN DATA STRUCTURE

Stack Definition:

The Last-In-First-Out (LIFO) concept is used by stacks, a type of linear data structure.
A queue has two endpoints (front and rear), but a stack has only one. It has a single
pointer, the top pointer, which points to the stack's topmost element. When an element is
added to the stack, it is always added at the top, and an element can only be removed from
the top as well. To put it another way, a stack is a container that allows insertion and deletion
only at the end known as the stack's top.

LIFO (Last in First out) Method

According to this method, the piece that was added last will appear first. As an actual
illustration, consider a stack of dishes stacked on top of one another. We may claim that the
plate we put last comes out first because the plate we put last is on top and we take the
plate at the top.

Due to the possibility of understating ending inventory value, LIFO is not a reliable indicator of
it. Because of the higher COGS, LIFO leads to reduced net income (and
taxes). However, under LIFO during inflation, there are fewer inventory write-downs.
Results from the average-cost method fall between FIFO and LIFO.

Real Life Example of LIFO:

Let's say that business A sells 10 widgets. The first five widgets, at $100 apiece, were
delivered two days ago. The remaining five widgets, which cost $200 apiece, arrived
yesterday. The last items received are the first to go out on the market, according to the
LIFO technique of inventory management. How much may the accountant put down as a
cost even though seven widgets were sold?

Although the income from each widget is the same because of its fixed sales price, the
cost of each widget is determined by the inventory method used. The last item received is
the first item sold according to the LIFO mechanism. Accordingly, the $200 widgets were
the first to sell. Following that, the business sold two more of the $100 widgets. Five
widgets are priced at $200 each, and two are priced at $100, for a total cost of $1,200 for
the widgets using the LIFO approach.

As opposed to FIFO, which sells the $200 widgets last, the $100 widgets are sold first.
As a result, the price of the sold widgets, which breaks down to five at $100 and two at
$200, will be reported as $900. That's why LIFO generates bigger profits during price-
increasing periods.

Because of this, LIFO increases expenses during times of price growth and decreases
net revenue, which also lowers taxable income. Similar to this, LIFO reduces expenses and
raises net income during periods of declining prices, which also raises taxable revenue.

OPERATIONS ON STACK

Certain actions are made available to us so that we can manipulate a stack.

 push() to insert an element into the stack

 pop() to remove an element from the stack

 top() returns the top element of the stack

 isEmpty() returns true if the stack is empty, else false

 size() returns the size of the stack

push()

It pushes a new item onto the stack. If the stack is already completely full, the condition is called stack overflow.

Algorithm for push ():

begin
    if stack is full
        return
    else
        increment top
        stack[top] = value
    end if
end procedure

pop()

It takes an item out of the stack. The items are popped in the opposite sequence from
which they were pushed. If the stack is empty, the condition is referred to as an underflow.

Algorithm for pop():

begin
    if stack is empty
        return
    else
        store value of stack[top]
        decrement top
        return value
    end if
end procedure

top()

It returns the stack's top element.

Algorithm for top():


begin
return stack[top]

end procedure

isempty()

The answer is true if the stack is empty and false otherwise.

begin
    if top == -1
        return true
    else
        return false
end procedure

Practical knowledge about stack:

There are several instances of stacks in everyday life. Take the straightforward
illustration of dishes stacked on top of one another in a canteen. The plate that is placed at
the bottom of the stack stays there for the greatest amount of time since the plate at the top
is the first to be taken out. Therefore, it is clear that it adheres to the LIFO/FILO order.

Analysis of Complexity:

Operation   Complexity
push()      O(1)
pop()       O(1)
isEmpty()   O(1)
size()      O(1)

Various Stack Types :

1. Register Stack: The memory unit also has a form of stack called a register that can
only handle a tiny amount of data. Because the capacity of the register stack is so little
in comparison to the memory, its height is constantly constrained.

2. Memory Stack: This kind of stack is capable of managing a lot of data from memory.
The memory stack can be any height because it uses a lot of memory space.

Some Stack Applications:

 Conversion from Infix to Postfix or Prefix

 There are redo-undo options in numerous locations, including editors like Photoshop.

 Web browsers have forward and backward functions.

 Utilized in a wide variety of algorithms, including the Tower of Hanoi, tree traversals,

stock span issues, and histogram problems.

 One of the strategies for constructing algorithms is backtracking. The Knight-Tour issue,

N-Queen dilemma, finding your way through a maze, and games like chess or checkers

are a few instances of backtracking. In each of these situations, we plunge into a

solution, but if that solution is inefficient, we return to the initial state and choose an

alternative path. We need a stack in order to save the prior state in order to return from

a current state.

Implementing a stack may be done in one of two ways :

 Using an array

 Using a linked list

Implementation of Stack using Array :


// C program for array implementation of stack
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
// A structure to represent a stack
struct Stack {
int top;
unsigned capacity;
int* array;
};

// function to create a stack of given capacity.
// It initializes size of stack as 0
struct Stack* createStack(unsigned capacity)
{
struct Stack* stack = (struct Stack*)malloc(sizeof(struct Stack));
stack->capacity = capacity;
stack->top = -1;
stack->array = (int*)malloc(stack->capacity * sizeof(int));
return stack;
}
// Stack is full when top is equal to the last index
int isFull(struct Stack* stack)
{
return stack->top == stack->capacity - 1;
}
// Stack is empty when top is equal to -1
int isEmpty(struct Stack* stack)
{
return stack->top == -1;
}
// Function to add an item to stack. It increases top by 1
void push(struct Stack* stack, int item)
{
if (isFull(stack))
return;
stack->array[++stack->top] = item;
printf("%d pushed to stack\n", item);
}
// Function to remove an item from stack. It decreases top by 1

int pop(struct Stack* stack)
{
if (isEmpty(stack))
return INT_MIN;
return stack->array[stack->top--];
}
// Function to return the top from stack without removing it
int peek(struct Stack* stack)
{
if (isEmpty(stack))
return INT_MIN;
return stack->array[stack->top];
}
// Driver program to test above functions
int main()
{
struct Stack* stack = createStack(100);
push(stack, 10);
push(stack, 20);
push(stack, 30);
printf("%d popped from stack\n", pop(stack));
return 0;
}

OUTPUT:

10 pushed to stack
20 pushed to stack
30 pushed to stack
30 popped from stack

Advantages of Implementation of Stack using Array :

 Simple to implement

 Since pointers are not used, memory is conserved.

Disadvantage of Implementation of Stack using Array :

 It doesn't change size in response to requirements as they arise since it is not dynamic.
(However, stacks can also expand and contract with array implementation in the case
of dynamically sized arrays like vector in C++, list in Python, and ArrayList in Java.)

 It is necessary to establish the stack's overall size in advance.

Concept of Stack

A stack is a data structure in which the addition of a new element or the deletion of an existing
element always takes place at the same end. This end is known as the top of the stack.
That means elements are removed from a stack in the reverse order of their insertion into the stack.

One other way of describing the stack is as a last in, first out (LIFO) abstract data type
and linear data structure.

Operations on Stack

The stack basically supports two operations, PUSH and POP. Push and pop are the
operations provided for the insertion of an element into the stack and the removal of an
element from the stack, respectively.

PUSH : -PUSH operation performed for the adding item to the stack.

POP : - POP operation performed for removing an item from a stack.

Types of Implementation in Stack

1. Static Implementation (Array implementation of Stack)

2. Dynamic Implementation (Linked List Implementation of Stack)

Static Implementation of Stack

Routine to push an element onto the stack

void push(int X, Stack S)
{
    if(Top==Arraysize-1)
        Error("Stack is full!! Insertion is not possible");
    else
    {
        Top=Top+1;
        S[Top]=X;
    }
}
Routine to check whether a stack is full

int Isfull(Stack S)
{
    if(Top==Arraysize-1)
        return(1);
}

Routine to check whether stack is empty


int Isempty(Stack S)
{
    if(Top==-1)
        return(1);
}

pop routine

void pop(Stack S)
{
    if(Top==-1)
        Error("Empty stack! Deletion not possible");
    else
    {
        X=S[Top];
        Top=Top-1;
    }
}
Routine to return top Element of the stack

int TopElement(Stack S)
{
    if(Top==-1)
    {
        Error("Empty stack!! No elements");
        return 0;
    }
    else
        return S[Top];
}
Linked list implementation of Stack

Routine to check whether the stack is empty

int Isempty(Stack S)
{
    if(S->next==NULL)   /* empty stack */
        return(1);
}

push routine /*Inserts element at front of the list

void push(int X, Stack S)
{
    Position newnode;
    newnode=malloc(sizeof(struct node));
    newnode->data=X;
    newnode->next=S->next;   /* link new node in front of the list */
    S->next=newnode;         /* header's next is the top of the stack */
}
pop routine /*Deletes the element at front of list
void pop(Stack S)
{
    Position temp, Top;
    Top=S->next;
    if(S->next==NULL)
        Error("Empty stack! Pop not possible");
    else
    {
        temp=Top;
        S->next=Top->next;
        free(temp);
    }
}

Return Top Element

int Topelement(Stack S)
{
    if(S->next==NULL)
    {
        Error("Stack is empty");
        return 0;
    }
    else
        return S->next->data;
}
Implementation of stack using 'C'

/* static implementation of stack*/


#include<stdio.h>
#include<conio.h>
#define size 5
int stack[size];
int top;
void push()
{
    int n;
    printf("\n Enter item in stack:");
    scanf("%d",&n);
    if(top==size-1)
    {
        printf("\n Stack is Full");
    }
    else
    {
        top=top+1;
        stack[top]=n;
    }
}
void pop()
{
    int item;
    if(top==-1)
    {
        printf("\n Stack is empty");
    }
    else
    {
        item=stack[top];
        printf("\n item popped is=%d", item);
        top--;
    }
}
void display()
{
    int i;
    printf("\n items in stack are");
    for(i=top; i>=0; i--)
        printf("\n %d", stack[i]);
}
void main()
{
    char ch,ch1;
    ch='y';
    ch1='y';
    top=-1;
    clrscr();
    while(ch!='n')
    {
        push();
        printf("\n Do you want to push any item in stack y/n");
        ch=getch();
    }
    display();
    while(ch1!='n')
    {
        printf("\n Do you want to delete any item in stack y/n");
        ch1=getch();
        pop();
    }
    display();
    getch();
}

Implementation of stack using 'C'

/* Dynamic implementation of stack*/


#include<stdio.h>
#include<conio.h>
#include<alloc.h>
struct node
{
    int item;
    struct node *next;
};
struct node *top;
void push()
{
    int n;
    struct node *nw;
    nw=(struct node*)malloc(sizeof(struct node));
    printf("\n Enter item in stack:");
    scanf("%d",&n);
    nw->item=n;
    nw->next=0;
    if(top==0)
    {
        top=nw;
    }
    else
    {
        nw->next=top;
        top=nw;
    }
}
void pop()
{
    int item;
    struct node *ptr;
    if(top==0)
    {
        printf("\n Stack is empty");
    }
    else
    {
        item=top->item;
        ptr=top;
        printf("\n item popped is=%d", item);
        top=top->next;
        free(ptr);
    }
}
void display()
{
    struct node *ptr;
    printf("\n items in stack are");
    for(ptr=top; ptr!=0; ptr=ptr->next)
        printf("\n %d", ptr->item);
}
void main()
{
    char ch,ch1;
    ch='y';
    ch1='y';
    top=0;
    clrscr();
    while(ch!='n')
    {
        push();
        printf("\n Do you want to push any item in stack y/n");
        ch=getch();
    }
    display();
    while(ch1!='n')
    {
        printf("\n Do you want to delete any item in stack y/n");
        ch1=getch();
        pop();
    }
    display();
    getch();
}


C Operator Precedence Table


This page lists C operators in order of precedence (highest to lowest). Their associativity
indicates in what order operators of equal precedence in an expression are applied.

Operator Description Associativity


()        Parentheses (function call) (see Note 1)
[]        Brackets (array subscript)
.         Member selection via object name                   left-to-right
->        Member selection via pointer
++ --     Postfix increment/decrement (see Note 2)

++ --     Prefix increment/decrement
+ -       Unary plus/minus
! ~       Logical negation/bitwise complement
(type)    Cast (convert value to temporary value of type)    right-to-left
*         Dereference
&         Address (of operand)
sizeof    Determine size in bytes on this implementation

* / %     Multiplication/division/modulus                    left-to-right
+ -       Addition/subtraction                               left-to-right
<< >>     Bitwise shift left, bitwise shift right            left-to-right
< <=      Relational less than/less than or equal to         left-to-right
> >=      Relational greater than/greater than or equal to
== !=     Relational is equal to/is not equal to             left-to-right
&         Bitwise AND                                        left-to-right
^         Bitwise exclusive OR                               left-to-right
|         Bitwise inclusive OR                               left-to-right
&&        Logical AND                                        left-to-right
||        Logical OR                                         left-to-right
?:        Ternary conditional                                right-to-left
=         Assignment                                         right-to-left
+= -=     Addition/subtraction assignment
*= /=     Multiplication/division assignment
%= &=     Modulus/bitwise AND assignment
^= |=     Bitwise exclusive/inclusive OR assignment
<<= >>=   Bitwise shift left/right assignment

,         Comma (separate expressions)                       left-to-right

2.1.2 STACK APPLICATIONS

Stacks are a fundamental data structure in computer science and are used in various
applications due to their Last In, First Out (LIFO) property. Here are some common
applications of stacks:

1. Expression Evaluation and Conversion

 Infix to Postfix/Prefix Conversion: Stacks are used to convert expressions from infix
notation (where operators are between operands) to postfix (operators after operands)
or prefix (operators before operands) notation.
 Postfix/Prefix Evaluation: Once converted, stacks can be used to evaluate these
expressions.

2. Function Call Management

 Recursion: The call stack is a stack data structure that stores information about active
subroutines or functions. It keeps track of function calls, their parameters, and local
variables.
 Backtracking: Many algorithms, such as those solving mazes or puzzles, use stacks to
remember their previous steps and backtrack when needed.

3. Syntax Parsing

 Compilers: During the parsing phase of compilation, stacks are used to check for
balanced parentheses, brackets, and braces in the code.
 Expression Parsing: Parsing expressions for evaluation or conversion often uses
stacks.

4. Undo Mechanisms

 Text Editors: Most text editors use stacks to implement the undo feature. Each action
performed by the user is pushed onto a stack, and undo operations pop actions off the
stack to revert to the previous state.

5. Depth-First Search (DFS)

 Graph Algorithms: DFS can be implemented using a stack to keep track of vertices
that need to be explored. This is essential in various applications like finding connected
components, topological sorting, and solving puzzles.

6. Memory Management

 Stack Memory Allocation: In many programming languages, function calls and local
variables are stored in stack memory. This allocation is efficient due to the LIFO nature
of stacks.

7. Browser Navigation

 Back/Forward Functionality: Web browsers use stacks to manage the history of web
pages visited. The back stack keeps track of the sequence of pages visited, allowing
users to go back and forth easily.

8. String Reversal

 Reversing a string can be efficiently done using a stack by pushing all characters of the
string onto the stack and then popping them to get the reversed string.

9. Balanced Parentheses Checking

 Checking whether an expression has balanced parentheses is a common use of stacks.


Each opening parenthesis is pushed onto the stack, and for each closing parenthesis,
the stack is popped to ensure correct pairing.

10. Tower of Hanoi

 This classic problem, which involves moving disks between pegs following specific
rules, can be solved using stacks to keep track of the disks and their movements.

2.2 BALANCING THE SYMBOLS

 Compilers check programs for errors; the lack of a single symbol will cause an error.
 A program that checks whether everything is balanced.
 Every right parenthesis should have its corresponding left parenthesis.
 Check for balancing of parentheses, brackets and braces, and ignore any other character.

Read one character at a time until it encounters the delimiter '#'.


Step 1 : - If the character is an opening symbol, push it onto the stack.

Step 2 : - If the character is a closing symbol, and if the stack is empty, report an error as missing opening symbol.
Step 3 : - If it is a closing symbol and if it has a corresponding opening symbol in the stack, POP it from the stack. Otherwise, report an error as mismatched symbols.
Step 4 : - At the end of file, if the stack is not empty, report an error as missing closing symbol. Otherwise, report as balanced symbols.

Let us consider the expression as ((B*B)-{4*A*C}/[2*A]) #

Thus all the parentheses are balanced.

Symbols Scanned    Stack    Postfix form P

(                  (
(                  ((
A                  ((       A
+                  ((+      A
B                  ((+      AB
)                  (        AB+
/                  (/       AB+
C                  (/       AB+C
)                           AB+C/

Evaluating arithmetic expressions involves calculating the result of an expression composed


of numbers, operators, and sometimes variables. Here are the key steps to evaluate an
arithmetic expression:

1. Identify the Operators and Operands:

 Operators include + (addition), - (subtraction), * (multiplication), / (division), and ^


(exponentiation).
 Operands are the numbers or variables that the operators act upon.

2. Order of Operations:

 Follow the order of operations, often remembered by the acronym PEMDAS:

1. Parentheses first
2. Exponents (i.e. powers and square roots, etc.)
3. Multiplication and Division (left-to-right)
4. Addition and Subtraction (left-to-right)

3. Evaluate Parentheses:

 Simplify expressions inside parentheses first, starting with the innermost pair.

4. Evaluate Exponents:

 Next, calculate any exponents or roots.

5. Perform Multiplication and Division:

 Work from left to right performing all multiplication and division operations.

6. Perform Addition and Subtraction:

 Lastly, perform all addition and subtraction operations, again working from left to right.

Example Evaluation:

Let's evaluate the expression: 3 + 6 × (5 + 4) − 2

1. Parentheses: Evaluate the expression inside parentheses: 3 + 6 × 9 − 2
2. Multiplication: Perform the multiplication: 3 + 54 − 2
3. Addition and Subtraction: Perform addition and subtraction from left to right: 57 − 2 = 55

So, the result of 3 + 6 × (5 + 4) − 2 is 55.

Using Python for Evaluation

You can also use programming languages like Python to evaluate arithmetic expressions.
Here's a simple example using Python:

Python:
expression = "3 + 6 * (5 + 4) - 2"
result = eval(expression)
print(result)
# Output will be 55

The eval() function in Python can evaluate a string as a Python expression, which can
be useful for simple arithmetic expressions. However, be cautious with eval() as it can
execute arbitrary code and pose security risks if used with untrusted input.

2.3 EXPRESSION CONVERSION

Infix to Postfix
Infix :- Operators appears between operands Eg. A+B

A/B+C

Postfix:- Operators appears after the operands AB+

AB/C+

Prefix:- Operators appears before the operands

+AB

+/ABC

1. Evaluate Arithmetic Expressions

1. Convert the given infix expression to a postfix expression.

2. Evaluate the postfix expression using stack.

Infix expression:- A*B+(C-D/E)#

Read char    Stack    Output
A                     A
*            *        A
B            *        AB
+            +        AB*
(            +(       AB*
C            +(       AB*C
-            +(-      AB*C
D            +(-      AB*CD
/            +(-/     AB*CD
E            +(-/     AB*CDE
)            +        AB*CDE/-
#                     AB*CDE/-+

Output: Postfix expression:- AB*CDE/-+

Example: Infix expression:- (a+b)*c/d+e/f#

Read char    Stack    Output
(            (
a            (        a
+            (+       a
b            (+       ab
)                     ab+
*            *        ab+
c            *        ab+c
/            /        ab+c*
d            /        ab+c*d
+            +        ab+c*d/
e            +        ab+c*d/e
/            +/       ab+c*d/e
f            +/       ab+c*d/ef
#                     ab+c*d/ef/+

Postfix expression:- ab+c*d/ef/+

2.4 FUNCTION CALLS

When a call is made to a new function, all the variables local to the calling routine
need to be saved; otherwise the new function will overwrite the calling routine's variables.
Similarly, the current location address in the routine must be saved so that the new function
knows where to go after it is completed.

Balance()

Main() Push()

Call push()
Call
balance()

RECURSIVE FUNCTION TO FIND FACTORIAL

int fact(int n)
{
    int S;
    if(n==1)
        return(1);
    else
    {
        S=n*fact(n-1);
        return(S);
    }
}

Read Character    Stack
(                 (
(                 ((
)                 (
{                 ({
}                 (
[                 ([
]                 (
2.5 QUEUES

A queue is a kind of abstract data type where items are inserted at one end (the rear end),
known as the enqueue operation, and deleted from the other end (the front end), known as
the dequeue operation. This makes the queue a First-In-First-Out (FIFO) data structure.
The queue performs the function of a buffer.

Fig.2.5.Queues

Implementation of Queues

1. Implementation using Array (Static Queue)


2. Implementation using Linked List (Dynamic Queue)

Operation on Queues

Operation      Description
initialize()   Initializes a queue by setting the value of rear and front to -1.
enqueue()      Inserts an element at the rear end of the queue.
dequeue()      Deletes the front element and returns the same.
empty()        Returns true(1) if the queue is empty and false(0) if the queue is not empty.
full()         Returns true(1) if the queue is full and false(0) if the queue is not full.

A Queue is a widely used abstract data type (ADT) that follows the First In First Out
(FIFO) principle, where the first element added to the queue is the first one to be removed.
Here are the main operations associated with the Queue ADT:

1. Enqueue (Insert)

 Description: Adds an element to the rear of the queue.
 Operation: enqueue(element)
 Example: If the queue is [1, 2, 3] and we enqueue 4, the queue becomes [1, 2, 3, 4].

2. Dequeue (Remove)

 Description: Removes and returns the element at the front of the queue.
 Operation: dequeue()
 Example: If the queue is [1, 2, 3] and we dequeue, the returned element is 1, and the
queue becomes [2, 3].

3. Front (Peek)

 Description: Returns the element at the front of the queue without removing it.
 Operation: front()
 Example: If the queue is [1, 2, 3] and we call front(), the returned element is 1, and
the queue remains [1, 2, 3].

4. IsEmpty

 Description: Checks if the queue is empty.


 Operation: isEmpty()
 Example: If the queue is [], isEmpty() returns True. If the queue is [1, 2, 3], isEmpty()
returns False.

5. Size

 Description: Returns the number of elements in the queue.


 Operation: size()
 Example: If the queue is [1, 2, 3], size() returns 3.

Implementation Example in Python

Here’s a simple implementation of a Queue using a list in Python:

class Queue:
    def __init__(self):
        self.items = []

    def enqueue(self, item):
        self.items.append(item)

    def dequeue(self):
        if not self.isEmpty():
            return self.items.pop(0)
        else:
            raise IndexError("dequeue from empty queue")

    def front(self):
        if not self.isEmpty():
            return self.items[0]
        else:
            raise IndexError("front from empty queue")

    def isEmpty(self):
        return len(self.items) == 0

    def size(self):
        return len(self.items)

# Example usage:
q = Queue()
q.enqueue(1)
q.enqueue(2)
q.enqueue(3)
print(q.dequeue())  # Outputs: 1
print(q.front())    # Outputs: 2
print(q.isEmpty())  # Outputs: False
print(q.size())     # Outputs: 2

These implementations cover the fundamental operations of a Queue ADT.

Operation on queue:

Array

0 1 2 3 4 5 6     Max = 7

Rear = -1
Front = -1

0 1 2 3 4 5 6

Rear = 0
Front = 0

After Insertion

0 1 2 3 4 5 6

Front = 0 Rear = 4

After Insertion

0 1 2 3 4 5 6

10

Front = 3 Rear=3

After Deletion

0 1 2 3 4 5 6

10 11 12

Front=0 Rear = 5

ROUTINE TO INSERT AN ELEMENT IN A QUEUE

void Enqueue (int X)
{
    if (rear == max_Arraysize - 1)
        print("Queue overflow");
    else
    {
        Rear = Rear + 1;
        Queue[Rear] = X;
    }
}

ROUTINE FOR DEQUEUE

void delete ( )
{
    if (Front < 0)
        print ("Queue Underflow");
    else
    {
        X = Queue[Front];
        if (Front == Rear)
        {
            Front = 0;
            Rear = -1;
        }
        else
            Front = Front + 1;
    }
}

APPLICATION OF QUEUES :

Queues are mostly used in operating systems.

 Waiting for a particular event to occur.


 Scheduling of processes.
IMPLEMENTATION OF QUEUE USING LINKED LIST

Enqueue operation is performed at the end of the list and Dequeue


operation is performedat the front of the list.

QUEUE ADT

10 20 NULL

struct Node;
typedef struct Node *Queue;
int IsEmpty (Queue Q);
Queue CreateQueue (void);
void MakeEmpty (Queue Q);
void Enqueue (int X, Queue Q);
void Dequeue (Queue Q);

struct Node
{
    int Element;
    struct Node *Next;
}*Front = NULL, *Rear = NULL;

int IsEmpty (Queue Q)   /* returns 1 if Q is empty, else returns 0 */
{
    if (Q->Next == NULL)
        return (1);
    return (0);
}

DECLARATION FOR LINKED LIST IMPLEMENTATION OF QUEUE ADT

ROUTINES TO CREATE A QUEUE AND MAKE IT EMPTY

Queue CreateQueue ( )
{
    Queue Q;
    Q = malloc (sizeof (struct Node));
    if (Q == NULL)
        Error ("Out of Space");
    MakeEmpty (Q);
    return Q;
}

void MakeEmpty (Queue Q)
{
    if (Q == NULL)
        Error ("Create Queue First");
    else
        while (!IsEmpty (Q))
            Dequeue (Q);
}

i)

10 20 NULL

Dequeue

ii)
20 NULL

Empty Queue
iii)

ROUTINE TO ENQUEUE AN ELEMENT IN QUEUE

void Enqueue (int X)
{
    struct node *newnode;
    newnode = malloc (sizeof (struct node));
    if (Rear == NULL)
    {
        newnode->data = X;
        newnode->Next = NULL;
        Front = newnode;
        Rear = newnode;
    }
    else
    {
        newnode->data = X;
        newnode->Next = NULL;
        Rear->Next = newnode;
        Rear = newnode;
    }
}

Example:10,20,30

Data Next

10 i) NULL

Newnode Front Rear

ii)

10 1000 20 NULL

Front Rear Newnode

 Assign Newnode as rear

iii)
1000 20 3000 30 NULL

Front Rear Newnode

 Assign Newnode as rear

ROUTINE TO DEQUEUE AN ELEMENT FROM THE QUEUE

void Dequeue ( )
{
    struct node *temp;
    if (Front == NULL)
        Error ("Queue is underflow");
    else
    {
        temp = Front;
        if (Front == Rear)
        {
            Front = NULL;
            Rear = NULL;
        }
        else
            Front = Front->Next;
        Print (temp->data);
        free (temp);
    }
}

10 NULL
1000 20 3000 30

Front Rear
Assign the front element as temp and front->next as front to delete the temp data.

20 3000 30 NULL

Front Rear

Assign the front element as temp and front->next as front to delete the temp data.

30 NULL

Front Rear

When front and rear are equal, assign them to NULL and delete the corresponding front element: Front = Rear = NULL.

Array implementation of Queue

In this type of implementation we use array structure for data storage.


#include<stdio.h>
#include<conio.h>
#define SIZE 5

int front=-1;
int rear=-1;
int q[SIZE];

void insert();
void del();
void display();

void main()
{
    int choice;
    clrscr();
    do
    {
        clrscr();
        printf("\t Menu");
        printf("\n 1. Insert");
        printf("\n 2. Delete");
        printf("\n 3. Display ");
        printf("\n 4. Exit");
        printf("\n Enter Your Choice:");
        scanf("%d",&choice);
        switch(choice)
        {
            case 1:
                insert();
                display();
                getch();
                break;
            case 2:
                del();
                display();
                getch();
                break;
            case 3:
                display();
                getch();
                break;
            case 4:
                printf("End of Program. !!!!");
                getch();
                exit(0);
        }
    }while(choice!=4);
}

void insert()
{
    int no;
    printf("\n Enter No.:");
    scanf("%d",&no);
    if(rear < SIZE-1)
    {
        q[++rear]=no;
        if(front==-1)
            front=0;
    }
    else
    {
        printf("\n Queue overflow");
    }
}

void del()

{
    if(front==-1)
    {
        printf("\nQueue Underflow");
        return;
    }
    else
    {
        printf("\nDeleted Item:-->%d\n",q[front]);
    }
    if(front==rear)
    {
        front=-1;
        rear=-1;
    }
    else
    {
        front=front+1;
    }
}

void display()
{
    int i;
    if(front==-1)
    {
        printf("\nQueue is empty..................................");
        return;
    }
    for(i=front;i<=rear;i++)
        printf("\t%d",q[i]);
}

Dynamic Queue

Structure of Dynamic Queue:

struct link
{
    int info;
    struct link *next;
}*front,*rear;

Linked list implementation of Queue

#include<stdio.h>
#include<conio.h>

struct link
{
    int info;
    struct link *next;
}*front,*rear;

void insert_q(int no);
int delete_q();
void display();

void main()
{
    int ch,no;
    rear=NULL;
    front=NULL;
    clrscr();

    while(1)
    {
        printf("\n Dynamic Queue Menu");
        printf("\n 1->Insert ");
        printf("\n 2->Delete ");
        printf("\n 3->Display");
        printf("\n 4->Exit ");
        printf("\n enter your choice : ");
        scanf("%d",&ch);
        switch(ch)
        {
            case 1:
            {
                printf("Enter The Element Value\n");
                scanf("%d",&no);
                insert_q(no);
                printf("\n queue after insertion");
                display();
                break;
            }
            case 2:
            {
                no=delete_q();
                printf("\n The deleted element = %d",no);
                printf("\n queue after deletion");
                display();
                break;
            }
            case 3:
            {
                printf("\n Queue is as follow");
                display();
                break;
            }
            case 4:
            {
                printf("Exit !!!!");
                getch();
                exit(0);
                break;
            }
            default :
                printf("\n Wrong choice...!!!! " );
        }
    }
}

void insert_q(int no)
{
    struct link *new1;
    new1=(struct link*)malloc(sizeof(struct link));
    new1->info=no;
    new1->next=NULL;
    if(rear==NULL||front==NULL)
    {
        front=new1;
    }
    else
    {
        rear->next=new1;
    }
    rear=new1;
}

int delete_q()
{
    struct link *t;
    int no;
    if(front==NULL||rear==NULL)
    {
        printf("\n queue is Under Flow");
        getch();
        return 0;
    }
2.5.1 CIRCULAR QUEUE

A circular queue is an abstract data type that contains a collection of data which allows
addition of data at the end of the queue and removal of data at the beginning of the queue.
Circular queues have a fixed size.

Circular queue follows the FIFO principle. Queue items are added at the rear end and the
items are deleted at the front end of the circular queue.

Fig.2.5.1.Circular Queue

Enqueue Routine in Circular List :

void CEnqueue (int X, Circularqueue CQ)
{
    if(front==(rear+1)%Arraysize)
        Error("Queue is full!! Queue overflow");
    else if(front== -1)
    {
        front=front+1;
        rear=rear+1;
        CQ[rear]=X;
    }
    else
    {
        rear=(rear+1)%Arraysize;
        CQ[rear]=X;
    }
}
Empty Queue F = -1 R = -1

DEQUEUE ROUTINE IN CIRCULAR LIST :

void dequeue (circularqueue CQ)
{
    if(front== -1)
        Error("Empty Queue!");
    else if(front==rear)
    {
        X=CQ[front];
        front=-1;
        rear=-1;
    }
    else
    {
        X=CQ[front];
        front=(front+1)%Arraysize;
    }
}

F= -1 F,R
R=-1 Empty Queue

Routine for display

void display (circularqueue CQ, int i, int j)
{
    if(front==0&&rear==-1)
    {
        printf("Queue is underflow");
        exit();
    }
    if(front>rear)                       /* queue has wrapped around */
    {
        for(j=front;j<=max-1;j++)
            printf("%d ", CQ[j]);
        for(i=0;i<=rear;i++)
            printf("%d ", CQ[i]);
    }
    else
    {
        for(i=front;i<=rear;i++)
            printf("%d ", CQ[i]);
    }
}
Implementation of Circular Queue

#include<stdio.h>
#define max 3

int q[10],front=0,rear=-1;

void main()
{
    int ch;
    void insert();
    void delet();
    void display();
    printf("\nCircular Queue operations\n");
    printf("1.insert\n2.delete\n3.display\n4.exit\n");
    while(1)
    {
        printf("Enter your choice:");
        scanf("%d",&ch);
        switch(ch)
        {
            case 1:
                insert();
                break;
            case 2:
                delet();

                break;
            case 3:
                display();
                break;
            case 4:
                exit();
            default:
                printf("Invalid option\n");
        }
    }
}

void insert()
{
    int x;
    if((front==0&&rear==max-1)||(front>0&&rear==front-1))
        printf("Queue is overflow\n");
    else
    {
        printf("Enter element to be insert:");
        scanf("%d",&x);
        if(rear==max-1&&front>0)
        {
            rear=0;
            q[rear]=x;
        }
        else
        {
            if((front==0&&rear==-1)||(rear!=front-1))
                q[++rear]=x;
        }
    }
}

void delet()
{
    int a;
    if((front==0)&&(rear==-1))
    {
        printf("Queue is underflow\n");
        getch();
        exit();
    }
    if(front==rear)
    {
        a=q[front];
        rear=-1;
        front=0;
    }
    else if(front==max-1)
    {
        a=q[front];
        front=0;
    }
    else
        a=q[front++];
    printf("Deleted element is:%d\n",a);
}
void display()
{
    int i,j;
    if(front==0&&rear==-1)
    {
        printf("Queue is underflow\n");
        getch();
        exit();
    }
    if(front>rear)
    {
        for(i=0;i<=rear;i++)
            printf("\t%d",q[i]);
        for(j=front;j<=max-1;j++)
            printf("\t%d",q[j]);
        printf("\nrear is at %d\n",q[rear]);
        printf("\nfront is at %d\n",q[front]);
    }
    else
    {
        for(i=front;i<=rear;i++)
            printf("\t%d",q[i]);
        printf("\nrear is at %d\n",q[rear]);
        printf("\nfront is at %d\n",q[front]);
    }
    printf("\n");
}

2.5.2 DOUBLE-ENDED QUEUE

A double-ended queue is an abstract data type similar to a simple queue; it allows you to
insert and delete from both sides, meaning items can be added or deleted from the front
or rear end.

Fig.2.5.2.Double-Ended Queue

In DEQUE, insertion and deletion operations are performed at both the ends.

Deletion Insertion
10 20 30 40
Insertion
Deletion

Input restricted DEQUE

Output restricted DEQUE

INPUT RESTRICTED DEQUE:

Here insertion is allowed at one end and deletion is allowed at both ends.

Insertion
Deletion
Deletion

Front Rear
OUTPUT RESTRICTED DEQUE :

Here insertion is allowed at both ends and deletion is allowed at one end.

Insertion
Insertion
Deletion

Front
Rear

Routine For Insertion At Rear End

void Insert_Rear(int X, DEQueue DQ)
{
    if(REAR==Arraysize-1)
        Error("Full Queue!!!! Insertion not possible");
    else if(REAR == -1)
    {
        FRONT=FRONT+1;
        REAR=REAR+1;
        DQ[REAR] = X;
    }
    else
    {
        REAR=REAR+1;
        DQ[REAR]=X;
    }
}


Routine for Insertion at font end

void Insert_Front(int X, DEQueue DQ)
{
    if(FRONT==0)
        Error("Element present in Front!!!!! Insertion not possible");
    else if(FRONT == -1)
    {
        FRONT=FRONT+1;
        REAR=REAR+1;
        DQ[FRONT] = X;
    }
    else
    {
        FRONT=FRONT-1;
        DQ[FRONT]=X;
    }
}


Routine for Deletion from front end

void Delete_Front(DEQueue DQ)
{
    if(FRONT==-1)
        Error("Empty queue!!!! Deletion not possible");
    else if(FRONT==REAR)
    {
        X=DQ[FRONT];
        FRONT=-1;
        REAR=-1;
    }
    else
    {
        X=DQ[FRONT];
        FRONT=FRONT+1;
    }
}


Routine for Deletion from Rear End

void Delete_Rear(DEQueue DQ)
{
    if(REAR==-1)
        Error("Empty queue!!!! Deletion not possible");
    else if(FRONT==REAR)
    {
        X=DQ[REAR];
        FRONT=-1;
        REAR=-1;
    }
    else
    {
        X=DQ[REAR];
        REAR=REAR-1;
    }
}


APPLICATIONS OF QUEUE:

 Batch processing in an operating system

 To implement Priority Queues.

 Priority Queues can be used to sort the elements using Heap Sort.

 Simulation.

 Mathematics user Queueing theory.

 Computer networks where the server takes the jobs of the client as per the queue
strategy.

 Queues have numerous applications, including job scheduling, printer spooling,


breadth-first search, call centre management, CPU task management, and buffering.

 Queues are used to manage data flow and handle tasks in various applications, such
as operating systems, network protocols, and data processing systems.

Example:

Think of a queue at the railway station ticket counter. The person standing first in the
queue gets the ticket and leaves the group, while a new person joins at the back end of
the queue. This is at the center of the working of a queue, known as the First In First
Out principle.

QUESTION BANK
PART A
1. Which of the following is a primary operation of a stack?
a) Enqueue
b) Dequeue
c) Push
d) Traverse
2. What is the typical application of a stack?

a) Job scheduling
b) Printer queue management
c) Expression evaluation
d) Network routing
3. What does the 'Pop' operation in a stack do?
a) Adds an element to the top of the stack
b) Removes an element from the top of the stack
c) Adds an element to the bottom of the stack
d) Removes an element from the bottom of the stack
4. Which data structure is most appropriate for checking balanced parentheses in an
expression?
a) Queue
b) Stack
c) Linked List
d) Array

5. What is the output of the balanced symbol algorithm for the expression "({[]})"?
a) Balanced
b) Unbalanced
c) Syntax error
d) None of the above
6. Which of the following methods is used to evaluate postfix expressions?
a) Recursion
b) Stack
c) Queue
d) Tree traversal
7. In which order are operators evaluated in postfix notation?
a) Right to left
b) Left to right
c) Top to bottom
d) Bottom to top
8. Which data structure is used for converting an infix expression to a postfix
expression?
a) Array
b) Queue
c) Stack
d) Linked List
9. What is the postfix form of the infix expression "A + B * C"?
a) ABC+*
b) AB+C*
c) A+BC*
d) ABC*+
10. Which data structure is commonly used for implementing function calls in recursion?
a) Queue
b) Stack
c) Linked List
d) Binary Tree

11. Which operation inserts an element at the rear of a queue?
a) Enqueue
b) Dequeue
c) Pop
d) Push
12. In a circular queue, how is the 'full' condition defined?
a) front == rear
b) (rear + 1) % size == front
c) front == (rear + 1) % size
d) rear == size - 1
13. Which of the following operations is not applicable for a simple queue but applicable
for a deque?
a) Enqueue
b) Dequeue
c) Insert at both ends
d) Remove from both ends
14. Which of the following is an application of queues?
a) Balancing symbols
b) Function call implementation
c) Printer job scheduling
d) Expression evaluation
15. A Stack ADT (Abstract Data Type) follows the ________ principle.
Answer: Last In, First Out (LIFO)
16. The operation to add an element to the top of the stack is called ________.
Answer: Push
17. The operation to remove an element from the top of the stack is called ________.
Answer: Pop
18. ________ is a real-life application of a stack where reversing is needed.
Answer: Undo functionality in text editors
19. In balancing symbols, the stack is used to ensure that every opening symbol has a
corresponding ________ symbol.
Answer: Closing
20. An arithmetic expression in ________ notation has the operator placed between the
operands.
Answer: Infix
21. Converting an infix expression to postfix expression uses a ________ to manage
operators.
Answer: Stack
22. In postfix notation, also known as ________ notation, operators follow their operands.
Answer: Reverse Polish
23. When a function calls another function, the return address is stored in a ________.
Answer: Stack
--------------------------------------------------------------------------------------------------------------
PART B

1. Point out the advantage of representing stack using a linked list than array.
2. Point out the rules followed during the infix to postfix conversions.
3. Compare the working of stack and queue data structure.
4. Develop an algorithm for inserting a new element into the stack.
5. What are priority queues? What are the ways to implement priority queue?
6. List any four applications of stack.
7. Given the prefix for an expression. Write its postfix: -*-+abc/ef-g/hi
8. Describe how the following "infix" expression is evaluated with the help of the help of Stack
: 5 * ( 6 + 2 ) - 12 / 4
9. Give the postfix and prefix forms of the expression: A + B* (C – D) / (P – R)
10. Define double ended queue.
11. List the applications of a queue.
12. What is circular queue?
13. Illustrate the difference between a queues and linked lists with an example.
14. For railway reservation the queue data structure is preferred –Justify.
15. Distinguish Stack from Queue. (Nov/Dec 2023)
16. The following postfix expression with single digit operands is evaluated using a stack: 8 2 3
^ / 2 3 * + 5 1 * - #. What will be on the top of the stack after the symbol ‘#’ is scanned?
(Nov/Dec 2023)
17. List any four applications of queue. (Nov/Dec 2024)

PART C
1. Describe with an example how to evaluate arithmetic expressions using stacks.
2. i. Explain array based implementation of stacks.
ii. Explain linked list implementation of stacks.
3. i. Describe about stack ADT in detail.
ii. Explain any one application of stack.
4. Explain the following expressions with an example.
i. Prefix and infix.
ii. Postfix.
5. i. Write an algorithm to convert an infix expression to a postfix expression.Trace the
algorithm to convert the infix expression (a+b)*c/d+e/f to a postfix expression.
ii. Justify the need for Infix and Postfix expression.
6. i. Give an algorithm for push and pop operations on stack using a linked list.
ii. Discuss about addition and deletion operations performed on a circular queue with
necessary algorithms.
7. i. Describe the process of postfix expression evaluation with an example.
ii. Describe the process of conversion from infix expression to postfix expression using
stack.
8. i. Write an algorithm that checks if expression is correctly parenthesized using stack
and illustrate with an example.
ii. Write the function to examine whether the stack is full() or empty()
9. i. Describe about queue ADT in detail.
ii. Explain any one application of queue with suitable example.
10. Develop and Show the simulation using stack for evaluation of the following
expression : 12 + 3 * 14 – (5 * 16) + 7
11. Assess the difference between double ended queue and circular queue. Show the
simulation using stack for the following expression to convert infix to postfix: p * q = (r-
s / t).
12. Develop an algorithm to explain Priority Queue, deQueue and the applications of
queues.

13. Develop an algorithm for deleting an element in a double ended queue. Convert the given
infix expression to postfix expression by applying suitable algorithm: (A-B*(C+D)-E*F)*G.
(Nov/Dec 2023)
14. Illustrate the array representation of STACK and implement PUSH ( ) & POP ( ) operation
with algorithm. (Nov/Dec 2023)
15. Develop a program to implement a Queue using Linked List. (Nov/Dec 2023)
16. a (i) Convert the following expression to postfix expression ((a + b) + (c * d) - (d + e)) * (g
+ h) (Nov/Dec 2024)
17. Interpret the stack operation in detail. (Nov/Dec 2024)
18. Explain the various operations on circular queue. Illustrate with examples(Nov/Dec 2024)

----------------------------------------------------------------------------------------------------------------

UNIT III
TREES
Tree ADT – Tree Traversals – Binary Tree ADT – Expression trees – Binary Search Tree
ADT – AVL Trees – Priority Queue (Heaps) – Binary Heap algorithm - B Tree - B+ Tree.

3.1 Tree ADT


The tree is a nonlinear hierarchical data structure and comprises a collection of entities
known as nodes. It connects each node in the tree data structure using "edges”, both directed
and undirected.
The image below represents the tree data structure. The blue-colored circles depict the
nodes of the tree and the black lines connecting each node with another are called edges.
You will understand the parts of trees better, in the terminologies section.

The Necessity for a Tree in Data Structures:


Other data structures like arrays, linked-list, stacks, and queues are linear data
structures, and all these data structures store data in sequential order. Time complexity
increases with increasing data size to perform operations like insertion and deletion on these
linear data structures. But it is not acceptable for today's world of computation.
The non-linear structure of trees enhances the data storing, data accessing, and
manipulation processes by employing advanced control methods traversal through it. You will
learn about tree traversal in the upcoming section.
But before that, understand the tree terminologies.
Tree Node
A node is a structure that contains a key or value and pointers to its child nodes in the tree data structure.
In the tree data structure, you can define the tree node as follows.

struct node
{
int data;
struct node *leftchild;
struct node *rightchild;
}

3.2 TREE TRAVERSALS


Tree traversal, also known as tree search, is a process of visiting each node of a tree
data structure. During tree traversal, you visit each node of a tree exactly once and perform an
operation on the nodes like checking the node data (search) or updating the node.

Tree traversal algorithms can be classified broadly in the following two categories by the
order in which the nodes are visited:

 Depth-first search (DFS) algorithm: It starts with the root node and first visits all
nodes of one branch as deep as possible before backtracking. It visits all other
branches in a similar fashion.
 Breadth-first search (BFS) algorithm: This also starts from the root node and
visits all nodes of current depth before moving to the next depth in the tree.

4 Types of Tree Traversal Algorithms:

1. Inorder traversal: Visits all nodes inside the left subtree, then visits the current node
before visiting any node within the right subtree.
2. Preorder traversal: Visits the current node before visiting any nodes inside left or right
subtrees.
3. Postorder traversal: Visits the current node after visiting all the nodes of left and right
subtrees.

139
4. Level order traversal: Visits nodes level-by-level and in left-to-right fashion at the same
level.

Below is the blueprint of our node class that will act as the atomic member of the tree data
structure. We will call it TreeNode, which is holding data as an integer value, left and right
children of the same type(TreeNode). You can use any other data structure to keep as data
under the TreeNode.

public class TreeNode {
    int data;
    TreeNode left;
    TreeNode right;

    public TreeNode(int data) {
        this.data = data;
        this.left = this.right = null;
    }
}

1. Inorder Traversal

Inorder traversal is one of the most used variants of DFS traversal of the tree.

As DFS suggests, we will first focus on the depth of the chosen node and then go to the
breadth at that level. Therefore, we will start from the root node of the tree and go deeper
and deeper into the left subtree in a recursive manner.

When we reach the left-most node with the above steps, then we will visit that current node
and go to the left-most node of its right subtree, if it exists.

Same steps should be followed in a recursive manner to complete the inorder traversal. The
order of those steps will be similar, in recursive function:

1. Go to the left subtree.

2. Visit node.

3. Go to the right subtree.

public void inorderTraversal(TreeNode root) {
    if (root != null) {
        inorderTraversal(root.left);
        System.out.print(root.data + " ");
        inorderTraversal(root.right);
    }
}

Inorder traversal of a binary search tree will always give you nodes in a sorted manner.

2. Preorder Traversal

Preorder traversal is another variant of DFS. The atomic operations in a recursive function are
the same as inorder traversal but in a different order.

Here, we visit the current node first and then go to the left subtree. After covering every node
of the left subtree, we will move toward the right subtree and visit in a similar fashion.

Order of the steps will be:

1. Visit node.

2. Go to the left subtree.

3. Go to the right subtree.

public void preorderTraversal(TreeNode root) {
    if (root != null) {
        System.out.print(root.data + " ");
        preorderTraversal(root.left);
        preorderTraversal(root.right);
    }
}

Preorder traversal of the example tree: 1 2 4 8 5 3 6 9 10 7

3. Postorder Traversal

A similar process goes for the postorder traversal, where we visit the left subtree and the right
subtree before visiting the current node in recursion.

So, the sequence of the steps will be:

1. Go to the left subtree.


2. Go to the right subtree.
3. Visit the node.

public void postorderTraversal(TreeNode root) {
    if (root != null) {
        postorderTraversal(root.left);
        postorderTraversal(root.right);
        System.out.print(root.data + " ");
    }
}

Postorder traversal of the example tree: 8 4 5 2 9 10 6 7 3 1

4. Level Order Traversal

This is a different traversal than what we have covered above. Level order traversal follows
breadth-first search to visit/modify every node of the tree.

As BFS suggests, the breadth of the tree takes priority first and then moves to depth. In simple
words, we will visit all the nodes present at the same level one-by-one from left to right, and
then move to the next level to visit all the nodes of that level.

Level order traversal of the example tree: 1 2 3 4 5 6 7 8 9 10

The implementation of level order traversal is slightly more challenging than the previous three
traversals. We will use a Queue (FIFO) data structure to implement level order traversal: after
visiting a node, we put its left and right children into the queue sequentially.

Here, the order of adding children to the queue is important, as we have to traverse left-to-right
at the same level. Check the gist below for a better understanding.

public void levelorderTraversal(TreeNode root) {
    if (root == null) {
        return;
    }
    Queue<TreeNode> queue = new LinkedList<>();
    queue.add(root);
    while (!queue.isEmpty()) {
        TreeNode node = queue.remove();
        System.out.print(node.data + " ");
        if (node.left != null) {
            queue.add(node.left);
        }
        if (node.right != null) {
            queue.add(node.right);
        }
    }
}

Traversal strategy:

• Postorder traversal strategy

• Preorder traversal strategy

• Inorder traversal strategy

Preorder Traversal (Depth-First Order)

1. Visit the root


2. Traverse the left subtree in preorder.
3. Traverse the right subtree in preorder.

Postorder Traversal

1. Traverse the left subtree in postorder


2. Traverse the right subtree in postorder
3. Visit the root

Inorder Traversal (Symmetric Order)

1. Traverse the left subtree in inorder


2. Visit the root
3. Traverse the right subtree in inorder

Example

Preorder Traversal: ABDECF

▪ First, visit the Root A

▪ Now visit the left subtree in preorder. For the left subtree, B is the root. So visit root B,
then the left of B, and then the right of B. It gives the traversing order BDE.
▪ Start visiting the right subtree of A in preorder. For the right subtree, C is the root. So
visit the root C, then the left of C (in the given example there is no left of C), and then the
right of C. Now the order is CF.
▪ Finally, the preorder traversal of the above tree is ABDECF.

Inorder Traversal: DBEACF

▪ First visit the left subtree in inorder. For the left subtree, B is the root. So visit the left
of B, then the root B, and then the right of B. It gives the traversing order DBE.
▪ Now visit the Root A.
▪ Start visiting the right subtree of A in inorder. For the right subtree, C is the root. So visit
the left of C (in the given example there is no left of C), then the root C, and then the right
of C. Now the order is CF.
▪ Finally, the inorder traversal of the above tree is DBEACF.

Postorder Traversal: DEBFCA

▪ First visit the left subtree in postorder. For the left subtree, B is the root. So visit the left
of B, then the right of B, and then the root B. It gives the traversing order DEB.
▪ Start visiting the right subtree of A in postorder. For the right subtree, C is the root. So
visit the left of C (in the given example there is no left of C), then the right of C, and then
the root C. Now the order is FC.
▪ Now visit the Root A.
▪ Finally, the postorder traversal of the above tree is DEBFCA.

Example

For the example given below, the traversals are given below:

Preorder

+-/+AB*CD*E-FGH

Inorder :

A+B/C*D-E*F-G+H

Postorder:

AB+CD*/EFG-*-H+

Left Child Right Sibling Data Structure:

Implementation of Tree

A tree can be implemented using the linked list concept. A structure can be declared with 3
elements. One element contains the data. The second element points to the first child of the
node. The third element points to the next sibling of the first child. The syntax is given below:

Node Declaration for Trees

typedef struct TreeNode *PtrToNode;

struct TreeNode
{
    ElementType element;
    PtrToNode FirstChild;
    PtrToNode NextSibling;
};

Example:

Explanation

▪ A is the root

▪ B is the first child of A

▪ C, D & E are the next siblings of B.

What Is a Tree Data Structure?
Before jumping into the tree traversal algorithms, let’s define tree as a data structure first. That will
help you to grasp the concepts in a meaningful way.

Tree is a hierarchical data structure that stores information in the form of hierarchy unlike linear
data structures like linked list, stack, etc. A tree contains nodes (data) and connections (edges) that
don’t form a cycle.

Tree Data Structure Terminology to Know

The following are the few frequently used terminologies for a tree data structure.
 Node: A node is a structure that may contain a value or condition, or represent a
separate data structure.
 Root: The top node in a tree, the prime ancestor.
 Child: A node directly connected to another node when moving away from the root, an
immediate descendant.
 Parent: The converse notion of a child, an immediate ancestor.
 Leaf: A node with no children.
 Internal node: A node with at least one child.
 Edge: The connection between one node and another.
 Depth: The distance between a node and the root.
 Level: the number of edges between a node and the root + 1
 Height: The number of edges on the longest path between a node and a descendant
leaf.
 Breadth: The number of leaves.
 Subtree: A subtree of a tree T consists of a node in T and all of its descendants in T.
 Binary Tree: is a tree data structure in which each node has at most two children, which
are referred to as the left child and the right child.
 Binary Search Tree: is a special type of binary tree which has the following properties.
The left subtree of a node contains only nodes with keys lesser than the node’s key. The
right subtree of a node contains only nodes with keys greater than the node’s key. The
left and right subtree each must also be a binary search tree.

3.3 BINARY TREE ADT

• A binary tree is a tree in which no node can have more than two children.

• A binary tree is called strictly binary tree if every nonleaf node in the tree has nonempty
left and right subtrees i.e., every nonleaf node has two children.
• A complete binary tree of depth d is a strictly binary tree with all leaf nodes at level d.

A tree is a finite set of nodes having a distinct node called root. Binary Tree is a tree
which is either empty or has at most two subtrees, each of the subtrees also being a binary
tree. It means each node in a binary tree can have 0, 1 or 2 subtrees. A left or right subtree can
be empty.
A binary tree is made of nodes, where each node contains a "left" pointer, a "right"
pointer, and a data element. The "root" pointer points to the topmost node in the tree. The left
and right pointers point to smaller "subtrees" on either side. A null pointer represents a binary
tree with no elements -- the empty tree.

In the example, the tree has a distinct node called the root, and every node has 0, 1 or 2
children. So it is a binary tree, as every node has a maximum of 2 children.
If A is the root of a binary tree and B the root of its left or right subtree, then A is the parent
or father of B, and B is the left or right child of A. Those nodes having no children are leaf
nodes. Any node A is the ancestor of node B, and B is the descendant of A, if A is either the
father of B or the father of some ancestor of B. Two nodes having the same father are called
brothers or siblings.
Going from leaves to root is called climbing the tree, and going from root to leaves is
called descending the tree.
A binary tree in which every non-leaf node has non-empty left and right subtrees is
called a strictly binary tree. The tree shown below is a strictly binary tree.

The number of children a node has is called its degree. The level of the root is 0, and the
level of any node is one more than that of its father. In the strictly binary tree shown above, A
is at level 0, B and C at level 1, D and E at level 2, and F and G at level 3. The depth of a
binary tree is the length of the longest path from the root to any leaf. In the above tree, the
depth is 3.

Linked List Representation of Binary Tree

The structure of each node of a binary tree contains one data field and two pointers, one each
for the left and right child. Each child, being a node, has the same structure. The structure of
a node is shown below.
Value
Left Right

Binary trees can be represented by links where each node contains the address of the left
child and the right child. If any node has its left or right child empty then it will have in its
respective link field, a null value. A leaf node has null value in both of its links.
Pictorial Representation of a Binary Tree

Address of the Left Child Data Element Address of the Right Child

Representation of a Leaf

Note: Addresses of the Left Child and Right child are NULL.

Left and Right Skewed Trees

The tree in which each node is attached as the left child of its parent is called a left skewed
tree. The tree in which each node is attached as the right child of its parent is called a right
skewed tree.

Representation of Trees

There are two ways of representing the binary tree.

1. Sequential representation

2. Linked representation.

Let us see these representations one by one.

1. Sequential Representation of Binary Trees or Array Representation

Each node is sequentially arranged from top to bottom, which is easiest to see by numbering
each node: the root node is numbered first, and the remaining nodes are numbered in ever
increasing order, level by level. The nodes on the same level are numbered from left to right.
The numbering will be as shown below.

Linked List Implementation

Binary Tree: Linked Implementation

A binary tree can be implemented using pointers. The node structure is given below.

typedef struct TreeNode *PtrToNode;
typedef struct TreeNode *Tree;

struct TreeNode
{
    ElementType Element;
    Tree Left;
    Tree Right;
};

3.3.2 EXPRESSION TREE

An expression tree is also a binary tree, in which the leaf or terminal nodes are operands
and the non-terminal or intermediate nodes are operators.

To construct an Expression Tree the Following Steps are to be Followed

• Read out the given expression one symbol at a time.

• If the symbol is an operand, create a one-node tree and push a pointer to it onto a stack.

• If the symbol is an operator, pop pointers to two trees T1 and T2 from the stack and form
a new tree whose root is the operator and whose left and right children point to T2 and T1,
respectively.
• A pointer to this new tree is then pushed onto the stack.

Example

Input: ab+cde+**

1. The first two symbols are operands, so create one-node trees and push pointers to
them onto a stack.

2. Next, '+' is read, so two pointers to trees are popped, a new tree is formed, and a
pointer to it is pushed onto the stack.
3. Now, c, d and e are read, and for each a one-node tree is created and a pointer to the
corresponding tree is pushed onto the stack.


4. Now ‘+’ is read, so two trees are merged.
5. Continuing, ‘*’ is read, so we pop two tree pointers and form a new tree with a ‘*’ as root.
6. Finally, the last symbol ‘*’ is read, two trees are merged, and a pointer to the final tree is
left on the stack.

3.4 BINARY SEARCH TREE ADT

For every node, X, in the tree, the values of all the keys in its left subtree are smaller than
the key value of X, and the values of all the keys in its right subtree are larger than the key
value of X.

BST: Implementation

struct TreeNode;
typedef struct TreeNode *Position;
typedef struct TreeNode *SearchTree;

SearchTree MakeEmpty( SearchTree T );
Position Find( ElementType X, SearchTree T );
Position FindMin( SearchTree T );
Position FindMax( SearchTree T );
SearchTree Insert( ElementType X, SearchTree T );
SearchTree Delete( ElementType X, SearchTree T );
ElementType Retrieve( Position P );

Binary Search Tree Declaration

struct TreeNode
{
    ElementType Element;
    SearchTree Left;
    SearchTree Right;
};

BST Implementation: MakeEmpty

This operation is mainly for initialization. The implementation follows recursive function call
technique.

Procedure

SearchTree MakeEmpty( SearchTree T )
{
    if( T != NULL )
    {
        MakeEmpty( T->Left );
        MakeEmpty( T->Right );
        free( T );
    }
    return NULL;
}

• This function uses recursive function call.

• This function initially makes the left subtrees empty.

• Then the function makes the right subtrees empty.

BST Implementation: Find

This operation generally requires returning a pointer to the node in tree T that has key X,
or NULL if there is no such node.
• If T is NULL, then we can just return NULL.

• Otherwise, if the key stored at T is X, return T.

• Otherwise make a recursive call on a subtree of T, either left or right.

• If X is less than the key stored at T, then search the left subtree; otherwise search the
right subtree recursively.
Procedure

Position Find( ElementType X, SearchTree T )
{
    if( T == NULL )
        return NULL;
    if( X < T->Element )
        return Find( X, T->Left );
    else if( X > T->Element )
        return Find( X, T->Right );
    else
        return T;
}

Note: Here, key denotes the data element.

• The function compares the searching element with the element in the root.

• If the element is less than the root, it searches the left subtree recursively.

• Otherwise it searches the right subtree recursively.

• This function returns the element if it is found in the tree.

• Otherwise it returns NULL

BST Implementation: FindMin

This routine will return the position of the smallest element in the tree. To perform the
FindMin, start at the root and go left as long as there is a left child. The stopping point is the
smallest element, which will always be the leftmost child of the tree.
Procedure

Position FindMin( SearchTree T )
{
    if( T == NULL )
        return NULL;
    else if( T->Left == NULL )
        return T;
    else
        return FindMin( T->Left );
}

• In a Binary Search Tree, the minimum element is available in the left subtree.

• Hence, this function searches the left subtree of the tree.

• Searching takes place recursively.

• The last node in the left subtree contains the minimum element. If T-> left is NULL, then
it returns the element in that node.
• If no tree is available, i.e T= NULL, then it returns NULL.

BST Implementation: FindMax

This routine will return the position of the largest element in the tree. To perform the
FindMax, start at the root and go right as long as there is a right child. The stopping point is
the largest element.
Position FindMax( SearchTree T )
{
    if( T != NULL )
        while( T->Right != NULL )
            T = T->Right;
    return T;
}
• In a Binary Search Tree, the maximum element is available in the right subtree.

• Hence, this function searches the right subtree of the tree.

• If T->right is not equal to NULL, T moves to T->right. This is done till T->right is equal to
NULL.
• The last node in the right subtree contains the maximum element. If T->right is NULL,
then it returns the element in that node.
• If no tree is available, i.e T=NULL, then it returns NULL.

BST Implementation: Insert

• To insert X into tree T, proceed down the tree with a Find. If X is found, do nothing,
because a BST will not have duplicate values. Otherwise, insert X at the appropriate spot
on the path traversed.

In this routine T points to the root of the tree, and the root changes on the first insertion.

Insert is written as a function that returns a pointer to the root of the new tree.

SearchTree Insert( ElementType X, SearchTree T )
{
    if( T == NULL )
    {
        /* Create and return a one-node tree */
        T = malloc( sizeof( struct TreeNode ) );
        if( T == NULL )
            FatalError( "Out of space!!!" );
        else
        {
            T->Element = X;
            T->Left = T->Right = NULL;
        }
    }
    else if( X < T->Element )
        T->Left = Insert( X, T->Left );
    else if( X > T->Element )
        T->Right = Insert( X, T->Right );
    /* Else X is in the tree already; we'll do nothing */

    return T;  /* Do not forget this line!! */
}

BST Implementation: Delete

If the node is a leaf it can be deleted immediately.

If the node has one child, the node can be deleted after its parents adjust a pointer to
bypass the node.
The following figure shows an initial tree and the result of a deletion; the key value is 2. It is
replaced by the smallest data in its right subtree (3), and then that node is deleted as before.

SearchTree Delete( ElementType X, SearchTree T )
{
    Position TmpCell;

    if( T == NULL )
        Error( "Element not found" );
    else if( X < T->Element )  /* Go left */
        T->Left = Delete( X, T->Left );
    else if( X > T->Element )  /* Go right */
        T->Right = Delete( X, T->Right );
    else  /* Found element to be deleted */
    if( T->Left && T->Right )  /* Two children */
    {
        TmpCell = FindMin( T->Right );
        T->Element = TmpCell->Element;
        T->Right = Delete( T->Element, T->Right );
    }
    else  /* One or zero children */
    {
        TmpCell = T;
        if( T->Left == NULL )  /* Also handles 0 children */
            T = T->Right;
        else if( T->Right == NULL )
            T = T->Left;
        free( TmpCell );
    }
    return T;
}

Program

// C program to demonstrate insert operation in binary search tree
#include <stdio.h>
#include <stdlib.h>

struct node
{
    int key;
    struct node *left, *right;
};

// A utility function to create a new BST node
struct node *newNode(int item)
{
    struct node *temp = (struct node *)malloc(sizeof(struct node));
    temp->key = item;
    temp->left = temp->right = NULL;
    return temp;
}

// A utility function to do inorder traversal of BST
void inorder(struct node *root)
{
    if (root != NULL)
    {
        inorder(root->left);
        printf("%d \n", root->key);
        inorder(root->right);
    }
}

/* A utility function to insert a new node with given key in BST */
struct node *insert(struct node *node, int key)
{
    /* If the tree is empty, return a new node */
    if (node == NULL)
        return newNode(key);

    /* Otherwise, recur down the tree */
    if (key < node->key)
        node->left = insert(node->left, key);
    else if (key > node->key)
        node->right = insert(node->right, key);

    /* return the (unchanged) node pointer */
    return node;
}

// Driver program to test above functions
int main()
{
    /* Let us create the following BST */
    struct node *root = NULL;
    root = insert(root, 50);
    insert(root, 30);
    insert(root, 20);
    insert(root, 40);
    insert(root, 70);
    insert(root, 60);
    insert(root, 80);

    inorder(root);
    return 0;
}
Output:

20
30

40

50

60

70
80

3.5 AVL TREE

An AVL (Adelson-Velskii and Landis) tree is a binary search tree with a balance condition. For
every node in the AVL tree, the height of the left subtree and the height of the right subtree can
differ by at most one.
| Height of left subtree - Height of right subtree | <= 1

 The height of the left subtree minus the height of the right subtree of a node is called
the balance factor of the node. For an AVL tree, the balance factor of each node is
always -1, 0 or 1.
 The height of an empty tree is defined to be -1.

 Given an AVL tree, if insertions or deletions are performed, the AVL tree may not
remain height balanced.

Balancing Trees

The tree may become unbalanced whenever a node is inserted or deleted. The unbalanced
tree is balanced by rotating the tree to the left or right.
There are four cases that require rebalancing. All unbalanced trees fall into one of these
four cases:
1. Left of Left - A subtree of a tree that is left high has become left high after the insertion
of an element.

Diagram

2. Right of Right - A subtree of a tree that is right high has become right high after inserting
an element.

Diagram

3. Right of Left - A sub tree of a tree that is left high has become right high after inserting an
element.
Diagram

4. Left of Right - A sub tree of a tree that is right high has become left high after inserting an
element.
Diagram

Rotation

An AVL tree becomes unbalanced when there is an insertion or deletion of a node from an
existing AVL tree.
To balance the tree, a transformation of the tree is performed. This transformation is
called Rotation.

Rotation is a technique in which an unbalanced AVL tree is rotated either to the left or right
side, so that the tree becomes balanced again, i.e.
| HL - HR | <= 1

There are two types of Rotation. They are:

• Single Rotation

• Double Rotation

Single Rotation: Rotation of the tree is performed only once, either to the left or right side of
the unbalanced tree. Single Rotation is performed for the two cases, namely left of left and
right of right.
Double Rotation: Rotation of the tree is performed twice, one on the left side and the other
one on the right side (or vice versa) of the unbalanced tree. Double Rotation is performed for
the two cases, namely left of right and right of left.

Single Rotation

Case 1: Left of Left: When the out-of-balance condition has been created by a left high
subtree of a left high tree, the tree is balanced by rotating the out-of-balance node to the right.
Diagram

Simple Right Rotation

Case 2: Right of Right: When the out-of-balance condition has been created by a right high
subtree of a right high tree, the tree is balanced by rotating the out-of-balance node to the left.

Double Rotation :

Case 3: Right of Left: Double rotation is needed for balancing the AVL tree in this case. The
tree is out of balance in which the root is left high and the left subtree is right high.
• The left subtree which is right high is rotated first in the left side.

• The root of the subtree which is left high is rotated second in the right side.

Basic Operations Performed in an AVL Tree:

The basic operations performed in an AVL tree are as follows:

• Inserting an element.

• Deleting an element

Whenever these two operations are done, the balance condition has to be checked after the
operation. If the balance condition is not satisfied then rotation is performed. Single or double
rotation is performed depending upon the violation in the balance factor.
AVL Trees: Implementation

An AVL tree is implemented using the linked list concept. The node declaration for AVL trees
is given below:
struct AvlNode;
typedef struct AvlNode *Position;
typedef struct AvlNode *AvlTree;

AvlTree MakeEmpty( AvlTree T );
Position Find( ElementType X, AvlTree T );
Position FindMin( AvlTree T );
Position FindMax( AvlTree T );
AvlTree Insert( ElementType X, AvlTree T );
AvlTree Delete( ElementType X, AvlTree T );
ElementType Retrieve( Position P );

struct AvlNode
{
    ElementType Element;
    AvlTree Left;
    AvlTree Right;
    int Height;
};

A structure AvlNode is created which contains four data elements. They are:

• The data to be stored in the node.

• Address of the left child

• Address of the right child

• Height of the node

Procedure to Compute the Height of a Node

static int Height( Position P )
{
    if( P == NULL )
        return -1;
    else
        return P->Height;
}

Procedure to Insert an Element Into the Tree

AvlTree Insert( ElementType X, AvlTree T )
{
    if( T == NULL )
    {
        /* Create and return a one-node tree */
        T = malloc( sizeof( struct AvlNode ) );
        if( T == NULL )
            FatalError( "Out of space!!!" );
        else
        {
            T->Element = X;
            T->Height = 0;
            T->Left = T->Right = NULL;
        }
    }
    else if( X < T->Element )
    {
        T->Left = Insert( X, T->Left );
        if( Height( T->Left ) - Height( T->Right ) == 2 )
        {
            if( X < T->Left->Element )
                T = SingleRotateWithLeft( T );
            else
                T = DoubleRotateWithLeft( T );
        }
    }
    else if( X > T->Element )
    {
        T->Right = Insert( X, T->Right );
        if( Height( T->Right ) - Height( T->Left ) == 2 )
        {
            if( X > T->Right->Element )
                T = SingleRotateWithRight( T );
            else
                T = DoubleRotateWithRight( T );
        }
    }
    /* Else X is in the tree already; we'll do nothing */

    T->Height = Max( Height( T->Left ), Height( T->Right ) ) + 1;
    return T;
}

Procedure for SingleRotation with Left

static Position SingleRotateWithLeft( Position K2 )
{
    Position K1;

    K1 = K2->Left;
    K2->Left = K1->Right;
    K1->Right = K2;

    K2->Height = Max( Height( K2->Left ), Height( K2->Right ) ) + 1;
    K1->Height = Max( Height( K1->Left ), K2->Height ) + 1;

    return K1;  /* New root */
}

Procedure for Single Rotation with Right

static Position SingleRotateWithRight( Position K2 )
{
    Position K1;

    K1 = K2->Right;
    K2->Right = K1->Left;
    K1->Left = K2;

    K2->Height = Max( Height( K2->Left ), Height( K2->Right ) ) + 1;
    K1->Height = Max( Height( K1->Right ), K2->Height ) + 1;

    return K1;  /* New root */
}

Procedure for Double Rotation with Left

static Position DoubleRotateWithLeft( Position K3 )
{
    /* Rotate between K1 and K2 */
    K3->Left = SingleRotateWithRight( K3->Left );

    /* Rotate between K3 and K2 */
    return SingleRotateWithLeft( K3 );
}

Procedure for Double Rotation with Right

static Position DoubleRotateWithRight( Position K3 )
{
    /* Rotate between K1 and K2 */
    K3->Right = SingleRotateWithLeft( K3->Right );

    /* Rotate between K3 and K2 */
    return SingleRotateWithRight( K3 );
}

3.6 PRIORITY QUEUE

A priority queue is an abstract data type that behaves similarly to the normal queue
except that each element has some priority, i.e., the element with the highest priority would
come first in a priority queue. The priority of the elements in a priority queue will determine
the order in which elements are removed from the priority queue.

The priority queue supports only comparable elements, which means that the
elements are either arranged in an ascending or descending order.

For example, suppose we have some values like 1, 3, 4, 8, 14, 22 inserted in a priority
queue with an ordering imposed on the values from least to greatest. Therefore, the number
1 would have the highest priority while 22 would have the lowest priority.

Characteristics of a Priority queue

A priority queue is an extension of a queue that contains the following characteristics:


 Every element in a priority queue has some priority associated with it.
 An element with the higher priority will be deleted before the deletion of the lesser
priority.
 If two elements in a priority queue have the same priority, they will be arranged
using the FIFO principle.

Let's understand the priority queue through an example.

We have a priority queue that contains the following values:

1, 3, 4, 8, 14, 22

All the values are arranged in ascending order. Now, we will observe how the priority queue
will look after performing the following operations:

o poll (): This function will remove the highest priority element from the priority queue. In
the above priority queue, the '1' element has the highest priority, so it will be removed
from the priority queue.
o add (2): This function will insert '2' element in a priority queue. As 2 is the smallest
element among all the numbers so it will obtain the highest priority.
o poll (): It will remove the '2' element from the priority queue as it has the highest
priority.

o add(5): It will insert element 5 after 4, as 5 is larger than 4 and less than 8, so it will
obtain the third highest priority in the priority queue.

Types of Priority Queue

There are two types of priority queue:


Ascending order priority queue: In an ascending order priority queue, a lower priority number
is given a higher priority. For example, we take the numbers from 1 to 5
arranged in ascending order like 1, 2, 3, 4, 5; therefore, the smallest number, i.e., 1 is given
the highest priority in the priority queue.

Descending order priority queue: In a descending order priority queue, a higher priority
number is given a higher priority. For example, we take the numbers from 1 to
5 arranged in descending order like 5, 4, 3, 2, 1; therefore, the largest number, i.e., 5 is given
the highest priority in the priority queue.

Representation of priority queue

Now, we will see how to represent the priority queue through a one-way list.

We will create the priority queue by using the list given below, in which the INFO list contains
the data elements, the PRN list contains the priority numbers of each data element available
in the INFO list, and LINK contains the address of the next node.

Let's create the priority queue step by step.

In the case of a priority queue, a lower priority number is considered the higher priority,
i.e., lower priority number = higher priority.

Step 1: In the list, the lowest priority number is 1, whose data value is 333, so it will be inserted
in the list as shown in the below diagram:

Step 2: After inserting 333, priority number 2 is having a higher priority, and data values
associated with this priority are 222 and 111. So, this data will be inserted based on the FIFO
principle; therefore 222 will be added first and then 111.

Step 3: After inserting the elements of priority 2, the next higher priority number is 4, and the
data elements associated with priority number 4 are 444, 555 and 777. In this case, elements
would be inserted based on the FIFO principle; therefore, 444 will be added first, then 555,
and then 777.

Step 4: After inserting the elements of priority 4, the next higher priority number is 5, and the
value associated with priority 5 is 666, so it will be inserted at the end of the queue.

Implementation of Priority Queue

The priority queue can be implemented in four ways: arrays, linked list, heap data
structure and binary search tree. The heap data structure is the most efficient way of
implementing the priority queue, so we will implement the priority queue using a heap data
structure in this topic. Now, first we understand the reason why the heap is the most efficient
way among all the other data structures.

Analysis of complexities using different implementations

Implementation       add        remove     peek

Linked list          O(1)       O(n)       O(n)

Binary heap          O(log n)   O(log n)   O(1)

Binary search tree   O(log n)   O(log n)   O(1)

3.6.1 HEAP

A heap is a tree-based data structure that forms a complete binary tree and satisfies the
heap property: if A is a parent node of B, then A is ordered with respect to the node B, for all
nodes A and B in the heap. It means that the value of the parent node could be more than or
equal to the value of the child node, or the value of the parent node could be less than or
equal to the value of the child node. Therefore, we can say that there are two types of heaps:

Max heap: The max heap is a heap in which the value of the parent node is greater than the
value of the child nodes.

Min heap: The min heap is a heap in which the value of the parent node is less than the
value of the child nodes.

Both heaps are binary heaps, as each node has at most two child nodes.

Priority Queue Operations

The common operations that we can perform on a priority queue are insertion, deletion and
peek. Let's see how we can maintain the heap data structure.

 Inserting the element in a priority queue (max heap)

If we insert an element in a priority queue, it moves to the empty slot, looking from top to
bottom and left to right.

If the element is not in its correct place, it is compared with its parent node; if the two are
found out of order, they are swapped. This process continues until the element is placed in
its correct position.

 Removing the maximum element from the priority queue

As we know, in a max heap the maximum element is the root node. When we remove the
root node, it creates an empty slot. The last inserted element is moved into this empty slot.
Then, this element is compared with its child nodes, i.e., the left child and the right child, and
swapped with the larger of the two if it is smaller. It keeps moving down the tree until the
heap property is restored.

Applications of Priority queue

The following are the applications of the priority queue:

 It is used in Dijkstra's shortest path algorithm.
 It is used in Prim's algorithm.
 It is used in data compression techniques like Huffman coding.
 It is used in heap sort.
 It is also used in operating systems for priority scheduling, load balancing and interrupt
handling.
Program to create the priority queue using the binary max heap.

#include <stdio.h>
int heap[40];
int size=-1;

// retrieving the parent node of the child node


int parent(int i)
{

return (i - 1) / 2;
}

// retrieving the left child of the parent node.


int left_child(int i)
{
return 2 * i + 1;
}
// retrieving the right child of the parent
int right_child(int i)
{
return 2 * i + 2;
}
// Returning the element having the highest priority
int get_Max()
{
return heap[0];
}
//Returning the element having the minimum priority.
//In a max heap the minimum must be a leaf, so scan the lower half of the array.
int get_Min()
{
int min = heap[size];
for (int i = size / 2; i <= size; i++)
if (heap[i] < min)
min = heap[i];
return min;
}
// function to move the node up the tree in order to restore the heap property.
void moveUp(int i)
{

while (i > 0)
{
// swapping parent node with a child node
if(heap[parent(i)] < heap[i]) {

int temp;
temp=heap[parent(i)];
heap[parent(i)]=heap[i];
heap[i]=temp;

}
// moving i up to its parent index
i = parent(i);
}
}

//function to move the node down the tree in order to restore the heap property.
void moveDown(int k)
{
int index = k;

// getting the location of the Left Child


int left = left_child(k);

if (left <= size && heap[left] > heap[index])


{ index = left;
}

// getting the location of the Right Child


int right = right_child(k);

if (right <= size && heap[right] > heap[index])


{ index = right;
}

// If k is not equal to index


if (k != index) {
int temp;

temp=heap[index];
heap[index]=heap[k];
heap[k]=temp;
moveDown(index);
}
}
// Removing the element of maximum priority
void removeMax()
{
heap[0]=heap[size];
size=size-1;
moveDown(0);
}
//inserting the element in a priority queue
void insert(int p)
{
size = size + 1;
heap[size] = p;

// move Up to maintain heap property


moveUp(size);
}
//Removing the element from the priority queue at a given index i.
void delete(int i)
{
heap[i] = heap[0] + 1;

// the node at the ith location is now the largest, so it moves up to the root
moveUp(i);
// Removing the node having maximum priority
removeMax();
}
int main()
{
// Inserting the elements in a priority queue
insert(20);

insert(19);
insert(21);
insert(18);
insert(12);
insert(17);
insert(15);
insert(16);
insert(14);

int i=0;

printf("Elements in a priority queue are : ");


for(int i=0;i<=size;i++)
{
printf("%d ",heap[i]);
}
delete(2); // deleting the element whose index is 2.
printf("\nElements in a priority queue after deleting the element are : ");
for(int i=0;i<=size;i++)
{
printf("%d ",heap[i]);
}
int max=get_Max();
printf("\nThe element which is having the highest priority is %d: ",max);

int min=get_Min();
printf("\nThe element which is having the minimum priority is : %d",min);
return 0;
}

Output :

Example:

The above tree satisfies both the ordering property and the structural property of the Max
Heap data structure.

Operations on Max Heap

The following operations are performed on a Max heap data structure...


 Finding Maximum
 Insertion
 Deletion

Finding Maximum Value Operation in Max Heap

Finding the node which has maximum value in a max heap is very simple. In max heap, the
root node has the maximum value than all other nodes in the max heap. So, directly we can
display root node value as maximum value in max heap.

Insertion Operation in Max Heap

Insertion Operation in max heap is performed as follows.

Step 1: Insert the newNode as last leaf from left to right.

Step 2: Compare newNode value with its Parent node.

Step 3: If newNode value is greater than its parent, then swap both of them.

Step 4: Repeat step 2 and step 3 until the newNode value is less than its parent node's
value, or the newNode reaches the root.

Example:

Consider the above max heap. Insert a new node with value 85.

Step 1: Insert the newNode with value 85 as the last leaf from left to right. That means the
newNode is added as a right child of the node with value 75. After adding it, the max heap is
as follows:

Step 2: Compare the newNode value (85) with its parent node value (75). That means 85 > 75.

Step 3: Here, the newNode value (85) is greater than its parent value (75), so swap both of
them. After swapping, the max heap is as follows.

Step 4: Now, again compare the newNode value (85) with its parent node value (89).

Here, the newNode value (85) is smaller than its parent node value (89), so we stop the
insertion process. Finally, the max heap after insertion of a new node with value 85 is as
follows.

Deletion Operation in Max Heap

In a max heap, deleting the last node is very simple as it does not disturb the max heap
properties. Deleting the root node from a max heap is a little more difficult as it disturbs the
max heap properties. We use the following steps to delete the root node from a max heap...

Step 1: Swap the root node with the last node in the max heap.
Step 2: Delete the last node.
Step 3: Now, compare the root value with its left child value.
Step 4: If the root value is smaller than its left child, then compare the left child with its right
sibling. Else go to Step 6.
Step 5: If the left child value is larger than its right sibling, then swap the root with the left
child; otherwise swap the root with its right child.
Step 6: If the root value is larger than its left child, then compare the root value with its right
child value.
Step 7: If the root value is smaller than its right child, then swap the root with the right child;
otherwise stop the process.
Step 8: Repeat the same until the root node is fixed at its exact position.

Example:

Consider the above max heap. Delete root node (90) from the max heap.

Step 1: Swap the root node (90) with the last node (75) in the max heap. After swapping, the
max heap is as follows...

Step 2: Delete the last node, here the node with value 90. After deleting the node with value
90 from the heap, the max heap is as follows.

Step 3: Compare root node (75) with its left child (89).

Here, the root value (75) is smaller than its left child value (89). So, compare the left child
(89) with its right sibling (70).

Step 4: Here, the left child value (89) is larger than its right sibling (70), so swap the root (75)
with the left child (89).

Step 5: Now, again compare 75 with its left child (36).

Here, the node with value 75 is larger than its left child. So, the node with value 75 is
compared with its right child (85).

Step 6: Here, node with value 75 is smaller than its right child (85). So, we swap both of
them. After swapping max heap is as follows.

Step 7: Now, compare node with value 75 with its left child (15).

Here, node with value 75 is larger than its left child (15) and it does not have right child. So
we stop the process.

Finally, max heap after deleting root node (90) is as follows...

3.7 B-TREE

B-Tree is a self-balancing search tree. B Trees are multi-way trees; that is, each node
contains a set of keys and pointers. A B Tree with four keys and five pointers represents the
minimum size of a B Tree node. B Trees are dynamic; that is, the height of the tree grows and
contracts as records are added and deleted.
In most of the other self-balancing search trees (like AVL and Red-Black Trees), it is
assumed that everything is in main memory. To understand the use of B-Trees, we must think
of the huge amount of data that cannot fit in main memory. When the number of keys is high,
the data is read from disk in the form of blocks. Disk access time is very high compared to
main memory access time. The main idea of using B-Trees is to reduce the number of disk
accesses. Most of the tree operations (search, insert, delete, max, min, etc.) require O(h) disk
accesses, where h is the height of the tree. A B-Tree is a fat tree: its height is kept low by
putting the maximum possible number of keys in a B-Tree node. Generally, the B-Tree node
size is kept equal to the disk block size. Since h is low for a B-Tree, total disk accesses for
most of the operations are reduced significantly compared to balanced binary search trees
like AVL Trees, Red-Black Trees, etc.
Properties of B-Tree
 All leaves are at the same level.

 B-Tree is defined by the term minimum degree ‘t‘. The value of ‘t‘ depends upon disk
block size.
 Every node except the root must contain at least t-1 keys. The root may contain a
minimum of 1 key.
 All nodes (including root) may contain at most (2*t – 1) keys.
 Number of children of a node is equal to the number of keys in it plus 1.
 All keys of a node are sorted in increasing order. The child between two
keys k1 and k2 contains all keys in the range from k1 and k2.
 B-Tree grows and shrinks from the root which is unlike Binary Search Tree. Binary
Search Trees grow downward and also shrink from downward.
 Like other balanced Binary Search Trees, the time complexity to search, insert, and
delete is O(log n).

Following is an example B-Tree of minimum degree 3. Note that in practical B-Trees, the
value of the minimum degree is much more than 3.

Search
Search is similar to search in a Binary Search Tree. Let the key to be searched be k. We start
from the root and recursively traverse down. For every visited non-leaf node, if the node
contains the key, we simply return the node. Otherwise, we recur down to the appropriate
child (the child which is just before the first greater key) of the node. If we reach a leaf node
and don't find k in it, we return NULL.

Traverse
Traversal is also similar to the inorder traversal of a binary tree. We start from the leftmost
child, recursively print the leftmost child, then repeat the same process for the remaining
children and keys. In the end, we recursively print the rightmost child.

3.8 B+ TREES
A B+ Tree combines features of B Trees. It contains index pages and data pages. The
data pages always appear as leaf nodes in the tree. The root node and intermediate nodes
are always index pages. The index pages in a B+ tree are constructed through the process of
inserting and deleting records.
Thus, B+ trees grow and contract like their B Tree counterparts. The contents and the
number of index pages reflects this growth and shrinkage. B+ Trees and B Trees use a "fill
factor" to control the growth and the shrinkage. A 50% fill factor would be the minimum for any
B+ or B tree.
As our example we use the smallest page structure. This means that our B+ tree
conforms to the following guidelines.

As this table indicates, each page must have a minimum of two keys. The root page may
violate this rule. The following table shows a B+ tree. As the example illustrates, this tree
does not have a full index page. (We have room for one more key and pointer in the root
page.) In addition, one of the data pages contains empty slots.

Adding Records to a B+ Tree

The key value determines a record's placement in a B+ tree. The leaf pages are
maintained in sequential order AND a doubly linked list (not shown) connects each leaf page
with its sibling page(s). This doubly linked list speeds data movement as the pages grow
and contract. We must consider three scenarios when we add a record to a B+ tree. Each
scenario causes a different action in the insert algorithm.

The scenarios are:

Illustrations of the Insert Algorithm

The following examples illustrate each of the insert scenarios. We begin with the
simplest scenario: inserting a record into a leaf page that is not full. Since only the leaf node
containing 25 and 30 contains expansion room, we're going to insert a record with a key
value of 28 into the B+ tree. The following figure shows the result of this addition.

Adding a record when the leaf page is full but the index page is not.

Next, we're going to insert a record with a key value of 70 into our B+ tree. This record
should go in the leaf page containing 50, 55, 60, and 65. Unfortunately, this page is full.
This means that we must split the page as follows:

The middle key of 60 is placed in the index page between 50 and 75. The following
table shows the B+ tree after the addition of 70.

Adding a record when both the leaf page and the index page are full.

As our last example, we're going to add a record containing a key value of 95 to our B+
tree. This record belongs in the page containing 75, 80, 85, and 90. Since this page is full we
split it into two pages:

The middle key, 85, rises to the index page. Unfortunately, the index page is also full,
so we split the index page:

The following table illustrates the addition of the record containing 95 to the B+ tree.

Rotation

B+ trees can incorporate rotation to reduce the number of page splits. A rotation occurs
when a leaf page is full, but one of its sibling pages is not full. Rather than splitting the leaf
page, we move a record to its sibling, adjusting the indices as necessary. Typically, the left
sibling is checked first (if it exists) and then the right sibling. As an example, consider the B+
tree before the addition of the record containing a key of 70. As previously stated this record
belongs in the leaf node containing 50 55 60 65. Notice that this node is full, but its left sibling
is not.

Using rotation we shift the record with the lowest key to its sibling. Since this key
appeared in the index page we also modify the index page. The new B+ tree appears in the
following table.

Deleting Keys from a B+ Tree

We must consider three scenarios when we delete a record from a B+ tree. Each

scenario causes a different action in the delete algorithm. The scenarios are:

As our example, we consider the B+ tree after we added 95 as a key. As a refresher, this
tree is printed in the following table.

Delete 70 from the B+ Tree

We begin by deleting the record with key 70 from the B+ tree. This record is in a leaf page
containing 60, 65 and 70. This page will contain 2 records after the deletion. Since our fill
factor is 50% (2 records), we simply delete 70 from the leaf node. The following table shows
the B+ tree after the deletion.

Delete 25 from the B+ Tree

Next, we delete the record containing 25 from the B+ tree. This record is found in the
leaf node containing 25, 28, and 30. The fill factor will be 50% after the deletion; however, 25
appears in the index page. Thus, when we delete 25 we must replace it with 28 in the index
page. The following table shows the B+ tree after this deletion.

Delete 60 from the B+ Tree
As our last example, we're going to delete 60 from the B+ tree. This deletion is interesting
for several reasons:
1. The leaf page containing 60 (60, 65) will fall below the fill factor after the deletion. Thus,
we must combine leaf pages.
2. With the recombined pages, the index page will be reduced by one key. Hence, it will also
fall below the fill factor, so we must combine index pages.
3. Sixty appears as the only key in the root index page. Obviously, it will be removed with
the deletion.
The following table shows the B+ tree after the deletion of 60.

QUESTION BANK

PART A

1. Which tree traversal method visits the root node first, then the left subtree, and
finally the right subtree?
a) In-order
b) Pre-order
c) Post-order
d) Level-order

2. In a binary search tree, where are nodes with values greater than the root
typically found?
a) Left subtree
b) Right subtree
c) Both left and right subtrees
d) None of the above

3. What is the height of an AVL tree with 'n' nodes in the worst case?
a) O(n)
b) O(log n)
c) O(n log n)
d) O(1)

4. Which traversal method can be used to get the elements of a binary search tree
in ascending order?

a) Pre-order
b) In-order
c) Post-order
d) Level-order

5. What type of binary tree is used to implement a priority queue?

a) Binary Search Tree


b) AVL Tree
c) Binary Heap

d) Expression Tree

6. In a max heap, where is the maximum element always found?


a) Leaf node
b) Root node
c) Leftmost node
d) Rightmost node

7. What property does a binary search tree have?

a) All left descendants are greater than the root


b) All right descendants are less than the root
c) All left descendants are less than or equal to the root
d) The left and right subtrees have equal height

8. Which operation is not affected by the self-balancing property of an AVL tree?

a) Insertion
b) Deletion
c) Searching
d) Rotation

9. What is the time complexity of inserting an element into a binary heap?

a) O(1)
b) O(log n)
c) O(n)
d) O(n log n)

10. In a complete binary tree, how many children does a node at index 'i' have?

a) 0
b) 1
c) 2
d) Depends on the position

11. Which of the following is not a type of tree traversal?

a) In-order
b) Out-order
c) Pre-order
d) Post-order

12. An expression tree is used primarily for which purpose?

a) To evaluate logical expressions


b) To store elements in sorted order
c) To represent hierarchical data
d) To evaluate arithmetic expressions

13. Which tree data structure is balanced by ensuring the height difference
between left and right subtrees is at most 1?

a) Binary Search Tree


b) AVL Tree
c) Binary Heap
d) Expression Tree

14. What is the main advantage of using a binary search tree over a sorted array?

a) Faster search time


b) Dynamic insertions and deletions
c) Reduced memory usage
d) Simpler to implement

15. In a binary heap, which of the following operations can be performed


efficiently?

a) Insertion
b) Deletion
c) Finding the maximum or minimum element
d) All of the above

16. A tree is a data structure that consists of nodes with a hierarchical


relationship.

Answer: hierarchical

17. In a traversal, each node is visited in the following order: left subtree, root,
right subtree.

Answer: in-order

18. A binary tree is a tree data structure in which each node has at most
children.

Answer: two

19. An expression tree is a binary tree where each leaf node represents an
and each internal node represents an operator.

Answer: operand

20. In a binary search tree, the left child of a node contains only nodes with values
than the parent node’s value.

Answer: less

21. An AVL tree is a self-balancing binary search tree where the difference in heights of
left and right subtrees cannot be more than for all nodes.

Answer: one

22. A priority queue can be efficiently implemented using a data structure.

Answer: heap

------------------------------------------------------------------------------------------------------------------

PART B

1. If the depth of the binary tree is k, the maximum number of nodes in the binary tree is
2^k - 1. Justify.

2. For the given binary search tree, if we remove the root and replace it with something
from left sub tree. What will be the value of the new root? Justify your answer.

3. Define a fully binary tree. Give an example.

4. Create an expression tree for the expression. a*(b+c)+((d+e*f)*g)

5. How does the AVL tree differ from binary search tree?

6. What are the various rotations in AVL trees?

7. List the applications of trees.

8. What are threaded binary trees? Give its advantages

9. Define balance factor of AVL Tree.

10. How do we calculate the balance factor for each node in an AVL tree?

11. Simulate the result of inserting 3,1,4,6,2,8,9 into an initially empty AVL Tree.

12. Discuss with respect to following tree:

a) List the siblings for node E.

b) Compute the height

13. Number the following binary tree to traverse it in

i. Preorder

ii. Inorder

14. Explain why binary search cannot be performed on a linked list.

15. List out the various operations that can be performed on B-trees

16. List out the properties of B+ -Trees

17. Illustrate the steps in the construction of a heap of records with the following key
values:12,33,67,8,7,80,5,23

18. Analyze the properties of binary heap.

19. Define a heap and show how it can be used to represent a priority queue.

20. Create an expression tree for the given expression: a*(b+c)+((d+e*f)*g). (Nov/Dec
2023)

21. Define binary heap. (Nov/Dec 2024)

PART C

1. Write an algorithm for preorder, in order and post order traversal of a binary tree.

2. Explain the following operations on a binary search tree with suitable algorithms

i. Find a node

ii. Find the minimum and maximum elements of a binary search tree

3. i.Write short notes on threaded binary tree

ii. Describe an iterative algorithm to traverse a tree in preorder

4. Write an algorithm for inserting and deleting a node in a binary search tree.

5. Discuss in detail the various methods in which a binary tree can be represented.
Discuss the advantage and disadvantage of each method

6. i. Explain the B+ tree and its properties with an Example

ii. What are the steps to convert general tree to binary tree?

7. i. Construct B Tree to insert the following key elements(order of the tree is 3)


5,2,13,3,45,72,4,6,9,22

ii. Draw a B Tree of order 6

8. i. Discuss how to insert an element in a AVL tree, Explain with algorithm.

ii. Explain how deletion can take place in AVL trees with suitable algorithms

9. i. What are AVL trees? Describe the different rotations defined for AVL tree.

ii. Insert the following elements step by step in sequence into an empty AVL tree:
15, 18, 20, 21, 28, 23, 30, 26

10. i. Point out the operations of B-tree using 2-3 tree.

ii. Explain the operations of threaded binary tree.

11. Discuss the different traversal technique in binary tree with suitable algorithms and
examples?

12. Explain the construction of expression tree with example. Give the applications of
trees

13. i. Show the result of inserting 15,17,6,19,11,10,13,20,8,14,12 one at a time into an
initially empty binary min heap.

ii. Show the result of performing three delete min operations in the final binary min
heap obtained.

14. i. Illustrate How delete operation performed on binary heap?

ii. Write suitable operations for percolate up and percolate down operations in a binary
heap.

15. Consider the binary search tree given below. Find the result of in-order, pre-order, and
post-order traversals. Show the deletion of the root node Insert 11, 22, 33, 44, 55, 66,
and 77 in the tree

16. i. Compare B trees with B+ trees.

ii. Create a B+ tree of order 5 for the following data arriving in sequence: 90, 27, 7, 9,
18, 21, 3, 4, 16, 11, 21, 72

17. i. Draw a B-Tree of order m = 5 for the keys {K, O, S, V, M, F, B, G, T, U, W}

ii. Delete the keys K and G in order.

iii. Justify the number of splits needed for inserts / delete with proper reasons.

18. Construct AVL tree for the followings after rotation

19. Write an algorithm to insert and delete a node in a binary search tree. (Nov/Dec 2023)

20. Explain the Binary search tree that results from inserting the following values into an
initially empty Binary search tree in the following order {50, 27, 16, 88, 34, 65, 52, 77,
93, 4, 12, 29, 44, 92}.(Nov/Dec 2023)

21. Build an AVL tree by inserting the following nodes one by one {1, 2, 3, 4, 5, 6, 7, 8, 9,
10, 11, 12} into an initially empty AVL tree. Find the type of rotations required for
balancing the AVL tree. (Nov/Dec 2023)

22. Identify the binary tree traversals with algorithms. Apply the algorithms for the
following tree and find out all traversals for the binary tree. (Nov/Dec 2024)

23. Construct AVL tree (step by step) for the following data. (Nov/Dec 2024)
21,26,30,9,4,14,28,18,15,10,2,3,7
24. Identify the applications of AVL tree and Binary search tree. (Nov/Dec 2024)

UNIT IV
GRAPHS

Graph Definition – Representation of Graphs – Types of Graph – Breadth-first traversal –
Depth-first traversal – Topological Sort – Dijkstra’s algorithm – Minimum Spanning Tree –
Prim’s algorithm – Kruskal’s.

4.1 GRAPHS

Definition :

A Brief Introduction to Graphs

Graphs are non-linear data structures representing the "connections" between the elements.
These elements are known as the Vertices, and the lines or arcs that connect any two vertices
in the graph are known as the Edges. More formally, a Graph comprises a set of Vertices
(V) and a set of Edges (E). The Graph is denoted by G(V, E).

Components of a Graph

1. Vertices: Vertices are the basic units of the graph used to represent real-life objects,
persons, or entities. Sometimes, vertices are also known as Nodes.

2. Edges: Edges are drawn or used to connect two vertices of the graph. Sometimes, edges
are also known as Arcs.

The following figure shows a graphical representation of a Graph:

fig.4.3 Graphical Representation of a Graph

In the above figure, the Vertices/Nodes are denoted with Colored Circles, and the Edges are
denoted with the lines connecting the nodes.

Applications of the Graphs

Graphs are used to solve many real-life problems. Graphs are utilized to represent the
networks. These networks may include telephone or circuit networks or paths in a city.

For example, we could use Graphs to design a transportation network model where the
vertices display the facilities that send or receive the products, and the edges represent roads
or paths connecting them. The following is a pictorial representation of the same:

fig.4.3 Pictorial Representation of Transportation Network

Graphs are also utilized in different Social Media Platforms like LinkedIn, Facebook, Twitter,
and more. For example, Platforms like Facebook use Graphs to store the data of their users
where every person is indicated with a vertex, and each of them is a structure containing
information like Person ID, Name, Gender, Address, etc.

Types of Graphs

The Graphs can be categorized into two types:

1. Undirected Graph

2. Directed Graph

Undirected Graph: A Graph with edges that do not have a direction is termed an Undirected
Graph. The edges of this graph imply a two-way relationship in which each edge can be
traversed in both directions. The following figure displays a simple undirected graph with four
nodes and five edges.

fig.4.3 A Simple Undirected Graph

Directed Graph: A Graph with edges with direction is termed a Directed Graph. The edges of
this graph imply a one-way relationship in which each edge can only be traversed in a single
direction. The following figure displays a simple directed graph with four nodes and five edges.

fig.4.3 A Simple Directed Graph

The absolute length, position, or orientation of the edges in a graph illustration
characteristically does not have meaning. In other words, we can visualize the same graph in
different ways by rearranging the vertices or distorting the edges if the underlying structure of
the graph does not alter.

Weighted Graphs:

A Graph is said to be Weighted if each edge is assigned a 'weight'. The weight of an
edge can denote distance, time, or anything that models the 'connection' between the pair of
vertices it connects.
For instance, we can observe a blue number next to each edge in the following figure of
the Weighted Graph. This number is utilized to signify the weight of the corresponding edge.

fig.4.3 An Example of a Weighted Graph

Undirected Graph

A graph with only undirected edges is said to be undirected graph.

Directed Graph

A graph with only directed edges is said to be directed graph.

Mixed Graph

A graph with undirected and directed edges is said to be mixed graph.

End vertices or Endpoints

The two vertices joined by an edge are called the end vertices (or endpoints) of the edge.

Origin

If an edge is directed, its first endpoint is said to be the origin of the edge.

Destination

If an edge is directed, its first endpoint is said to be its origin and the other endpoint is said
to be the destination of the edge.
Adjacent

If there is an edge between vertices A and B then both A and B are said to be adjacent. In
other words, two vertices A and B are said to be adjacent if there is an edge whose end
vertices are A and B.
Incident

An edge is said to be incident on a vertex if the vertex is one of the endpoints of that edge.

Outgoing Edge

A directed edge is said to be an outgoing edge of its origin vertex.

Incoming Edge

A directed edge is said to be an incoming edge of its destination vertex.

Degree

Total number of edges connected to a vertex is said to be degree of that vertex.

Indegree

Total number of incoming edges connected to a vertex is said to be the indegree of that vertex.

Outdegree

Total number of outgoing edges connected to a vertex is said to be outdegree of that vertex.

Parallel Edges or Multiple Edges

If two undirected edges have the same end vertices, or two directed edges have the same
origin and the same destination, such edges are called parallel edges or multiple edges.
Self-Loop

An edge (undirected or directed) is a self-loop if its two endpoints coincide.

Simple Graph

A graph is said to be simple if there are no parallel and self-loop edges.

Path

A path is a sequence of alternating vertices and edges that starts at a vertex and ends at a
vertex such that each edge is incident to its predecessor and successor vertex.

4.1.1 REPRESENTATIONS OF GRAPH

Graph data structure is represented using following representations...

 Adjacency Matrix

 Adjacency List

Adjacency Matrix

Adjacency Matrix is a 2D array of size V x V, where V is the number of vertices in a graph.
Let the 2D array be adj[][]; a slot adj[i][j] = 1 indicates that there is an edge from vertex i to
vertex j. The adjacency matrix for an undirected graph is always symmetric. An adjacency
matrix is also used to represent weighted graphs: if adj[i][j] = w, then there is an edge from
vertex i to vertex j with weight w.
For example, consider the following undirected graph representation...

Directed graph representation...

Adjacency List

An array of linked lists is used. The size of the array is equal to the number of vertices. Let the
array be array[]. An entry array[i] represents the linked list of vertices adjacent to the ith vertex.
This representation can also be used to represent a weighted graph: the weights of edges can be
stored in the nodes of the linked lists. Following is the adjacency list representation of the above graph.

4.1.2 TYPES OF GRAPHS

There are various types of graphs depending upon the number of vertices, number of
edges, interconnectivity, and their overall structure. We will discuss a few important
types of graphs in this chapter.
Null Graph

A graph having no edges is called a Null Graph.

Example:

In the above graph, there are three vertices named ‘a’, ‘b’, and ‘c’, but there are no edges
among them. Hence it is a Null Graph.
Trivial Graph

A graph with only one vertex is called a Trivial Graph.

Example:

In the graph shown above, there is only one vertex ‘a’ and no edges. Hence it is a Trivial
Graph.

Non-Directed Graph

A non-directed graph contains edges but the edges are not directed ones.

Example:

In this graph, ‘a’, ‘b’, ‘c’, ‘d’, ‘e’, ‘f’, ‘g’ are the vertices, and ‘ab’, ‘bc’, ‘cd’, ‘da’, ‘ag’, ‘gf’, ‘ef’
are the edges of the graph. Since it is a non-directed graph, the edges ‘ab’ and ‘ba’ are the same,
and the other edges are treated likewise.

Directed Graph

In a directed graph, each edge has a direction.

Example:

In the above graph, we have seven vertices ‘a’, ‘b’, ‘c’, ‘d’, ‘e’, ‘f’, and ‘g’, and eight
edges ‘ab’, ‘cb’, ‘dc’, ‘ad’, ‘ec’, ‘fe’, ‘gf’, and ‘ga’. As it is a directed graph, each edge
bears an arrow mark that shows its direction. Note that in a directed graph, ‘ab’ is
different from ‘ba’.

Simple Graph

A graph with no loops and no parallel edges is called a simple graph.

 The maximum number of edges possible in a simple graph with ‘n’ vertices
is nC2, where nC2 = n(n – 1)/2.
 The number of simple graphs possible with ‘n’ vertices is 2^(nC2) = 2^(n(n–1)/2).

Example:

In the following graph, there are 3 vertices with 3 edges, which is the maximum excluding
parallel edges and loops. This can be proved by using the above formulae.

The maximum number of edges with n = 3 vertices:

nC2 = n(n–1)/2
    = 3(3–1)/2
    = 6/2
    = 3 edges

The maximum number of simple graphs with n = 3 vertices:

2^(nC2) = 2^(n(n–1)/2)
        = 2^(3(3–1)/2)
        = 2^3
        = 8

These 8 graphs are as shown below:

Connected Graph

A graph G is said to be connected if there exists a path between every pair of vertices.
In a connected graph, every vertex has at least one edge, so each vertex can be reached
from some other vertex.
Example:

In the following graph, every pair of vertices is joined by a path. Hence it is a
connected graph.

Disconnected Graph

A graph G is disconnected if at least one pair of its vertices has no path between them.

Example 1:

The following graph is an example of a Disconnected Graph, where there are two
components, one with ‘a’, ‘b’, ‘c’, ‘d’ vertices and another with ‘e’, ’f’, ‘g’, ‘h’ vertices.

The two components are independent and not connected to each other. Hence it is called
disconnected graph.
Example 2:

In this example, there are two independent components, a-b-f-e and c-d, which are not
connected to each other. Hence this is a disconnected graph.

Regular Graph

A graph G is said to be regular, if all its vertices have the same degree. In a graph, if the
degree of each vertex is ‘k’, then the graph is called a ‘k-regular graph’.
Example:

In the following graphs, all the vertices have the same degree, so these graphs are
called regular graphs.

In both the graphs, all the vertices have degree 2. They are called 2-Regular Graphs.

Complete Graph

A simple graph of ‘n’ vertices in which each vertex is connected to all the other vertices is
called a complete graph, and it is denoted by ‘Kn’.
In other words, if every vertex is connected to all other vertices in a graph, then it is called a
complete graph.

Example:

In the following graphs, each vertex in the graph is connected with all the remaining
vertices in the graph except itself.

In graph I,
a b c
a Not Connected Connected Connected
b Connected Not Connected Connected
c Connected Connected Not Connected

In graph II,

p q r s
p Not Connected Connected Connected Connected
q Connected Not Connected Connected Connected
r Connected Connected Not Connected Connected
s Connected Connected Connected Not Connected

Cycle Graph

A simple graph with ‘n’ vertices (n >= 3) and ‘n’ edges is called a cycle graph if all its
edges form a cycle of length ‘n’.
If the degree of each vertex in the graph is two, then it is called a Cycle Graph.

Notation − Cn

Example:

Take a look at the following graphs:

 Graph I has 3 vertices with 3 edges which is forming a cycle ‘ab-bc-ca’.

 Graph II has 4 vertices with 4 edges which is forming a cycle ‘pq-qs-sr-rp’.

 Graph III has 5 vertices with 5 edges which is forming a cycle ‘ik-km-ml-lj-ji’.

Hence all the given graphs are cycle graphs.

Wheel Graph

A wheel graph Wn is obtained from a cycle graph Cn-1 by adding a new vertex. That new vertex
is called the hub, and it is connected to all the vertices of Cn-1.
Notation − Wn

No. of edges in Wn = No. of edges from hub to all other vertices + No. of edges from all other nodes
in cycle graph without a hub.
= (n–1) + (n–1)

= 2(n–1)

Example:

Take a look at the following graphs. They are all wheel graphs.

Graph I is obtained from C3 by adding a vertex named ‘d’ at the middle. It is denoted as W4.
Number of edges in W4 = 2(n-1) = 2(3) = 6

Graph II is obtained from C4 by adding a vertex named ‘t’ at the middle. It is denoted as W5.
Number of edges in W5 = 2(n-1) = 2(4) = 8

Graph III is obtained from C6 by adding a vertex named ‘o’ at the middle. It is denoted as W7.
Number of edges in W7 = 2(n–1) = 2(6) = 12

Cyclic Graph

A graph with at least one cycle is called a cyclic graph.

Example:

In the above example graph, we have two cycles a-b-c-d-a and c-f-g-e-c. Hence it is called
a cyclic graph.

Acyclic Graph

A graph with no cycles is called an acyclic graph.

Example:

In the above example graph, we do not have any cycles. Hence it is an acyclic graph.

Bipartite Graph

A simple graph G = (V, E) with vertex partition V = {V1, V2} is called a bipartite graph if
every edge of E joins a vertex in V1 to a vertex in V2.
In general, a bipartite graph has two sets of vertices, say V1 and V2, and every edge
drawn must connect a vertex in set V1 to a vertex in set V2.

Example:

In this graph, you can observe two sets of vertices − V1 and V2. Here, two edges named ‘ae’
and ‘bd’ are connecting the vertices of two sets V1 and V2.

Complete Bipartite Graph

A bipartite graph ‘G’, G = (V, E), with partition V = {V1, V2} is said to be a complete
bipartite graph if every vertex in V1 is connected to every vertex of V2.
In general, a complete bipartite graph connects each vertex from set V1 to each vertex
from set V2.
Example:

The following graph is a complete bipartite graph because it has edges connecting each
vertex from set V1 to each vertex from set V2.

If |V1| = m and |V2| = n, then the complete bipartite graph is denoted by Km, n.

 Km,n has (m+n) vertices and (mn) edges.

 Km,n is a regular graph if m=n.

In general, a complete bipartite graph is not a complete graph. Km,n is a complete graph if
m=n=1.
The maximum number of edges in a bipartite graph with n vertices is ⌊n²/4⌋.

If n = 10, K5,5 = ⌊n²/4⌋ = ⌊10²/4⌋ = 25

Similarly, K6,4 = 24

K7,3 = 21

K8,2 = 16

K9,1 = 9

If n = 9, K5,4 = ⌊n²/4⌋ = ⌊9²/4⌋ = 20

Similarly, K6,3 = 18

K7,2 = 14

K8,1 = 8

‘G’ is a bipartite graph if ‘G’ has no cycles of odd length. A special case of bipartite graph
is a star graph.
Star Graph

A complete bipartite graph of the form K1, n-1 is a star graph with n-vertices. A star graph
is a complete bipartite graph if a single vertex belongs to one set and all the remaining vertices
belong to the other set.

Example:

In the above graphs, out of ‘n’ vertices, all the ‘n–1’ vertices are connected to a single vertex.

Hence it is in the form of K1, n-1 which are star graphs.

Complement of a Graph

Let 'G−' be a simple graph with the same vertices as ‘G’, in which an edge {U, V} is
present if and only if that edge is not present in G. That is, two vertices are adjacent in 'G−' if
the two vertices are not adjacent in G.

If the edges that exist in graph I are absent in another graph II, and if both graph I and

graph II are combined together to form a complete graph, then graph I and graph II are called

complements of each other.

Example:

In the following example, graph-I has two edges ‘cd’ and ‘bd’. Its complement graph-II has
four edges.

Note that the edges in graph-I are not present in graph-II and vice versa. Hence, the
combination of both the graphs gives a complete graph of ‘n’ vertices.

Note − A combination of two complementary graphs gives a complete graph. If ‘G’ is any
simple graph, then
|E(G)| + |E('G-')| = |E(Kn)|, where n = number of vertices in the graph.

Example:

Let ‘G’ be a simple graph with nine vertices and twelve edges. Find the number of
edges in 'G-'.
We have |E(G)| + |E('G-')| = |E(Kn)|

12 + |E('G-')| = 9C2 = 9(9-1)/2 = 36

|E('G-')| = 36 - 12 = 24

‘G’ is a simple graph with 40 edges and its complement 'G−' has 38 edges. Find the
number of vertices in the graph G or 'G−'.
Let the number of vertices in the graph be ‘n’. We have |E(G)| + |E('G-')| = |E(Kn)|

40 + 38 = n(n-1)/2

156 = n(n-1)

13(12) = n(n-1)

n = 13

4.2 GRAPH TRAVERSALS

Graph traversal is a technique used for searching a vertex in a graph. It also decides the
order in which vertices are visited during the search.

A graph traversal chooses the edges to be used in the search process without creating loops;

that is, it visits all vertices of the graph without getting into a looping path.

There are two graph traversal techniques and they are as follows...

1. DFS (Depth First Search)

2. BFS (Breadth First Search)

4.2.1 DEPTH FIRST SEARCH

The DFS algorithm is a recursive algorithm that uses the idea of backtracking. It

involves exhaustive searches of all the nodes by going ahead, if possible, else by

backtracking.

Here, the word backtrack means that when you are moving forward and there are no
more nodes along the current path, you move backwards on the same path to find nodes to
traverse. All the nodes will be visited on the current path till all the unvisited nodes have been
traversed, after which the next path will be selected.

This recursive nature of DFS can be implemented using stacks. The basic idea is as
follows: pick a starting node and push all its adjacent nodes into a stack. Pop a node from
the stack to select the next node to visit and push all its adjacent nodes into the stack.
Repeat this process until the stack is empty.

However, ensure that the nodes that are visited are marked. This will prevent you
from visiting the same node more than once. If you do not mark the nodes that are visited
and you visit the same node more than once, you may end up in an infinite loop.

Pseudocode

DFS-iterative (G, s):            //Where G is graph and s is source vertex
    let S be stack
    S.push( s )                  //Inserting s in stack
    mark s as visited
    while ( S is not empty ):
        //Pop a vertex from stack to visit next
        v = S.top( )
        S.pop( )
        //Push all the neighbours of v in stack that are not visited
        for all neighbours w of v in Graph G:
            if w is not visited:
                S.push( w )
                mark w as visited

DFS-recursive (G, s):
    mark s as visited
    for all neighbours w of s in Graph G:
        if w is not visited:
            DFS-recursive(G, w)
Example:

Step 1: Initialize the stack.

Step 2: Mark S as visited and put it onto the stack. Explore any unvisited adjacent node
from S. We have three such nodes and we can pick any of them. For this example, we shall
take the nodes in alphabetical order.

Step 3: Mark A as visited and put it onto the stack. Explore any unvisited adjacent node
from A. Both S and D are adjacent to A, but we are concerned with unvisited nodes only.

Step 4: Visit D, mark it as visited, and put it onto the stack. Here, we have nodes B and C,
which are adjacent to D and both unvisited. However, we shall again choose in alphabetical
order.

Step 5: We choose B, mark it as visited and put it onto the stack. Here B does not have any
unvisited adjacent node. So, we pop B from the stack.

Step 6: We check the stack top to return to the previous node and check if it has any
unvisited nodes. Here, we find D to be on the top of the stack.

Step 7: The only unvisited adjacent node of D is now C. So we visit C, mark it as visited
and put it onto the stack.

As C does not have any unvisited adjacent node, we keep popping the stack until we find
a node that has an unvisited adjacent node. In this case, there's none and we keep popping
until the stack is empty.
Implementation in C

#include <stdio.h>
#include <stdlib.h>

/* ADJACENCY MATRIX */
int source, V, E, visited[20], G[20][20];

void DFS(int i)
{
    int j;
    visited[i] = 1;
    printf(" %d->", i + 1);
    for (j = 0; j < V; j++)
    {
        if (G[i][j] == 1 && visited[j] == 0)
            DFS(j);
    }
}

int main()
{
    int i, j, v1, v2;
    printf("\t\t\tGraphs\n");
    printf("Enter the no of edges: ");
    scanf("%d", &E);
    printf("Enter the no of vertices: ");
    scanf("%d", &V);
    /* initialise the adjacency matrix */
    for (i = 0; i < V; i++)
    {
        for (j = 0; j < V; j++)
            G[i][j] = 0;
    }
    /* creating edges */
    for (i = 0; i < E; i++)
    {
        printf("Enter the edges (format: V1 V2) : ");
        scanf("%d%d", &v1, &v2);
        G[v1 - 1][v2 - 1] = 1;
    }
    /* print the adjacency matrix */
    for (i = 0; i < V; i++)
    {
        for (j = 0; j < V; j++)
            printf(" %d ", G[i][j]);
        printf("\n");
    }
    printf("Enter the source: ");
    scanf("%d", &source);
    DFS(source - 1);
    return 0;
}
4.2.2 BREADTH FIRST SEARCH

There are many ways to traverse graphs. BFS is the most commonly used approach.

BFS is a traversing algorithm where we start traversing from a selected node (source or
starting node) and traverse the graph layer-wise, thus exploring the neighbour nodes (nodes
which are directly connected to the source node). We must then move towards the next-level
neighbour nodes. As the name BFS suggests:

1. First move horizontally and visit all the nodes of the current layer

2. Move to the next layer

To make this process easy, use a queue to store the nodes, and mark each node as

'visited' until all its neighbours (vertices that are directly connected to it) are marked.

The queue follows the First In First Out (FIFO) method; therefore, the neighbours of a

node will be visited in the order in which they were inserted into the queue, i.e. the

node that was inserted first will be visited first, and so on.

Pseudocode

BFS (G, s)                   //Where G is the graph and s is the source node
    let Q be queue
    Q.enqueue( s )           //Inserting s in queue until all its neighbour vertices are marked
    mark s as visited
    while ( Q is not empty ):
        //Removing that vertex from queue, whose neighbours will be visited now
        v = Q.dequeue( )
        //Processing all the neighbours of v
        for all neighbours w of v in Graph G:
            if w is not visited:
                Q.enqueue( w )   //Stores w in Q to further visit its neighbours
                mark w as visited

Example:

Step 1: Initialize the queue.

Step 2: We start by visiting S (the starting node), and mark it as visited.

Step 3: We then see an unvisited adjacent node from S. In this example, we have three
such nodes, but alphabetically we choose A, mark it as visited and enqueue it.

Step 4: Next, the unvisited adjacent node from S is B. We mark it as visited and enqueue it.

Step 5: Next, the unvisited adjacent node from S is C. We mark it as visited and enqueue it.

Step 6: Now, S is left with no unvisited adjacent nodes. So, we dequeue and find A.

Step 7: From A we have D as an unvisited adjacent node. We mark it as visited and
enqueue it.

At this stage, we are left with no unmarked (unvisited) nodes. But as per the algorithm
we keep on dequeuing in order to get all unvisited nodes. When the queue gets emptied, the
program is over.

Implementation in C

#include <stdio.h>

int G[20][20], q[20], visited[20], n, front = 1, rear = 0;

void bfs(int v)
{
    int i;
    visited[v] = 1;
    /* enqueue every unvisited neighbour of v */
    for (i = 1; i <= n; i++)
        if (G[v][i] && !visited[i])
            q[++rear] = i;
    if (front <= rear)
        bfs(q[front++]);
}

int main()
{
    int v, i, j;
    printf("\n Enter the number of vertices: ");
    scanf("%d", &n);
    for (i = 1; i <= n; i++)
    {
        q[i] = 0;
        visited[i] = 0;
    }
    printf("\n Enter graph data in matrix form:\n");
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            scanf("%d", &G[i][j]);
    printf("\n Enter the starting vertex: ");
    scanf("%d", &v);
    bfs(v);
    printf("\n The nodes which are reachable are:\n");
    for (i = 1; i <= n; i++)
        if (visited[i])
            printf("%d\t", i);
        else
            printf("\n %d is not reachable", i);
    return 0;
}

4.3 TOPOLOGICAL SORTING

Definition

Topological sorting for a Directed Acyclic Graph (DAG) is a linear ordering of vertices such
that for every directed edge uv, vertex u comes before v in the ordering. Topological sorting for
a graph is not possible if the graph is not a DAG.
For example, a topological sorting of the following graph is “5 4 2 3 1 0”. There can be more
than one topological sorting for a graph. For example, another topological sorting of the
following graph is “4 5 2 3 1 0”. The first vertex in topological sorting is always a vertex with in-
degree as 0 (a vertex with no in-coming edges).

Topological Sort Example

Consider the following graph “G”. Find the topological ordering of this graph.

Step 1: Write in-degree of all vertices:

Vertex in-degree
1 0
2 1

3 1

4 2

Step 2: Write the vertex which has in-degree 0 (zero) in solution. Here vertex 1 has in-degree
0.

So, the solution so far is: 1 ->

Step 3: Decrease the in-degree count of the vertices adjacent to the vertex most recently

added to the solution. Here vertex 1 was just added, and vertices 2 and 3 are adjacent
to it, so decrease their in-degree counts and update the table.
Updated result is:

Vertex in-degree
1 Already added to solution

2 0

3 0

4 2

Step 4: Repeat step 2: write the vertices which have in-degree 0 in the solution. Here we
can observe that two vertices (2 and 3) have in-degree 0 (zero). Add any one of them to the
solution.
Note that one reader may add vertex 2 to the solution while another adds vertex 3; the
topological ordering of a graph is not unique. Here we add vertex 3.
Solution is: 1 -> 3 ->

Step 5: Again decrease the in-degree count of the vertices adjacent to vertex 3. Updated
result is:
Vertex in-degree
1 Already added to solution

2 0

3 Already added to solution

4 1

Now add vertex 2 to the solution because it is the only remaining vertex with in-degree 0.

Solution is: 1->3->2->

Updated result is:

Vertex in-degree

1 Already added to solution

2 Already added to solution

3 Already added to solution

4 0

Finally add 4 to solution.

Final solution is: 1 -> 3 -> 2 -> 4

Program for Topological Sort in C


#include <stdio.h>

int main()
{
    int i, j, k, n, a[10][10], indeg[10], flag[10], count = 0;
    printf("Enter the no of vertices:\n");
    scanf("%d", &n);
    printf("Enter the adjacency matrix:\n");
    for (i = 0; i < n; i++) {
        printf("Enter row %d\n", i + 1);
        for (j = 0; j < n; j++)
            scanf("%d", &a[i][j]);
    }
    for (i = 0; i < n; i++) {
        indeg[i] = 0;
        flag[i] = 0;
    }
    /* in-degree of vertex i = sum of column i of the adjacency matrix */
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            indeg[i] = indeg[i] + a[j][i];
    printf("\nThe topological order is:");
    while (count < n) {
        for (k = 0; k < n; k++) {
            if ((indeg[k] == 0) && (flag[k] == 0)) {
                printf("%d ", (k + 1));
                flag[k] = 1;
                count++;
                /* remove k's outgoing edges from the graph */
                for (i = 0; i < n; i++)
                    if (a[k][i] == 1)
                        indeg[i]--;
            }
        }
    }
    return 0;
}

Output

Enter the no of vertices:
4
Enter the adjacency matrix:
Enter row 1
0 1 1 0
Enter row 2
0 0 0 1
Enter row 3
0 0 0 1
Enter row 4
0 0 0 0
The topological order is: 1 2 3 4

4.3.1 DIJKSTRA'S ALGORITHM

Introduction:

Ever wondered how Google Maps finds the shortest and fastest route between two
places?
Well, the answer is Dijkstra's Algorithm. Dijkstra's Algorithm is a graph
algorithm that finds the shortest path from a source vertex to all other vertices in the graph
(single-source shortest path). It is a type of greedy algorithm that works only on weighted
graphs with positive weights. The time complexity of Dijkstra's Algorithm is O(V²) with the
adjacency-matrix representation of the graph. This time complexity can be reduced
to O((V + E) log V) with an adjacency-list representation, where V is
the number of vertices and E is the number of edges in the graph.

History of Dijkstra's Algorithm:

Dijkstra's Algorithm was designed and published by Dr. Edsger W. Dijkstra, a Dutch
Computer Scientist, Software Engineer, Programmer, Science Essayist, and Systems
Scientist.

During an Interview with Philip L. Frana for the Communications of the ACM journal in
the year 2001, Dr. Edsger W. Dijkstra revealed:
"What is the shortest way to travel from Rotterdam to Groningen, in general: from given
city to given city? It is the algorithm for the shortest path, which I designed in about twenty
minutes. One morning I was shopping in Amsterdam with my young fiancée, and tired, we sat
down on the café terrace to drink a cup of coffee and I was just thinking about whether I could
do this, and I then designed the algorithm for the shortest path. As I said, it was a twenty-
minute invention. In fact, it was published in '59, three years later. The publication is still
readable, it is, in fact, quite nice. One of the reasons that it is so nice was that I designed it
without pencil and paper. I learned later that one of the advantages of designing without pencil
and paper is that you are almost forced to avoid all avoidable complexities. Eventually, that
algorithm became to my great amazement, one of the cornerstones of my fame."

Dijkstra thought about the shortest path problem while working as a programmer at the
Mathematical Centre in Amsterdam in 1956 to illustrate the capabilities of a new computer
known as ARMAC. His goal was to select both a problem and a solution (produced by the
computer) that people with no computer background could comprehend. He developed the
shortest path algorithm and later executed it for ARMAC for a slightly simplified transportation
map of 64 cities in the Netherlands (64 cities, so 6 bits would be sufficient to encode the city
number). A year later, he came across another issue from hardware engineers operating the
next computer of the institute: Minimize the amount of wire required to connect the pins on the
machine's back panel. As a solution, he re-discovered the algorithm called Prim's minimal
spanning tree algorithm and published it in the year 1959.

Fundamentals of Dijkstra's Algorithm

The following are the basic concepts of Dijkstra's Algorithm:

1. Dijkstra's Algorithm begins at the node we select (the source node), and it examines the
graph to find the shortest path between that node and all the other nodes in the graph.
2. The Algorithm keeps records of the presently acknowledged shortest distance from each
node to the source node, and it updates these values if it finds any shorter path.

3. Once the Algorithm has retrieved the shortest path between the source and another
node, that node is marked as 'visited' and included in the path.

4. The procedure continues until all the nodes in the graph have been included in the path.
In this manner, we have a path connecting the source node to all other nodes, following
the shortest possible path to reach each node.

Understanding the Working of Dijkstra's Algorithm:

A graph and a source vertex are the inputs to Dijkstra's Algorithm. The algorithm is
based on the greedy approach and thus makes the locally optimal choice (the local minimum, in
this case) at each step.

Each Vertex in this Algorithm will have two properties defined for it:

1. Visited Property

2. Path Property

Let us understand these properties in brief.

Visited Property:

1. The 'visited' property signifies whether or not the node has been visited.

2. We are using this property so that we do not revisit any node.

3. A node is marked visited only when the shortest path has been found.

Path Property:

1. The 'path' property stores the value of the current minimum path to the node.

2. The current minimum path implies the shortest way we have reached this node till now.
3. This property is revised when any neighbor of the node is visited.

4. This property is significant because it will store the final answer for each node.

Initially, we mark all the vertices, or nodes, unvisited as they have yet to be visited. The path to all
the nodes is also set to infinity apart from the source node. Moreover, the path to the source node is
set to zero (0).

We then select the source node and mark it as visited. After that, we access all the
neighboring nodes of the source node and perform relaxation on every node. Relaxation is the
process of lowering the cost of reaching a node with the help of another node.

In the process of relaxation, the path of a node is revised to the minimum of two values: the
node's current path, and the sum of the path to the previous node plus the edge from the previous
node to the current node.

Let us suppose that p[n] is the value of the current path for node n, p[m] is the value of the path
up to the previously visited node m, and w is the weight of the edge between the current node
and previously visited one (edge weight between n and m).

In the mathematical sense, relaxation can be exemplified as:

p[n] = minimum(p[n], p[m] + w)

We then mark an unvisited node with the least path as visited in every subsequent step and
update its neighbor's paths.

We repeat this procedure until all the nodes in the graph are marked visited.

Whenever we add a node to the visited set, the path to all its neighboring nodes also changes
accordingly.

If any node is left unreachable (disconnected component), its path remains 'infinity'. In case
the source itself is a separate component, then the path to all other nodes remains 'infinity'.

Understanding Dijkstra's Algorithm with an Example:

The following is the step that we will follow to implement Dijkstra's Algorithm:

Step 1: First, we will mark the source node with a current distance of 0 and set the rest of the
nodes to INFINITY.

Step 2: We will then set the unvisited node with the smallest current distance as the current
node, suppose X.

Step 3: For each neighbor N of the current node X: We will then add the current distance of X
with the weight of the edge joining X-N. If it is smaller than the current distance of N, set it as
the new current distance of N.

Step 4: We will then mark the current node X as visited.

Step 5: We will repeat the process from 'Step 2' if there is any node unvisited left in the graph.

Let us now understand the implementation of the algorithm with the help of an example:

Fig.4.5.1 The Given Graph

1. We will use the above graph as the input, with node A as the source.
2. First, we will mark all the nodes as unvisited.
3. We will set the path to 0 at node A and INFINITY for all the other nodes.
4. We will now mark source node A as visited and access its neighboring nodes.
Note: We have only accessed the neighboring nodes, not visited them.
5. We will now update the path to node B by 4 with the help of relaxation because the path
to node A is 0 and the path from node A to B is 4, and the minimum((0 + 4),
INFINITY) is 4.
6. We will also update the path to node C by 5 with the help of relaxation because the path
to node A is 0 and the path from node A to C is 5, and the minimum((0 + 5),
INFINITY) is 5. Both the neighbors of node A are now relaxed; therefore, we can move
ahead.
7. We will now select the next unvisited node with the least path and visit it. Hence, we will
visit node B and perform relaxation on its unvisited neighbors. After performing
relaxation, the path to node C will remain 5, whereas the path to node E will become 11
(4 + 7), and the path to node D will become 13 (4 + 9).
8. Next we will visit node C (path 5, the least among the unvisited nodes) and relax its
unvisited neighbor E: the path to node E becomes minimum(11, 5 + 3) = 8.
9. We will now visit node E and perform relaxation on its neighboring nodes B, D, and F.
Since only node F is unvisited, it will be relaxed. Thus, the path to node B will remain as
it is, i.e., 4, the path to node D will also remain 13, and the path to node F will become 14
(8 + 6).
10. Now we will visit node D, and only node F will be relaxed. However, the path to
node F will remain unchanged, i.e., 14.
11. Since only node F is remaining, we will visit it but not perform any relaxation as all its
neighboring nodes are already visited.
12. Once all the nodes of the graph are visited, the program will end.

Hence, the final paths we concluded are:

1. A = 0
2. B = 4 (A -> B)
3. C = 5 (A -> C)
4. D = 4 + 9 = 13 (A -> B -> D)
5. E = 5 + 3 = 8 (A -> C -> E)

6. F = 5 + 3 + 6 = 14 (A -> C -> E -> F)

Pseudocode for Dijkstra's Algorithm

We will now understand a pseudocode for Dijkstra's Algorithm.

 We have to maintain a record of the path distance of every node. Therefore, we can store
the path distance of each node in an array of size n, where n is the total number of nodes.
 Moreover, we want to retrieve the shortest path along with the length of that path. To
overcome this problem, we will map each node to the node that last updated its path
length.
 Once the algorithm is complete, we can backtrack the destination node to the source node
to retrieve the path.
 We can use a minimum Priority Queue to retrieve the node with the least path distance in
an efficient way.

Let us now implement a pseudocode of the above illustration:

Pseudocode:

function Dijkstra_Algorithm(Graph, source_node)

// iterating through the nodes in Graph and set their distances to INFINITY

for each node N in Graph:

distance[N] = INFINITY

previous[N] = NULL

If N != source_node, add N to Priority Queue G

// setting the distance of the source node of the Graph to 0

distance[source_node] = 0

// iterating until the Priority Queue G is not empty

while G is NOT empty:

// selecting a node Q having the least distance and marking it as visited

Q = node in G with the least distance[]

mark Q visited

// iterating through the unvisited neighboring nodes of the node Q and performing relaxation accordingly

for each unvisited neighbor node N of Q:

temporary_distance = distance[Q] + distance_between(Q, N)

// if the temporary distance is less than the given distance of the path to the node, updating the resultant distance with the minimum value

if temporary_distance < distance[N]

distance[N] := temporary_distance

previous[N] := Q

// returning the final list of distance


return distance[], previous[]

Code for Dijkstra's Algorithm in C

The following is the implementation of Dijkstra's Algorithm in the C Programming Language:

// Implementation of Dijkstra's Algorithm in C


// importing the standard I/O header file
#include <stdio.h>
// defining some constants
#define INF 9999
#define MAX 10
// prototyping of the function
void DijkstraAlgorithm(int Graph[MAX][MAX], int size, int start);
// defining the function for Dijkstra's Algorithm
void DijkstraAlgorithm(int Graph[MAX][MAX], int size, int start) {
int cost[MAX][MAX], distance[MAX], previous[MAX];

int visited_nodes[MAX], counter, minimum_distance, next_node, i, j;
// creating cost matrix
for (i = 0; i < size; i++)
for (j = 0; j < size; j++)
if (Graph[i][j] == 0)
cost[i][j] = INF;
else
cost[i][j] = Graph[i][j];
for (i = 0; i < size; i++) {
distance[i] = cost[start][i];
previous[i] = start;
visited_nodes[i] = 0;
}
distance[start] = 0;
visited_nodes[start] = 1;
counter = 1;
while (counter < size - 1)
{ minimum_distance = INF;
for (i = 0; i < size; i++)
if (distance[i] < minimum_distance && !visited_nodes[i])
{ minimum_distance = distance[i];
next_node = i;
}
visited_nodes[next_node] = 1;
for (i = 0; i < size; i++)
if (!visited_nodes[i])
if (minimum_distance + cost[next_node][i] < distance[i])
{ distance[i] = minimum_distance + cost[next_node][i];
previous[i] = next_node;
}
counter++;

}
// printing the distance

268
for (i = 0; i < size; i++)
if (i != start) {
printf("\nDistance from the Source Node to %d: %d", i, distance[i]);
}
}
// main function
int main() {
// defining variables
int Graph[MAX][MAX], i, j, size, source;
// declaring the size of the matrix
size = 7;

// declaring the nodes of graph


Graph[0][0] = 0;
Graph[0][1] = 4;
Graph[0][2] = 0;
Graph[0][3] = 0;
Graph[0][4] = 0;
Graph[0][5] = 8;
Graph[0][6] = 0;
Graph[1][0] = 4;
Graph[1][1] = 0;
Graph[1][2] = 8;
Graph[1][3] = 0;
Graph[1][4] = 0;
Graph[1][5] = 11;
Graph[1][6] = 0;

Graph[2][0] = 0;
Graph[2][1] = 8;
Graph[2][2] = 0;
Graph[2][3] = 7;
Graph[2][4] = 0;
Graph[2][5] = 4;

Graph[2][6] = 0;

Graph[3][0] = 0;
Graph[3][1] = 0;
Graph[3][2] = 7;
Graph[3][3] = 0;
Graph[3][4] = 9;
Graph[3][5] = 14;
Graph[3][6] = 0;

Graph[4][0] = 0;
Graph[4][1] = 0;
Graph[4][2] = 0;
Graph[4][3] = 9;
Graph[4][4] = 0;
Graph[4][5] = 10;
Graph[4][6] = 2;

Graph[5][0] = 0;
Graph[5][1] = 0;
Graph[5][2] = 4;
Graph[5][3] = 14;
Graph[5][4] = 10;
Graph[5][5] = 0;
Graph[5][6] = 2;
Graph[6][0] = 0;
Graph[6][1] = 0;
Graph[6][2] = 0;
Graph[6][3] = 0;
Graph[6][4] = 2;
Graph[6][5] = 0;
Graph[6][6] = 0;

source = 0;
// calling the DijkstraAlgorithm() function by passing the Graph, the number of nodes and the
source node
DijkstraAlgorithm(Graph, size, source);

return 0;
}

Output

Distance from the Source Node to 1: 4


Distance from the Source Node to 2: 12
Distance from the Source Node to 3: 19
Distance from the Source Node to 4: 12
Distance from the Source Node to 5: 8
Distance from the Source Node to 6: 10

4.3.2 MINIMUM SPANNING TREE

Before knowing about the minimum spanning tree, we should know about the spanning tree.

To understand the concept of spanning tree, consider the below graph:

The above graph can be represented as G(V, E), where 'V' is the set of vertices and 'E' is
the set of edges. The spanning tree of the above graph would be represented as G`(V`, E`).
In this case, V` = V means that the number of vertices in the spanning tree would be the
same as the number of vertices in the graph, but the number of edges would be different. The
edges of the spanning tree are a subset of the edges of the original graph. Therefore, the
relationship between the edges can be written as:

E` ⊆ E

The number of edges in the spanning tree is:

|E`| = |V| - 1

The following conditions hold for a spanning tree:

 The number of vertices in the spanning tree would be the same as the number of
vertices in the original graph.
V` = V
 The number of edges in the spanning tree would be equal to the number of vertices
minus 1.
|E`| = |V| - 1
 The spanning tree should not contain any cycle.
 The spanning tree should not be disconnected.

Note: A graph can have more than one spanning tree.

Consider the below graph:

The above graph contains 5 vertices. As we know, the vertices in the spanning tree would be
the same as in the graph; therefore, V` is equal to 5. The number of edges in the spanning tree
would be equal to (5 - 1), i.e., 4. The following are the possible spanning trees:

What is a minimum spanning tree?

The minimum spanning tree is a spanning tree whose sum of the edges is minimum. Consider
the below graph that contains the edge weight:

The following are the spanning trees that we can make from the above graph.

 The first spanning tree is a tree in which we have removed the edge between the
vertices 1
and 5 shown as below:
The sum of the edges of the above tree is (1 + 4 + 5 + 2) = 12

 The second spanning tree is a tree in which we have removed the edge between the

vertices 1 and 2 shown as below:
The sum of the edges of the above tree is (3 + 2 + 5 + 4) = 14

 The third spanning tree is a tree in which we have removed the edge between the
vertices 2
and 3 shown as below:
The sum of the edges of the above tree is (1 + 3 + 2 + 5) = 11

 The fourth spanning tree is a tree in which we have removed the edge between the
vertices

3 and 4 shown as below:


The sum of the edges of the above tree is (1 + 3 + 2 + 4) = 10. This cost of 10 is the minimum
among the four spanning trees, so it is the minimum spanning tree.

 General properties of minimum spanning tree:


 If we remove any edge from the spanning tree, then it becomes disconnected.
Therefore, we cannot remove any edge from the spanning tree.
 If we add an edge to the spanning tree then it creates a loop. Therefore, we cannot
add any edge to the spanning tree.
 If each edge in a graph has a distinct weight, then there exists only a single and
unique minimum spanning tree. If the edge weights are not distinct, then there can be
more than one minimum spanning tree.


 A complete undirected graph with n vertices can have n^(n-2) spanning trees.
 Every connected and undirected graph contains at least one spanning tree.
 The disconnected graph does not have any spanning tree.
 In a complete graph, we can remove maximum (e-n+1) edges to construct a
spanning tree.

Let's understand the last property through an example.

Consider the complete graph which is given below:

The number of spanning trees that can be made from the above complete graph equals
n^(n-2) = 4^(4-2) = 16.

Therefore, 16 spanning trees can be created from the above graph.

The maximum number of edges that can be removed to construct a spanning tree equals
e - n + 1 = 6 - 4 + 1 = 3.

4.3.3 PRIM'S ALGORITHM

In this section, we will discuss Prim's algorithm. Along with the algorithm, we will also
see the complexity, working, example, and implementation of Prim's algorithm.

Before starting the main topic, we should discuss the basic and important terms such as
spanning tree and minimum spanning tree.

Spanning tree - A spanning tree is the subgraph of an undirected connected graph.

Minimum Spanning tree - Minimum spanning tree can be defined as the spanning tree in
which the sum of the weights of the edge is minimum. The weight of the spanning tree is the
sum of the weights given to the edges of the spanning tree.

Now, let's start the main topic.

Prim's Algorithm is a greedy algorithm that is used to find the minimum spanning tree of a
graph. Prim's algorithm finds the subset of edges that includes every vertex of the graph such
that the sum of the weights of the edges can be minimized.

Prim's algorithm starts with a single node and explores all the adjacent nodes with all the
connecting edges at every step. At each step, the edge with the minimal weight that causes
no cycle in the graph is selected.

How does the prim's algorithm work?

Prim's algorithm is a greedy algorithm that starts from one vertex and continues to add the
edges with the smallest weight until the goal is reached. The steps to implement Prim's
algorithm are given as follows -

 First, we have to initialize an MST with the randomly chosen vertex.

 Now, we have to find all the edges that connect the tree in the above step with the
new vertices. From the edges found, select the minimum edge and add it to the tree.

 Repeat step 2 until the minimum spanning tree is formed.


 The applications of Prim's algorithm are -

 Prim's algorithm can be used in network designing.

 It can be used in designing cable and telephone networks.

 It can also be used to lay down electrical wiring cables.

Example :

Now, let's see the working of Prim's algorithm using an example. It will be easier to
understand Prim's algorithm using an example.

Suppose, a weighted graph is -

Step 1 - First, we have to choose a vertex from the above graph. Let's choose B.

Step 2 - Now, we have to choose and add the shortest edge from vertex B. There are two
edges from vertex B that are B to C with weight 10 and edge B to D with weight 4. Among the
edges, the edge BD has the minimum weight. So, add it to the MST.

Step 3 - Now, again, choose the edge with the minimum weight among all the remaining
edges. In this case, the edges DE and CD are such edges. Select the edge DE and add it to
the MST.

Step 4 - Now, select the edge CD, and add it to the MST.

Step 5 - Now, choose the edge CA. Here, we cannot select the edge CE as it would create a
cycle in the graph. So, choose the edge CA and add it to the MST.

So, the graph produced in step 5 is the minimum spanning tree of the given graph. The cost of
the MST is given below -

Cost of MST = 4 + 2 + 1 + 3 = 10 units.

Algorithm:

1. Step 1: Select a starting vertex


2. Step 2: Repeat Steps 3 and 4 until there are fringe vertices
3. Step 3: Select an edge 'e' connecting the tree vertex and fringe vertex that has minimum
weight
4. Step 4: Add the selected edge and the vertex to the minimum spanning tree T
5. [END OF LOOP]
6. Step 5: EXIT

Complexity :

Now, let's see the time complexity of Prim's algorithm. The running time of Prim's algorithm
depends upon the data structure used for the graph and for ordering the edges. The table
below shows some choices -

 Time Complexity

Data structure used for the minimum edge weight      Time Complexity

Adjacency matrix, linear searching                   O(|V|^2)
Adjacency list and binary heap                       O(|E| log |V|)
Adjacency list and Fibonacci heap                    O(|E| + |V| log |V|)

Prim's algorithm can be simply implemented by using the adjacency matrix or adjacency list
graph representation, and adding the edge with the minimum weight requires linearly
searching an array of weights. This requires O(|V|^2) running time. It can be improved further
by using a heap to find the minimum weight edges in the inner loop of the algorithm.

The time complexity of Prim's algorithm is O(E log V) or O(V log V), where E is the no. of
edges, and V is the no. of vertices.

Implementation:

Now, let's see the implementation of Prim's algorithm.

Program: Write a program to implement Prim's algorithm in C language.

#include <stdio.h>
#include <limits.h>
#define vertices 5 /*Define the number of vertices in the graph*/
/* create minimum_key() method for finding the vertex that has the minimum key-value
and that is not yet added to the MST */
int minimum_key(int k[], int mst[])
{
int minimum = INT_MAX, min,i;
/*iterate over all vertices to find the vertex with minimum key-value*/
for (i = 0; i < vertices; i++)
if (mst[i] == 0 && k[i] < minimum )
minimum = k[i], min = i;
return min;
}
/* create prim() method for constructing and printing the MST.
The g[vertices][vertices] is an adjacency matrix that defines the graph for MST.*/
void prim(int g[vertices][vertices])
{
/* create array of size equal to total number of vertices for storing the MST*/
int parent[vertices];

/* create k[vertices] array for selecting an edge having minimum weight*/
int k[vertices];
int mst[vertices];
int i, count,edge,v; /*Here 'v' is the vertex*/
for (i = 0; i < vertices; i++)
{
k[i] = INT_MAX;
mst[i] = 0;
}
k[0] = 0; /* select it as the first vertex */
parent[0] = -1; /* set first value of parent[] array to -1 to make it root of MST*/
for (count = 0; count < vertices-1; count++)
{
/*select the vertex having minimum key and that is not added in the MST yet from the set
of vertices*/
edge = minimum_key(k, mst);
mst[edge] = 1;
for (v = 0; v < vertices; v++)
{
if (g[edge][v] && mst[v] == 0 && g[edge][v] < k[v])
{
parent[v] = edge, k[v] = g[edge][v];
}
}
}
/*Print the constructed Minimum spanning tree*/
printf("\n Edge \t Weight\n");
for (i = 1; i < vertices; i++)
printf(" %d <-> %d %d \n", parent[i], i, g[i][parent[i]]);
}
int main()
{
int g[vertices][vertices] = {{0, 0, 3, 0, 0},
{0, 0, 10, 4, 0},

{3, 10, 0, 2, 6},
{0, 4, 2, 0, 1},
{0, 0, 6, 1, 0},
};
prim(g);
return 0;
}

Output

 Edge 	 Weight
 3 <-> 1 4
 0 <-> 2 3
 2 <-> 3 2
 3 <-> 4 1

4.3.4 KRUSKAL'S ALGORITHM

In this section, we will discuss Kruskal's algorithm. Here, we will also see the complexity,
working, example, and implementation of Kruskal's algorithm.

But before moving directly towards the algorithm, we should first understand the basic terms
such as spanning tree and minimum spanning tree.

Spanning tree - A spanning tree is the subgraph of an undirected connected graph.

Minimum Spanning tree - Minimum spanning tree can be defined as the spanning tree in
which the sum of the weights of the edge is minimum. The weight of the spanning tree is the
sum of the weights given to the edges of the spanning tree.

Kruskal's Algorithm is used to find the minimum spanning tree for a connected weighted
graph. The main target of the algorithm is to find the subset of edges by using which we can
traverse every vertex of the graph. It follows the greedy approach that finds an optimum
solution at every stage instead of focusing on a global optimum.

How does Kruskal's algorithm work?

In Kruskal's algorithm, we start from edges with the lowest weight and keep adding the edges
until the goal is reached. The steps to implement Kruskal's algorithm are listed as follows -

 First, sort all the edges from low weight to high.

 Now, take the edge with the lowest weight and add it to the spanning tree. If the edge to be
added creates a cycle, then reject the edge.

 Continue to add the edges until we reach all vertices, and a minimum spanning tree is
created.
 The applications of Kruskal's algorithm are -

 Kruskal's algorithm can be used to lay out electrical wiring among cities.

 It can be used to lay down LAN connections.

Example :

Now, let's see the working of Kruskal's algorithm using an example. It will be easier to
understand Kruskal's algorithm using an example.

Suppose a weighted graph is -

The weight of the edges of the above graph is given in the below table -

Edge AB AC AD AE BC CD DE

Weight 1 7 10 5 3 4 2

Now, sort the edges given above in the ascending order of their weights.

Edge AB DE BC CD AE AC AD

Weight 1 2 3 4 5 7 10

Now, let's start constructing the minimum spanning tree.

Step 1 - First, add the edge AB with weight 1 to the MST.

Step 2 - Add the edge DE with weight 2 to the MST as it is not creating the cycle.

Step 3 - Add the edge BC with weight 3 to the MST, as it is not creating any cycle or loop.

Step 4 - Now, pick the edge CD with weight 4 to the MST, as it is not forming the cycle.

Step 5 - After that, pick the edge AE with weight 5. Including this edge will create a cycle, so
discard it.

Step 6 - Pick the edge AC with weight 7. Including this edge will create the cycle, so discard it.

Step 7 - Pick the edge AD with weight 10. Including this edge will also create the cycle, so
discard it.

So, the final minimum spanning tree obtained from the given weighted graph by using
Kruskal's algorithm is -

The cost of the MST is = AB + DE + BC + CD = 1 + 2 + 3 + 4 = 10.

Now, the number of edges in the above tree equals the number of vertices minus 1. So, the
algorithm stops here.

Algorithm:
1. Step 1: Create a forest F in such a way that every vertex of the graph is a separate tree.
2. Step 2: Create a set E that contains all the edges of the graph.

3. Step 3: Repeat Steps 4 and 5 while E is NOT EMPTY and F is not spanning
4. Step 4: Remove an edge from E with minimum weight
5. Step 5: IF the edge obtained in Step 4 connects two different trees, then add it to the forest F
   (for combining two trees into one tree).
   ELSE
   Discard the edge
6. Step 6: END

Complexity :

Now, let's see the time complexity of Kruskal's algorithm.

 Time Complexity
The time complexity of Kruskal's algorithm is O(E log E) or O(E log V), where E is the no. of edges,
and V is the no. of vertices.

Implementation :

Now, let's see the implementation of Kruskal's algorithm.

Program: Write a program to implement Kruskal's algorithm in C++.

#include <iostream>
#include <algorithm>
using namespace std;

const int MAX = 1e4 + 5;
int id[MAX], nodes, edges;
pair<long long, pair<int, int> > p[MAX];

void init()
{
    for (int i = 0; i < MAX; ++i)
        id[i] = i;
}

int root(int x)
{
    while (id[x] != x)
    {
        id[x] = id[id[x]];
        x = id[x];
    }
    return x;
}

void union1(int x, int y)
{
    int p = root(x);
    int q = root(y);
    id[p] = id[q];
}

long long kruskal(pair<long long, pair<int, int> > p[])
{
    int x, y;
    long long cost, minimumCost = 0;
    for (int i = 0; i < edges; ++i)
    {
        x = p[i].second.first;
        y = p[i].second.second;
        cost = p[i].first;
        if (root(x) != root(y))
        {
            minimumCost += cost;
            union1(x, y);
        }
    }
    return minimumCost;
}

int main()
{
    int x, y;
    long long weight, cost, minimumCost;
    init();
    cout << "Enter Nodes and edges";
    cin >> nodes >> edges;
    for (int i = 0; i < edges; ++i)
    {
        cout << "Enter the value of X, Y and edges";
        cin >> x >> y >> weight;
        p[i] = make_pair(weight, make_pair(x, y));
    }
    sort(p, p + edges);
    minimumCost = kruskal(p);
    cout << "Minimum cost is " << minimumCost << endl;
    return 0;
}

Output

4.3.5 APPLICATIONS OF GRAPHS

Graphs are nothing but connected nodes (vertices), so many real-life applications involving
networks, routing, finding relationships, paths, and so on use graphs.
Since they are powerful abstractions, graphs can be very important in modeling data. In
fact, many problems can be reduced to known graph problems. Here we outline just some of
the many applications of graphs.
1. Social network graphs: to tweet or not to tweet. Graphs that represent who knows whom,
who communicates with whom, who influences whom or other relationships in social
structures. An example is the twitter graph of who follows whom. These can be used to
determine how information flows, how topics become hot, how communities develop, or
even who might be a good match for who, or is that whom.
2. Transportation networks. In road networks vertices are intersections and edges are
the road segments between them, and for public transportation networks vertices are
stops and edges are the links between them. Such networks are used by many map
programs such as Google maps, Bing maps and now Apple IOS 6 maps (well perhaps
without the public transport) to find the best routes between locations. They are also used
for studying traffic patterns, traffic light timings, and many aspects of transportation.

3. Utility graphs. The power grid, the Internet, and the water network are all examples of
graphs where vertices represent connection points, and edges the wires or pipes
between them. Analyzing properties of these graphs is very important in understanding
the reliability of such utilities under failure or attack, or in minimizing the costs to build
infrastructure that matches required demands.

4. Document link graphs. The best known example is the link graph of the web, where each
web page is a vertex, and each hyperlink a directed edge. Link graphs are used, for
example, to analyze relevance of web pages, the best sources of information, and good
link sites.
5. Protein-protein interactions graphs. Vertices represent proteins and edges represent
interactions between them that carry out some biological function in the cell. These
graphs can be used, for example, to study molecular pathways—chains of molecular
interactions in a cellular process. Humans have over 120K proteins with millions of
interactions among them.
6. Network packet traffic graphs. Vertices are IP (Internet protocol) addresses and edges
are the packets that flow between them. Such graphs are used for analyzing network
security, studying the spread of worms, and tracking criminal or non-criminal activity.
7. Scene graphs. In graphics and computer games scene graphs represent the logical or
spacial relationships between objects in a scene. Such graphs are very important in the
computer games industry.
8. Finite element meshes. In engineering many simulations of physical systems, such as the
flow of air over a car or airplane wing, the spread of earthquakes through the ground, or
the structural vibrations of a building, involve partitioning space into discrete elements.
The elements along with the connections between adjacent elements forms a graph that
is called a finite element mesh.
9. Robot planning. Vertices represent states the robot can be in and the edges the possible
transitions between the states. This requires approximating continuous motion as a
sequence of discrete steps. Such graph plans are used, for example, in planning paths
for autonomous vehicles.
10. Neural networks. Vertices represent neurons and edges the synapses between them.
Neural networks are used to understand how our brain works and how connections
change when we learn. The human brain has about 10^11 neurons and close to 10^15
synapses.

QUESTION BANK
PART A
1. What is the primary purpose of a B-Tree?

a) To store data in a linear format


b) To maintain sorted data and allow searches, sequential access, insertions, and
deletions in logarithmic time
c) To handle only small datasets
d) To facilitate binary search only

2. In a B+ Tree, where are all the values stored?

a) In the internal nodes only


b) In both internal and leaf nodes
c) In the leaf nodes only
d) In the root node only

3. A graph is defined as a set of ______.

a) Nodes
b) Nodes and edges
c) Edges only
d) Trees

4. Which of the following is a common way to represent a graph?

a) Adjacency list
b) Adjacency matrix
c) Incidence matrix
d) All of the above

5. A graph with no cycles is called a ______.

a) Cyclic graph
b) Acyclic graph
c) Directed graph

d) Weighted graph

6. Breadth-first traversal of a graph is also known as ______.

a) Level-order traversal
b) Depth-order traversal

c) In-order traversal
d) Pre-order traversal
7. Depth-first traversal of a graph can be implemented using a ______.

a) Queue
b) Stack
c) Priority queue
d) Deque
8. Topological sort is applicable only to ______.

a) Directed graphs
b) Undirected graphs
c) Cyclic graphs
d) Connected graphs
9. Dijkstra’s algorithm is used to find the ______ in a graph.

a) Minimum spanning tree


b) Shortest path from a single source
c) Longest path
d) All-pairs shortest path
10. Which of the following algorithms can be used to find a minimum spanning tree?

a) Dijkstra’s algorithm
b) Prim’s algorithm
c) Kruskal’s algorithm
d) Both b and c
11. Prim’s algorithm is a ______.

a) Greedy algorithm

b) Dynamic programming algorithm
c) Divide and conquer algorithm
d) Backtracking algorithm
12. Kruskal’s algorithm works by ______.

a) Expanding the shortest paths


b) Adding edges in increasing order of weight
c) Adding edges in decreasing order of weight
d) Using a stack to explore nodes
13. A complete graph is a graph in which ______.

a) Every node is connected to exactly one other node

b) Every node is connected to all other nodes


c) There are no edges
d) Every node is a root node

14. Breadth-first traversal is particularly useful for finding ______ in an unweighted
graph.

a) The longest path


b) The shortest path
c) Cycles
d) All paths

16. In depth-first traversal, if we reach a dead end, we ______.

a) Continue exploring nodes


b) Backtrack to the previous node
c) Stop the traversal
d) Mark the node as the end

16. A B-tree of order m is a balanced search tree in which every node has at most m
children and at least ______ children (except for the root).

Answer: ⌈m/2⌉

17. In a B+ tree, all records are stored at the ______ level, while internal nodes store
only keys.

Answer: leaf

18. A graph G is defined as a pair (V, E), where V is a set of vertices and E is a set
of ______.

Answer: edges

19. The ______ list and ______ matrix are two common ways to represent a graph.

Answer: adjacency

20. A graph with no cycles is called an ______ graph.

Answer: acyclic

21. In breadth-first traversal of a graph, vertices are visited level by level using a ______
data structure.

Answer: queue

PART B

1. What is a graph and its types?


2. Consider the graph given below. Create the adjacency matrix of it

3. Given the following adjacency matrix, draw the weighted graph.


4. Give the purpose of Dijikstra’s algorithm.
5. Differentiate cyclic and acyclic graph
6. Classify strongly connected and weakly connected graph.

7. How to find all articulation points in a given graph?
8. Define the length of the graph.
9. Define minimum spanning tree. Give an example
10. State the principle of Topological sorting.
11. Explain procedure for Depth first search algorithm.
12. Prove that the number of edges in a complete graph of n vertices is n(n-1)/2
13. In a complete graph with n vertices, show that the number of spanning trees is at least
2^(n-1) - 1
14. Give two applications of graphs.
15. Define minimum spanning tree. Give an example. (Nov/Dec 2023)
16. Show the ways in which a graph can be represented with an example. (Nov/Dec
2023)
17. How do you represent a graph in data structure? (Nov/Dec 2024)

PART C

1. Describe in detail about the following representations of a graph. i. Adjacency Matrix ii.
Adjacency List
2. i. Consider the given directed acyclic graph D. Sort the nodes of D by applying
topological sort on D.

ii. Consider the graph given below and show its adjacency list in the memory

3. Differentiate depth-first search and breadth-first search traversal of a graph with
suitable examples.
4. i. Illustrate Kruskal’s algorithm to find the minimum spanning tree of a graph.
ii. Trace the algorithm for the following graph

5. Consider five cities: (1) New Delhi, (2) Mumbai, (3) Chennai, (4) Bangalore, and (5)
Kolkata, and a list of flights that connect these cities as shown in the following table.
Use the given information to construct a graph.

6. i. Given a rooted tree, one desires to find the shortest path from the root to a given

node v. Which algorithm would one use to find this shortest path?

ii. Write a program to determine whether there is at least one path from the source to

the destination.

7. Apply Kruskal’s algorithm and determine minimum spanning tree with an example.

(Nov/Dec 2023)

8. Illustrate an algorithm to perform a breadth first traversal with an example. (Nov/Dec
2023)

9. Apply Dijkstra’s algorithm for the following graph to find shortest path from source

node A. (Nov/Dec 2024)

10. Make use of the Kruskal’s algorithm to find the minimum cost spanning tree. Trace the
algorithm for the following graph. (Nov/Dec 2024)

UNIT V
ALGORITHM ANALYSIS
Time and space complexity – Asymptotic Notations and its properties – Best case, Worst
case and average case analysis – Recurrence relation: substitution method –
Mathematical analysis for Recursive and Non-recursive algorithms

5.1 TIME AND SPACE COMPLEXITY

Generally, there is always more than one way to solve a problem in computer science
with different algorithms. Therefore, a method is needed to compare the solutions in order
to judge which one is more optimal. The method must be:
• Independent of the machine and its configuration on which the algorithm is running.
• Shows a direct correlation with the number of inputs.
• Can distinguish two algorithms clearly without ambiguity.
There are two such methods used,
• Time complexity and
• space complexity

Time Complexity:
The time complexity of an algorithm quantifies the amount of time taken by an algorithm
to run as a function of the length of the input.
Note that the time to run is a function of the length of the input and not the actual
execution time of the machine on which the algorithm is running.

Definition:

1. A valid algorithm takes a finite amount of time for execution. The time required
by the algorithm to solve a given problem is called the time complexity of the
algorithm.
2. Time complexity is very useful measure in algorithm analysis.
3. It is the time needed for the completion of an algorithm. To estimate the time
complexity, we need to consider the cost of each fundamental instruction and
the number of times the instruction is executed.

Example 1: Addition of two scalar variables.

Algorithm ADD SCALAR(A, B)


//Description: Perform arithmetic addition of two numbers
//Input: Two scalar variables A and B
//Output: variable C, which holds the addition of A and B
C <- A + B
return C

The addition of two scalar numbers requires one addition operation, so the time
complexity of this algorithm is constant: T(n) = O(1).
In order to calculate the time complexity of an algorithm, it is assumed that a constant
time c is taken to execute one operation, and then the total operations for an input of
length N are calculated. Consider an example to understand the process of calculation:
Suppose a problem is to find whether a pair (X, Y) exists in an array, A
of N elements whose sum is Z. The simplest idea is to consider every pair and check if it
satisfies the given condition or not.

The pseudo-code is as follows:

int a[n];
for(int i = 0; i < n; i++)
    cin >> a[i];

for(int i = 0; i < n; i++)
    for(int j = 0; j < n; j++)
        if(i != j && a[i] + a[j] == z)
            return true;
return false;

Below is the implementation of the above approach:

// C++ program for the above approach

#include <bits/stdc++.h>

using namespace std;

// Function to find a pair in the given

// array whose sum is equal to z

bool findPair(int a[], int n, int z)

{
// Iterate through all the pairs

for (int i = 0; i < n; i++)

for (int j = 0; j < n; j++)

// Check if the sum of the pair

// (a[i], a[j]) is equal to z

if (i != j && a[i] + a[j] == z)

return true;

return false;
}
// Driver Code

int main()

{
// Given Input

int a[] = { 1, -2, 1, 0, 5 };

int z = 0;

int n = sizeof(a) / sizeof(a[0]);

// Function Call

if (findPair(a, n, z))

cout << "True";

else

cout << "False";

return 0;
}

Output:

False

Assuming that each of the operations in the computer takes approximately constant
time, let it be c. The number of lines of code executed actually depends on the value of Z.

During analysis of the algorithm, mostly the worst-case scenario is considered, i.e., when
there is no pair of elements with sum equal to Z. In the worst case,

• N*c operations are required for input.


• The outer loop i loop runs N times.
• For each i, the inner loop j loop runs N times.

So total execution time is N*c + N*N*c + c. Now ignore the lower order terms since the
lower order terms are relatively insignificant for large input, therefore only the highest order term
is taken (without constant) which is N*N in this case. Different notations are used to describe
the limiting behavior of a function, but since the worst case is taken so big-O notation will be
used to represent the time complexity.
Hence, the time complexity is O(N^2) for the above algorithm. Note that the time
complexity is solely based on the number of elements in array A, i.e. the input length, so if
the length of the array increases, the time of execution will also increase.
Order of growth is how the time of execution depends on the length of the input. In
the above example, it is clearly evident that the time of execution quadratically depends on the
length of the array. Order of growth will help to compute the running time with ease.
Another Example:
Let’s calculate the time complexity of the below algorithm:

count = 0

for (int i = N; i > 0; i /= 2)

for (int j = 0; j < i; j++)

count++;

This is a tricky case. At first look, it seems like the complexity is O(N * log N): N for the j
loop and log(N) for the i loop. But that is wrong. Let's see why.

Think about how many times count++ will run.

• When i = N, it will run N times.


• When i = N / 2, it will run N / 2 times.
• When i = N / 4, it will run N / 4 times.
• And so on.

The total number of times count++ will run is N + N/2 + N/4 + ... + 1 < 2 * N. So the time
complexity will be O(N).
Some general time complexities are listed below with the input range for which they are
accepted in competitive programming:

Input Length    Worst Accepted Time Complexity    Usual type of solutions

10-12           O(N!)                             Recursion and backtracking
15-18           O(2^N * N)                        Recursion, backtracking, and bit manipulation
18-22           O(2^N * N)                        Recursion, backtracking, and bit manipulation
30-40           O(2^(N/2) * N)                    Meet in the middle, Divide and Conquer
100             O(N^4)                            Dynamic programming, Constructive
400             O(N^3)                            Dynamic programming, Constructive
2K              O(N^2 * log N)                    Dynamic programming, Binary Search, Sorting, Divide and Conquer
10K             O(N^2)                            Dynamic programming, Graph, Trees, Constructive
1M              O(N * log N)                      Sorting, Binary Search, Divide and Conquer
100M            O(N), O(log N), O(1)              Constructive, Mathematical, Greedy Algorithms

5.1.2 SPACE COMPLEXITY:

Definition :

Problem-solving using a computer requires memory to hold temporary data or the final
result while the program is in execution. The amount of memory required by an algorithm to
solve a given problem is called the space complexity of the algorithm.

The space complexity of an algorithm quantifies the amount of space taken by an
algorithm to run as a function of the length of the input. Consider an example: suppose the
problem is to find the frequency of array elements.

It is the amount of memory needed for the completion of an algorithm.


To estimate the memory requirement we need to focus on two parts:

(1) A fixed part: It is independent of the input size. It includes memory for instructions
(code), constants, variables, etc.
(2) A variable part: It is dependent on the input size. It includes memory for recursion
stack, referenced variables, etc.

Example : Addition of two scalar variables


Algorithm ADD SCALAR(A, B)
// Description: Perform arithmetic addition of two numbers
// Input: Two scalar variables A and B
// Output: Variable C, which holds the addition of A and B
C ← A + B
return C

The addition of two scalar numbers requires one extra memory location to hold the result. Thus
the space complexity of this algorithm is constant, hence S(n) = O(1).
The pseudo-code for the frequency problem mentioned above is as follows:

int freq[n];
int a[n];
for (int i = 0; i < n; i++)
{
    cin >> a[i];
    freq[a[i]]++;
}

Below is the implementation of the above approach:

// C++ program for the above approach

#include <bits/stdc++.h>

using namespace std;

// Function to count frequencies of array items

void countFreq(int arr[], int n)

{
unordered_map<int, int> freq;

// Traverse through array elements and

// count frequencies

for (int i = 0; i < n; i++)

freq[arr[i]]++;

// Traverse through map and print frequencies

for (auto x : freq)

cout << x.first << " " << x.second << endl;

} // Driver Code

int main()

{ // Given array

int arr[] = { 10, 20, 20, 10, 10, 20, 5, 20 };

int n = sizeof(arr) / sizeof(arr[0]);

// Function Call

countFreq(arr, n);

return 0;}

Output:

5 1

20 4

10 3

Here two arrays of length N, and variable i are used in the algorithm so, the total
space used is N * c + N * c + 1 * c = 2N * c + c, where c is a unit space taken. For many inputs,
constant c is insignificant, and it can be said that the space complexity is O(N).
There is also auxiliary space, which is different from space complexity. The main
difference is where space complexity quantifies the total space used by the algorithm, auxiliary
space quantifies the extra space that is used in the algorithm apart from the given input. In the
above example, the auxiliary space is the space used by the freq[] array because that is not
part of the given input. So total auxiliary space is N * c + c which is O(N) only.

5.2 ASYMPTOTIC NOTATIONS

The main idea of asymptotic analysis is to have a measure of the efficiency of


algorithms that don’t depend on machine-specific constants and don’t require algorithms to be

implemented and time taken by programs to be compared. Asymptotic notations are
mathematical tools to represent the time complexity of algorithms for asymptotic analysis.

Asymptotic Notations:
• Asymptotic Notations are mathematical tools used to analyze the performance of
algorithms by understanding how their efficiency changes as the input size grows.
• These notations provide a concise way to express the behavior of an algorithm’s time
or space complexity as the input size approaches infinity.
• Rather than comparing algorithms directly, asymptotic analysis focuses on
understanding the relative growth rates of algorithms’ complexities.
• It enables comparisons of algorithms’ efficiency by abstracting away machine-specific
constants and implementation details, focusing instead on fundamental trends.
• Asymptotic analysis allows for the comparison of algorithms’ space and time
complexities by examining their performance characteristics as the input size varies.
• By using asymptotic notations, such as Big O, Big Omega, and Big Theta, we can
categorize algorithms based on their worst-case, best-case, or average-case time or
space complexities, providing valuable insights into their efficiency.

There are mainly three asymptotic notations:

1. Big-O Notation (O-notation)


2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
1. Theta Notation (Θ-Notation):
Theta notation encloses the function from above and below. Since it represents the
upper and the lower bound of the running time of an algorithm, it is used for analyzing
the average-case complexity of an algorithm.
Theta (Average Case): add the running times for each possible input combination and take
the average. Let g and f be functions from the set of natural numbers to itself. The function f
is said to be Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such that
c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.

fig.5.2 Theta Notation (Θ-Notation)

Mathematical Representation of Theta notation:


Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤ c2
* g(n) for all n ≥ n0}

Note: Θ(g) is a set

The above expression can be described as if f(n) is theta of g(n), then the value f(n)
is always between c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0). The definition of theta
also requires that f(n) must be non-negative for values of n greater than n0.
The execution time serves as both a lower and upper bound on the algorithm’s
time complexity.

It serves as both the upper and the lower bound for a given input value.

A simple way to get the Theta notation of an expression is to drop low-order terms
and ignore leading constants. For example, consider the expression 3n^3 + 6n^2 + 6000 =
Θ(n^3). Dropping the lower-order terms is always fine because there will always be a
number n0 after which n^3 has higher values than n^2, irrespective of the constants
involved. For a given function g(n), we denote by Θ(g(n)) the following set of functions.

Examples :

{ 100 , log (2000) , 10^4 } belongs to Θ(1)

{ (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Θ(n)

{ (n^2+n) , (2n^2) , (n^2+log(n))} belongs to Θ( n2)

Note: Θ provides exact bounds.

2. Big-O Notation (O-notation):


Big-O notation represents the upper bound of the running time of an algorithm.

Therefore, it gives the worst-case complexity of an algorithm.

• It is the most widely used notation for Asymptotic analysis.

• It specifies the upper bound of a function.

• The maximum time required by an algorithm or the worst-case time complexity.

• It returns the highest possible output value(big-O) for a given input.

• Big-Oh (Worst Case): it is defined as the condition that allows an algorithm to complete

statement execution in the longest amount of time possible.

If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exists a positive

constant c and n0 such that 0 ≤ f(n) ≤ c * g(n) for all n ≥ n0.

The execution time serves as an upper bound on the algorithm’s time complexity.

fig.5.3 Big-O Notation (O-notation)

Mathematical Representation of Big-O Notation:

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c * g(n)
for all n ≥ n0 }. For example, consider the case of Insertion Sort. It takes linear time in the best
case and quadratic time in the worst case. We can safely say that the time complexity of
Insertion Sort is O(n^2).

Note: O(n^2) also covers linear time.

If we use Θ notation to represent the time complexity of Insertion sort, we have to use two
statements for best and worst cases:

• The worst-case time complexity of Insertion Sort is Θ(n^2).

• The best case time complexity of Insertion Sort is Θ(n).

• The Big-O notation is useful when we only have an upper bound on the time complexity

of an algorithm. Many times we can easily find an upper bound simply by looking at the

algorithm.

Examples :

{ 100, log(2000), 10^4 } belongs to O(1)

U { (n/4), (2n+3), (n/100 + log(n)) } belongs to O(n)

U { (n^2+n), (2n^2), (n^2+log(n)) } belongs to O(n^2)

Note: Here, U represents union; we can write it in this manner because O provides exact
or upper bounds.

3. Omega Notation (Ω-Notation):

• Omega notation represents the lower bound of the running time of an algorithm. Thus,
it provides the best case complexity of an algorithm.
• The execution time serves as a lower bound on the algorithm’s time complexity.
• It is defined as the condition that allows an algorithm to complete statement
execution in the shortest amount of time.
Let g and f be functions from the set of natural numbers to itself. The function f is said to be
Ω(g) if there is a constant c > 0 and a natural number n0 such that c * g(n) ≤ f(n) for all n ≥ n0.

fig.5.4 Omega Notation (Ω-Notation)
Mathematical Representation of Omega notation :

Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all
n ≥ n0 }
Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort
can be written as Ω(n), but this is not very useful information about Insertion Sort, as we are
generally interested in the worst case and sometimes in the average case.
Examples :

{ (n^2+n), (2n^2), (n^2+log(n)) } belongs to Ω(n^2)

U { (n/4), (2n+3), (n/100 + log(n)) } belongs to Ω(n)

U { 100, log(2000), 10^4 } belongs to Ω(1)

Note: Here, U represents union; we can write it in this manner because Ω provides exact
or lower bounds.

5.3 RECURRENCE RELATION:

A recurrence is an equation or inequality that describes a function in terms of its


values on smaller inputs. To solve a Recurrence Relation means to obtain a function defined on
the natural numbers that satisfy the recurrence.

For example, the worst-case running time T(n) of the MERGE SORT procedure is
described by the recurrence:

T(n) = Θ(1)               if n = 1
T(n) = 2T(n/2) + Θ(n)     if n > 1

There are four methods for solving recurrences:

1. Substitution Method
2. Iteration Method
3. Recursion Tree Method
4. Master Method

5.3.1 Substitution Method:

The Substitution Method consists of two main steps:

• Guess the solution.

• Use mathematical induction to find the boundary condition and show that the
guess is correct.

Example 1: Solve the equation by the Substitution Method.

T(n) = T(n/2) + 1

We have to show that it is asymptotically bounded by O(log n).

Solution:

For T(n) = O(log n)

We have to show that for some constant c,

1. T(n) ≤ c log n.

Put this in the given recurrence equation:

T(n) ≤ c log(n/2) + 1
     = c log n − c log 2 + 1
     ≤ c log n        for c ≥ 1

Thus T(n) = O(log n).

5.3.2 Mathematical Analysis for Recursive and Non-recursive Algorithms

Mathematical analysis of algorithms involves determining the time and space complexity,
which reflects how the algorithm's performance scales with the size of the input. This can be
done for both recursive and non-recursive algorithms, though the methods differ somewhat
due to their nature.

Recursive Algorithms:

Recursive algorithms call themselves with smaller subproblems, and the time complexity is
often expressed using recurrence relations. A recurrence relation is an equation that
recursively defines a sequence: each term is defined as a function of the preceding terms.

Example: Merge Sort

Merge sort is a classic example of a recursive algorithm. It divides the array into two halves,
recursively sorts each half, and then merges the sorted halves.

1. Divide: Split the array into two halves.


2. Conquer: Recursively sort the two halves.
3. Combine: Merge the two sorted halves to produce the sorted array.

The recurrence relation for merge sort is:

T(n) = 2T(n/2) + O(n)

Here, T(n) is the time complexity for sorting an array of size n, 2T(n/2) accounts for the
recursive sorting of the two halves, and O(n) is the time taken to merge the two halves.

Using the Master Theorem for divide-and-conquer recurrences, we can solve this as follows:

T(n) = O(n log n)

Non-recursive Algorithms:

Non-recursive algorithms do not call themselves; instead, they typically use loops to iterate
over the input. The time complexity is analyzed by counting the number of operations in the
worst-case scenario.

Example: Linear Search

Linear search is a simple example of a non-recursive algorithm. It searches for an element in

an array by checking each element sequentially until the desired element is found or the array

is exhausted.

1. Initialization: Start from the first element.

2. Iteration: Check each element in the array.

3. Termination: Stop if the element is found or if the end of the array is reached.

The time complexity for linear search is:

T(n) = O(n)

This is because in the worst case, the algorithm needs to check each of the n elements
once.

Detailed Examples:

Example of Recursive Algorithm: Fibonacci Numbers

The Fibonacci sequence is another classic example. It is defined recursively as:

F(n) = 0                  if n = 0
F(n) = 1                  if n = 1
F(n) = F(n−1) + F(n−2)    if n > 1

The recurrence relation for the running time is:

T(n) = T(n−1) + T(n−2) + O(1)

This recurrence is more complex to solve directly, but it is known that the time complexity of
this naive recursive approach is exponential:

T(n) = O(2^n)

An improved approach is to use dynamic programming or memoization, which reduces the
time complexity to:

T(n) = O(n)

Example of Non-recursive Algorithm: Bubble Sort

Bubble sort is a non-recursive sorting algorithm that repeatedly steps through the list,
compares adjacent elements, and swaps them if they are in the wrong order.

Pseudocode for Bubble Sort:

bubbleSort(array):
    n = length(array)
    for i from 0 to n-1:
        for j from 0 to n-i-1:
            if array[j] > array[j+1]:
                swap(array[j], array[j+1])

The time complexity for bubble sort in the worst case is:

T(n) = O(n^2)

This is because there are two nested loops, each running n times in the worst case.

Recursive Algorithms: analyzed using recurrence relations. Examples include merge sort
and the naive recursive calculation of Fibonacci numbers.

Non-recursive Algorithms: analyzed by counting operations in loops. Examples include
linear search and bubble sort.

Understanding these methods of analysis helps in designing efficient algorithms and predicting
their performance for large inputs.

QUESTION BANK

PART A

1. What is the time complexity of an algorithm that runs in constant time regardless of
the input size?
a. O(n)
b. O(log n)
c. O(1)
d. O(n^2)
2. Which notation represents the upper bound of an algorithm’s running time?
a. Big-O
b. Big-Omega
c. Big-Theta
d. Little-o
3. For which algorithm is the best case time complexity O(n)?
a. Bubble Sort
b. Insertion Sort
c. Quick Sort
d. Merge Sort
4. What is the worst case time complexity of Quick Sort?
a. O(n log n)
b. O(n^2)
c. O(n)
d. O(log n)
5. Which of the following best describes the average case time complexity of binary
search?
a. O(n)
b. O(log n)
c. O(n log n)
d. O(n^2)
6. What method is used to solve the recurrence relation T(n) = 2T(n/2) + n?

a. Substitution method
b. Master Theorem
c. Iteration method
d. Induction
7. When using the substitution method to solve T(n) = T(n−1) + n, what is the solution?
a. O(n)
b. O(n^2)
c. O(log n)
d. O(n log n)
8. Which of the following notations provides a tight bound on an algorithm’s running
time?
a. Big-O
b. Big-Omega
c. Big-Theta
d. Little-o
9. What is the space complexity of an in-place sorting algorithm?
a. O(1)
b. O(n)
c. O(n log n)
d. O(n^2)
10. What is the time complexity of the recursive Fibonacci algorithm
F(n) = F(n−1) + F(n−2) without memoization?
a. O(n)
b. O(2^n)
c. O(log n)
d. O(n log n)
11. What is the time complexity of the iterative binary search algorithm?
a. O(n)
b. O(log n)
c. O(n log n)
d. O(n^2)
12. Using the substitution method, what is the solution to the recurrence
T(n) = 2T(n/2) + n?
a. O(n log n)
b. O(n)
c. O(n^2)
d. O(log n)
13. For the recurrence relation T(n) = T(n/2) + O(1), what is the time complexity?
a. O(n)
b. O(n log n)
c. O(log n)
d. O(n^2)
14. What is the time complexity of a nested loop where the outer loop runs n times
and the inner loop runs n times?
a. O(n)
b. O(n^2)
c. O(log n)
d. O(n log n)
15. What is the best case time complexity of Quick Sort when the pivot element is
always the median?
a. O(n)
b. O(n^2)
c. O(log n)
d. O(n log n)
16. The ______ notation describes the upper bound of an algorithm's running time,
providing a worst-case scenario.
Answer: Big-O
17. ______ space complexity refers to the amount of memory an algorithm needs to
run to completion.
Answer: auxiliary
18. The best case complexity of an algorithm is the minimum time it takes to complete
when the input is in the most ______ condition.
Answer: favorable
19. The ______ case complexity of an algorithm is the maximum time it takes to
complete, regardless of the input conditions.
Answer: worst
20. The average case complexity gives an estimate of the running time for a typical or
______ input.
Answer: random
21. The substitution method involves guessing the form of the solution and then using
______ to find the constants and prove the solution.
Answer: induction
22. Time complexity is a measure of the amount of ______ an algorithm takes to
process an input of a given size.
Answer: time
23. Theta notation (Θ) provides a tight bound on the growth rate of an
algorithm's running time, meaning it gives both ______ and ______ bounds.
Answer: upper, lower
24. The time complexity of a recursive algorithm is often described using ______
relations.
Answer: recurrence
PART B

1. How does the substitution method work for solving recurrence relations?
2. Define Big-Theta (Θ) notation and its significance.
3. Give an example of an algorithm and describe its best-case and worst-case
scenarios.
4. What is the time complexity of the recursive Fibonacci sequence without
memoization?
5. List the advantages of Divide and Conquer Technique. ( April/May 2023)
6. Compare mathematical analysis and empirical analysis.( April/May 2023)
7. Define best, worst and average time complexity. ( April/May 2024)
8. Differentiate time complexity and space complexity. (Nov/Dec 2024)

PART C

1. Discuss the importance of time and space complexity in algorithm analysis.


2. Provide examples of algorithms and analyze their time and space complexities.
3. Define Big-O, Big-Theta, and Big-Omega notations.
4. Explain the properties of each notation with examples.
5. Define best case, worst case, and average case complexities.
6. Provide examples of algorithms and explain how their performance varies in different
cases.
7. Provide a step-by-step solution for a given recurrence relation using the substitution
method.
8. Explain the significance of solving recurrence relations in the analysis of recursive
algorithms.
9. Explain the process of analyzing recursive algorithms.
10. Provide a detailed analysis of a non-recursive algorithm such as bubble sort or linear
search.
11. Define and explain Big-O, Big-Theta, and Big-Omega notations.
12. Provide practical examples of algorithms and their asymptotic analysis.
13. Provide a detailed analysis of the algorithm’s performance in the best case, worst
case, and average case scenarios.
14. Provide examples of recurrence relations that can be solved using the Master
Theorem.
15. Outline the Big O notation, Big Omega Notation.( April/May 2023)
16. Explain how you analyze the efficiency of non recursive algorithms.( April/May 2023)
17. Apply the mathematical analysis for analyzing the efficiency of recursive algorithms
for finding the factorial of a number.( April/May 2023)
18. Depict the Big O Notation, Big Omega and Theta Notation graphically and explain
with an example. (Nov/Dec 2024)
19. Compare and contrast various asymptotic notations and its properties. (Nov/Dec
2024)
20. Examine the mathematical analysis for non-recursive algorithm. Illustrate with
suitable example. (Nov/Dec 2024)
