DS Unit 2
STACK
Stack is an abstract data type with a bounded (predefined) capacity. It is a simple data structure that allows adding and removing elements in a particular order. Every time an element is added, it goes on the top of the stack; the only element that can be removed is the element that was at the top of the stack, just like a pile of objects.
push() is used to insert new elements into the stack and pop() is used to delete an element from the stack. Both insertion and deletion are allowed at only one end of the stack, called Top.
A stack is said to be in an Overflow state when it is completely full and in an Underflow state when it is completely empty.
Applications of Stack
The simplest application of a stack is to reverse a word. You push a given word to stack - letter by
letter - and then pop letters from the stack.
There are other uses as well, such as parsing and expression conversion (infix to postfix, postfix to prefix, etc.).
Implementation of Stack
Stack can be easily implemented using an Array or a Linked List. Arrays are quick, but are limited in size; a linked list requires overhead to allocate, link, unlink, and deallocate, but is not limited in size. Here we will implement the stack using an array.
struct stack
{
    int s[SIZE];
    int top;
} st;
1. We have created a 'stack' structure.
A stack data structure can be implemented using one dimensional array. But stack implemented using
array, can store only fixed number of data values. This implementation is very simple, just define a one
dimensional array of specific size and insert or delete the values into that array by using LIFO
principle with the help of a variable 'top'. Initially top is set to -1. Whenever we want to insert a value
into the stack, increment the top value by one and then insert. Whenever we want to delete a value
from the stack, then delete the top value and decrement the top value by one.
Stack Operations using Array
Before implementing actual operations, first follow the below steps to create an empty stack.
Step 1: Include all the header files which are used in the program and define a constant
'SIZE' with specific value.
Step 2: Declare all the functions used in stack implementation.
Step 3: Create a one dimensional array with fixed size (int stack[SIZE])
Step 4: Define an integer variable 'top' and initialize it with '-1'. (int top = -1)
Step 5: In main method display menu with list of operations and make suitable function calls to
perform operation selected by the user on the stack.
In a stack, push() is a function used to insert an element into the stack. In a stack, the new element is
always inserted at top position. Push function takes one integer value as parameter and inserts that
value into the stack. We can use the following steps to push an element on to the stack...
Step 1: Check whether stack is FULL. (top == SIZE-1)
Step 2: If it is FULL, then display "Stack is FULL!!! Insertion is not possible!!!" and
terminate the function.
Step 3: If it is NOT FULL, then increment top value by one (top++) and set stack[top] to
value (stack[top] = value).
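A minimal sketch of push() in C, assuming the global array stack[SIZE] and the variable top described in the steps above:

#include <stdio.h>

#define SIZE 10

int stack[SIZE];
int top = -1;

/* Insert 'value' at the top of the stack, if there is room. */
void push(int value)
{
    if (top == SIZE - 1)                        /* Step 1: stack is FULL */
        printf("Stack is FULL!!! Insertion is not possible!!!\n");
    else
        stack[++top] = value;                   /* Step 3: increment top, then store value */
}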
In a stack, pop() is a function used to delete an element from the stack. In a stack, the element is
always deleted from top position. Pop function does not take any value as parameter. We can use the
following steps to pop an element from the stack...
Step 1: Check whether stack is EMPTY. (top == -1)
Step 2: If it is EMPTY, then display "Stack is EMPTY!!! Deletion is not possible!!!" and
terminate the function.
Step 3: If it is NOT EMPTY, then delete stack[top] and decrement top value by one (top--
).
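A matching sketch of pop(), under the same assumptions (it continues the push sketch above):

#include <stdio.h>

extern int stack[];   /* the array and top defined in the push sketch */
extern int top;

/* Delete the top element; returns it, or -1 on underflow. */
int pop(void)
{
    if (top == -1) {                            /* Step 1: stack is EMPTY */
        printf("Stack is EMPTY!!! Deletion is not possible!!!\n");
        return -1;
    }
    return stack[top--];                        /* Step 3: take stack[top], then decrement top */
}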
display() is used to display all the elements stored in the stack. We can use the following steps to display the elements of the stack...
Step 1: Check whether stack is EMPTY. (top == -1)
Step 2: If it is EMPTY, then display "Stack is EMPTY!!!" and terminate the function.
Step 3: If it is NOT EMPTY, then define a variable 'i' and initialize it with top. Display stack[i] and decrement i by one (i--).
Step 4: Repeat the above step until i becomes less than 0 (all elements from top to bottom are displayed).
Evaluation of Postfix Expression
A postfix expression is evaluated by scanning it from left to right: operands are pushed onto a stack, and when an operator is encountered, two operands are popped, the operator is applied to them, and the result is pushed back onto the stack.
Example:
Let the given expression be "2 3 1 * + 9 -". We scan all elements one by one.
1) Scan '2', it's a number, push it to the stack. Stack contains '2'.
2) Scan '3', again a number, push it to the stack. Stack now contains '2 3' (from bottom to top).
3) Scan '1', again a number, push it to the stack. Stack now contains '2 3 1'.
4) Scan '*', it's an operator, pop two operands from the stack, apply the * operator on the operands: we get 3*1 which results in 3. We push the result '3' to the stack. Stack now becomes '2 3'.
5) Scan '+', it's an operator, pop two operands from the stack, apply the + operator on the operands: we get 2+3 which results in 5. We push the result '5' to the stack. Stack now becomes '5'.
6) Scan '9', it's a number, we push it to the stack. Stack now becomes '5 9'.
7) Scan '-', it's an operator, pop two operands from the stack, apply the - operator on the operands: we get 5-9 which results in -4. We push the result '-4' to the stack. Stack now becomes '-4'.
8) There are no more elements to scan, so we return the top element of the stack (which is the only element left): -4.
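The walk-through above translates directly into a short C routine. This is a sketch that assumes single-digit operands written without spaces (the names push, pop and eval_postfix are illustrative):

#include <stdio.h>
#include <ctype.h>

#define SIZE 100

static int stk[SIZE];
static int top = -1;

static void push(int v) { stk[++top] = v; }
static int  pop(void)   { return stk[top--]; }

/* Evaluate a postfix expression of single-digit numbers, e.g. "231*+9-" gives -4. */
int eval_postfix(const char *expr)
{
    for (; *expr != '\0'; expr++) {
        if (isdigit((unsigned char)*expr)) {
            push(*expr - '0');                  /* operand: push its numeric value */
        } else {
            int b = pop();                      /* right operand */
            int a = pop();                      /* left operand */
            switch (*expr) {
            case '+': push(a + b); break;
            case '-': push(a - b); break;
            case '*': push(a * b); break;
            case '/': push(a / b); break;
            }
        }
    }
    return pop();                               /* the single remaining element is the result */
}

int main(void)
{
    printf("%d\n", eval_postfix("231*+9-"));    /* prints -4, as in the example */
    return 0;
}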
1. Balancing Symbols
This program uses a stack to check whether an expression is balanced. The expression is parsed character by character; when an opening bracket is found, it is pushed onto the stack, and when the corresponding closing bracket is found, one opening bracket is popped from the stack. If the stack is empty after parsing the entire expression, the expression is balanced.
Algorithm:
1. For every opening parenthesis, push it onto the stack.
2. For every closing parenthesis, check what is at the top of the stack; if it is the corresponding opening parenthesis, pop it from the top of the stack.
3. If the symbol at the top of the stack is not the corresponding opening parenthesis (or the stack is empty), return false, as there is no point in checking the string further.
4. After the whole string is scanned, return true only if the stack is empty.
Example
Expression: ( ( ) ( ( ) ) )

Symbol   Action   Stack after
(        PUSH     (
(        PUSH     ((
)        POP      (
(        PUSH     ((
(        PUSH     (((
)        POP      ((
)        POP      (
)        POP      empty -> Accepted
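A minimal sketch of this check in C, handling only round brackets as in the trace above (square and curly brackets follow the same pattern):

#include <stdio.h>

#define SIZE 100

/* Return 1 if the parentheses in 'expr' are balanced, 0 otherwise. */
int is_balanced(const char *expr)
{
    char stk[SIZE];
    int top = -1;

    for (; *expr != '\0'; expr++) {
        if (*expr == '(') {
            stk[++top] = *expr;                 /* opening bracket: push */
        } else if (*expr == ')') {
            if (top == -1)                      /* nothing left to match */
                return 0;
            top--;                              /* matching opening bracket: pop */
        }
    }
    return top == -1;                           /* balanced only if the stack ends empty */
}

int main(void)
{
    printf("%d\n", is_balanced("(()(()))"));    /* prints 1 (Accepted) */
    printf("%d\n", is_balanced("(()"));         /* prints 0 */
    return 0;
}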
3. Recursion
When a function calls another function (or itself, as in recursion), the caller function has to suspend its execution temporarily and resume later when execution control returns from the callee function. The caller function needs to restart from exactly the point of execution where it put itself on hold, and it also needs the exact same data values it was working on. For this purpose, an activation record (or stack frame) is created for the caller function. This activation record keeps information about local variables, formal parameters, the return address, and all information passed to the called function. These activation records are maintained on a stack, the call stack.
4. Towers of Hanoi
Tower of Hanoi is a mathematical puzzle which consists of three towers (pegs) and more than one ring, as depicted below −
These rings are of different sizes and are stacked in ascending order of size, i.e. the smaller one sits over the larger one. There are other variations of the puzzle where the number of disks increases, but the tower count remains the same.
Rules
The mission is to move all the disks to another tower without violating the sequence of arrangement. A few rules to be followed for Tower of Hanoi are −
Only one disk can be moved among the towers at any given time.
Only the topmost disk of a tower can be moved.
No large disk can be placed over a small disk.
Here is an animated representation of solving a tower of hanoi puzzle with three disks −
A Tower of Hanoi puzzle with n disks can be solved in a minimum of 2^n − 1 steps. This presentation shows that a puzzle with 3 disks has taken 2^3 − 1 = 7 steps.
Algorithm
To write an algorithm for Tower of Hanoi, first we need to learn how to solve this problem with a smaller number of disks, say 1 or 2. We mark the three towers with the names source, destination and aux (the latter only to help in moving the disks). If we have only one disk, then it can easily be moved from the source to the destination peg.
If we have 2 disks −
So now we are in a position to design an algorithm for Tower of Hanoi with more than two disks. We divide the stack of disks into two parts: the largest disk (the nth disk) is in one part, and all the other (n-1) disks are in the second part.
Our ultimate aim is to move disk n from source to destination and then put all the other (n-1) disks onto it. Now we can imagine applying the same approach recursively to every given set of disks. The steps to follow are −
START
Procedure Hanoi(disk, source, dest, aux)
IF disk == 1, THEN
move disk from source to dest ELSE
Hanoi(disk - 1, source, aux, dest) // Step 1
move disk from source to dest // Step 2
Hanoi(disk - 1, aux, dest, source) // Step 3
END IF
END Procedure
STOP
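The pseudocode above maps directly onto a recursive C function. This is a sketch; the peg names are passed as characters only so the moves can be printed:

#include <stdio.h>

/* Move 'disk' disks from 'source' to 'dest', using 'aux' as the spare peg. */
void hanoi(int disk, char source, char dest, char aux)
{
    if (disk == 1) {
        printf("Move disk 1 from %c to %c\n", source, dest);
        return;
    }
    hanoi(disk - 1, source, aux, dest);                          /* Step 1 */
    printf("Move disk %d from %c to %c\n", disk, source, dest);  /* Step 2 */
    hanoi(disk - 1, aux, dest, source);                          /* Step 3 */
}

int main(void)
{
    hanoi(3, 'A', 'C', 'B');    /* prints the 2^3 - 1 = 7 moves */
    return 0;
}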
QUEUE
Queue is a linear data structure in which the insertion and deletion operations are performed at two different ends. The insertion is performed at one end and the deletion is performed at the other end. In a queue data structure, the insertion operation is performed at a position known as 'rear' and the deletion operation is performed at a position known as 'front'. In a queue, insertion and deletion are performed based on the FIFO (First In First Out) principle.
In a queue data structure, the insertion operation is performed using a function called "enQueue()" and the deletion operation is performed using a function called "deQueue()".
Queue data structure is a linear data structure in which the operations are performed based on the FIFO principle.
A queue can also be defined as
"Queue data structure is a collection of similar data items in which insertion and deletion
operations are performed based on FIFO principle".
Queue data structure can be implemented in two ways. They are as follows...
1. Using Array
2. Using Linked List
When a queue is implemented using an array, that queue can organize only a limited number of elements. When a queue is implemented using a linked list, that queue can organize an unlimited number of elements.
Applications of Queue:
Queue is used when things don't have to be processed immediately, but have to be processed in First In First Out order, as in Breadth First Search. This property of a queue also makes it useful in the following kinds of scenarios.
1) When a resource is shared among multiple consumers. Examples include CPU scheduling, Disk
Scheduling.
2) When data is transferred asynchronously (data not necessarily received at same rate as sent)
between two processes. Examples include IO Buffers, pipes, file IO, etc.
A queue data structure can be implemented using one dimensional array. But, queue implemented
using array can store only fixed number of data values. The implementation of queue data structure
using array is very simple, just define a one dimensional array of specific size and insert or delete the
values into that array by using FIFO (First In First Out) principle with the help of variables 'front'
and 'rear'. Initially both 'front' and 'rear' are set to -1. Whenever we want to insert a new value into the queue, increment the 'rear' value by one and then insert at that position. Whenever we want to delete a value from the queue, increment the 'front' value by one and then display the value at the 'front' position as the deleted element.
Before we implement actual operations, first follow the below steps to create an empty queue.
Step 1: Include all the header files which are used in the program and define a constant
'SIZE' with specific value.
Step 2: Declare all the user defined functions which are used in queue
implementation.
Step 3: Create a one dimensional array with above defined SIZE (int queue[SIZE])
Step 4: Define two integer variables 'front' and 'rear' and initialize both with '-1'. (int front
= -1, rear = -1)
Step 5: Then implement main method by displaying menu of operations list and make
suitable function calls to perform operation selected by the user on queue.
In a queue data structure, enQueue() is a function used to insert a new element into the queue. In a
queue, the new element is always inserted at rear position. The enQueue() function takes one
integer value as parameter and inserts that value into the queue.
We can use the following steps to insert an element into the queue...
Step 1: Check whether queue is FULL. (rear == SIZE-1)
Step 2: If it is FULL, then display "Queue is FULL!!! Insertion is not possible!!!" and terminate the function.
Step 3: If it is NOT FULL, then increment rear value by one (rear++) and set queue[rear] = value.
In a queue data structure, deQueue() is a function used to delete an element from the queue. In a
queue, the element is always deleted from front position. The deQueue() function does not take
any value as parameter. We can use the following steps to delete an element from the queue...
Step 1: Check whether queue is EMPTY. (front == rear)
Step 2: If it is EMPTY, then display "Queue is EMPTY!!! Deletion is not possible!!!" and terminate the function.
Step 3: If it is NOT EMPTY, then increment the front value by one (front++). Then display queue[front] as the deleted element. Then check whether both front and rear are equal (front == rear); if it is TRUE, then set both front and rear to '-1' (front = rear = -1).
display() is used to display the elements of the queue. We can use the following steps to display the elements of the queue...
Step 1: Check whether queue is EMPTY. (front == rear)
Step 2: If it is EMPTY, then display "Queue is EMPTY!!!" and terminate the function.
Step 3: If it is NOT EMPTY, then define an integer variable 'i' and set 'i = front+1'.
Step 4: Display 'queue[i]' and increment 'i' by one (i++). Repeat the same while 'i' is less than or equal to rear (i <= rear).
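A minimal sketch of these queue operations in C, assuming the global array queue[SIZE] and the variables front and rear described above:

#include <stdio.h>

#define SIZE 10

int queue[SIZE];
int front = -1, rear = -1;

/* Insert 'value' at the rear of the queue. */
void enQueue(int value)
{
    if (rear == SIZE - 1)
        printf("Queue is FULL!!! Insertion is not possible!!!\n");
    else
        queue[++rear] = value;
}

/* Delete the element at the front of the queue. */
void deQueue(void)
{
    if (front == rear) {
        printf("Queue is EMPTY!!! Deletion is not possible!!!\n");
        return;
    }
    printf("Deleted: %d\n", queue[++front]);
    if (front == rear)                  /* queue became empty: reset both markers */
        front = rear = -1;
}

/* Display all elements from front to rear. */
void display(void)
{
    if (front == rear) {
        printf("Queue is EMPTY!!!\n");
        return;
    }
    for (int i = front + 1; i <= rear; i++)
        printf("%d ", queue[i]);
    printf("\n");
}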
Circular Queue
In a standard queue data structure, a re-buffering problem occurs for each dequeue operation. This problem is solved by joining the front and rear ends of the queue to make it a circular queue.
Circular queue is a linear data structure. It follows the FIFO principle.
In a circular queue the last node is connected back to the first node to make a circle.
Elements are added at the rear end and deleted at the front end of the queue.
Both the front and the rear pointers initially point to the beginning of the array.
Circular queue is implemented here using arrays.
In a normal queue data structure, we can insert elements until the queue becomes full. But once the queue becomes full, we cannot insert the next element until all the elements are deleted from the queue. For example, consider the queue below...
Now consider the situation after deleting three elements from the queue...
This situation still reports that the queue is full and we cannot insert a new element because 'rear' is still at the last position. In the above situation, even though we have empty positions in the queue, we cannot make use of them to insert a new element. This is the major problem in a normal queue data structure. To overcome this problem we use the circular queue data structure.
A Circular Queue can be defined as follows...
Circular Queue is a linear data structure in which the operations are performed based on FIFO (First In
First Out) principle and the last position is connected back to the first position to make a circle.
Graphical representation of a circular queue is as follows...
Implementation of Circular Queue
To implement a circular queue data structure using array, we first perform the following steps before
we implement actual operations.
Step 1: Include all the header files which are used in the program and define a constant
'SIZE' with specific value.
Step 2: Declare all user defined functions used in circular queue implementation.
Step 3: Create a one dimensional array with above defined SIZE (int cQueue[SIZE])
Step 4: Define two integer variables 'front' and 'rear' and initialize both with '-1'. (int front =
-1, rear = -1)
Step 5: Implement main method by displaying menu of operations list and make suitable
function calls to perform operation selected by the user on circular queue.
In a circular queue, enQueue() is a function which is used to insert an element into the circular queue.
In a circular queue, the new element is always inserted at rear position. The enQueue() function takes
one integer value as parameter and inserts that value into the circular queue. We can use the following
steps to insert an element into the circular queue...
Step 1: Check whether queue is FULL. ((rear == SIZE-1 && front == 0) || (front == rear+1))
Step 2: If it is FULL, then display "Queue is FULL!!! Insertion is not possible!!!" and
terminate the function.
Step 3: If it is NOT FULL, then check whether 'rear == SIZE - 1 && front != 0'; if it is TRUE, then set rear = -1.
Step 4: Increment rear value by one (rear++), set cQueue[rear] = value and check whether 'front == -1'; if it is TRUE, then set front = 0.
In a circular queue, deQueue() is a function used to delete an element from the circular queue. In a
circular queue, the element is always deleted from front position. The
deQueue() function doesn't take any value as parameter. We can use the following steps to delete an
element from the circular queue...
Step 1: Check whether queue is EMPTY. (front == -1 && rear == -1)
Step 2: If it is EMPTY, then display "Queue is EMPTY!!! Deletion is not possible!!!" and terminate the function.
Step 3: If it is NOT EMPTY, then display cQueue[front] as the deleted element and increment the front value by one (front++). Then check whether 'front == SIZE'; if it is TRUE, then set front to 0 (front = 0).
Step 4: Check whether front-1 and rear are equal (front-1 == rear); if it is TRUE, then set both front and rear to '-1' (front = rear = -1).
We can use the following steps to display the elements of a circular queue...
Step 1: Check whether queue is EMPTY. (front == -1)
Step 2: If it is EMPTY, then display "Queue is EMPTY!!!" and terminate the function.
Step 3: If it is NOT EMPTY, then define an integer variable 'i' and set 'i = front'.
Step 4: Check whether 'front <= rear'; if it is TRUE, then display 'cQueue[i]' and increment 'i' by one (i++). Repeat the same until 'i <= rear' becomes FALSE.
Step 5: If 'front <= rear' is FALSE, then display 'cQueue[i]' and increment 'i' by one (i++). Repeat the same until 'i <= SIZE - 1' becomes FALSE.
Step 6: Set i to 0.
Step 7: Again display 'cQueue[i]' and increment i by one (i++). Repeat the same until 'i <= rear' becomes FALSE.
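A sketch of the circular-queue operations in C, following the conventions above (array cQueue[SIZE], front and rear starting at -1); the empty-queue reset in deQueue() is checked before advancing front so the wrap-around case is handled cleanly:

#include <stdio.h>

#define SIZE 5

int cQueue[SIZE];
int front = -1, rear = -1;

/* Insert 'value' at the rear, wrapping around to the start of the array when needed. */
void enQueue(int value)
{
    if ((rear == SIZE - 1 && front == 0) || (front == rear + 1)) {
        printf("Queue is FULL!!! Insertion is not possible!!!\n");
        return;
    }
    if (rear == SIZE - 1 && front != 0)
        rear = -1;                      /* wrap rear back to the start */
    cQueue[++rear] = value;
    if (front == -1)
        front = 0;                      /* first element: front now points to it */
}

/* Delete the element at the front, wrapping around when needed. */
void deQueue(void)
{
    if (front == -1) {
        printf("Queue is EMPTY!!! Deletion is not possible!!!\n");
        return;
    }
    printf("Deleted: %d\n", cQueue[front]);
    if (front == rear) {
        front = rear = -1;              /* that was the last element: reset */
    } else {
        front++;
        if (front == SIZE)
            front = 0;                  /* wrap front back to the start */
    }
}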
Applications of QUEUE
1. Priority queue
Priority Queue
In normal queue data structure, insertion is performed at the end of the queue and deletion is
performed based on the FIFO principle. This queue implementation may not be suitable for all
situations.
Consider a networking application where a server has to respond to requests from multiple clients using a queue data structure. Assume four requests arrive at the queue in this order: R1 requires 20 units of time, R2 requires 2 units of time, R3 requires 10 units of time and R4 requires 5 units of time. The queue is as follows...
1. R1 : 20 units of time
2. R2 : 22 units of time (R2 must wait 20 units till R1 completes, and R2 itself requires 2 units. Total 22 units)
3. R3 : 32 units of time (R3 must wait 22 units till R2 completes, and R3 itself requires 10 units. Total 32 units)
4. R4 : 37 units of time (R4 must wait 32 units till R3 completes, and R4 itself requires 5 units. Total 37 units)
Here, the average waiting time for all requests (R1, R2, R3 and R4) is (20+22+32+37)/4 = 27.75 units of time.
That means, if we use a normal queue data structure to serve these requests, the average waiting time for each request is 27.75 units of time.
Now, consider another way of serving these requests: serve them according to their required amount of time. That means, first we serve R2 which has the minimum time required (2), then serve R4 which has the second minimum time required (5), then serve R3 which has the third minimum time required (10) and finally R1 which has the maximum time required (20).
1. R2 : 2 units of time
2. R4 : 7 units of time (R4 must wait 2 units till R2 completes, and R4 itself requires 5 units. Total 7 units)
3. R3 : 17 units of time (R3 must wait 7 units till R4 completes, and R3 itself requires 10 units. Total 17 units)
4. R1 : 37 units of time (R1 must wait 17 units till R3 completes, and R1 itself requires 20 units. Total 37 units)
Here, the average waiting time for all requests (R1, R2, R3 and R4) is (2+7+17+37)/4 = 15.75 units of time.
From the above two situations it is very clear that, by using the second method, the server can complete all four requests in much less time compared to the first method. This is exactly what a priority queue does.
Priority queue is a variant of queue data structure in which insertion is performed in the order of
arrival and deletion is performed based on the priority.
1) Every item has a priority associated with it.
2) An element with high priority is dequeued before an element with low priority.
3) If two elements have the same priority, they are served according to their order in the queue.
A priority queue can be implemented in several ways, for example:
Using an Unordered Array (Dynamic Array) with the index of the maximum value
Using an Array (Dynamic Array) kept in decreasing order
Using an Unordered Linked List with a reference to the node with the maximum value
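As a sketch of the first option above, an unordered array that scans for the maximum on removal; larger numbers are treated as higher priority here:

#include <stdio.h>

#define SIZE 100

int pq[SIZE];
int count = 0;

/* Insertion happens in arrival order: simply append. */
void pq_insert(int value)
{
    if (count < SIZE)
        pq[count++] = value;
}

/* Deletion removes the element with the highest priority (the largest value). */
int pq_delete_max(void)
{
    if (count == 0)
        return -1;                      /* empty queue */
    int maxIdx = 0;
    for (int i = 1; i < count; i++)     /* find the index of the maximum value */
        if (pq[i] > pq[maxIdx])
            maxIdx = i;
    int max = pq[maxIdx];
    pq[maxIdx] = pq[--count];           /* fill the hole with the last element */
    return max;
}

int main(void)
{
    pq_insert(20); pq_insert(2); pq_insert(10); pq_insert(5);
    printf("%d\n", pq_delete_max());    /* prints 20 */
    return 0;
}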
TREES
A tree is also one of the data structures that represent hierarchical data. Suppose we want to show the employees and their positions in hierarchical form; this can be represented as shown below:
The above tree shows the organization hierarchy of a company. In the above structure, John is the CEO, and he has two direct reports named Steve and Rohan. Steve, a manager, has three direct reports named Lee, Bob and Ella. Bob has two direct reports named Sal and Emma. Emma has two direct reports named Tom and Raj. Tom has one direct report named Bill. This particular logical structure is known as a Tree. Its structure is similar to a real tree, so it is named a Tree. In this structure, the root is at the top and its branches grow in a downward direction. Therefore, we can say that the Tree data structure is an efficient way of storing data in a hierarchical way.
In the above structure, each node is labeled with some number. Each arrow shown in the above
figure is known as a link between the two nodes.
o Root: The root node is the topmost node in the tree hierarchy. In other words, the root
node is the one that doesn't have any parent. In the above structure, node numbered 1
is the root node of the tree. If a node is directly linked to some other node, it would be
called a parent-child relationship.
o Child node: If the node is a descendant of any node, then the node is known as a child
node.
o Parent: If the node contains any sub-node, then that node is said to be the parent of that
sub-node.
o Sibling: The nodes that have the same parent are known as siblings.
o Leaf Node:- The node of the tree, which doesn't have any child node, is called a leaf
node. A leaf node is the bottom-most node of the tree. There can be any number of leaf
nodes present in a general tree. Leaf nodes can also be called external nodes.
o Internal nodes: A node that has at least one child node is known as an internal node.
o Ancestor node:- An ancestor of a node is any predecessor node on a path from the root
to that node. The root node doesn't have any ancestors. In the tree shown in the above
image, nodes 1, 2, and 5 are the ancestors of node 10.
o Descendant: Any successor of a given node (its child, its child's child, and so on) is known as a descendant of that node. In the above figure, 10 is a descendant of node 5.
Properties of Tree data structure
o Recursive data structure: A tree is also known as a recursive data structure. A tree can be defined recursively: a distinguished node, known as the root node, contains links to the roots of all of its subtrees. The left subtree is shown in yellow in the figure below, and the right subtree in red. The left subtree can be further split into subtrees shown in three different colors. Recursion means reducing something in a self-similar manner. This recursive property of the tree data structure is used in various applications.
o Number of edges: If there are n nodes, then there will be n-1 edges. Each arrow in the structure represents a link or path. Each node, except the root node, has exactly one incoming link, known as an edge, corresponding to its parent-child relationship.
o Depth of node x: The depth of node x can be defined as the length of the path from the
root to the node x. One edge contributes one-unit length in the path. So, the depth of node
x can also be defined as the number of edges between the root node and the node x. The
root node has 0 depth.
o Height of node x: The height of node x can be defined as the length of the longest path from node x down to a leaf node. A leaf node has height 0.
Based on the properties of the Tree data structure, trees are classified into various categories.
Types of trees
Binary Tree
Binary Search Tree (BST)
AVL Tree
B-Tree
Binary Tree
Binary tree is a tree data structure in which each node can have 0, 1, or 2 children – left
and right child.
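In C, a binary tree node is typically represented as a structure holding the data plus pointers to the left and right children (a minimal sketch):

#include <stdlib.h>

/* One node of a binary tree: data plus at most two children. */
struct node {
    int data;
    struct node *left;
    struct node *right;
};

/* Allocate a new node with no children (a leaf). */
struct node *newNode(int data)
{
    struct node *n = malloc(sizeof(struct node));
    n->data = data;
    n->left = NULL;
    n->right = NULL;
    return n;
}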
Properties of a Binary tree
Full binary tree: Every parent node or an internal node has either exactly two children or no
child nodes.
Complete binary tree: All levels except possibly the last are completely filled, and the nodes in the last level are as far left as possible.
A binary search tree (BST) is also called an ordered or sorted binary tree; in it, the values in the left subtree of a node are less than that of the root and the values in the right subtree are greater than that of the root.
Every binary search tree is a binary tree. However, not every binary tree is a binary search tree.
What's the difference between a binary tree and a binary search tree? The most important difference between the two is that in a BST, the left child node's value must be less than the parent's, while the right child node's value must be greater.
Used to efficiently store data in sorted form in order to access and search stored elements
quickly.
Given a sorted array 'A', find out how many times x occurs in 'A'.
Player 'A' chooses a secret number 'n'. Player 'B' can guess a number x, and A replies whether x is equal to, less than or greater than n; repeating such guesses narrows down 'n' the same way a search in a BST does.
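A sketch of BST insert and search in C, repeating the node structure so the snippet compiles on its own (equal keys are sent to the right here, one of several common conventions):

#include <stdlib.h>

struct node {
    int data;
    struct node *left, *right;
};

/* Insert 'key' into the BST rooted at 'root' and return the (possibly new) root. */
struct node *insert(struct node *root, int key)
{
    if (root == NULL) {                           /* empty spot found: create the node here */
        struct node *n = malloc(sizeof(struct node));
        n->data = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key < root->data)
        root->left = insert(root->left, key);     /* smaller keys go into the left subtree */
    else
        root->right = insert(root->right, key);   /* larger (or equal) keys go right */
    return root;
}

/* Return 1 if 'key' is present in the BST, 0 otherwise. */
int search(struct node *root, int key)
{
    if (root == NULL)
        return 0;
    if (key == root->data)
        return 1;
    return key < root->data ? search(root->left, key) : search(root->right, key);
}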
AVL Tree
AVL trees are a special kind of self-balancing binary search tree where the height of the left
subtree and right subtree of every node differs by at most one.
The heights of the two child subtrees of any node differ by at most one.
Balance Factor = (Height of Left Subtree - Height of Right Subtree)
A balance factor of -1 means that the right subtree is one level higher than the left subtree.
A balance factor of 0 means that the height of the left subtree is equal to the height of the right subtree.
A balance factor of 1 means that the left subtree is one level higher than the right subtree.
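A small sketch of how a node's height and balance factor can be computed recursively, repeating the node structure so the snippet compiles on its own; an empty subtree is given height -1 so that a leaf has height 0:

struct node {
    int data;
    struct node *left, *right;
};

/* Height of a subtree; an empty subtree has height -1, so a leaf has height 0. */
int height(struct node *n)
{
    if (n == NULL)
        return -1;
    int lh = height(n->left);
    int rh = height(n->right);
    return (lh > rh ? lh : rh) + 1;
}

/* Balance Factor = height of left subtree - height of right subtree. */
int balance_factor(struct node *n)
{
    return height(n->left) - height(n->right);
}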
B-Tree
A B-tree is a self-balancing search tree in which each node can contain more than one key and can have more than two children. It is a special type of m-way tree and a generalized form of the binary search tree.
A B-tree can store many keys in a single node and can have multiple child nodes. This reduces the height of the tree and enables faster disk access.
Properties of a B-Tree
Application of B-trees