
SRI RAM NALLAMANI YADAVA COLLEGE OF ARTS AND SCIENCE

DEPARTMENT OF COMPUTER APPLICATIONS


DATA STRUCTURE
BCA-II
UNIT-III

By,
S.THARSANA
DEPARTMENT OF BCA
PART-A
1. The number of edges from the root to the node is called __________ of the
tree.
a) Height
b) Depth
c) Length
d) Width
Answer:B
2. The number of edges from the node to the deepest leaf is called _________ of
the tree.
a) Height
b) Depth
c) Length
d) Width
Answer:A

3. What is a full binary tree?


a) Each node has exactly zero or two children
b) Each node has exactly two children
c) All the leaves are at the same level
d) Each node has exactly one or two children

Answer:A

4. What is the average case time complexity for finding the height of the binary
tree?
a) h = O(loglogn)
b) h = O(nlogn)
c) h = O(n)
d) h = O(log n)

Answer:D

6. Which of the following is not an advantage of trees?


a) Hierarchical structure
b) Faster search
c) Router algorithms
d) Undo/Redo operations in a notepad

Answer:D
7. In a full binary tree if number of internal nodes is I, then number of leaves L
are?
a) L = 2*I
b) L = I + 1
c) L = I – 1
d) L = 2*I – 1

Answer:B
8. Construct a binary search tree by using postorder sequence given below.
Postorder: 2, 4, 3, 7, 9, 8, 5.

(Options a, b, c and d are tree diagrams that are not reproduced here.)

Answer:B

9. The data structure required to check whether an expression contains
balanced parentheses is?
a) Stack
b) Queue
c) Array
d) Tree
Answer:A
10. The process of accessing data stored in a serial access memory is similar to
manipulating data on a ________
a) Heap
b) Binary Tree
c) Array
d) Stack

Answer:D

PART-B

1.What is Tree Traversal with example?

Tree traversal refers to the process of visiting (checking and/or updating) each
node in a tree data structure exactly once in a systematic way. There are two
main types of tree traversal:

1. Depth-First Traversal (DFT)

This explores as far as possible along each branch before backtracking.

Common types:

 Inorder (Left, Root, Right)


 Preorder (Root, Left, Right)
 Postorder (Left, Right, Root)

2. Breadth-First Traversal (BFT) (also called Level Order Traversal)

This visits all the nodes at each level before going to the next level.

Example Tree

        A
       / \
      B   C
     / \   \
    D   E   F

Traversals:
1. Inorder (Left, Root, Right):
D→B→E→A→C→F

2. Preorder (Root, Left, Right):


A→B→D→E→C→F

3. Postorder (Left, Right, Root):


D→E→B→F→C→A

4. Level Order (Breadth-First):


A→B→C→D→E→F
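
The following Python sketch is added for illustration (the Node class and function names are chosen here, not taken from the original notes). It builds the example tree above and prints all four traversal orders:

from collections import deque

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def inorder(node):            # Left, Root, Right
    if node:
        inorder(node.left)
        print(node.value, end=' ')
        inorder(node.right)

def preorder(node):           # Root, Left, Right
    if node:
        print(node.value, end=' ')
        preorder(node.left)
        preorder(node.right)

def postorder(node):          # Left, Right, Root
    if node:
        postorder(node.left)
        postorder(node.right)
        print(node.value, end=' ')

def level_order(root):        # Breadth-first, level by level
    queue = deque([root])
    while queue:
        node = queue.popleft()
        print(node.value, end=' ')
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)

# Build the example tree
root = Node('A')
root.left = Node('B')
root.right = Node('C')
root.left.left = Node('D')
root.left.right = Node('E')
root.right.right = Node('F')

inorder(root)      # D B E A C F
print()
preorder(root)     # A B D E C F
print()
postorder(root)    # D E B F C A
print()
level_order(root)  # A B C D E F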

2.What is Binary Tree ADT ?

Definition of a binary tree: A binary tree is a finite set of nodes which is


either empty or consists of a root and two disjoint binary trees called the left
subtree and right subtree.

Abstract Data Type BinT(node *root)

Instances: A binary tree is a nonlinear data structure in which every node has at most two child nodes.

Operations:

1. Insertion :

This operation is used to insert nodes into the binary tree. By inserting the
desired number of nodes, the binary tree is created.
2. Deletion :

This operation is used to remove any node from the tree. Note that if the root
node is removed, the tree becomes empty.
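
A minimal Python sketch of the ADT, assuming a simple level-order insertion policy (class and method names are illustrative, not a fixed standard):

from collections import deque

class TreeNode:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

class BinT:
    """Binary tree ADT; insert places the new node at the first free position (level order)."""
    def __init__(self):
        self.root = None

    def insert(self, data):
        new_node = TreeNode(data)
        if self.root is None:
            self.root = new_node
            return
        queue = deque([self.root])
        while queue:
            node = queue.popleft()
            if node.left is None:
                node.left = new_node
                return
            queue.append(node.left)
            if node.right is None:
                node.right = new_node
                return
            queue.append(node.right)

Deletion would follow the same level-order idea: locate the node to remove and replace it with the deepest, rightmost node of the tree.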

3.What is Expression Tree?

Expression Trees are a type of binary tree data structure used to represent
mathematical expressions. Each node in the tree represents an operator or
operand, and the tree structure reflects the order of operations.

Components of an Expression Tree:

1. Internal nodes: Represent operators (e.g., +, -, *, /) and have two children


(left and right operands).

2. Leaf nodes: Represent operands (e.g., variables or constants) and have no


children.

Constructing an Expression Tree:

1. Parse the expression: Break down the mathematical expression into its
constituent parts (operators and operands).

2. Create nodes: Create nodes for each operator and operand, and link them
together based on the order of operations.

Evaluating an Expression Tree:

1. Recursively traverse the tree: Traverse the tree in a specific order (e.g.,
postorder) and evaluate each node.
2. Apply operators: Apply the operators at each internal node to the results of
the left and right subtrees.

Advantages of Expression Trees:

1. Efficient evaluation: Expression trees allow for efficient evaluation of


mathematical expressions.

2. Easy manipulation: Expression trees can be easily manipulated and


transformed, making them useful for tasks like optimization and simplification.

Use Cases for Expression Trees:

1. Compilers: Expression trees are used in compilers to represent and evaluate


mathematical expressions in source code.

2. Calculators: Expression trees can be used in calculators to evaluate


mathematical expressions.

3. Symbolic computation: Expression trees are used in symbolic computation


systems to manipulate and evaluate mathematical expressions.

Example:

Suppose we have the expression (3 + 4) * 2. The expression tree would be:

      *
     / \
    +   2
   / \
  3   4

Evaluating this tree would involve recursively traversing the tree, applying the
operators, and returning the result (14).
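
A small Python sketch (illustrative names, not a fixed API) that builds and evaluates the tree for (3 + 4) * 2 by postorder evaluation:

class ExprNode:
    def __init__(self, value, left=None, right=None):
        self.value = value        # operator (str) or operand (number)
        self.left = left
        self.right = right

def evaluate(node):
    # Leaf node: an operand
    if node.left is None and node.right is None:
        return node.value
    # Internal node: evaluate both subtrees, then apply the operator
    left_val = evaluate(node.left)
    right_val = evaluate(node.right)
    if node.value == '+':
        return left_val + right_val
    if node.value == '-':
        return left_val - right_val
    if node.value == '*':
        return left_val * right_val
    if node.value == '/':
        return left_val / right_val

# Expression tree for (3 + 4) * 2
tree = ExprNode('*', ExprNode('+', ExprNode(3), ExprNode(4)), ExprNode(2))
print(evaluate(tree))   # 14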

4.Applications of trees?

Trees are a fundamental data structure in computer science, and they have
numerous applications in various fields. Here are some of the key applications
of trees in data structure:

1. File Systems: Trees are used to represent the hierarchical structure of files
and directories in file systems.

2. Database Indexing: Trees are used in database indexing to speed up query


performance by allowing efficient lookup, insertion, and deletion of data.

3. Compilers: Trees are used in compilers to represent the parse trees of source
code, which are used to analyze the syntax and semantics of the code.

4. Data Compression: Trees are used in data compression algorithms, such as


Huffman coding, to represent the frequency of symbols in a dataset.

5. Webpage Navigation: Trees are used in webpage navigation to represent the


hierarchical structure of webpages and links.

6. XML and HTML Parsing: Trees are used to represent the hierarchical
structure of XML and HTML documents.

7. Decision Trees: Trees are used in machine learning to represent decision


trees, which are used to classify data based on a set of features.

8. Game Development: Trees are used in game development to represent game


trees, which are used to implement artificial intelligence and decision-making
algorithms.
9. Network Topology: Trees are used to represent network topology in
computer networks, which helps in routing and network optimization.

10. Data Storage and Retrieval: Trees are used in data storage and retrieval
systems, such as B-trees and B+ trees, to efficiently store and retrieve large
amounts of data.

5. What is a B+ Tree?

A B+ tree is a self-balancing, multi-way search tree in which internal nodes store only keys used to guide the search and all records are stored in the linked leaf nodes.

Key Characteristics of B+ Trees:

1. Self-balancing: B+ trees are self-balancing, meaning that the height of the


tree remains relatively constant even after insertions and deletions.

2. Multi-level indexing: B+ trees use a multi-level indexing system, where each


node represents a range of key values.

3. All data is stored in leaf nodes: In a B+ tree, all data is stored in the leaf
nodes, while internal nodes only store keys and pointers to child nodes.

4. Leaf nodes are linked: Leaf nodes in a B+ tree are linked together, allowing
for efficient sequential access to data.

Advantages of B+ Trees:

1. Efficient search: B+ trees allow for efficient search operations, with an


average time complexity of O(log n).

2. Efficient insertion and deletion: B+ trees allow for efficient insertion and
deletion operations, with an average time complexity of O(log n).

3. Good disk I/O performance: B+ trees are designed to minimize disk I/O
operations, making them suitable for use in databases and file systems.

Common Use Cases for B+ Trees:

1. Database indexing: B+ trees are commonly used in database indexing to


speed up query performance.

2. File systems: B+ trees are used in file systems to manage files and directories.

3. Storage systems: B+ trees are used in storage systems to manage data storage
and retrieval.
B+ Tree Operations:

1. Insertion: Inserting a new key-value pair into the B+ tree.

2. Deletion: Deleting a key-value pair from the B+ tree.

3. Search: Searching for a specific key in the B+ tree.

4. Range search: Searching for a range of keys in the B+ tree.

PART-C

1.Explain Binary Search Tree ADT?

A Binary Search Tree (BST) is a data structure in which each node has at most
two children (i.e., left child and right child) and each node represents a value.
Each node in the tree satisfies the following properties:

Properties of a Binary Search Tree:

1. Left subtree property: All the values in the left subtree of a node are less than
the value in the node.

2. Right subtree property: All the values in the right subtree of a node are
greater than the value in the node.

3. No duplicate values: BSTs typically do not allow duplicate values.

Operations on a Binary Search Tree:

1. Insert: Insert a new value into the BST.

2. Delete: Delete a value from the BST.

3. Search: Search for a value in the BST.

4. Traversal: Traverse the BST in a specific order (e.g., inorder, preorder,


postorder).

Advantages of Binary Search Trees:

1. Efficient search: BSTs allow for efficient search operations, with an average
time complexity of O(log n).

2. Efficient insertion and deletion: BSTs allow for efficient insertion and
deletion operations, with an average time complexity of O(log n).
Common Use Cases for Binary Search Trees:

1. Database indexing: BSTs can be used in database indexing to speed up query


performance.

2. File systems: BSTs can be used in file systems to manage files and
directories.

3. Compilers: BSTs can be used in compilers to manage symbol tables.

Types of Binary Search Trees:

1. AVL Tree: A self-balancing BST that ensures the height of the tree remains
relatively constant.

2. Red-Black Tree: A self-balancing BST that ensures the tree remains balanced
and search, insertion, and deletion operations are efficient.

Time Complexity of Binary Search Tree Operations:

1. Search: O(log n) on average, O(n) in the worst case.

2. Insert: O(log n) on average, O(n) in the worst case.

3. Delete: O(log n) on average, O(n) in the worst case.
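
A minimal Python sketch of a BST with insert, search and inorder traversal (function names and the sample keys are chosen for illustration):

class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert a key: smaller keys go left, larger keys go right; duplicates are ignored."""
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    if root is None or root.key == key:
        return root
    if key < root.key:
        return search(root.left, key)
    return search(root.right, key)

def inorder(root):
    if root:
        inorder(root.left)
        print(root.key, end=' ')
        inorder(root.right)

root = None
for k in [50, 30, 70, 20, 40, 60, 80]:
    root = insert(root, k)

inorder(root)                         # 20 30 40 50 60 70 80
print()
print(search(root, 60) is not None)   # True

Note that the inorder traversal of a BST always visits the keys in sorted order.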

2.Explain Threaded Binary Tree?

A threaded binary tree is a type of binary tree data structure that uses threads
(pointers) to connect nodes in a way that allows for efficient traversal and
searching.

In a threaded binary tree, the unused (NULL) left or right child pointers of a
node are reused as threads: pointers that link the node to its predecessor or
successor in a particular traversal order, most commonly inorder. These threads
allow the tree to be traversed in that order without recursion or an explicit
stack.

Types of Threaded Binary Trees:

1. Single-Threaded Binary Tree: Each node has a single thread that points to the
next node in a specific order (e.g., inorder).

2. Double-Threaded Binary Tree: Each node has two threads, one pointing to
the next node in the inorder traversal and another pointing to the previous node
in the inorder traversal.
Advantages of Threaded Binary Trees:

1. Efficient traversal: Threaded binary trees allow for efficient traversal of the
tree without the need for recursive function calls or a stack.

2. Faster search: Threaded binary trees can be used to implement fast search
algorithms, especially for finding the next or previous node in a specific order.

Disadvantages of Threaded Binary Trees:

1. Increased memory usage: Threaded binary trees require additional memory to


store the threads, which can be a disadvantage for large trees.

2. More complex implementation: Threaded binary trees are more complex to


implement than regular binary trees, especially when it comes to insertion and
deletion operations.

Use Cases for Threaded Binary Trees:

1. Database query optimization: Threaded binary trees can be used to optimize


database queries that require efficient traversal of large datasets.

2. Compilers: Threaded binary trees can be used in compilers to implement


efficient symbol table management.

3. Data compression: Threaded binary trees can be used in data compression


algorithms to efficiently traverse and manipulate compressed data.

Operations on Threaded Binary Trees:

1. Insertion: Inserting a new node into the threaded binary tree while
maintaining the threads.

2. Deletion: Deleting a node from the threaded binary tree while updating the
threads.

3. Traversal: Traversing the threaded binary tree using the threads to visit nodes
in a specific order
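
A minimal Python sketch of inorder traversal over a right-threaded tree. The threads here are set by hand on a tiny tree, since building them automatically would need a longer routine (class and function names are illustrative):

class ThreadedNode:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None
        self.right_is_thread = False   # True if 'right' points to the inorder successor

def leftmost(node):
    while node is not None and node.left is not None:
        node = node.left
    return node

def inorder_threaded(root):
    """Inorder traversal without recursion or a stack, using right threads."""
    current = leftmost(root)
    while current is not None:
        print(current.data, end=' ')
        if current.right_is_thread:
            current = current.right           # jump to the inorder successor
        else:
            current = leftmost(current.right)

# Manually threaded tree:    B
#                           / \
#                          A   C
root = ThreadedNode('B')
root.left = ThreadedNode('A')
root.right = ThreadedNode('C')
root.left.right = root                # thread: A's inorder successor is B
root.left.right_is_thread = True

inorder_threaded(root)                # A B C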

3.Explain B-Tree Structure?

A B-tree is a self-balancing search tree data structure that keeps data sorted and
allows search, sequential access, insert, and delete operations in logarithmic
time.
B-Tree Properties

1. Multi-level indexing: B-trees use a multi-level indexing system, where each


node represents a range of key values.

2. Balanced tree: B-trees are self-balancing, meaning that the height of the tree
remains relatively constant even after insertions and deletions.

3. All leaf nodes are at the same level: In a B-tree, all leaf nodes are at the same
level, which ensures that search, insertion, and deletion operations are efficient.

B-Tree Node Structure

Each node in a B-tree has the following structure:

- Keys: A node contains a list of keys that are used to guide the search.

- Pointers: A node contains pointers to its child nodes or to the data associated
with the keys.

B-Tree Diagram

Here's a simple diagram of a B-tree:

                    +----------------+
                    |      Root      |
                    |  Keys: 10, 20  |
                    +----------------+
                   /        |         \
    +---------------+ +------------------+ +---------------+
    | Keys: 3, 5, 8 | | Keys: 12, 15, 18 | | Keys: 25, 30  |
    +---------------+ +------------------+ +---------------+

In this diagram, the root node has two keys (10 and 20) and therefore three
children: keys smaller than 10 go to the left child, keys between 10 and 20 to
the middle child, and keys larger than 20 to the right child. The leaf nodes
hold the remaining keys and their associated data.

B-Tree Operations

1. Insertion: Inserting a new key into the B-tree involves finding the correct
location for the key and splitting nodes if necessary.

2. Deletion: Deleting a key from the B-tree involves finding the key and
merging nodes if necessary.

3. Search: Searching for a key in the B-tree involves traversing the tree from the
root node to the leaf node that contains the key.

Advantages of B-Trees

1. Efficient search: B-trees allow for efficient search operations, with an


average time complexity of O(log n).

2. Efficient insertion and deletion: B-trees allow for efficient insertion and
deletion operations, with an average time complexity of O(log n).

3. Balanced tree: B-trees are self-balancing, which ensures that search,


insertion, and deletion operations are efficient.
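
A minimal Python sketch of B-tree search over nodes holding a sorted key list and a child list (the node layout and sample keys are assumptions for illustration, not a full B-tree implementation):

class BTreeNode:
    def __init__(self, keys=None, children=None):
        self.keys = keys or []            # sorted list of keys
        self.children = children or []    # empty for leaf nodes

def btree_search(node, key):
    """Return True if key is present in the subtree rooted at node."""
    i = 0
    while i < len(node.keys) and key > node.keys[i]:
        i += 1
    if i < len(node.keys) and node.keys[i] == key:
        return True
    if not node.children:                 # leaf reached, key not found
        return False
    return btree_search(node.children[i], key)

# Small B-tree matching the diagram above: root [10, 20] with three children
root = BTreeNode([10, 20], [
    BTreeNode([3, 5, 8]),
    BTreeNode([12, 15, 18]),
    BTreeNode([25, 30]),
])
print(btree_search(root, 15))   # True
print(btree_search(root, 7))    # False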

4.Explain AVL Tree?

An AVL tree is a self-balancing binary search tree data structure that ensures
the height of the tree remains relatively constant by rotating nodes when the
balance factor becomes too large.

Properties of AVL Trees


1. Balance Factor: The balance factor of a node is the difference between the
height of its left subtree and the height of its right subtree. In an AVL tree, the
balance factor of every node is either -1, 0, or 1.

2. Self-Balancing: AVL trees are self-balancing, meaning that the tree is


rebalanced after insertion or deletion operations to maintain the balance factor
property.

3. Binary Search Tree: AVL trees are binary search trees, meaning that the left
subtree of a node contains keys less than the node's key, and the right subtree
contains keys greater than the node's key.

AVL Tree Operations

1. Insertion: When a new node is inserted into an AVL tree, the balance factor
of the tree may become unbalanced. To restore balance, rotations are performed.

2. Deletion: When a node is deleted from an AVL tree, the balance factor of the
tree may become unbalanced. To restore balance, rotations are performed.

3. Search: Searching for a node in an AVL tree is similar to searching in a


binary search tree.

Rotations in AVL Trees

There are two basic rotations used in AVL trees:

1. Left Rotation: A left rotation is performed when the balance factor of a node
becomes less than -1, i.e., the right subtree is taller than the left subtree.

2. Right Rotation: A right rotation is performed when the balance factor of a
node becomes greater than 1, i.e., the left subtree is taller than the right subtree.

Left-right and right-left double rotations, built from these two, handle the
remaining (zig-zag) imbalance cases.

Advantages of AVL Trees

1. Guaranteed Search Time: AVL trees guarantee a search time of O(log n) in


the worst case, making them suitable for applications where search performance
is critical.

2. Efficient Insertion and Deletion: AVL trees allow for efficient insertion and
deletion operations, with an average time complexity of O(log n).
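
A minimal Python sketch of the two basic rotations described above (node layout and helper names are illustrative; the full insert/rebalance logic is omitted):

class AVLNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1

def height(node):
    return node.height if node else 0

def balance_factor(node):
    return height(node.left) - height(node.right)

def update_height(node):
    node.height = 1 + max(height(node.left), height(node.right))

def rotate_right(y):
    """Used when a node is left-heavy (balance factor > 1)."""
    x = y.left
    y.left = x.right
    x.right = y
    update_height(y)
    update_height(x)
    return x          # x becomes the new subtree root

def rotate_left(x):
    """Used when a node is right-heavy (balance factor < -1)."""
    y = x.right
    x.right = y.left
    y.left = x
    update_height(x)
    update_height(y)
    return y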
5.Explain Heap and its application?

A heap is a specialized tree-based data structure that satisfies the heap property.
It is a complete binary tree where each parent node is either greater than (max
heap) or less than (min heap) its child nodes.

Types of Heaps

1. Max Heap: In a max heap, the parent node is greater than its child nodes. The
root node is the maximum element in the heap.

2. Min Heap: In a min heap, the parent node is less than its child nodes. The
root node is the minimum element in the heap.

Heap Operations

1. Insert: Inserting a new element into the heap involves adding the element to
the end of the heap and then heapifying up to maintain the heap property.

2. Delete: Deleting an element from the heap involves removing the root node
and then heapifying down to maintain the heap property.

3. Extract: Extracting the maximum or minimum element from the heap


involves removing the root node.

Applications of Heaps

1. Priority Queues: Heaps are used to implement priority queues, where


elements are prioritized based on their values.

2. Heap Sort: Heaps are used in the heap sort algorithm, which is a comparison-
based sorting algorithm.

3. Graph Algorithms: Heaps are used in graph algorithms such as Dijkstra's


algorithm and Prim's algorithm.

4. Event Handling: Heaps are used in event handling systems to prioritize


events based on their timestamps.

5. Resource Allocation: Heaps are used in resource allocation systems to


allocate resources based on priority.
Advantages of Heaps

1. Efficient Insertion and Deletion: Heaps allow for efficient insertion and
deletion operations, with an average time complexity of O(log n).

2. Fast Access to the Extreme Element: The maximum (max heap) or minimum (min
heap) element can be read at the root in O(1) time; removing it (extraction)
takes O(log n).

Use Cases for Heaps

1. Operating System Scheduling: Heaps are used in operating system scheduling


to prioritize processes based on their priority levels.

2. Database Query Optimization: Heaps are used in database query optimization


to prioritize queries based on their complexity.

3. Real-Time Systems: Heaps are used in real-time systems to prioritize tasks


based on their deadlines.
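
Python's standard heapq module implements a binary min heap; a short usage sketch (a max heap can be simulated by pushing negated values):

import heapq

data = [9, 4, 7, 1, 5]

heap = []
for value in data:
    heapq.heappush(heap, value)      # insert: O(log n)

print(heap[0])                       # peek at the minimum: O(1) -> 1
print(heapq.heappop(heap))           # extract the minimum: O(log n) -> 1
print(heapq.heappop(heap))           # -> 4

# heapq can also build a heap in place from an existing list:
heapq.heapify(data)                  # O(n)
print(data[0])                       # 1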
DEPARTMENT OF COMPUTER APPLICATIONS
DATA STRUCTURE
BCA-II
UNIT-IV

By,
S.THARSANA
DEPARTMENT OF BCA
PART-A
1. Which graph traversal explores as far as possible along each branch before backtracking?
A) Search B) Traversal C) DFS D) BFS
ANSWER:C

2. What is the term for a graph with weighted edges?

A) Simple Graph B) Weighted Graph C) Directed Graph

D) Undirected Graph

ANSWER:B

3. What type of graph has no cycles?

A) Cyclic Graph B) Acyclic Graph C) Directed Graph

D) Undirected Graph

ANSWER:B

4. What is the term for a graph with directed edges?

A) Undirected Graph B) Directed Graph C) Weighted Graph

D) Unweighted Graph

ANSWER:B

5. What algorithm is used to find the shortest path in a weighted graph?

A) DFS B) BFS C) Dijkstra's Algorithm

D) Bellman-Ford Algorithm

ANSWER:C

6. What is another term for a node in a graph?

A) Edge B) Vertex C) Path D) Cycle

ANSWER:B

7. What is the degree of a vertex in an undirected graph?


A) Number of incoming edges B) Number of outgoing edges

C) Total number of edges D) Number of edges incident on it

ANSWER:D

8. An Eulerian graph is one that contains an Euler ____
A) Tree B) Circuit C) Cluster D) Neuron

ANSWER:B

9. For a graph to have an Euler circuit, the degree of every vertex must be
A) Odd
B) Even
C) Prime
D) Zero

ANSWER:B

10. For a graph to have an Euler circuit, being connected is
A) Optional
B) Partial
C) Necessary
D) Irrelevant

ANSWER:C
PART-B

1. What is a Graph in data structure?

A graph is a non-linear data structure consisting of:

 Nodes (also called vertices)

 Edges (connections between the nodes)

 A graph is used to represent networks like social connections, roads,
web links, etc.

 Graphs can be:


 Directed (edges have direction)
 Undirected (edges don’t have direction)
 Weighted (edges have weights/costs)
 Unweighted

Example:

Let’s say we have cities connected by roads:

City A —— City B
| |
City C —— City D

Each city is a node, and the roads are edges.

Types of Graphs:

Undirected Graph: Edges don’t have direction.

Directed Graph (Digraph): Edges go one way (A → B).

Weighted Graph: Edges have values (e.g., distance, cost).

Unweighted Graph: All edges are considered equal.

2.Explain Representation of Graph?

Adjacency Matrix

A 2D array of size V × V (where V is the number of vertices), where the cell (i,
j) is:

 1 or the weight if there is an edge from vertex i to j


 0 if there is no edge
Example:

Graph:

A—B

| |
C—D

Vertices: A, B, C, D
Adjacency Matrix (undirected, unweighted):

      A  B  C  D
  A   0  1  1  0
  B   1  0  0  1
  C   1  0  0  1
  D   0  1  1  0
Adjacency List

An array (or dictionary) where each element (vertex) points to a list of its
neighbors.

Example:

Graph:

A—B
| |
C—D

Adjacency List:

A → [B, C]
B → [A, D]
C → [A, D]
D → [B, C]
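
A short Python sketch that builds both representations for the example graph above (variable names are illustrative):

# Undirected, unweighted graph with vertices A, B, C, D and edges A-B, A-C, B-D, C-D
vertices = ['A', 'B', 'C', 'D']
edges = [('A', 'B'), ('A', 'C'), ('B', 'D'), ('C', 'D')]

# Adjacency matrix: V x V grid of 0/1
index = {v: i for i, v in enumerate(vertices)}
matrix = [[0] * len(vertices) for _ in vertices]
for u, v in edges:
    matrix[index[u]][index[v]] = 1
    matrix[index[v]][index[u]] = 1      # undirected graph: matrix is symmetric

# Adjacency list: each vertex maps to its neighbours
adj_list = {v: [] for v in vertices}
for u, v in edges:
    adj_list[u].append(v)
    adj_list[v].append(u)

print(matrix)     # [[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]]
print(adj_list)   # {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}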

3.Types of Graphs?

1. Undirected Graph

 Edges have no direction


 If there is an edge between A and B, it can be traversed both ways
Example:

A —— B
\
C

Edge: (A, B) means A ↔ B

2. Directed Graph (Digraph)

 Edges have a direction (like one-way streets)


 Edge from A to B means A → B, but not B → A unless explicitly added

Example:

A→B


C
3. Weighted Graph

 Each edge has an associated weight/cost/value


 Common in routing, shortest path problems

Example:

A ——(5)—— B
\
(2)
\
C
4. Unweighted Graph

 All edges are considered equal


 No weights assigned

5. Cyclic Graph

 Contains at least one cycle (a path where the start and end vertex is the
same)

Example:A → B → C → A
4.Explain Topological sort?

Topological Sort is a linear ordering of vertices in a Directed Acyclic Graph


(DAG) such that for every directed edge u → v, vertex u comes before v in the
ordering.

When to Use:

 When tasks need to be done in a specific order


 Common in:
o Task scheduling
o Course prerequisite resolution
o Compilation order in build systems
o Dependency resolution in packages

Key Requirements:

 Graph must be directed


 Graph must be acyclic (no cycles)

Example:

Graph (DAG):

A→B→D
A→C→D

Topological Sort:
Possible orderings:

 A, B, C, D
 A, C, B, D

(both valid as A comes before B and C, and both B/C come before D)

How It Works (Two Main Algorithms):

1. Kahn’s Algorithm (BFS-Based)

Steps:

1. Compute in-degree of each node


2. Add all 0 in-degree nodes to a queue
3. While the queue is not empty:
o Remove a node, add to result
o Decrease in-degree of its neighbors
o If any neighbor’s in-degree becomes 0, add it to the queue

Time Complexity: O(V + E)

2. DFS-Based Topological Sort

Steps:

1. Mark all nodes as unvisited


2. For each unvisited node:
o Perform DFS
o After visiting all descendants, push node to a stack
3. Reverse the stack for the topological order

Time Complexity: O(V + E)
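
A minimal Python sketch of Kahn's algorithm applied to the example DAG above (function and variable names are illustrative):

from collections import deque

def topological_sort(graph):
    """Kahn's algorithm; graph is a dict {vertex: [neighbours]} describing a DAG."""
    in_degree = {v: 0 for v in graph}
    for v in graph:
        for neighbour in graph[v]:
            in_degree[neighbour] += 1

    queue = deque([v for v in graph if in_degree[v] == 0])
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for neighbour in graph[v]:
            in_degree[neighbour] -= 1
            if in_degree[neighbour] == 0:
                queue.append(neighbour)

    if len(order) != len(graph):
        raise ValueError("Graph has a cycle; topological sort is not possible")
    return order

# DAG from the example: A -> B -> D, A -> C -> D
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(topological_sort(graph))    # ['A', 'B', 'C', 'D']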

5.Explain Cut Vertex in data structure?

A cut vertex (or articulation point) is a vertex in an undirected graph whose


removal increases the number of connected components of the graph.

A cut vertex is a vertex such that:

 If you remove it (along with its connected edges), the graph becomes
disconnected, or more components are formed.

Example:

Graph:

A—B—C
|
D

 In this graph, B is a cut vertex


 If you remove B, the graph becomes disconnected:
A and C are no longer reachable from each other

Why Important?

Cut vertices are used in:

 Network reliability (finding single points of failure)


 Graph analysis
 Circuit design
 Communication routing

How to Find Cut Vertices:

1. Brute Force Method (Inefficient):

 Remove each vertex one by one and check if the graph becomes
disconnected
 Time Complexity: O(V * (V + E))

2. Efficient DFS-Based Algorithm: (Tarjan's Algorithm)

 Uses Depth First Search (DFS)


 Keeps track of:
o Discovery time
o Low value (earliest visited vertex reachable from subtree)
 A node u is a cut vertex if:
o u is the root of DFS and has two or more children
o OR there exists a child v such that low[v] ≥ disc[u]

Time Complexity: O(V + E)
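
A minimal Python sketch of the DFS-based (Tarjan-style) search for cut vertices, using the disc/low idea described above (the example graph and names are illustrative; D is attached to B here):

def find_cut_vertices(graph):
    """graph is {vertex: [neighbours]} for an undirected graph."""
    disc, low, parent = {}, {}, {}
    cut_vertices = set()
    timer = [0]

    def dfs(u):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in graph[u]:
            if v not in disc:
                parent[v] = u
                children += 1
                dfs(v)
                low[u] = min(low[u], low[v])
                # Non-root u is a cut vertex if a child subtree cannot reach above u
                if u in parent and low[v] >= disc[u]:
                    cut_vertices.add(u)
            elif v != parent.get(u):
                low[u] = min(low[u], disc[v])
        # The DFS root is a cut vertex if it has two or more children
        if u not in parent and children >= 2:
            cut_vertices.add(u)

    for vertex in graph:
        if vertex not in disc:
            dfs(vertex)
    return cut_vertices

# Example: A - B - C with D attached to B
graph = {'A': ['B'], 'B': ['A', 'C', 'D'], 'C': ['B'], 'D': ['B']}
print(find_cut_vertices(graph))   # {'B'}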


PART-C

1.Explain Breadth first traversal in data structure?

Breadth First Traversal (also called BFS) is a graph traversal algorithm where
you explore all neighboring nodes level by level, starting from a selected node.

 Visit all neighbors of a node first, before moving on to their neighbors


 Uses a queue data structure (FIFO)

Works On:

 Graphs (directed or undirected)


 Trees (BFS is also known as level-order traversal for trees)

Example Graph:

      A
    / | \
   B  C  D
      |
      E

BFS starting from A:


A→B→C→D→E

How BFS Works (Steps):

1. Start from a source node


2. Mark it as visited
3. Add it to a queue
4. While the queue is not empty:
o Remove the front node
o Visit all its unvisited neighbors
o Mark them as visited and add them to the queue

Python Code Example (for Graph BFS):

from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    visited.add(start)
    while queue:
        vertex = queue.popleft()
        print(vertex, end=' ')
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)

# Example graph
graph = {
    'A': ['B', 'C', 'D'],
    'B': [],
    'C': ['E'],
    'D': [],
    'E': []
}

bfs(graph, 'A')   # Output: A B C D E

2.Explain Depth first traversal in data structure?

Depth First Traversal is a way to visit all the nodes (or vertices) in a graph or
tree by exploring as far as possible along each branch before backtracking.

How DFS Works:

1. Start from a selected node (often called the root in trees).


2. Visit the node.
3. Explore one of its unvisited neighbors by moving to that node.
4. Repeat the process until you reach a node with no unvisited neighbors.
5. Backtrack to the previous node and explore other neighbors.
6. Continue until all reachable nodes are visited.

DFS in Trees

 Preorder traversal: Visit node → Visit left subtree → Visit right subtree
 Inorder traversal: Visit left subtree → Visit node → Visit right subtree
(for binary trees)
 Postorder traversal: Visit left subtree → Visit right subtree → Visit
node

DFS in Graphs
Graphs may have cycles, so you need to keep track of visited nodes to avoid
infinite loops.

DFS Implementation (Graph)

Here is a simple example of DFS using recursion in Python for a graph
represented with adjacency lists:

def dfs(graph, node, visited):
    if node not in visited:
        print(node, end=' ')
        visited.add(node)
        for neighbor in graph[node]:
            dfs(graph, neighbor, visited)

# Example graph as adjacency list
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}

visited = set()
dfs(graph, 'A', visited)

Output:

A B D E F C

3.Explain Biconnectivity in data structure?

A graph (usually an undirected graph) is said to be biconnected if it remains


connected after the removal of any single vertex. In other words, there are no
vertices whose removal disconnects the graph.

 Vertices whose removal disconnects the graph are called articulation


points or cut vertices.
 A biconnected graph has no articulation points.
 Biconnectivity is useful in network reliability, circuit design, and
understanding the robustness of connections.
Key Terms

 Articulation Point (Cut Vertex): A vertex in a graph that, if removed


along with its edges, increases the number of connected components (i.e.,
breaks the graph into disconnected parts).
 Biconnected Component (Block): A maximal subgraph that is
biconnected.
 Bridge (Cut Edge): An edge whose removal increases the number of
connected components.

Example

Consider a graph with vertices A, B, C, D, and edges:

 A-B
 B-C
 C-D
 B-D

This graph is biconnected because removing any single vertex still keeps the
remaining graph connected.

How to Check Bi-connectivity?

A common approach uses Depth-First Search (DFS) with the following ideas:

 Track discovery times of each vertex (the time when it was first visited
in DFS).
 Track the lowest discovery time reachable from the subtree rooted at a
vertex (called low value).
 If a vertex u has a child v such that low[v] >= disc[u], then u is an
articulation point.
 The root of DFS tree is an articulation point if it has more than one child.

This is the basis of Tarjan's Algorithm for articulation points and biconnected
components.

4.Explain Euler circuit in data structure?


An Euler Circuit (or Eulerian Circuit) in a graph is a closed trail that uses
every edge exactly once and starts and ends at the same vertex.

 It traverses every edge of the graph once and only once.


 The path is closed (starts and ends at the same vertex).

 Euler Path: A trail in a graph which uses every edge exactly once but
does not need to start and end at the same vertex.
 Euler Circuit: An Euler path that starts and ends on the same vertex.

Conditions for Euler Circuit

For an undirected graph:

 The graph must be connected (except for isolated vertices).


 Every vertex must have an even degree (an even number of edges
coming out).

For a directed graph:

 The graph must be strongly connected (every vertex is reachable from


every other vertex).
 Every vertex must have equal in-degree and out-degree.
 Used in problems like the "Seven Bridges of Königsberg".
 Helps in routing, DNA sequencing, and network design.

Example

Consider a graph with vertices A, B, C, D and edges:

 A-B
 B-C
 C-D
 D-A

 Each vertex has an even degree (2) and the graph is connected, so the graph
has an Euler Circuit, for example A → B → C → D → A.
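
A small Python sketch that checks the two Euler-circuit conditions (even degrees and connectivity) for an undirected graph stored as an adjacency list (names are illustrative):

from collections import deque

def has_euler_circuit(graph):
    """graph: {vertex: [neighbours]} for an undirected graph."""
    # 1. Every vertex with edges must have an even degree
    if any(len(neighbours) % 2 != 0 for neighbours in graph.values()):
        return False
    # 2. All vertices with edges must lie in one connected component (BFS check)
    active = [v for v in graph if graph[v]]
    if not active:
        return True
    visited = {active[0]}
    queue = deque([active[0]])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return all(v in visited for v in active)

# The 4-cycle A-B-C-D-A: every degree is 2 and the graph is connected
graph = {'A': ['B', 'D'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C', 'A']}
print(has_euler_circuit(graph))   # True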

5.Explain Applications of Graphs?


1.Social Networks

 Represent users as nodes and relationships (friendships, followers) as


edges.
 Analyze connectivity, find shortest paths (e.g., degrees of separation),
recommend friends, detect communities.

2. Web Page Ranking (Search Engines)

 Represent web pages as nodes and hyperlinks as directed edges.


 Algorithms like PageRank use graph structures to rank the importance of
web pages.

3. Routing and Navigation Systems

 Model cities or locations as nodes and roads or paths as edges.


 Use shortest path algorithms (Dijkstra, A*) for GPS navigation and
network routing.

4. Network Topology

 Represent computer or communication networks.


 Analyze connectivity, detect faults, optimize network design.

5. Recommendation Systems

 Model users and items (movies, products) as bipartite graphs.


 Use graph traversal and similarity measures to recommend products or
content.

6. Dependency Management

 Represent tasks or modules as nodes and dependencies as edges.


 Useful in project scheduling, build systems, package managers (detecting
cycles, order of execution).

DEPARTMENT OF COMPUTER APPLICATIONS


DATA STRUCTURE
BCA-II
UNIT-V

By,
S.THARSANA
DEPARTMENT OF BCA

PART-A
1.Linear search is:
a) Divide and conquer
b) Sequential scan
c) Hashing
d) Sorting

ANSWER:B

2.Binary search requires the array to be:


a) Unsorted
b) Sorted
c) Circular
d) Linked

ANSWER:B

3. DFS stands for:


a) Directed Find Search
b) Depth First Search
c) Data Flow Scan
d) Double Fast Search

ANSWER:B

4.BFS is used in:


a) Arrays
b) Trees/Graphs
c) Hash tables
d) Stacks

ANSWER:B

5.Which data structure is best for fast lookup?


a) Stack
b) Queue
c) Hash table
d) Linked list

ANSWER:C

6. Searching in a sorted linked list is:


a) O(1)
b) O(log n)
c) O(n)
d) O(n log n)

ANSWER:C

7.What algorithm uses pivot element?


a) Binary search
b) Quick sort
c) Linear search
d) Heap sort

ANSWER:B

8. Interpolation search works best on:


a) Random data
b) Uniformly distributed data
c) Unsorted data
d) Linked list

ANSWER:B

9.The worst-case time complexity of linear search is:


a) O(log n)
b) O(n)
c) O(1)
d) O(n²)

ANSWER:B

10.In a hash table, searching depends on:


a) Collision resolution
b) Sorting
c) Recursion
d) Tree traversal

ANSWER:A

PART-B

1.Explain Searching in data structure?


Searching is the process of finding the location of a specific element (called the
target) within a data structure, such as an array, linked list, tree, or graph.

Why is Searching Important?

 It helps retrieve data quickly.


 Essential for operations like database queries, finding values in
collections, and many algorithms.

Types of Searching Methods

1. Linear Search

 How it works: Checks each element one by one from the start until the
target is found or the list ends.
 Applicable on: Both sorted and unsorted data.
 Time complexity: O(n) — where n is the number of elements.
 Example: Searching for a name in an unsorted list.

2. Binary Search

 How it works: Works on sorted arrays or lists. Repeatedly divides the


search interval in half:
o Compare the target with the middle element.
o If target is smaller, continue search in the left half.
o If larger, continue in the right half.
 Applicable on: Sorted data only.
 Time complexity: O(log n) — much faster than linear search.
 Example: Looking up a word in a dictionary.

Searching in Other Data Structures

3. Search in Linked List

 Similar to linear search; traverse nodes one by one.


 Time complexity: O(n).

4. Search in Trees

 In Binary Search Trees (BSTs), search time is O(h) where h is the height.
 Traverses left or right child depending on value comparison.
 In balanced trees, this is efficient (O(log n)).

5. Search in Graphs
 Use algorithms like Depth First Search (DFS) or Breadth First Search
(BFS) to find nodes or paths.
 Useful for connectivity and pathfinding.

2.Explain linear search in data structure?

Linear Search is the simplest searching algorithm. It checks every element in a


list or array sequentially until the target element is found or the entire list has
been checked.

How Does Linear Search Work?

1. Start from the first element of the list.


2. Compare the current element with the target value.
3. If it matches, return the index (or position).
4. If not, move to the next element.
5. Repeat steps 2-4 until the target is found or the list ends.
6. If the target is not found, return a special value (like -1 or null) indicating
"not found."

Example

Suppose you want to find the number 7 in this array:

[3, 5, 2, 7, 9]

 Check 3 → no match
 Check 5 → no match
 Check 2 → no match
 Check 7 → match found! Return index 3

Characteristics of Linear Search

 Data requirement: Works on both sorted and unsorted lists.


 Time complexity: O(n), where n is the number of elements.
 Space complexity: O(1) (no extra space needed).
 Advantages: Simple and easy to implement.
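
A minimal Python sketch of linear search (the function name is illustrative):

def linear_search(arr, target):
    """Return the index of target, or -1 if it is not present."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

print(linear_search([3, 5, 2, 7, 9], 7))   # 3
print(linear_search([3, 5, 2, 7, 9], 4))   # -1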

3.Explain Binary Search in data structure?


Binary Search is an efficient searching algorithm used to find the position of a
target value within a sorted array or list. It repeatedly divides the search interval
in half, reducing the search space quickly.

How Does Binary Search Work?

1. Start with two pointers: low (start of the list) and high (end of the list).
2. Find the middle element: mid = (low + high) // 2.
3. Compare the middle element with the target:
o If they are equal, return the middle index.
o If the target is less than the middle element, repeat the search in the
left half (from low to mid-1).
o If the target is greater, repeat in the right half (from mid+1 to high).
4. Repeat steps 2-3 until the target is found or the search space is empty
(low > high).
5. If the target is not found, return -1 (or indicate not found).

Example

Find the number 7 in the sorted array:

[1, 3, 5, 7, 9, 11]

 low = 0, high = 5, mid = (0+5)//2 = 2 → element = 5


Target (7) > 5 → search right half: low = 3, high = 5
 low = 3, high = 5, mid = (3+5)//2 = 4 → element = 9
Target (7) < 9 → search left half: low = 3, high = 3
 low = 3, high = 3, mid = 3 → element = 7 → found at index 3

Characteristics of Binary Search

 Data requirement: The array or list must be sorted.


 Time complexity: O(log n) — much faster than linear search.
 Space complexity: O(1) for iterative implementation; O(log n) for
recursive.
 Advantages: Efficient for large datasets.
 Disadvantages: Requires sorted data.

Pseudocode for Binary Search (Iterative)

function binarySearch(array, target):
    low = 0
    high = length(array) - 1
    while low <= high:
        mid = (low + high) // 2
        if array[mid] == target:
            return mid
        else if target < array[mid]:
            high = mid - 1
        else:
            low = mid + 1
    return -1   // target not found

4.Explain sorting in data structure?

Sorting is the process of arranging data in a specific order, typically ascending


(smallest to largest) or descending (largest to smallest). It organizes data to
make searching, analysis, and processing more efficient.

 Faster Searching: Many search algorithms like binary search require


sorted data.
 Data Organization: Easier to understand and analyze data.
 Optimization: Improves performance in various algorithms and
operations.
 Applications: Used in databases, file systems, and more.

Types of Sorting

1. Comparison-based Sorting: Sort by comparing elements


Examples:
o Bubble Sort
o Selection Sort
o Insertion Sort
o Merge Sort
o Quick Sort
o Heap Sort
2. Non-comparison-based Sorting: Sort without comparing elements
Examples:
o Counting Sort
o Radix Sort
o Bucket Sort

5.Explain Bubble sort?


Bubble Sort is one of the simplest sorting algorithms. It works by repeatedly
swapping adjacent elements if they are in the wrong order. This process is
repeated until the entire list is sorted.

How Does Bubble Sort Work?

1. Start from the beginning of the array.


2. Compare the current element with the next element.
3. If the current element is greater than the next element, swap them.
4. Move to the next element and repeat steps 2-3 until the end of the array.
5. Repeat the entire process for all elements until no swaps are needed
(which means the array is sorted).

Example

Sort the array: [5, 3, 8, 4, 2]

 First pass:
Compare 5 and 3 → swap → [3, 5, 8, 4, 2]
Compare 5 and 8 → no swap
Compare 8 and 4 → swap → [3, 5, 4, 8, 2]
Compare 8 and 2 → swap → [3, 5, 4, 2, 8]
 Second pass:
Compare 3 and 5 → no swap
Compare 5 and 4 → swap → [3, 4, 5, 2, 8]
Compare 5 and 2 → swap → [3, 4, 2, 5, 8]
 Third pass:
Compare 3 and 4 → no swap
Compare 4 and 2 → swap → [3, 2, 4, 5, 8]
 Fourth pass:
Compare 3 and 2 → swap → [2, 3, 4, 5, 8]
 No more swaps → array is sorted.
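
A minimal Python sketch of bubble sort with the usual early-exit optimization when a pass makes no swaps (names are illustrative):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After each pass the largest unsorted element settles at the end
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:        # already sorted: stop early
            break
    return arr

print(bubble_sort([5, 3, 8, 4, 2]))   # [2, 3, 4, 5, 8]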

PART-C
1.Explain Selection Sort?

Selection Sort is a simple comparison-based sorting algorithm in data


structures. It works by repeatedly selecting the smallest (or largest, depending
on the order) element from the unsorted part of the list and moving it to the
beginning (or end) of the sorted part.

How Selection Sort Works (Step-by-Step)

1. Start from the first element in the list.


2. Find the smallest element in the unsorted portion of the list.
3. Swap it with the first unsorted element.
4. Move the boundary between sorted and unsorted portions one step
forward.
5. Repeat until the list is fully sorted.

Example

Let's say we want to sort the array:

[64, 25, 12, 22, 11]

Step-by-step:

 Pass 1: Find the smallest (11), swap with 64 → [11, 25, 12, 22, 64]
 Pass 2: Find the smallest in remaining (12), swap with 25 → [11, 12, 25,
22, 64]
 Pass 3: Find the smallest (22), swap with 25 → [11, 12, 22, 25, 64]
 Pass 4: Already sorted → [11, 12, 22, 25, 64]
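
A minimal Python sketch of selection sort (names are illustrative):

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted part
        min_index = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        # Swap it into position i
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))   # [11, 12, 22, 25, 64]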

2.Explain Insertion sort in data structure?

Insertion sort builds the final sorted array one item at a time. It takes one
element from the unsorted part and places it in the correct position in the sorted
part.

How It Works (Step-by-Step)

1. Start from the second element (index 1), assuming the first element is
already "sorted".
2. Compare the current element with the elements before it.
3. Shift all larger elements one position to the right.
4. Insert the current element into its correct position.
5. Repeat until the array is sorted.
Example

Sort this array:

[5, 3, 8, 6, 2]

Pass-by-pass:

 Pass 1: Insert 3 into [5] → [3, 5, 8, 6, 2]


 Pass 2: Insert 8 into [3, 5] → [3, 5, 8, 6, 2] (already in place)
 Pass 3: Insert 6 into [3, 5, 8] → [3, 5, 6, 8, 2]
 Pass 4: Insert 2 into [3, 5, 6, 8] → [2, 3, 5, 6, 8]
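
A minimal Python sketch of insertion sort (names are illustrative):

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger elements one position to the right
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key    # insert the element in its correct position
    return arr

print(insertion_sort([5, 3, 8, 6, 2]))   # [2, 3, 5, 6, 8]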

3.Explain Shell and Radix Sort?

Shell Sort is an improved version of Insertion Sort that allows the exchange
of far-apart elements. It reduces the total number of movements by first sorting
elements far apart and then gradually reducing the gap.

How It Works:

1. Choose a gap sequence (e.g., n/2, n/4, ..., 1).


2. For each gap, perform gapped insertion sort:
o Compare elements that are "gap" distance apart.
o Sort them as if using insertion sort.
3. Reduce the gap and repeat.
4. When the gap becomes 1, it's a standard insertion sort – but the list is
almost sorted by then.
Example:

Sort [19, 22, 63, 105, 2, 46]

 Start with gap = 3 → compare and sort elements 3 apart:
o (19, 105), (22, 2), (63, 46)
 Then gap = 1 → normal insertion sort on the nearly sorted list.

Radix Sort (the second algorithm named above) is a non-comparison sort for
integers or fixed-length keys. It sorts the keys digit by digit, from the least
significant digit to the most significant, using a stable sort (usually counting
sort) for each digit; its time complexity is O(d · (n + k)), where d is the
number of digits and k is the base.
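
A minimal Python sketch of Shell Sort using the common n/2, n/4, ..., 1 gap sequence (names are illustrative):

def shell_sort(arr):
    n = len(arr)
    gap = n // 2
    while gap > 0:
        # Gapped insertion sort: sort elements that are 'gap' positions apart
        for i in range(gap, n):
            key = arr[i]
            j = i
            while j >= gap and arr[j - gap] > key:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = key
        gap //= 2
    return arr

print(shell_sort([19, 22, 63, 105, 2, 46]))   # [2, 19, 22, 46, 63, 105]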

4.Explain Hash Function?


A hash function is a function used to map data of arbitrary size to a fixed
size — usually to store, search, or retrieve data efficiently in a hash table.

What is a Hash Function?

A hash function (H) takes an input (or key) and returns an integer (called a
hash code, hash value, or index) that determines where the data should be
stored in a hash table.

H(key) = index

Why Use Hash Functions?

Hash functions are central to:

 Quick data lookup (average case O(1) time)

 Efficient insertion and deletion
 Hash tables, hash maps, and sets

Example

Let’s say we have a hash table of size 10.

We use a hash function:

H(key) = key mod 10

Now insert key = 27:

H(27) = 27 mod 10 = 7

So, we store 27 at index 7 in the hash table.

Properties of a Good Hash Function

1. Deterministic: Same input → same output


2. Uniformity: Distributes keys evenly across the table
3. Fast to compute
4. Minimize collisions (different keys getting the same hash)
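
A small Python sketch of the division-method hash function H(key) = key mod table_size used above (names are illustrative):

def hash_mod(key, table_size):
    """Division-method hash: H(key) = key mod table_size."""
    return key % table_size

table_size = 10
table = [None] * table_size

for key in [27, 43, 19]:
    index = hash_mod(key, table_size)
    table[index] = key

print(hash_mod(27, table_size))   # 7
print(table)                      # [None, None, None, 43, None, None, None, 27, None, 19]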

5.Explain Open Addressing?


Open addressing is a collision resolution technique used in hash tables. When
two keys hash to the same index (a collision), open addressing finds another
open slot within the hash table to store the new key, rather than using a separate
data structure (like in chaining).

In open addressing, all elements are stored directly in the hash table array
itself. If the intended spot is taken, a probing sequence is used to find the next
available slot.

How It Works

1. Hash the key to find the initial index.


2. If the slot is empty, insert the key there.
3. If the slot is occupied, use a probing method to find the next available
slot.

Probing Techniques

1. Linear Probing
o Check the next slot:
index = (hash + i) % table_size
o Pros: Simple to implement
o Cons: Prone to clustering (long runs of occupied slots)
2. Quadratic Probing
o Check slots with a quadratic function:
index = (hash + c1*i + c2*i^2) % table_size
o Reduces clustering compared to linear
3. Double Hashing
o Use a second hash function:
index = (hash1 + i * hash2(key)) % table_size
o Very effective at avoiding clustering
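
A minimal Python sketch of open addressing with linear probing, without deletion or resizing (class and method names are illustrative):

class LinearProbingHashTable:
    """Open-addressing hash table using linear probing."""
    def __init__(self, size=10):
        self.size = size
        self.slots = [None] * size

    def _hash(self, key):
        return key % self.size

    def insert(self, key):
        index = self._hash(key)
        for i in range(self.size):
            probe = (index + i) % self.size      # linear probing: try the next slot
            if self.slots[probe] is None or self.slots[probe] == key:
                self.slots[probe] = key
                return probe
        raise OverflowError("Hash table is full")

    def search(self, key):
        index = self._hash(key)
        for i in range(self.size):
            probe = (index + i) % self.size
            if self.slots[probe] is None:
                return -1                        # empty slot: key not present
            if self.slots[probe] == key:
                return probe
        return -1

table = LinearProbingHashTable()
print(table.insert(27))   # 7
print(table.insert(37))   # 8  (collides with 27, placed in the next slot)
print(table.search(37))   # 8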
