DSA Book & Playlists (Debkanta)

The document outlines the syllabus for Data Structures and Algorithms (DSA) according to Makaut 2019, covering basic terminologies, data organizations, and types of data structures. It explains operations such as insertion, deletion, and searching, as well as algorithm analysis, including time and space complexity. Additionally, it introduces abstract data types and their significance in programming and algorithm design.


MAKAUT

DSA BOOK AND PLAYLISTS

According to Makaut 2019 latest syllabus

DEBKANTA MAITY [BBIT]


[ ADMIN OF MAKAUT UNOFFICIAL COMMUNITY ]

debkanta_maity_ 8372021415
Introduction: Basic Terminologies: Elementary Data Organizations, Data Structure
Operations: insertion, deletion, traversal etc.; Analysis of an Algorithm, Asymptotic
Notations, Time-Space trade off. Searching: Linear Search and Binary Search
Techniques and their complexity analysis.

Introduction: Basic Terminologies: Elementary Data Organizations
Elementary Data Organizations
Elementary Data Organizations describe the basic structure and layout of data in memory or
storage.

🔹 1. Linear Data Organization

• Definition: Data elements are arranged in a linear sequence.

• Examples:

◦ Array: Index-based fixed structure.

◦ Linked List: Dynamically connected nodes.

◦ Stack: Follows Last In First Out (LIFO).

◦ Queue: Follows First In First Out (FIFO).

🔹 2. Non-Linear Data Organization

• Definition: Data elements are connected in a hierarchical or networked structure.

• Examples:

◦ Tree: Parent-child relationship (e.g., Binary Tree).

◦ Graph: Nodes connected via edges (e.g., Social networks).


🔹 3. Static Data Organization

• Definition: Memory size is fixed at compile time.

• Examples:

◦ Arrays with predefined size.

🔹 4. Dynamic Data Organization

• Definition: Memory is allocated and freed during runtime.

• Examples:

◦ Linked Lists, Dynamic Queues, Trees.

Various Data Types

🔹 1. Primitive Data Types

These are the basic data types provided by most programming languages.

• int – stores integers (e.g., 1, -100, 2024).

• float / double – stores floating-point numbers (e.g., 3.14, -0.001).

• char – stores single characters (e.g., 'A', 'z').

• bool – stores Boolean values (true or false).

• void – indicates no value or return type in functions.

🔹 2. Derived Data Types

These are formed using primitive types.

• Array – collection of elements of the same type (e.g., int arr[5]).

• Pointer – stores the address of another variable (e.g., int *ptr).

• Function – group of statements that perform a task and may return a value.
🔹 3. Abstract Data Types (ADT)

These define what operations can be performed, not how they're implemented.

• List – ordered collection of elements (e.g., array list, linked list).

• Stack – LIFO structure (Last-In, First-Out).

• Queue – FIFO structure (First-In, First-Out).

• Deque – Double-ended queue (insert/delete from both ends).

• Tree – hierarchical data structure (e.g., binary tree, AVL tree).

• Graph – collection of nodes (vertices) connected by edges.

• Hash Table – key-value pair storage with fast lookup.

🔹 4. User-defined Data Types

The programmer defines these using built-in types.

• Structure (struct) – groups different types of variables (e.g., struct Student).

• Union – similar to struct but uses shared memory for all members.

• Class – blueprint for creating objects (used in object-oriented programming).

• Enumeration (enum) – set of named integer constants.

🔹 5. Special Data Types (language-dependent)

• String – sequence of characters, often treated as a data type in many languages.

• Tuple (in Python) – immutable ordered collection of elements.

• Set – collection of unique elements.


Write a short note on abstract data type

1. Definition:
An Abstract Data Type (ADT) is a theoretical concept in which only the behavior
(operations) is defined, not the implementation details.

2. Purpose:
ADTs are used to organize and manage data in a structured way, focusing on what
operations can be done rather than how they are done.

3. Abstraction:
ADTs provide data abstraction by hiding the implementation details from the user.

4. Operations:
Each ADT supports a specific set of operations, such as insertion, deletion, search, etc.

5. Implementation:
ADTs can be implemented using arrays, linked lists, pointers, or other low-level data
structures.

6. Examples of ADTs:
◦ List – ordered collection of elements

◦ Stack – LIFO structure (push, pop)

◦ Queue – FIFO structure (enqueue, dequeue)

◦ Deque – Double-ended queue

◦ Tree – Hierarchical structure (binary tree, BST)

◦ Graph – Collection of nodes and edges

◦ Set – Collection of unique elements

◦ Map – Key-value pair collection (also called dictionary or hash map)

7. Advantages:
◦ ✅ Promotes code modularity and reuse

◦ ✅ Supports encapsulation and abstraction

◦ ✅ Makes code easier to maintain and debug

8. Use in DSA:
ADTs are fundamental in algorithm design and form the basis of most data structures used
in computer science.
Data Structure
Definition:
A Data Structure is a way of organizing and storing data in a computer so that it can be
accessed and modified efficiently.

Purpose:
Data structures help in efficient data processing, easy access, and management of large
amounts of data.

Types of Data Structures:

🔹 Linear Data Structures (elements arranged in a sequence)

• Array

• Linked List

• Stack

• Queue

🔹 Non-Linear Data Structures (elements arranged hierarchically or in a graph)

• Tree (e.g., Binary Tree, AVL Tree)

• Graph

Operations on Data Structures:

• Insertion – Add new elements

• Deletion – Remove elements

• Traversal – Visit all elements

• Searching – Find an element

• Sorting – Arrange elements in order

• Updation – Modify existing data

Importance in DSA:

• Forms the foundation of algorithms.

• Helps in optimal use of memory and time.

• Essential for solving complex programming problems.

Real-Life Examples:

• Array – Student marks list

• Stack – Undo feature in editor

• Queue – Ticket booking system

• Tree – File system hierarchy

• Graph – Social network connections

Choice of Data Structure Depends On:

• Type of data

• Required operations (search, insert, delete)

• Memory and speed constraints

Benefits:

• ✅ Enhances program efficiency

• ✅ Improves data management

• ✅ Supports algorithm design


Data Structure Operations

🔸 1. Traversal

• Definition: Visiting each element of the data structure exactly once to perform some
operation (like display, search, etc.).

• Example: Printing all elements in an array or traversing a linked list or tree.

• Use Case: Displaying all elements, debugging, searching.

🔸 2. Insertion

• Definition: Adding a new element to the data structure at a specific location.

• Example:

◦ Inserting at the end of an array.

◦ Inserting a node in a binary search tree (BST).

• Use Case: Updating data, adding new records.

🔸 3. Deletion

• Definition: Removing an existing element from the data structure.

• Example:

◦ Removing an element from a queue.

◦ Deleting a node from a linked list or BST.

• Use Case: Freeing memory, removing obsolete data.

🔸 4. Searching

• Definition: Finding the location of an element in the data structure.

• Types:

◦ Linear Search (O(n))

◦ Binary Search (O(log n), for sorted data)

• Use Case: Looking up user data in a database.


🔸 5. Sorting

• Definition: Arranging the data elements in a specific order (ascending/descending).

• Common Algorithms: Bubble sort, Quick sort, Merge sort, Insertion sort.

• Use Case: Improving search efficiency, data presentation.

🔸 6. Updating (Modification)

• Definition: Changing the value of an existing element.

• Example: Modifying the value at a particular index in an array.

• Use Case: Correcting or updating stored data.

🔸 7. Merging

• Definition: Combining two similar data structures into one.

• Example: Merging two sorted arrays or linked lists.

• Use Case: Data consolidation or combining datasets.

🔸 8. Access (Retrieve)

• Definition: Accessing elements by their index or reference.

• Example: Retrieving the 5th element in an array.

• Use Case: Fast data lookup.


Analysis of an Algorithm
What is an algorithm? Characteristics of an algorithm?

🔹 What is an Algorithm?

An algorithm is a finite set of well-defined instructions or steps designed to perform a specific
task or solve a particular problem.

• It takes input, processes it, and produces output.

• It is independent of programming languages.

• Example: A recipe for cooking is an algorithm.

🔸 Characteristics of an Algorithm :

1. Input
◦ An algorithm should have zero or more inputs.

◦ These are values supplied externally before or during execution.

◦ 👉 Example: Numbers to be sorted.

2. Output
◦ An algorithm must produce at least one output.

◦ Output is the result of the input after processing.

◦ 👉 Example: Sorted list.

3. Finiteness
◦ The algorithm must terminate after a finite number of steps.

◦ It should not run indefinitely.

◦ 👉 Ensures the solution completes.

4. Definiteness (Clarity)
◦ Each step should be clearly and unambiguously defined.

◦ No confusion or multiple meanings.

◦ 👉 Example: “Add 1 to x” is clear.


5. Effectiveness
◦ All operations must be basic enough to be carried out by a person or machine easily.

◦ The steps should be feasible and practical.

◦ 👉 Use simple calculations and instructions.

6. Generality
◦ The algorithm should be applicable to a class of problems, not just one instance.

◦ 👉 Example: An algorithm to sort any list of numbers.

What are the measures of performance of an algorithm?

🔹 Measures of Performance of an Algorithm

To evaluate how efficient and effective an algorithm is, we use performance measures. These
help determine how well an algorithm performs in terms of time and space.

🔸 1. Time Complexity

• Definition: Time complexity refers to the amount of time an algorithm takes to complete as
a function of the input size (n).

• Measured As: Number of basic operations or steps.

• Common Notations:

◦ Worst case: upper bound, written with Big O, e.g., O(n)

◦ Best case: lower bound, written with Ω, e.g., Ω(n)

◦ Average case: tight bound, written with Θ, e.g., Θ(n)

• Example:

◦ Linear search → O(n)

◦ Binary search → O(log n)

◦ Bubble sort → O(n²)


🔸 2. Space Complexity

• Definition: Space complexity is the amount of memory an algorithm uses during
execution.

• Includes:

◦ Input space

◦ Auxiliary space (temporary variables, stack, etc.)

• Measured in: Bytes or number of memory locations.

• Example:

◦ An algorithm using one array and a few variables → O(n)

🔸 3. Correctness

• Definition: The algorithm should provide the correct output for all valid inputs.

• Goal: Must work for all edge cases, not just typical inputs.

• Example: Sorting algorithm must always return a fully sorted list.

🔸 4. Robustness

• Definition: The algorithm should behave correctly even for unexpected or incorrect
inputs.

• Example: Handling division by zero, null values, or invalid user input.

🔸 5. Scalability

• Definition: Describes how well the algorithm handles increasing input size.

• Good Scalability: If performance degrades slowly with input size.

• Example: O(n log n) algorithm scales better than O(n²).

🔸 6. Generality

• Definition: The algorithm should work for a wide range of inputs or problems, not just
one specific case.

• Example: A sorting algorithm should work for integers, floats, or strings.


Asymptotic notation

🔹 Asymptotic Notation – Explained

Asymptotic notation is a mathematical tool used to describe the efficiency (performance) of
an algorithm in terms of time or space, as the input size n becomes very large.

🔸 Why Use Asymptotic Notation?

• To analyze algorithm performance independently of hardware/software.

• To focus on the growth rate rather than exact execution time.

• Useful for worst-case, best-case, and average-case analysis.

🔸 Types of Asymptotic Notation:

1. Big O Notation – O(f(n))

• Represents: Worst-case time complexity.

• Meaning: Maximum time the algorithm will take for any input of size n.

• Used When: We want an upper bound.

• Example:

◦ Linear Search: O(n) → time increases linearly with input.

◦ Bubble Sort: O(n²) → nested loops cause quadratic time.

2. Omega Notation – Ω(f(n))

• Represents: Best-case time complexity.

• Meaning: Minimum time the algorithm takes under ideal conditions.

• Used When: We want a lower bound.

• Example:

◦ Linear Search (Best case): Ω(1) → item is found at the first position.


3. Theta Notation – Θ(f(n))

• Represents: Average-case or exact bound.

• Meaning: The algorithm takes roughly the same time in most cases.

• Used When: Best and worst cases grow at the same rate.

• Example:

◦ Merge Sort: Θ(n log n) → consistent performance.

Big O Notation – O(f(n))


Definition:

• Big O notation is a mathematical representation used to describe the upper bound or
worst-case performance of an algorithm in terms of time or space required.

Mathematical Meaning:

• Let f(n) and g(n) be functions.

• We write:
f(n) = O(g(n)),
if there are constants c > 0 and n₀ such that for all n ≥ n₀,
f(n) ≤ c × g(n)

• It means f(n) grows no faster than g(n) after a certain point.
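• Worked example: if f(n) = 3n + 2 and g(n) = n, then taking c = 4 and n₀ = 2 gives 3n + 2 ≤ 4n for every n ≥ 2, so 3n + 2 = O(n).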

Purpose:

• To analyze and compare algorithm ef ciency independent of hardware.

• Helps estimate how an algorithm will scale with input size.

Example:

• Consider a linear search in an array of size n.

• It may take up to n steps in the worst case →
Time Complexity: O(n)

Real-Life Analogy:

• Imagine a person searching for a name in a phone book:

◦ If the book is unsorted, they may go page by page → O(n)

◦ If it’s sorted alphabetically, they can jump to the middle (like binary search) →
O(log n)
Importance:

• Essential for choosing the best algorithm for a task.

• Ensures software runs efficiently even on large datasets.

• Used in interviews, competitive coding, and system design.

Common Time Complexities (Examples):

• O(1) – constant (e.g., accessing an array element by index)

• O(log n) – logarithmic (e.g., binary search)

• O(n) – linear (e.g., linear search)

• O(n log n) – linearithmic (e.g., merge sort)

• O(n²) – quadratic (e.g., bubble sort)
Omega Notation – Ω(f(n))
Definition:

• Omega (Ω) notation describes the best-case time or space complexity of an algorithm.

• It represents the lower bound, i.e., the minimum time an algorithm will take, regardless of
the input.

Mathematical Meaning:

• Let f(n) and g(n) be functions.

• We say:
f(n) = Ω(g(n)),
if there are constants c > 0 and n₀, such that for all n ≥ n₀,
f(n) ≥ c × g(n)

• This means f(n) grows at least as fast as g(n) for large n.
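• Worked example: for f(n) = 3n + 2, taking c = 3 and n₀ = 1 gives 3n + 2 ≥ 3n for every n ≥ 1, so 3n + 2 = Ω(n).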

Purpose:

• To provide a guaranteed minimum performance level.

• Helps in understanding the best-case efficiency of an algorithm.

Example:

• Consider linear search in an array:

◦ Best case: the element is at the first position → only 1 comparison.

◦ So, Ω(1) time in the best case.

Real-Life Analogy:

• Suppose you're searching for your roll number in a stack of papers:

◦ Best case: it's on top (1 second) → Ω(1)

◦ Worst case: it’s at the bottom → O(n)

Importance:

• Omega helps in setting realistic expectations.

• Especially useful when analyzing optimistic performance scenarios.

• Important in theoretical analysis and algorithm validation.


Theta Notation – Θ(f(n))

Definition:

• Theta (Θ) notation gives the tight bound of an algorithm’s performance.

• It represents both the best-case and worst-case bounds when they are the same — i.e.,
exact time/space complexity.

Mathematical Meaning:

• Let f(n) and g(n) be functions.

• We say:
f(n) = Θ(g(n)) if there exist positive constants c₁, c₂, and n₀ such that for all n ≥ n₀:
c₁ × g(n) ≤ f(n) ≤ c₂ × g(n)

• This means f(n) grows exactly at the rate of g(n), for large inputs.
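• Worked example: for f(n) = 3n + 2, with c₁ = 3, c₂ = 4, and n₀ = 2, we have 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, so 3n + 2 = Θ(n).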

Purpose:

• To describe the exact time or space complexity of an algorithm.

• Helps in determining the most accurate performance behavior.

Example:

• Consider merge sort:

◦ In all cases (best, average, worst), it takes n log n time.

◦ So, its complexity is Θ(n log n)

Real-Life Analogy:

• Suppose a delivery person always takes 10 minutes per delivery (regardless of address
or time):

◦ Exact time = Θ(10 min) → constant, predictable

• Similarly, an algorithm that always takes the same number of steps for size n has Θ-
complexity.

Importance:

• Used to give a complete picture of algorithm performance.

• Especially important when best-case and worst-case match.

• Provides better understanding for algorithm optimization and comparison.


Difference between linear and non-linear data structures

• Arrangement: Linear structures store elements in a sequence; non-linear structures store them hierarchically or as a network.

• Traversal: A linear structure can be traversed in a single pass; a non-linear structure may need multiple paths (e.g., tree traversals).

• Examples: Linear – array, linked list, stack, queue; Non-linear – tree, graph.
Time-Space Trade-off

1. Definition:
◦ Time-Space Trade-off is a computing principle where we exchange memory
(space) for faster execution (time) or vice versa.

2. Concept:
◦ To make a program run faster, we may use more memory (e.g., by storing
precomputed results).

◦ To save memory, we may allow the program to run slower (e.g., by recalculating
values when needed).

3. Purpose:
◦ Helps optimize algorithms based on available resources (e.g., limited memory or
need for speed).

◦ Important in embedded systems, mobile apps, and high-performance
computing.

4. Example:
◦ Memoization in dynamic programming:

▪ Saves previous results to avoid recomputation → uses more space to save
time.

◦ Data compression:

▪ Uses more time to compress/decompress data → saves space.

5. Real-Life Analogy:
◦ Imagine using a shortcut road (needs fuel = space) to save time.

◦ Or walking to save fuel (space) but take more time.

6. Importance:
◦ Essential in algorithm design, system optimization, and resource-limited
environments.

◦ Helps balance between execution speed and memory usage.

7. Applications:
◦ Caching, look-up tables, compression algorithms, embedded systems, and game
development.
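The memoization idea described above can be shown with a small C sketch (an illustration, not from the original text): storing previous Fibonacci results spends O(n) extra space to avoid exponential recomputation.

```c
#include <assert.h>

/* Time-space trade-off sketch: memo[] is the extra space that buys
   faster execution. Values default to 0, so 0 means "not computed yet". */
long long memo[64];

long long fib(int n) {
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];   /* reuse a stored result */
    memo[n] = fib(n - 1) + fib(n - 2);  /* compute once, then store */
    return memo[n];
}
```

Without the memo table the same recursion takes exponential time; with it, each value is computed once, so fib(n) runs in O(n) time at the cost of O(n) space.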
Searching: Linear Search and Binary Search Technique
sand their complexity analysis.
Linear search
✅ 1. Definition:

• Linear search is a simple searching algorithm in which each element of a list or array is
checked one by one until the desired element is found or the list ends.

✅ 2. Steps:

1. Start from the first element of the array.

2. Compare each element with the target (search key).

3. If a match is found, return the index.

4. If the end of the list is reached and no match is found, return "Not Found".
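The steps above can be sketched in C (a minimal sketch; the function name linear_search is illustrative):

```c
#include <assert.h>

/* Scan the array left to right; return the index of the first match,
   or -1 to signal "Not Found". */
int linear_search(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key)
            return i;   /* match found: return its index */
    }
    return -1;          /* end of list reached without a match */
}
```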

✅ 3. Uses / Applications:

• Works with unsorted arrays or lists.

• Suitable for small datasets.

• Used in searching records in small files, address books, etc.

✅ 4. Time Complexities:

• Best case: Ω(1) – target at the first position

• Average case: Θ(n)

• Worst case: O(n) – target at the end or absent

✅ 5. Space Complexity:

• O(1) → Constant space (no extra space needed apart from a few variables).
✅ 6. Advantages:

• ✅ Simple to implement and understand.

• ✅ Works on both sorted and unsorted data.

• ✅ Does not require extra memory.

✅ 7. Disadvantages:

• ❌ Inefficient for large datasets.

• ❌ Time-consuming as it checks each element.

• ❌ Not suitable for performance-critical applications.

Binary search

✅ 1. Definition:

• Binary Search is an efficient searching algorithm used on sorted arrays or lists, where
the search space is repeatedly divided in half until the target element is found.

✅ 2. Steps:

1. Start with two pointers: low = 0, high = n - 1.


2. Find the middle index: mid = (low + high) / 2.
3. Compare the mid element with the target:
◦ If equal → Element found.

◦ If target < mid → Search in left half.

◦ If target > mid → Search in right half.

4. Repeat steps 2–3 until the element is found or low > high.
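An iterative C sketch of these steps (the name binary_search is illustrative; the array must already be sorted in ascending order):

```c
#include <assert.h>

/* Repeatedly halve the search space until the target is found
   or low passes high. Returns the index, or -1 if absent. */
int binary_search(const int arr[], int n, int target) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* same as (low+high)/2, but avoids overflow */
        if (arr[mid] == target)
            return mid;                    /* element found */
        if (target < arr[mid])
            high = mid - 1;                /* search the left half */
        else
            low = mid + 1;                 /* search the right half */
    }
    return -1;                             /* low > high: not present */
}
```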
✅ 3. Uses / Applications:

• Used in searching in sorted arrays or databases.

• Frequently used in competitive programming, search engines, and library functions (e.g.,
C++ STL’s binary_search()).

• Applied in problem-solving algorithms like finding square roots, search ranges, etc.

✅ 4. Time Complexities:

• Best case: Ω(1) – the middle element is the target

• Average case: Θ(log n)

• Worst case: O(log n)

✅ 5. Space Complexity:

• Iterative Binary Search: O(1) – constant space

• Recursive Binary Search: O(log n) – due to function call stack

✅ 6. Advantages:

• ✅ Much faster than linear search for large datasets.

• ✅ Requires fewer comparisons (logarithmic growth).

• ✅ Efficient for repeated searches in sorted data.

✅ 7. Disadvantages:

• ❌ Only works on sorted data.

• ❌ Slightly more complex to implement than linear search.

• ❌ For frequent insertions/deletions, maintaining sorted order is expensive.


Stacks and Queues: ADT Stack and its operations: Algorithms and their complexity
analysis, Applications of Stacks: Expression Conversion and evaluation –
corresponding algorithms and complexity analysis. ADT queue, Types of Queue:
Simple Queue, Circular Queue, Priority Queue; Operations on each types of
Queues: Algorithms and their analysis.

ADT Stack and its operations


🔹 What is ADT Stack?

• ADT (Abstract Data Type) defines a logical model of a data structure without specifying its
implementation.

• A stack is a linear ADT that operates in a LIFO (Last-In-First-Out) manner.

• Only one end (top) is used for both insertion and deletion.

🔹 Real-life Example:

• A stack of plates: You add to the top, and remove from the top.

🔹 Key Properties:

• Access is limited to the top of the stack.

• Operations occur at one end only.

• Can be implemented using arrays (static) or linked lists (dynamic).

Basic Operations of Stack ADT

• push(x) – insert element x at the top

• pop() – remove and return the top element

• peek() / top() – read the top element without removing it

• isEmpty() – check whether the stack is empty

• isFull() – check whether the stack is full (array implementation only)
Applications of Stack ADT

• Function call management (system call stack)

• Expression evaluation (postfix/infix conversion)

• Undo/Redo functionality in editors

• Backtracking algorithms (e.g., maze solving)

• Syntax parsing in compilers

• Browser history navigation

🔹 Implementation Notes:

• Using Array:

◦ Fixed size

◦ Easy to implement

◦ May lead to overflow

• Using Linked List:

◦ No size limit

◦ More memory overhead (due to pointers)


Expression Conversion and evaluation – corresponding algorithms and
complexity analysis.

(Practice 3 videos)
ADT queue
🔹 Definition:
• A Queue is a linear Abstract Data Type (ADT) that follows the FIFO (First In, First Out)
principle.

• Elements are inserted at the rear and removed from the front.

• Commonly used in task scheduling and buffering operations.

🔹 Basic Operations:

• enqueue(x) – insert x at the rear

• dequeue() – remove the element at the front

• peek() / front() – read the front element without removing it

• isEmpty() – check whether the queue is empty

🔹 Queue Types:
• Linear Queue

• Circular Queue

• Deque (Double Ended Queue)

• Priority Queue

🔹 Implementations:
• Array: Simple, but fixed size.

• Linked List: Dynamic size, no overflow unless memory is full.


Applications of Queue
1. CPU Scheduling – Round-robin and multi-level queues.

2. I/O Buffering – Keyboard, disk, and printer queues.

3. Breadth-First Search (BFS) in Graphs.

4. Handling Requests – Like print spooling, call centers, and service counters.

5. Data packet management – In routers and switches (network queues).

6. Operating system tasks – Managing jobs, interrupts, etc.


Types of Queue: Simple Queue, Circular Queue, Priority Queue;
Operations on each types of Queues: Algorithms and their analysis.
🔹 1. Simple Queue (Linear Queue)

🔸 Definition:

A standard FIFO queue where insertion happens at the rear, and deletion at the front.

🔸 Operations & Algorithms:

• Enqueue(x):
◦ rear = rear + 1; queue[rear] = x;
• Dequeue():
◦ x = queue[front]; front = front + 1;
🔸 Drawbacks:

• Space is wasted as front moves forward and is not reused.

🔸 Time Complexity:

• Enqueue and Dequeue: O(1)

🔹 2. Circular Queue

🔸 Definition:

A variation of a simple queue where the last position is connected to the first, forming a circle.

🔸 Operations & Algorithms:

• Enqueue(x):
◦ rear = (rear + 1) % size; queue[rear] = x;
• Dequeue():
◦ x = queue[front]; front = (front + 1) % size;
🔸 Advantages:

• Reuses space efficiently.

• Solves the wastage problem of a linear queue.

🔸 Time Complexity:

All operations: O(1)
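The enqueue/dequeue formulas above can be sketched as a small array-based circular queue in C (SIZE and the variable names are illustrative; count tracks occupancy to tell "full" from "empty"):

```c
#include <assert.h>

#define SIZE 4

int queue[SIZE];
int front = 0, rear = -1, count = 0;

/* Insert at the rear; the modulo wraps rear back to slot 0. */
int enqueue(int x) {
    if (count == SIZE) return 0;      /* queue full */
    rear = (rear + 1) % SIZE;
    queue[rear] = x;
    count++;
    return 1;
}

/* Remove from the front; the modulo wraps front the same way. */
int dequeue(int *x) {
    if (count == 0) return 0;         /* queue empty */
    *x = queue[front];
    front = (front + 1) % SIZE;
    count--;
    return 1;
}
```

Note how, after one dequeue, an enqueue succeeds again by reusing the freed slot at the start of the array, which is exactly the space reuse a linear queue cannot do.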

🔹 3. Priority Queue

🔸 Definition:

Each element has a priority. The element with higher priority is dequeued first. If priorities
are equal, FIFO order is followed.

🔸 Types:

• Min Priority Queue – smallest priority dequeued first.

• Max Priority Queue – highest priority dequeued first.

🔸 Operations & Algorithms:

• Insert(x, priority):

◦ Add based on priority (may involve sorting or heap insert).

• Delete():

◦ Remove element with highest (or lowest) priority.

🔸 Implementations:

• Using Arrays → O(n) insertion or deletion

• Using Heaps (Binary Heap) → O(log n)

🔸 Time Complexity:

• Array implementation: O(n) insertion or deletion

• Binary heap implementation: O(log n) insert and delete
🔹 4. Double-Ended Queue (Deque)

🔸 Definition:

Allows insertion and deletion from both ends (front and rear).

🔸 Operations:

• insertFront(x)
• insertRear(x)
• deleteFront()
• deleteRear()
🔸 Time Complexity:

All operations in O(1) if implemented with a doubly linked list or circular array.
Why circular queue is better than simple queue ?

🔹 1. Efficient Use of Memory

• In a simple queue, once elements are dequeued from the front, those spaces cannot be
reused.

• In a circular queue, freed spaces are reused by wrapping around the array — no
memory wastage.

🔹 2. Avoids the "False Overflow" Problem

• In a simple queue, even if there is space at the beginning, rear can't insert if it's at the
end — leads to overflow.

• In a circular queue, insertion wraps around to the beginning if space is available — avoids
this problem.

🔹 3. Better for Fixed Size Buffers

• Circular queue is ideal for situations where memory is fixed, like in:

◦ CPU scheduling (round-robin)

◦ Buffers in I/O devices

◦ Network traffic management

🔹 4. Supports Continuous Enqueue and Dequeue

• Circular queue allows continuous operations without needing to shift elements or resize
the array.

🔹 5. Constant Time Operations

• Both insertion and deletion take O(1) time in circular queue, just like a simple queue —
but with improved space utilization.
Difference between stack and queue

• Order: A stack follows LIFO (Last In, First Out); a queue follows FIFO (First In, First Out).

• Ends used: A stack inserts and deletes at one end (the top); a queue inserts at the rear and deletes from the front.

• Examples: Stack – undo feature, function call stack; Queue – ticket booking, CPU scheduling.
Algorithm for push and pop operation in stack
🔹 1. PUSH Operation (Insert an element)
Algorithm (using array):

Explanation:

• Check if the stack is full.


• Increment the top index.
• Insert the value at stack[top].

🔹 2. POP Operation (Remove the top element)


Algorithm (using array):

Explanation:

• Check if the stack is empty.


• Store the top element.
• Decrease the top index.
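The push and pop steps explained above can be sketched together in C (array-based; MAX and the return conventions are illustrative):

```c
#include <assert.h>

#define MAX 100

int stack[MAX];
int top = -1;           /* -1 means the stack is empty */

/* PUSH: check for overflow, increment top, insert at stack[top]. */
int push(int x) {
    if (top == MAX - 1) return 0;   /* stack full: overflow */
    stack[++top] = x;
    return 1;
}

/* POP: check for underflow, read stack[top], decrement top. */
int pop(int *x) {
    if (top == -1) return 0;        /* stack empty: underflow */
    *x = stack[top--];
    return 1;
}
```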
Linked Lists: Singly linked lists: Representation in memory, Algorithms of several operations:
Traversing, Searching, Insertion into, Deletion from linked list; Linked representation of Stack
and Queue, Header nodes, Doubly linked list: operations on it and algorithmic analysis;
Circular Linked Lists: all operations their algorithms and the complexity analysis.

Linked Lists: Singly linked lists: Representation in memory


🔹 Definition:

A Singly Linked List (SLL) is a linear data structure where each element, called a node,
contains two parts:

1. Data – stores the value.

2. Next Pointer – stores the address of the next node in the list.

The list starts with a special node called the head, and the last node points to NULL (indicating the
end of the list).

🔹 Main Operations:

1. Insertion
◦ At beginning, end, or any position

◦ Efficient compared to arrays (no shifting required)

2. Deletion
◦ From beginning, end, or specific position

◦ Just update the link to bypass the deleted node

3. Traversal
◦ Start from head, follow .next until NULL

4. Search
◦ Linearly check each node's data

🔹 Advantages:

• Dynamic memory allocation (grows and shrinks at runtime)

• Efficient insertions/deletions at beginning or middle

• No memory wastage like static arrays


🔹 Disadvantages:

• No random access (must traverse from head)

• Extra memory for pointers

• Slower access time compared to arrays

🔹 Applications:

• Used in stacks, queues, graphs

• Efficient memory usage in dynamic data handling

• Symbol tables, hashing, directory structure

🔹 Node Structure in C:
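The node structure announced above (and used again later in this document) can be declared as:

```c
#include <assert.h>

/* Each node of a singly linked list: a value plus a pointer to the
   next node; the last node's next is NULL. */
struct Node {
    int data;
    struct Node* next;
};
```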

Memory Representation of a Linked List

Unlike arrays, where elements are stored in contiguous memory locations, linked
lists allocate memory dynamically for each node. This means that each node
can be located anywhere in the memory and they are connected via pointers.

Before looking at the memory representation of a singly linked list, let's first create a
linked list having four nodes. Each node has two parts: one part holds a value
(data) and the other part holds the address of the next node. The first node is
called the head node and it points to the first element in the list. The last node in
the list points to NULL, which means there are no more nodes after it.
Algorithms of several operations: Traversing, Searching,
Insertion into, Deletion from linked list

✅ 1. Traversing a Linked List

Algorithm (Step-wise):

1. Start from the head node.


2. While the current node is not NULL:
◦ Print or process current->data.

◦ Move to the next node: current = current->next.

✅ 2. Searching in a Linked List

Algorithm (Step-wise):

1. Start from the head node.


2. While the current node is not NULL:
◦ If current->data == target, return success.

◦ Move to current->next.

3. If end is reached, return not found.
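The traversal and searching algorithms above can be sketched in C (a minimal sketch; the helper names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct Node { int data; struct Node* next; };

/* Traversal: visit each node exactly once, from head until NULL. */
int count_nodes(struct Node* head) {
    int n = 0;
    for (struct Node* cur = head; cur != NULL; cur = cur->next)
        n++;                                  /* "process" the node */
    return n;
}

/* Searching: compare each node's data with the target. */
int search(struct Node* head, int target) {
    for (struct Node* cur = head; cur != NULL; cur = cur->next)
        if (cur->data == target) return 1;    /* success */
    return 0;                                 /* end reached: not found */
}
```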

✅ 3. Insertion into a Linked List

🔹 a) At the Beginning

Steps:

1. Create a new node.

2. Set newNode->data = value.

3. Set newNode->next = head.

4. Update head = newNode.


🔹 b) At the End

Steps:

1. Create a new node with data = value and next = NULL.

2. If head == NULL, set head = newNode.

3. Else, traverse to the last node.

4. Set last->next = newNode.

🔹 c) At a Given Position (pos)

Steps:

1. Create new node.

2. Traverse to node at pos - 1.

3. Set newNode->next = current->next.

4. Set current->next = newNode.
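The insertion-at-the-beginning case (steps a above) can be sketched in C; insert_front is an illustrative name, and it returns the new head:

```c
#include <assert.h>
#include <stdlib.h>

struct Node { int data; struct Node* next; };

/* Create a node, link it in front of the old head, return it as
   the new head. */
struct Node* insert_front(struct Node* head, int value) {
    struct Node* newNode = malloc(sizeof *newNode);
    newNode->data = value;
    newNode->next = head;   /* new node points at the old first node */
    return newNode;         /* caller updates head to this node */
}
```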

✅ 4. Deletion from a Linked List

🔹 a) From the Beginning

Steps:

1. If head == NULL, list is empty.

2. Save temp = head.

3. Update head = head->next.

4. Free the old head (free(temp)).

🔹 b) From the End

Steps:

1. If head == NULL, list is empty.

2. If only one node: free(head), head = NULL.

3. Else, traverse to second-last node.

4. Set secondLast->next = NULL, free last node.


🔹 c) From a Given Position (pos)

Steps:

1. Traverse to node at pos - 1.

2. Save temp = current->next.

3. Set current->next = temp->next.

4. Free temp.
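The deletion-from-the-beginning case (steps a above) can be sketched in C; delete_front is an illustrative name returning the new head:

```c
#include <assert.h>
#include <stdlib.h>

struct Node { int data; struct Node* next; };

/* Save the old head, advance head to the next node, free the old one. */
struct Node* delete_front(struct Node* head) {
    if (head == NULL) return NULL;   /* empty list: nothing to delete */
    struct Node* temp = head;
    head = head->next;
    free(temp);
    return head;
}
```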
Linked representation of Stack and Queue, Header nodes

Implementation of Stack using Linked List in C


Definition:

A stack is a LIFO (Last In First Out) data structure. In a linked list representation, each node
contains:

• Data

• Pointer to the next node

The top of the stack points to the most recently added node.

🔹 Operations:

• Push (Insert at beginning):

◦ Create new node, set its next to top, update top = new node.

• Pop (Delete from beginning):

◦ Remove the node pointed by top, move top = top->next.


Implementation of Queue using Linked List in C

🔹 Definition:

A queue is a FIFO (First In First Out) data structure. It uses:

• A front pointer (for deletion)

• A rear pointer (for insertion)

Each node contains data and a pointer to the next node.

🔹 Operations:

• Enqueue (Insert at rear):

◦ Create new node, set rear->next = new node, then update rear.

• Dequeue (Delete from front):

◦ Remove node at front, move front = front->next.
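The enqueue/dequeue operations described above can be sketched in C (a minimal sketch with global front/rear pointers for brevity):

```c
#include <assert.h>
#include <stdlib.h>

struct Node { int data; struct Node* next; };
struct Node *front = NULL, *rear = NULL;

/* Enqueue: link a new node after rear, then move rear to it. */
void enqueue(int x) {
    struct Node* n = malloc(sizeof *n);
    n->data = x;
    n->next = NULL;
    if (rear == NULL) front = rear = n;   /* first element */
    else { rear->next = n; rear = n; }
}

/* Dequeue: remove the node at front and move front forward. */
int dequeue(int *x) {
    if (front == NULL) return 0;          /* queue empty */
    struct Node* temp = front;
    *x = temp->data;
    front = front->next;
    if (front == NULL) rear = NULL;       /* queue became empty */
    free(temp);
    return 1;
}
```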

🔹 Advantages:

• No limit on number of elements

• Efficient dynamic memory usage


Header Node
🔹 Definition:

A header node is a special node placed at the beginning of a linked list that does not store actual
data, only metadata like:

• Count of nodes

• Pointer to the first node

• Other auxiliary info

🔹 Purpose:

• Simplifies insertion and deletion (no special case for empty list)

• Useful for implementing circular lists, generalized lists

🔹 Structure Example:
How can a linked list be used to implement a stack?

1. Stack De nition: A stack is a linear data structure following the Last-In-First-Out


(LIFO) principle, where elements are added (pushed) and removed (popped) from the
same end, called the "top."
3. Linked List Suitability: A singly linked list is ideal for a stack because it allows
dynamic memory allocation and efficient insertion/deletion at one end (head).
3. Operations:
◦ Push: Add an element at the head (top) of the linked list.
◦ Pop: Remove and return the head element.
4. Advantages: No fixed size (unlike arrays), and O(1) time complexity for push and pop
operations.
5. Implementation: Use a node structure with data and a pointer to the next node, and
maintain a pointer to the top (head) of the list.

Steps: Implementation

1. Define Node Structure:
◦ Create a Node structure with two fields: data (to store the element) and next (pointer to
the next node).
◦ Example: In C, struct Node { int data; struct Node* next; };
2. Initialize Stack:
◦ Declare a pointer top initialized to NULL to represent an empty stack.
◦ Example: struct Node* top = NULL;
3. Push Operation:
◦ Create a new node, assign the input data, and set its next to the current top.
◦ Update top to point to the new node.
◦ Time complexity: O(1).
4. Pop Operation:
• Check if the stack is empty (top == NULL). If so, return an error (e.g., "Stack Underflow").
• Store the top node’s data, update top to top->next, and free the old top node.
• Return the stored data.
• Time complexity: O(1).

5. Additional Operations (Optional):


• Peek: Return the data of the top node without removing it (O(1)).
• isEmpty: Check if top == NULL to determine if the stack is empty (O(1)).
Doubly linked list: operations on it and algorithmic analysis

🔹 What is a Doubly Linked List (DLL)?

A Doubly Linked List is a linear data structure where each node contains three fields:

1. Data – stores the value

2. Prev – pointer to the previous node

3. Next – pointer to the next node

Nodes are connected in both directions, allowing traversal forward and backward.

Operations on Doubly Linked List

✅ 1. Traversal

• Forward Traversal: From head to tail using next pointers.

• Backward Traversal: From tail to head using prev pointers.

• Time Complexity: O(n)

✅ 2. Insertion

• At the Beginning:

◦ Create a new node.

◦ Set newNode->next = head, head->prev = newNode

◦ Update head = newNode

◦ Time Complexity: O(1)

• At the End:

◦ Traverse to the last node.

◦ Set last->next = newNode, newNode->prev = last

◦ Time Complexity: O(n)

• At a Given Position:

◦ Traverse to (position - 1)th node.

◦ Adjust next and prev pointers accordingly.

◦ Time Complexity: O(n)


✅ 3. Deletion

• From the Beginning:

◦ Move head = head->next, update head->prev = NULL

◦ Free old head.

◦ Time Complexity: O(1)

• From the End:

◦ Traverse to last node.

◦ Update secondLast->next = NULL

◦ Free last node.

◦ Time Complexity: O(n)

• From a Given Position:

◦ Traverse to that node.

◦ Adjust surrounding next and prev links.

◦ Free the node.

◦ Time Complexity: O(n)

✅ 4. Search

• Traverse and compare each node’s data with target value.

• Can be done in forward or backward direction.

• Time Complexity: O(n)

🔹 Advantages:

• Bi-directional traversal

• Easier deletion (no need to keep track of previous node manually)

• More flexible than singly linked list

🔹 Disadvantages:

• Uses more memory (extra prev pointer)

• More complex pointer handling


🔹 Structure of a Node in C:
Circular Linked Lists: all operations their algorithms and the
complexity analysis.
🔹 De nition:

A Circular Linked List is a type of linked list where the last node points back to the first node,
forming a circle. It can be:

• Singly Circular: Only next pointer

• Doubly Circular: Both next and prev pointers

In singly circular, the next of the last node points to the head, not NULL.

Operations on Circular Linked List (CLL)

✅ 1. Traversal

Algorithm:

1. Start from the head.


2. Use a do-while loop:
◦ Print current->data

◦ Move to current = current->next

◦ Until back to head

Time Complexity: O(n)

✅ 2. Insertion

🔹 a) At the Beginning

Steps:

1. Create a new node.


2. If list is empty:
◦ Set newNode->next = newNode
◦ head = newNode
3. Else:
◦ Traverse to last node
◦ Set newNode->next = head
◦ last->next = newNode
Time Complexity: O(n)

🔹 c) At a Given Position

Steps:

1. Traverse to (pos – 1)th node

2. Update links to insert new node

Time Complexity: O(n)

✅ 3. Deletion

🔹 a) From the Beginning

Steps:

1. If only one node: free(head), head = NULL


2. Else:
◦ Traverse to last node

◦ Set last->next = head->next

◦ Store temp = head

◦ head = head->next

◦ Free temp

Time Complexity: O(n)

🔹 b) From the End

Steps:

1. Traverse to second-last node

2. Set its next = head

3. Free last node

Time Complexity: O(n)


🔹 c) From a Given Position

Steps:

1. Traverse to node at (pos – 1)

2. Adjust links to bypass the target node

3. Free the target node

Time Complexity: O(n)

✅ 4. Search

Steps:

1. Start from head, use do-while loop

2. Compare each node’s data

3. Return position if found

Time Complexity: O(n)

Basic Structure (Singly Circular):


Advantages and disadvantages of circular linked lists
Advantages of Circular Linked Lists
1. Efficient traversal: Can loop through the list from any node without needing to return to head
explicitly.

2. No NULL at end: Last node points to the head, eliminating the need for NULL termination.

3. Good for circular tasks: Ideal for round-robin scheduling, real-time systems, or playlist
applications.

4. Insertion/deletion at ends: Easier if a tail pointer is maintained (no need to traverse entire list).

5. Uniform structure: No special condition for the last node, simplifies some operations.

Disadvantages of Circular Linked Lists

1. More complex logic: Harder to implement than singly or doubly linked lists.

2. End detection is tricky: Traversing must be controlled to avoid infinite loops.

3. Difficult debugging: Errors like infinite traversal are harder to detect.

4. Not memory efficient: Still requires pointers, like other linked lists.

5. No direct access: Must traverse node by node; random access is not possible.
Linear lists
🔹 Definition:

A Linear List is a data structure where elements are arranged in a linear (sequential) order, and
each element (except the first) has a unique predecessor, and each (except the last) has a unique
successor.

🔹 Examples of Linear Lists:

• Arrays

• Linked Lists

• Stacks

• Queues

🔹 Characteristics:

1. Linear structure: Elements are organized one after the other.

2. Single traversal path: You can move through elements in one straight direction.

3. Memory representation: Can be stored in contiguous (arrays) or non-contiguous (linked


lists) memory.

🔹 Operations:

• Insertion

• Deletion

• Traversal

• Searching

• Updating

🔹 Types:

1. Static Linear List – Fixed size (e.g., arrays)

2. Dynamic Linear List – Size can grow/shrink (e.g., linked lists)

🔹 Applications:

• Managing ordered data

• Implementing stacks and queues

• Symbol tables, buffer management, etc.


Polynomial equations

( complete 8 videos )
Difference between array and singly linked lists
Compare and contrast linked lists with static and dynamic arrays
Trees: Basic Tree Terminologies, Different types of Trees: Binary Tree, Threaded
Binary Tree, Binary Search Tree, AVL Tree; Tree operations on each of the trees
and their algorithms with complexity analysis. Applications of Binary Trees. B
Tree, B+ Tree: definitions, algorithms and analysis
Trees: Basic Tree Terminologies
What is a Tree?

A tree is a non-linear hierarchical data structure used in computer science. It consists of nodes connected by edges,
and it is widely used to represent hierarchical relationships, like folders in a file system or family trees.

Basic Terminology of Trees :

1. Node:
A basic unit in a tree that contains data.

2. Root:
The topmost node of a tree. Every tree has only one root.

3. Parent:
A node that has one or more child nodes.

4. Child:
A node directly connected to another node when moving away from the root.

5. Leaf (Terminal node):


A node with no children.

6. Internal Node:
A node that has at least one child (i.e., not a leaf).

7. Edge:
The connection between two nodes (parent to child).

8. Subtree:
A tree formed by a node and its descendants.

9. Level:
Distance from the root. Root is at level 0.

10. Height (or Depth):


◦ Height of a node: Longest path from the node to a leaf.

◦ Height of tree: Height of the root node.

11. Degree of Node:


Number of children a node has.

12. Degree of Tree:


Maximum degree among all nodes in the tree.

13. Binary Tree:


A tree where each node has at most 2 children.

14. Balanced Tree:


A tree where the height of the left and right subtrees differ by at most one.
Different types of Trees: Binary Tree, Threaded Binary Tree,
Binary Search Tree , AVL Tree

Binary Tree

A Binary Tree is a hierarchical non-linear data structure where each node has at most
two children, referred to as the left and right child.

Key Points:

• Root Node: The topmost node of the tree.

• Children: Left and right sub-nodes of a parent node.

• Leaf Node: Node with no children.

• Internal Node: Node with at least one child.

Types of Binary Trees:

1. Full Binary Tree – Every node has 0 or 2 children.

2. Complete Binary Tree – All levels are filled except possibly the last, filled left to
right.

3. Perfect Binary Tree – All internal nodes have 2 children, and all leaves are at the
same level.

4. Skewed Binary Tree – Nodes are only on one side (left or right).

Tree Traversals:

• Inorder (Left → Root → Right)

• Preorder (Root → Left → Right)

• Postorder (Left → Right → Root)

• Level Order (BFS using Queue)

Applications:

• Expression trees (mathematical expressions)

• File systems, compilers

• Binary Search Tree (BST), Heaps

• Decision-making systems
Advantages and Disadvantages of binary tree
✅ Advantages of Binary Tree

1. Efficient Hierarchical Representation


◦ Ideal for representing hierarchical data, such as file systems, organizational
structures, or syntax trees.

2. Fast Search, Insert, Delete (in BST)


◦ In a well-balanced Binary Search Tree (BST), operations like search, insert, and
delete take O(log n) time.

3. Basis for Advanced Trees


◦ Binary Trees are the foundation for AVL Trees, Red-Black Trees, B-Trees, etc.

4. Easy Tree Traversals


◦ Supports various traversal techniques (inorder, preorder, postorder, level-order)
for different use cases.

5. Memory Efficient (Linked Implementation)


◦ Nodes are created dynamically; no need for a fixed-size array.

6. Used in Expression Evaluation


◦ Binary trees are used in expression trees for evaluating arithmetic expressions.

❌ Disadvantages of Binary Tree

1. Can Become Unbalanced


◦ In an unbalanced binary tree (like skewed trees), time complexity degrades to O(n)
for search, insert, delete.

2. More Complex Than Arrays or Linked Lists


◦ Requires more memory (pointers) and logic for implementation.

3. Traversal Overhead
◦ Navigating or processing all nodes (e.g., for search) may require recursive or
iterative traversal logic.

4. Difficult to Balance Manually


◦ Keeping the tree balanced manually is not easy without using special trees like AVL
or Red-Black Trees.
5. Not Cache Friendly
◦ Unlike arrays, binary trees may result in poor cache performance due to scattered
memory access.
Write an algorithm for non-recursive in-order traversal of a threaded binary tree
Step 1:

Set curr = leftmost(root)


👉 (Start from the leftmost node of the tree)

Step 2:

While curr is not NULL, repeat Steps 3–6

Step 3:

Visit the current node (e.g., print(curr->data))

Step 4:

If curr->rthread == true,
👉 Set curr = curr->right
(Go to in-order successor using thread)

Step 5:

Else
👉 Set curr = leftmost(curr->right)
(Move to the leftmost node in the right subtree)

Step 6:

Repeat from Step 2 until curr == NULL


Write an algorithm to left rotate a binary tree

Step 1:

Let X be the node where you want to perform left rotation.


Step 2:

Let Y = X.right
👉 (Get the right child of X, which will become the new root of the rotated subtree)

Step 3:

Set X.right = Y.left


👉 (Move Y's left subtree to be the new right subtree of X)

Step 4:

Set Y.left = X
👉 (Make X the left child of Y)

Step 5:

If X was the root of the whole tree, now update the root to Y

Step 6:

Return Y as the new root of the rotated subtree.


Prove that the maximum number of nodes in a binary tree of depth k is (2^k - 1).
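A short proof, summing the maximum nodes level by level (here depth k counts levels 1 through k, with the root at level 1):

```latex
\text{Level } i \text{ holds at most } 2^{i-1} \text{ nodes, since each node has at most 2 children.}

N_{\max} \;=\; \sum_{i=1}^{k} 2^{i-1} \;=\; 1 + 2 + 4 + \dots + 2^{k-1}
\;=\; \frac{2^{k} - 1}{2 - 1} \;=\; 2^{k} - 1
```

The middle step uses the geometric series sum with ratio 2, which gives the claimed bound; the bound is attained by a perfect binary tree.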
Write an algorithm for inserting an element into a binary tree

🔸 Step 1:

Create a new node newNode with the given value.

🔸 Step 2:

If the tree is empty (root == NULL),


→ Set root = newNode and return.

🔸 Step 3:

Initialize an empty queue and enqueue the root node.

🔸 Step 4:

While the queue is not empty, repeat:

• Dequeue a node and store it in temp.

🔸 Step 5:

Check temp.left:

• If temp.left == NULL,
→ Set temp.left = newNode and return.

• Else, enqueue temp.left.

🔸 Step 6:

Check temp.right:

• If temp.right == NULL,
→ Set temp.right = newNode and return.

• Else, enqueue temp.right.


Threaded Binary Tree

A Threaded Binary Tree is a special form of binary tree where null pointers in the binary tree
are replaced with "threads", which are pointers to the in-order predecessor or successor.

Key Concepts:

• In a normal binary tree, many pointers (left/right) are null, especially in leaf nodes.

• A threaded binary tree reuses null pointers to make traversal (especially in-order) faster
without using recursion or stack.

Types of Threaded Binary Trees:

1. Single Threaded:
◦ Either left or right null pointers are used for threading.

2. Double Threaded:
◦ Both left and right null pointers are replaced with threads.

Advantage:

• Faster in-order traversal

• Less memory overhead (no need for stack/recursion)

• Makes traversal more efficient in systems with limited resources.

Applications:

• Used in memory-efficient tree traversal.

• Ideal in embedded systems where memory is limited.

• Useful for implementing non-recursive in-order traversals.


Binary Search Tree

A Binary Search Tree (BST) is a special type of binary tree where each node follows this property:

• Left child < Parent node < Right child

This ordering helps in efficient search, insertion, and deletion operations.

Key Properties:

• Each node has at most two children.

• Left subtree contains values less than the node.

• Right subtree contains values greater than the node.

• No duplicate values (in standard BST).

Inorder Traversal:

• Returns values in sorted order.

Applications:

• Searching and sorting

• Symbol tables (compilers)

• Database indexing

• Maps and sets (in libraries)

Time Complexities:
Write an algorithm to test whether a given binary tree is a binary search tree.

Step 1: Start at the root node

• Begin traversal from the root of the binary tree.

Step 2: Use a helper function with a valid range

• For each node, check whether it lies within a valid min and max range.
• Initially, the range is (-∞, +∞).

Step 3: Check current node value

• If the current node’s value is not in the range (min < node.data < max),
→ Return false (violates BST property).

Step 4: Recur for left subtree

• Call the same function for the left child,


→ Update the max value to node.data (since left values must be smaller).

Step 5: Recur for right subtree

• Call the same function for the right child,


→ Update the min value to node.data (since right values must be greater).

Step 6: If all nodes follow BST rules

• If both left and right subtrees return true, then the current tree is a valid BST.
AVL Tree

An AVL Tree is a self-balancing Binary Search Tree (BST) where the height difference
(balance factor) between the left and right subtrees of any node is at most 1.
It is named after its inventors Adelson-Velsky and Landis.

Key Properties:

• Balance Factor = Height(left subtree) - Height(right subtree)

• For every node, the balance factor must be -1, 0, or +1

• Ensures that the tree remains balanced after insertion or deletion

Rotations to Maintain Balance:

To restore balance when the tree becomes unbalanced (balance factor > 1 or < -1), rotations
are used:

1. Left Rotation (LL Imbalance)

2. Right Rotation (RR Imbalance)

3. Left-Right Rotation (LR Imbalance)

4. Right-Left Rotation (RL Imbalance)

Applications:

• Used in databases, file systems, and memory indexing

• Ideal where data changes frequently

• Ensures efficient search, insert, and delete operations

Time Complexities:
Tree operations on each of the trees and their algorithms with
complexity analysis
Tree full playlist
Applications of Binary Trees

🔸 1. Expression Trees

• Used in compilers to represent arithmetic expressions.

• Each internal node represents an operator (+, -, *, /), and leaf nodes represent operands
(variables or constants).

• Enables easy evaluation and conversion to postfix/prefix/infix forms.

🔸 2. Hierarchical Data Representation

• Binary trees naturally model hierarchical relationships.

• Example: File systems, organization charts, and decision trees.

🔸 3. Binary Search Trees (BST)

• Used for fast searching, insertion, and deletion of data (O(log n) time on average).

• Common in database indexing, symbol tables in compilers, etc.

🔸 4. Huffman Coding Tree

• Used in data compression algorithms like Huffman Coding.

• Constructs a binary tree for assigning variable-length codes to input characters.

🔸 5. Routing and Network Algorithms

• Used in binary trie structures (e.g., IP routing in networking) to store binary prefixes.

• Helps with efficient IP lookups.

🔸 6. Game Trees and AI

• Binary trees are used in minimax algorithms and decision-making in game playing (like
chess, tic-tac-toe).

• Each node represents a game state; child nodes are possible future states.
🔸 7. Syntax Trees in Compilers

• Used in parsing source code.

• Represents syntactic structure of code based on grammar rules.

🔸 8. Priority Queues (Using Binary Heap)

• Though not strictly a binary tree, binary heap is a complete binary tree used to implement priority queues.

🔸 9. Balanced Trees (like AVL, Red-Black Trees)

• Maintain a balanced structure for efficient searching and manipulation.

• Used in databases, file systems, and memory management.


B Tree, B+ Tree: definitions, algorithms and analysis

B Tree
🔹 Definition: A B-Tree is a self-balancing search tree in which nodes can have more than
two children. It maintains sorted data and allows searching, insertion, deletion in
logarithmic time. Used extensively in databases and file systems.

• All leaf nodes are at the same depth.

• Internal nodes can have m children, where m is the order of the B-Tree.

• Each node (except root) must have at least ⌈m/2⌉ children.

• Keys within a node are sorted.

🔹 Algorithms:

1. Search (B-Tree)

• Start at the root.

• Perform binary search on keys of current node.

• If not found and node is not a leaf, go to the appropriate child.

Time Complexity: O(log n)

2. Insertion

• Always insert key in a leaf node.

• If leaf is full, split it and promote the middle key to the parent.

• May cause recursive splits up to the root.

3. Deletion

• If key is in internal node, replace with predecessor or successor.

• If deletion causes underflow, perform:

◦ Borrowing from sibling, or

◦ Merging with sibling.

🔹 Analysis:
Properties of B Tree
🔹 1. Node Structure

• Each node can have at most m - 1 keys and m children.

🔹 2. Sorted Keys

• Keys within a node are stored in sorted order.

• All keys in the left subtree of a key are smaller, and all keys in the right subtree are larger.

🔹 3. Number of Children

• Each internal node with k keys has exactly k + 1 children.

🔹 4. Balanced Tree

• The tree is balanced: all leaf nodes are at the same level (equal depth).

🔹 5. Root Node Constraints

• The root node must have at least 1 key (unless the tree is empty).

• It can have minimum 2 children if it's not a leaf.

🔹 6. Minimum and Maximum Keys

• Every non-root internal node must have at least ⌈m/2⌉ − 1 keys.

• And at most m - 1 keys.

🔹 7. Dynamic Growth

• B-Trees grow and shrink dynamically with insertions and deletions without losing balance.

🔹 8. Efficient Operations

• Search, insert, and delete operations take O(log n) time.

🔹 9. Used in Disk-based Storage

• B-Trees are designed to minimize disk I/O by keeping keys in nodes that match disk block sizes.
B+ Tree
🔹 Definition:

A B+ Tree is an extension of B-Tree where all data records are stored at the leaf level, and
internal nodes only store keys (no data).

• Leaf nodes are linked (using pointers) – supports range queries efficiently.

• Used in database indexing (like in MySQL, PostgreSQL).

• Internal nodes guide the search; only leaf nodes store actual records.

🔹 Algorithms:

1. Search (B+ Tree)

• Traverse down through internal nodes to reach leaf.

• Sequential access possible through linked leaves.

2. Insertion

• Insert into correct leaf.

• If leaf is full, split and promote key to internal node.

3. Deletion

• Delete from leaf.

• If underflow, borrow or merge, same as B-tree.

🔹 Analysis:
Properties of B+ Tree
🔹 1. All Data is in Leaf Nodes

• Only leaf nodes store actual data/records.

• Internal nodes only store keys used for navigation (indexing).

🔹 2. Leaf Nodes Are Linked

• All leaf nodes are connected using linked list pointers.

• Enables fast range queries and sequential access.

🔹 3. Internal Nodes as Index

• Internal (non-leaf) nodes act as routing indexes.

• They do not store any data values, only keys.

🔹 4. Balanced Tree

• Like B-Tree, B+ Tree is always balanced: all leaf nodes are at the same level.

🔹 5. Number of Keys and Children

For a B+ Tree of order m:

• Internal node can have at most m - 1 keys and m children.

• Each internal node must have at least ⌈m/2⌉ children (except root).

• Leaf nodes can have up to m - 1 keys and pointers to data.

🔹 6. Root Node Rules

• The root must have at least 2 children if it is not a leaf.

🔹 7. Efficient Disk Access

• Designed to minimize disk I/O.

• Large number of keys per node matches disk block size.


🔹 8. Faster Range Queries

• Since data is only in leaf nodes, and they are linked, range scans are very efficient.

🔹 9. Height is Low

• B+ Trees maintain a shallow height even for large data, due to high branching factor.

🔹 10. Duplicates in Internal Nodes

• Keys from leaf nodes may appear multiple times in internal nodes (as guide keys).

Difference between B Tree and B+ Tree
Applications of B Tree and B+ Tree

Applications of B Tree
1. Database Indexing
◦ B-Trees help manage and search records efficiently.

◦ Common in systems where random access is important.

2. File Systems
◦ Many file systems (e.g., NTFS, Unix/Linux) use B-Trees to index directories and files.

◦ Enables fast access even when directories contain thousands of entries.

3. Multilevel Indexing in DBMS


◦ Used to reduce the number of disk reads during query processing.

4. Virtual Memory and Paging Systems


◦ Helps in managing large amounts of paged memory blocks.

5. Symbol Tables in Compilers


◦ B-Trees can store identifiers for fast lookup, insertion, and deletion.

Applications of B+Tree
1. Relational Database Management Systems (RDBMS)

• Widely used in indexing (e.g., MySQL, PostgreSQL).

• Makes range queries, searching, and scanning efficient.

2. File and Operating Systems

• Used in file indexing in systems like ext4, HFS+, etc.

• All records in leaf nodes make sequential access fast.

3. Data Warehousing

• Ideal for applications that involve scanning large blocks of sorted data.

4. NoSQL Databases

• Some document-based and key-value stores use variations of B+ Trees for fast lookups.

5. Search Engines

• For managing indexes of keywords and their occurrences efficiently.


Sorting and Hashing: Objective and properties of different sorting algorithms: Selection
Sort, Bubble Sort, Insertion Sort, Quick Sort, Merge Sort, Heap Sort; Performance and
Comparison among all the methods, Hashing. Graph: BasicTerminologies and
Representations, Graph search and traversal algorithms and complexity analysis.

Selection Sort
🔸 Definition:

Selection Sort is a simple comparison-based sorting algorithm that repeatedly selects the smallest
(or largest) element from the unsorted part and places it at the correct position in the sorted
part.

🔹 Algorithm Steps:

1. Start from the first index (i = 0)


◦ Assume the element at index i is the minimum.

2. Find the minimum element in the remaining unsorted array


◦ For each index j from i+1 to n-1, compare arr[j] with arr[min_index].

◦ If arr[j] < arr[min_index], update min_index = j.

3. Swap the found minimum element with the element at index i


◦ Swap arr[i] and arr[min_index].

4. Repeat steps 1–3 for the next index (i = 1 to n-2)


◦ After each iteration, one more element is sorted.

5. Continue until the entire array is sorted.

🔹 Time and Space Complexity:

Space Complexity: O(1) (in-place sort)

Stable: ❌ No (can be made stable with modifications)

In-place: ✅ Yes
C Code

#include<stdio.h>
int main(void)
{
    int a[50], i, j, min_index, temp, num;
    printf("Enter the number of elements: ");
    scanf("%d", &num);
    for (i = 0; i < num; i++)
    {
        scanf("%d", &a[i]);
    }
    for (i = 0; i < num - 1; i++)
    {
        min_index = i;                  /* assume a[i] is the minimum */
        for (j = i + 1; j < num; j++)
        {
            if (a[j] < a[min_index])
                min_index = j;          /* remember the smallest element */
        }
        temp = a[i];                    /* swap it into position i */
        a[i] = a[min_index];
        a[min_index] = temp;
    }
    printf("Array after Selection Sort: ");
    for (i = 0; i < num; i++)
    {
        printf("%d ", a[i]);
    }
    return 0;
}
Bubble sort
🔹 Definition:

Bubble Sort is a simple comparison-based sorting algorithm in which adjacent elements are
compared and swapped if they are in the wrong order. This process is repeated until the array is
sorted.

🔹 Algorithm Steps:

1. Start from the first element of the array


◦ Let the array size be n.

2. Repeat the process for n - 1 passes


◦ Each pass ensures the largest unsorted element "bubbles up" to its correct position.

3. In each pass, compare adjacent elements


◦ For index j = 0 to n - i - 2, compare arr[j] and arr[j+1].

4. Swap if the elements are in the wrong order


◦ If arr[j] > arr[j+1], then swap them.

5. Continue comparing and swapping in the pass


◦ After each pass, one more element is sorted at the end.

6. Repeat until the array is completely sorted


◦ Optionally, use a flag to check if any swaps were made. If not, the array is already
sorted (optimized version).
🔹 Time and Space Complexity:

Space Complexity: O(1)

Stable Sort: ✅ Yes

In-place Sort: ✅ Yes


C Code

#include<stdio.h>
int main(void)
{
int n,i,j,arr[50];
printf("enter the number of the elements ");
scanf ("%d",&n);
for (i=0;i<n;i++)
{
scanf("%d",&arr[i]);
}
for(i=0;i<n-1;i++)
{
for(j=0;j<n-1-i;j++)   /* last i elements are already in place */
{
if(arr[j]>arr[j+1])
{
int temp = arr[j];
arr[j]= arr[j+1];
arr[j+1]=temp;

}
}
}
printf("Array after Bubble Sort\n");
for (i=0;i<n;i++)
{
printf(" %d ",arr[i]);
}
return 0;
}
Insertion sort
🔹 Definition:

Insertion Sort is a simple, comparison-based sorting technique where elements are inserted into
their correct position in a sorted part of the array.

It works similarly to how you sort playing cards in your hand.

🔹 Algorithm Steps:

1. Start from the second element (index i = 1)


◦ Assume the first element is already sorted.

2. Pick the current element as the "key"


◦ key = arr[i]

3. Compare the key with elements in the sorted part (left side)
◦ Start from j = i - 1 down to 0.

4. Shift elements greater than the key to one position right


◦ While j >= 0 && arr[j] > key, shift arr[j] to arr[j+1].

5. Insert the key at the correct position


◦ After the shifting ends, insert the key at arr[j + 1].

6. Repeat steps 2–5 for all elements until the array is sorted
🔹 Time and Space Complexity:

Space Complexity: O(1)

Stable Sort: ✅ Yes

In-place Sort: ✅ Yes


C Code

#include<stdio.h>
int main (void)
{
int n,i,j,arr[50],temp;
printf("Enter the number of the elements : ");
scanf ("%d",&n);
for (i=0;i<n;i++)
{
scanf("%d",&arr[i]);
}
for (i=1;i<n;i++)
{
temp = arr[i];
j = i-1;
while (j >= 0 && arr[j] > temp)   /* strict comparison keeps the sort stable */
{
arr[j+1]=arr[j];
j=j-1;
}
arr[j+1]=temp;
}
printf (" Array after insertion sort ");
for (i=0;i<n;i++)
{
printf (" %d ",arr[i]);
}
return 0;

}
Quick sort
🔹 Definition:

Quick Sort is a divide-and-conquer sorting algorithm that selects a pivot element and partitions the
array into two subarrays:

• Left subarray: elements less than the pivot

• Right subarray: elements greater than the pivot

It then recursively sorts the subarrays.

🔹 Algorithm Steps:

Step 1: Choose a Pivot

• Choose a pivot element from the array (commonly the last, first, or middle element).

Step 2: Partition the Array

• Rearrange the array so that:

◦ All elements less than the pivot are moved to its left.

◦ All elements greater than the pivot are moved to its right.

• Return the index of the pivot after partitioning (called partitionIndex).

Step 3: Recursively Apply Quick Sort

• Recursively apply the above steps to the:

◦ Left subarray: from low to partitionIndex - 1

◦ Right subarray: from partitionIndex + 1 to high

Step 4: Continue Recursion

• The recursion continues until the base case is reached: subarray size is 0 or 1 (already sorted).
🔹 Time and Space Complexity:
Space Complexity: O(log n) (for recursion stack)

In-place: ✅ Yes

Stable: ❌ No

✅ Advantages of Quick Sort

1. Fast on Average
◦ Quick Sort is one of the fastest sorting algorithms in practice with average time
complexity of O(n log n).

2. In-Place Sorting
◦ Requires no extra memory (unlike Merge Sort which needs O(n) space).

◦ Works using only a small recursive stack → Space efficient (O(log n)).

3. Ef cient for Large Datasets


◦ Performs well on large arrays and real-world datasets due to low overhead.

4. Divide and Conquer Approach


◦ Easy to implement using recursion, and follows a structured method to break down
the problem.

5. Tail Call Optimizations Possible


◦ Can be optimized to avoid deep recursion (tail recursion).
C Code

#include<stdio.h>

void quicksort(int arr[], int first, int last);

int main(void)
{
    int arr[30], size, i;

    printf("Enter size of the array: ");
    scanf("%d", &size);

    printf("Enter %d values: \n", size);
    for (i = 0; i < size; i++)
        scanf("%d", &arr[i]);

    quicksort(arr, 0, size - 1);

    printf("array after sorting is: ");
    for (i = 0; i < size; i++)
        printf(" %d", arr[i]);

    return 0;
}

void quicksort(int arr[], int first, int last)
{
    int pivot, i, j, temp;

    if (first < last) {
        pivot = first;
        i = first;
        j = last;

        while (i < j) {
            while (arr[i] <= arr[pivot] && i < last)
                i++;
            while (arr[j] > arr[pivot])
                j--;
            if (i < j) {
                temp = arr[i];
                arr[i] = arr[j];
                arr[j] = temp;
            }
        }

        temp = arr[pivot];
        arr[pivot] = arr[j];
        arr[j] = temp;
        quicksort(arr, first, j - 1);
        quicksort(arr, j + 1, last);
    }
}
Merge sort
🔹 Definition:

Merge Sort is a divide-and-conquer sorting algorithm that divides the array into smaller parts, sorts
them, and then merges the sorted parts back together.

🔹 Algorithm Steps:

Step 1: Divide

1. If the array has more than one element:


◦ Find the middle index mid = (low + high) / 2.

◦ Divide the array into two halves:

▪ Left: from low to mid

▪ Right: from mid + 1 to high

Step 2: Recursively Sort Both Halves

2. Apply merge sort recursively to the left and right halves.

Step 3: Merge

3. Merge the two sorted halves into a single sorted array:


◦ Compare elements from both halves.

◦ Place the smaller one into the new array.

◦ Repeat until all elements are merged in sorted order.


🔹 Time and Space Complexity:

• Time Complexity: O(n log n) in all cases (best, average, worst)

• Space Complexity: O(n) (requires extra space for merging)

• Stable Sort: ✅ Yes

• In-place Sort: ❌ No
C Code

#include <stdio.h>

void merge(int arr[], int left, int mid, int right) {


int temp[right - left + 1];
int i = left, j = mid + 1, k = 0;

while (i <= mid && j <= right) {


if (arr[i] <= arr[j]) {
temp[k++] = arr[i++];   /* take from left half (<= keeps the sort stable) */
}
else {
temp[k++] = arr[j++];   /* take from right half */
}
}

while (i <= mid) {


temp[k++] = arr[i++];
}

while (j <= right) {


temp[k++] = arr[j++];
}

for (i = left, k = 0; i <= right; i++, k++) {


arr[i] = temp[k];
}
}

void mergeSort(int arr[], int left, int right) {


if (left < right) {
int mid = (right + left) / 2;

mergeSort(arr, left, mid);

mergeSort(arr, mid + 1, right);

merge(arr, left, mid, right);


}
}

void printArray(int arr[], int size) {


for (int i = 0; i < size; i++) {
printf("%d ", arr[i]);
}
printf("\n");
}
int main(void) {
    int n;
    printf("Enter the size of the array: ");
    scanf("%d", &n);
    int arr[n];
    for (int i = 0; i < n; i++) {
        printf("Enter element %d: ", i);
        scanf("%d", &arr[i]);
    }

printf("Original array: ");


printArray(arr, n);

mergeSort(arr, 0, n - 1);

printf("Sorted array: ");


printArray(arr, n);

return 0;
}
Heap sort
🔹 Definition:

Heap Sort is a comparison-based sorting algorithm that uses a binary heap data structure
(usually a max-heap) to sort elements in ascending order.

🔹 Algorithm Steps:

Step 1: Build Max Heap

1. Convert the unsorted array into a max-heap:


◦ A max-heap is a complete binary tree where each parent node is greater than or equal
to its children.

◦ Use a function like heapify() to maintain the heap property.

◦ Start from the last non-leaf node and apply heapify up to the root.

Step 2: Swap Root with Last Element

2. Swap the root element (maximum) with the last element in the heap.

Step 3: Reduce Heap Size

3. Reduce the size of the heap by one (exclude the last element, which is now sorted).

Step 4: Heapify the Root

4. Apply heapify() on the new root to restore the max-heap property.

Step 5: Repeat

5. Repeat steps 2 to 4 until only one element remains in the heap.


🔹 Time and Space Complexity:

• Time Complexity: O(n log n) in all cases

• Space Complexity: O(1) (in-place sort)

• Stable Sort: ❌ No

• In-place Sort: ✅ Yes


C Code
#include <stdio.h>

void swap(int *a, int *b) {


int temp = *a;
*a = *b;
*b = temp;
}

void heapify(int arr[], int n, int i) {


int largest = i;
int left = 2 * i + 1;
int right = 2 * i + 2;

if (left < n && arr[left] > arr[largest])


largest = left;

if (right < n && arr[right] > arr[largest])


largest = right;

if (largest != i) {
swap(&arr[i], &arr[largest]);

heapify(arr, n, largest);
}
}

void heapSort(int arr[], int n) {


int i;

for (i = n / 2 - 1; i >= 0; i--)


heapify(arr, n, i);

for (i = n - 1; i > 0; i--) {


swap(&arr[0], &arr[i]);
heapify(arr, i, 0);
}
}
void printArray(int arr[], int n) {
int i;
for (i = 0; i < n; ++i)
printf("%d ", arr[i]);
printf("\n");
}
int main(void) {
int arr[] = {12, 11, 13, 5, 6, 7};
int n = sizeof(arr) / sizeof(arr[0]);

printf("Original array:\n");
printArray(arr, n);
heapSort(arr, n);

printf("Sorted array (Heap Sort):\n");


printArray(arr, n);

return 0;
}
Hashing

🔹 Definition:

Hashing is a technique used to convert a key into an index of an array (called a hash table) using a
hash function. It allows fast access, insertion, and deletion of data in average-case O(1) time.

🔹 Purpose of Hashing:

• To store and retrieve data quickly

• Widely used in hash tables, databases, and caches

• Efficient for searching large datasets

🔹 Key Concepts:

1. Hash Function:
◦ A function h(key) that maps a given key to an index in the hash table.

◦ Example: index = key % table_size

◦ A good hash function distributes keys uniformly to minimize collisions.

2. Hash Table:
◦ A fixed-size array where data is stored based on hashed key values.

◦ Index range: 0 to table_size - 1.

3. Collision:
◦ Occurs when two keys hash to the same index.

◦ Must be resolved using collision resolution techniques.

🔹 Collision Resolution Techniques:

1. Chaining:
◦ Each table index contains a linked list.

◦ Multiple elements at the same index are chained together.


2. Open Addressing:
◦ All elements stored in the table itself.

◦ If collision occurs, use probing to find the next empty slot:

▪ Linear Probing: check next slot sequentially

▪ Quadratic Probing: gap increases quadratically

▪ Double Hashing: use a second hash function

🔹 Advantages of Hashing:

• Fast average time complexity: O(1)

• Efficient for search, insert, and delete

• Easy to implement and highly practical

🔹 Disadvantages of Hashing:

• Collisions can degrade performance to O(n)

• Requires good hash function design

• Performance drops if load factor is high (i.e., table is too full)

• Not suitable for ordered data (like binary search)

🔹 Applications of Hashing:

• Symbol tables in compilers

• Caching and indexing

• Password storage (secure hashes)

• Implementing sets and maps


Graph
A graph is a collection of vertices (nodes) and edges (connections) that represent relationships
between pairs of elements.

🔹 Basic Terms:

1. Vertex (Node):
◦ A fundamental unit in a graph where edges meet.

◦ Denoted as V.

2. Edge:
◦ A connection between two vertices.

◦ Denoted as E.

3. Degree:
◦ Number of edges connected to a vertex.

◦ In-degree: Number of incoming edges.

◦ Out-degree: Number of outgoing edges.

4. Adjacent Vertices:
◦ Two vertices connected by an edge.

5. Path:
◦ A sequence of vertices where each adjacent pair is connected by an edge.

6. Cycle:
◦ A path that starts and ends at the same vertex with no repetitions (except start/end).

7. Connected Graph:
◦ There is a path between every pair of vertices.

8. Disconnected Graph:
◦ Not all vertices are connected.

9. Directed Graph (Digraph):


◦ Edges have a direction (from → to).

10. Undirected Graph:


• Edges do not have direction.
11. Weighted Graph:

• Each edge has a weight/cost.

12. Subgraph:

• A portion of a graph consisting of some vertices and edges.


✅ What is MST (Minimum Spanning Tree)?

🔹 Definition:

A Minimum Spanning Tree (MST) of a connected, undirected, and weighted graph is a subset of edges that:

1. Connects all vertices (i.e., forms a spanning tree)

2. Has no cycles (tree structure)

3. Has the minimum possible total edge weight

🔹 Key Properties:

• MST contains exactly (V - 1) edges, where V is the number of vertices.

• A graph can have multiple MSTs if edge weights are not unique.

• MST is applicable only to connected graphs.

🔹 Applications:

• Network design (e.g., computer, electrical, road networks)

• Approximation algorithms for NP-hard problems

• Clustering in machine learning

• Circuit design and layout

🔹 Popular Algorithms to Find MST:

1. Kruskal’s Algorithm
◦ Sorts edges by weight

◦ Adds smallest edges without forming cycles (uses Union-Find)

2. Prim’s Algorithm
◦ Grows MST from a starting vertex by adding the minimum weight edge connected to the tree.
✅ Prim’s Algorithm to Find MST

🔹 Definition:

Prim’s Algorithm is a greedy algorithm that finds the Minimum Spanning Tree (MST) of a
connected, undirected, weighted graph by growing the tree vertex by vertex, always choosing
the smallest possible edge that connects to the growing MST.

🔹 Algorithm Steps :

1. Initialize:
◦ Create a key[] array to store the minimum weight edge for each vertex. Initialize all
as ∞, except the starting vertex which is 0.

◦ Create a parent[] array to store the MST.

◦ Create a mstSet[] array to track included vertices. Initialize all as false.

2. Repeat (V - 1) times:
◦ Pick the minimum key vertex not yet included in MST.

◦ Mark it as included in mstSet[].

3. Update adjacent vertices:


◦ For all adjacent vertices not in MST:

▪ If the weight of the edge is less than the current key[] value, update:

▪ key[v] = weight of edge (u, v)

▪ parent[v] = u

4. Repeat until all vertices are included in MST.

🔹 Time and Space Complexity:

• Time Complexity:

◦ With simple arrays: O(V²)

◦ With min-heap/priority queue: O(E log V)

• Space Complexity: O(V)


✅ Kruskal's Algorithm to Find MST

🔹 Definition:

Kruskal’s Algorithm is a greedy algorithm that finds the Minimum Spanning Tree by:

• Sorting all edges in increasing order of their weights.

• Adding the smallest edge to the MST only if it doesn’t form a cycle (using a Union-Find / Disjoint Set structure).

🔹 Algorithm Steps :

1. Sort all edges in increasing order of weight.


2. Initialize MST as empty. Initialize a Disjoint Set for all vertices.
3. For each edge in sorted list:
◦ Check if adding it forms a cycle (use Union-Find).

◦ If no cycle, add it to the MST.

4. Repeat until MST has (V - 1) edges, where V is the number of vertices.

🔹 Time and Space Complexity:

• Time Complexity: O(E log E)


(Due to edge sorting and Union-Find)

• Space Complexity: O(V)


✅ BFS (Breadth-First Search)

🔹 Definition:

BFS is a graph traversal algorithm that explores all neighbors (adjacent vertices) of a vertex before
moving to the next level of vertices. It uses a queue (FIFO) data structure.

🔹 How BFS Works (Steps):

1. Start from the source vertex and mark it as visited.


2. Enqueue the source vertex.
3. While the queue is not empty:
◦ Dequeue a vertex v.

◦ Visit and enqueue all unvisited neighbors of v.

🔹 Data Structures Used:

• Queue: For level-wise traversal.

• Visited array: To track visited nodes.

🔹 Time and Space Complexity:

• Time Complexity: O(V + E)

• Space Complexity: O(V) (for visited array and queue)

🔹 Applications of BFS:

• Finding shortest path in unweighted graphs

• Level order traversal in trees

• Connected components in graphs

• Web crawlers, GPS systems


✅ DFS (Depth-First Search)

🔹 Definition:

DFS is a graph traversal algorithm that explores as far as possible along each branch before
backtracking. It uses a stack (either explicit or via recursion).

🔹 How DFS Works (Steps):

1. Start from the source vertex and mark it as visited.

2. Visit a neighbor that hasn't been visited.

3. Repeat step 2 until you reach a vertex with no unvisited neighbors.

4. Backtrack to the previous vertex and continue.

5. Repeat until all reachable vertices are visited.

🔹 Data Structures Used:

• Stack (explicit or recursive call stack)

• Visited array to track visited vertices

🔹 Time and Space Complexity:

• Time Complexity: O(V + E)

• Space Complexity: O(V) (due to visited array and recursion stack)

🔹 Applications of DFS:

• Cycle detection in graphs

• Topological sorting in DAGs

• Finding connected components

• Maze and puzzle solving

• Pathfinding (in some cases)

