Module - 4
● Searching:-
1. Linear Search
Description:
● Linear Search is the simplest searching algorithm where you sequentially check
each element of the list until you find the target value or reach the end of the list.
● This algorithm works on both sorted and unsorted data.
Steps:
1. Start from the first element of the list.
2. Compare each element with the target element.
3. If a match is found, return the index or position of the element.
4. If the element is not found by the end of the list, return -1 or indicate that the element
is not present.
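A minimal Python sketch of these steps (the function name linear_search is illustrative):

def linear_search(arr, target):
    # Check each element in order until the target is found.
    for i, value in enumerate(arr):
        if value == target:
            return i  # index of the first match
    return -1  # reached the end without a match

For example, linear_search([4, 2, 7, 1], 7) returns 2.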
Time Complexity:
● Best Case: O(1) (Element is found at the first position).
● Worst Case: O(n) (Element is not found or is at the last position), where n is the
number of elements.
2. Binary Search
Description:
● Binary Search is an efficient searching algorithm that works on sorted data.
● It repeatedly divides the search interval in half, comparing the target value to the
middle element. If the target is less than the middle element, it eliminates the upper
half of the list. If the target is greater, it eliminates the lower half.
Steps:
1. Start with two pointers: one pointing to the first element and one pointing to the last
element of the array.
2. Find the middle element of the array.
3. If the target is equal to the middle element, return the middle index.
4. If the target is smaller than the middle element, repeat the search in the left half.
5. If the target is larger than the middle element, repeat the search in the right half.
6. If the pointers cross, the target is not in the list.
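A minimal Python sketch, assuming arr is sorted in ascending order (names are illustrative):

def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:               # stop when the pointers cross
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid               # found at the middle index
        elif target < arr[mid]:
            high = mid - 1           # eliminate the upper half
        else:
            low = mid + 1            # eliminate the lower half
    return -1                        # target not in the list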
Time Complexity:
● Best Case: O(1) (Target is at the middle element).
● Worst Case: O(log n), where n is the number of elements in the list.
3. Hashing
Description:
● Hashing is a technique used to convert a key (such as a string or integer) into an
index of an array. It provides an efficient way to search, insert, and delete elements.
● Hashing uses a hash function that maps the key to an index in a hash table, where
each index stores a bucket or linked list of key-value pairs.
● Hashing offers near-constant-time lookups on average, but its efficiency depends
on the quality of the hash function and how well it handles collisions (when two
keys hash to the same index).
Steps:
1. Compute the hash of the key using a hash function.
2. Access the index in the hash table using the hash value.
3. If there’s a collision (multiple keys have the same hash value), handle it with
techniques like chaining (storing values in a linked list) or open addressing (finding
another open slot).
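A toy Python hash table sketching these steps, with collisions handled by chaining (the class and method names are illustrative):

class HashTable:
    def __init__(self, size=8):
        # Each slot (bucket) holds a list of (key, value) pairs.
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        # Step 1: compute the hash and map it to a table index.
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # key already present: update
                return
        bucket.append((key, value))        # collision handled by chaining

    def get(self, key):
        # Step 2: jump to the bucket, then scan its (short) chain.
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None                        # key not present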
Time Complexity:
● Best Case: O(1) (No collisions, direct access).
● Worst Case: O(n) (All elements hash to the same index, forming a chain).
● Sorting:-
1. Insertion Sort
Description:
● Insertion Sort is a simple algorithm that builds the sorted list one element at a time. It
takes each element from the unsorted part and inserts it into the correct position
within the sorted part.
Steps:
1. Start from the second element of the list.
2. Compare it with the elements before it.
3. Shift all larger elements one position to the right and insert the current element into
its correct position.
4. Repeat steps 2-3 for each remaining element until the list is sorted.
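A minimal in-place Python sketch of these steps:

def insertion_sort(arr):
    for i in range(1, len(arr)):       # start from the second element
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key: # shift larger elements right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key               # insert into the correct position
    return arr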
Time Complexity:
● Best Case: O(n) (already sorted list).
● Worst Case: O(n^2).
● Average Case: O(n^2).
Use Case: Best for small datasets or partially sorted lists.
2. Bubble Sort
Description:
● Bubble Sort repeatedly steps through the list, compares adjacent elements, and
swaps them if they are in the wrong order. This process continues until the list is
sorted.
Steps:
1. Start at the beginning of the list.
2. Compare each pair of adjacent elements.
3. Swap them if they are in the wrong order.
4. Repeat the process for the entire list until no swaps are needed.
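A minimal Python sketch; the swapped flag gives the O(n) best case on an already sorted list:

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):     # last i elements are already in place
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:                # no swaps: the list is sorted
            break
    return arr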
Time Complexity:
● Best Case: O(n) (already sorted list).
● Worst Case: O(n^2).
● Average Case: O(n^2).
Use Case: Simple, but inefficient for large datasets.
3. Selection Sort
Description:
● Selection Sort divides the list into two parts: the sorted part and the unsorted part. It
repeatedly selects the smallest (or largest) element from the unsorted part and
swaps it with the leftmost unsorted element.
● Selection Sort is a comparison-based sorting algorithm.
Steps:
1. Find the smallest element in the unsorted part of the list.
2. Swap it with the leftmost unsorted element.
3. Repeat until the list is sorted.
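A minimal in-place Python sketch of these steps:

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):      # find the smallest unsorted element
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]  # swap into place
    return arr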
Time Complexity:
● Best, Worst, Average Case: O(n^2).
Use Case: Simple but inefficient for large datasets.
4. Merge Sort
Description:
● Merge Sort is a divide-and-conquer algorithm. It divides the list into two halves,
sorts them recursively, and then merges the two sorted halves to produce the sorted
list.
Steps:
1. Divide the list into two halves.
2. Recursively sort the two halves.
3. Merge the two sorted halves.
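A minimal Python sketch; this version returns a new list rather than sorting in place:

def merge_sort(arr):
    if len(arr) <= 1:                  # base case: already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])       # sort each half recursively
    right = merge_sort(arr[mid:])
    merged = []                        # merge the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])            # append whatever remains
    merged.extend(right[j:])
    return merged

Using <= in the merge keeps equal elements in their original order, which is what makes Merge Sort stable.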
Time Complexity:
● Best, Worst, Average Case: O(n log n).
Use Case: Efficient for large datasets, particularly with linked lists.
5. Quick Sort
Description:
● Quick Sort is another divide-and-conquer algorithm. It selects a pivot element,
partitions the list around the pivot (elements less than the pivot go to the left, greater
elements go to the right), and recursively sorts the sublists.
Steps:
1. Select a pivot element.
2. Partition the list around the pivot.
3. Recursively sort the sublists.
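A minimal Python sketch; it picks the last element as the pivot and builds new sublists for clarity rather than partitioning in place:

def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[-1]
    left = [x for x in arr[:-1] if x <= pivot]   # elements <= pivot
    right = [x for x in arr[:-1] if x > pivot]   # elements > pivot
    return quick_sort(left) + [pivot] + quick_sort(right)

On already sorted input this pivot choice triggers the O(n^2) worst case; choosing a random pivot avoids that in practice.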
Time Complexity:
● Best, Average Case: O(n log n).
● Worst Case: O(n^2) (when the pivot is repeatedly the smallest or largest element).
Use Case: Often faster in practice than merge sort due to its smaller constant factors.
6. Heap Sort
Description:
● Heap Sort uses a heap (a complete binary tree) to sort elements. It first builds a max
heap (where the largest element is at the root), then repeatedly extracts the
maximum element and rebuilds the heap.
Steps:
1. Build a max-heap from the list.
2. Swap the root of the heap with the last element.
3. Reduce the heap size and heapify the root.
4. Repeat until the heap size is reduced to 1.
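A minimal in-place Python sketch of these steps:

def heapify(arr, n, root):
    # Sift arr[root] down so the subtree rooted there is a max-heap.
    largest = root
    left, right = 2 * root + 1, 2 * root + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != root:
        arr[root], arr[largest] = arr[largest], arr[root]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):      # step 1: build a max-heap
        heapify(arr, n, i)
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]  # step 2: max goes to the end
        heapify(arr, end, 0)                 # step 3: heapify the smaller heap
    return arr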
Time Complexity:
● Best, Worst, Average Case: O(n log n).
Use Case: Efficient for large datasets, and it works in-place.
7. Radix Sort
Description:
● Radix Sort is a non-comparative sorting algorithm that sorts numbers digit by digit
starting from the least significant digit (LSD) or most significant digit (MSD), using a
stable sub-sorting algorithm like counting sort.
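A minimal LSD Python sketch for non-negative integers, using per-digit buckets as the stable sub-sort (the function name is illustrative):

def radix_sort(arr):
    if not arr:
        return arr
    exp = 1                                # 1s, 10s, 100s, ... place
    while max(arr) // exp > 0:
        buckets = [[] for _ in range(10)]  # one bucket per decimal digit
        for num in arr:
            buckets[(num // exp) % 10].append(num)  # stable: keeps order
        arr = [num for bucket in buckets for num in bucket]
        exp *= 10
    return arr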
Time Complexity:
● Best, Worst, Average Case: O(nk), where n is the number of elements and k is the
number of digits.
Use Case: Efficient when the range of numbers (i.e., the number of digits) is small.
8. Bucket Sort
Description:
● Bucket Sort divides the range of input values into smaller intervals (buckets), sorts
the elements within each bucket (often using another sorting algorithm), and then
concatenates the sorted buckets.
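A minimal Python sketch, assuming the inputs are floats uniformly distributed in [0, 1); each bucket is sorted with Python's built-in sort for brevity:

def bucket_sort(arr, num_buckets=10):
    buckets = [[] for _ in range(num_buckets)]
    for x in arr:
        buckets[int(x * num_buckets)].append(x)  # spread into intervals
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))            # sort, then concatenate
    return result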
Time Complexity:
● Best Case: O(n+k), where n is the number of elements and k is the number of
buckets.
● Worst Case: O(n^2) (when all elements fall into one bucket).
Use Case: Useful for uniformly distributed data.
Comparison of Sorting Algorithms:
Algorithm        Time Complexity   Space Complexity   Best For
Insertion Sort   O(n^2)            O(1)               Small datasets, nearly sorted data
Bubble Sort      O(n^2)            O(1)               Simple but inefficient for large data
Selection Sort   O(n^2)            O(1)               Small datasets
Merge Sort       O(n log n)        O(n)               Large datasets, stable sorting
Quick Sort       O(n log n)        O(log n)           Large datasets, faster than Merge Sort on average
Heap Sort        O(n log n)        O(1)               In-place sorting
Radix Sort       O(nk)             O(n+k)             Integer data, numbers with few digits
Bucket Sort      O(n+k)            O(n)               Uniformly distributed data