UNIT-I, SECTION-B questions:
a) Discuss the asymptotic notations with diagram.
Asymptotic Notations are mathematical tools used to represent the time and space complexity
of algorithms. They describe the behavior of a function as the input size becomes very large.
Common notations include:
   1. Big-O Notation (O): Represents the upper bound of an algorithm's running time.
      O(f(n)) = { g(n) : ∃ c > 0, n0 ≥ 0 such that g(n) ≤ c·f(n) for all n ≥ n0 }.
   2. Omega Notation (Ω): Represents the lower bound of an algorithm's running time.
      Ω(f(n)) = { g(n) : ∃ c > 0, n0 ≥ 0 such that g(n) ≥ c·f(n) for all n ≥ n0 }.
   3. Theta Notation (Θ): Represents the tight bound of an algorithm's running time.
      Θ(f(n)) = { g(n) : ∃ c1, c2 > 0, n0 ≥ 0 such that c1·f(n) ≤ g(n) ≤ c2·f(n) for all n ≥ n0 }.
Diagram:
      Big-O: Bounds the growth above.
      Omega: Bounds the growth below.
      Theta: Provides an exact bound.
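For a quick worked check of the Θ definition against a concrete function (my own illustrative example, not from the question paper), take g(n) = 3n^2 + 2n and f(n) = n^2:

```latex
% Claim: 3n^2 + 2n = \Theta(n^2), with witnesses c_1 = 3, c_2 = 4, n_0 = 2.
3n^2 \le 3n^2 + 2n \quad \text{for all } n \ge 0,
\qquad
3n^2 + 2n \le 4n^2 \iff 2n \le n^2 \iff n \ge 2.
% Hence c_1 f(n) \le g(n) \le c_2 f(n) holds for all n \ge n_0 = 2.
```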
b) Analyze and evaluate the running time complexity of selection sort in all cases.
Selection Sort Algorithm:
      Select the smallest element from the unsorted part and place it at the beginning.
Time Complexity:
   1. Best Case: O(n^2) (even if the array is already sorted, the nested loops still execute).
   2. Worst Case: O(n^2).
   3. Average Case: O(n^2).
Reason: The algorithm always performs n(n-1)/2 comparisons, irrespective of the input order.
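A minimal Python sketch (the comparison counter is my own addition, included only to show the n(n-1)/2 behaviour):

```python
def selection_sort(a):
    """In-place selection sort; always performs ~n(n-1)/2 comparisons."""
    n = len(a)
    comparisons = 0
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):            # scan the unsorted suffix
            comparisons += 1
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]  # place the minimum at position i
    return comparisons

data = [5, 3, 8, 1, 4]
print(selection_sort(data), data)  # 10 comparisons (= 5*4/2), [1, 3, 4, 5, 8]
```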
c) Write the algorithm of Insertion Sort. Sort the elements {5, 3, 8, 1, 4, 6, 2}.
Algorithm:
   1. Start with the second element; compare it with elements before it.
   2. Shift elements larger than the key.
   3. Insert the key at its correct position.
Sorted Array: {1, 2, 3, 4, 5, 6, 8}.
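A minimal Python sketch of the three steps above, run on the given input:

```python
def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]                 # element to insert
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]        # shift larger elements one place right
            j -= 1
        a[j + 1] = key             # insert key at its correct position
    return a

print(insertion_sort([5, 3, 8, 1, 4, 6, 2]))  # [1, 2, 3, 4, 5, 6, 8]
```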
d) Write the algorithm of Counting Sort. Sort {2, 5, 3, 0, 2, 3, 0, 3}.
Steps:
   1. Count occurrences of each element.
   2. Calculate cumulative counts.
   3. Place elements in the sorted order using cumulative counts.
Sorted Output: {0, 0, 2, 2, 3, 3, 3, 5}.
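A minimal Python sketch following the three steps above (k = 5 is the maximum key in the given input):

```python
def counting_sort(a, k):
    """Stable counting sort for integers in the range 0..k."""
    count = [0] * (k + 1)
    for x in a:                    # 1. count occurrences
        count[x] += 1
    for i in range(1, k + 1):      # 2. cumulative counts
        count[i] += count[i - 1]
    out = [0] * len(a)
    for x in reversed(a):          # 3. place elements (reversed keeps it stable)
        count[x] -= 1
        out[count[x]] = x
    return out

print(counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5))  # [0, 0, 2, 2, 3, 3, 3, 5]
```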
e) Quick-sort and PARTITION operation on {36, 15, 40, 1}.
Partition Logic:
        Choose the pivot (last element).
        Rearrange such that elements < pivot are on the left, and > pivot are on the right.
Array after the first PARTITION call (pivot = 1): {1, 15, 40, 36}; the pivot settles at index 0, and recursing on the remaining part yields the fully sorted array {1, 15, 36, 40} (see the sketch below).
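A sketch of the Lomuto partition scheme, one common way to implement PARTITION with the last element as pivot:

```python
def partition(a, lo, hi):
    """Lomuto partition: pivot = a[hi]; returns the pivot's final index."""
    pivot = a[hi]
    i = lo - 1
    for j in range(lo, hi):
        if a[j] < pivot:           # move elements smaller than the pivot left
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[hi] = a[hi], a[i + 1]
    return i + 1

arr = [36, 15, 40, 1]
p = partition(arr, 0, len(arr) - 1)
print(p, arr)  # 0 [1, 15, 40, 36] -- pivot 1 settles at index 0
```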
f) Solve the recurrence T(n) = 4T(n/3) + n^2.
Apply the Master Theorem:
        a = 4, b = 3, f(n) = n^2.
        p = log_b(a) = log_3(4) ≈ 1.26.
Since f(n) = n^2 and 2 > 1.26, f(n) = Ω(n^(p + ε)), so case 3 applies:
T(n) = Θ(n^2).
g) Master Method Cases and solve T(n) = 7T(n/3) + n^2.
Cases:
   1. f(n) = O(n^(log_b(a) - ε)) for some ε > 0 ⟹ T(n) = Θ(n^log_b(a)).
   2. f(n) = Θ(n^log_b(a)) ⟹ T(n) = Θ(n^log_b(a) · log n).
   3. f(n) = Ω(n^(log_b(a) + ε)) for some ε > 0, with the regularity condition ⟹ T(n) = Θ(f(n)).
For a = 7, b = 3, f(n) = n^2, p = log_3(7) ≈ 1.77. Here 2 > 1.77, so case 3 applies:
T(n) = Θ(n^2).
h) Substitution Method: Solve T(n) = T(n/2) + 1.
Guess T(n) = O(log n), i.e., T(n) ≤ c log n. Verify by substitution:
T(n) = T(n/2) + 1 ≤ c log(n/2) + 1 = c log n - c + 1 ≤ c log n for any c ≥ 1.
Hence T(n) = Θ(log n).
i) Solve T(n) = 4T(n/2) + n^2 using the Master Method.
a = 4, b = 2, f(n) = n^2, p = log_2(4) = 2. Since f(n) = Θ(n^p) = Θ(n^2), case 2 applies:
T(n) = Θ(n^2 log n).
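A small sketch that mechanically classifies recurrences of the form T(n) = a·T(n/b) + n^k, covering items f), g), and i) above (a simplification: it assumes a polynomial driving function, for which case 3's regularity condition holds automatically):

```python
import math

def master(a, b, k):
    """Classify T(n) = a*T(n/b) + n^k by comparing k with log_b(a)."""
    p = math.log(a, b)                      # critical exponent log_b(a)
    if k < p:
        return f"case 1: Theta(n^{p:.2f})"
    if math.isclose(k, p):
        return f"case 2: Theta(n^{k} log n)"
    return f"case 3: Theta(n^{k})"

print(master(4, 3, 2))  # case 3: Theta(n^2)        -> item f)
print(master(7, 3, 2))  # case 3: Theta(n^2)        -> item g)
print(master(4, 2, 2))  # case 2: Theta(n^2 log n)  -> item i)
```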
j) Iteration Method: Solve T(n) = c + 3T(n-1).
Expand the recurrence:
T(n) = c + 3c + 3^2·c + … + 3^(n-2)·c + 3^(n-1)·T(1) = c·(3^(n-1) - 1)/2 + 3^(n-1)·T(1).
Simplify to:
T(n) = Θ(3^n).
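A quick numerical check of the closed form derived above (the parameters c and T(1) are arbitrary illustrative choices):

```python
def T(n, c=1, t1=1):
    """Direct evaluation of T(n) = c + 3*T(n-1) with T(1) = t1."""
    return t1 if n == 1 else c + 3 * T(n - 1, c, t1)

def closed(n, c=1, t1=1):
    """Closed form from the expansion: c*(3^(n-1) - 1)/2 + 3^(n-1)*T(1)."""
    return c * (3 ** (n - 1) - 1) // 2 + 3 ** (n - 1) * t1

print(all(T(n) == closed(n) for n in range(1, 15)))  # True
```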
UNIT-II, SECTION-B questions:
a) Discuss R-B tree insertion cases.
In a Red-Black (R-B) tree, insertion of a red node is followed by a fix-up that handles these cases to maintain the tree's properties:
   1. Case 1: New Node is the Root
         o Color the new node black.
   2. Case 2: Parent is Black
         o No adjustments needed.
   3. Case 3: Parent is Red & Uncle is Red
         o Recolor parent and uncle to black and grandparent to red.
         o Move up the tree to fix violations.
   4. Case 4: Parent is Red & Uncle is Black (Triangle)
         o Perform rotation (left or right) to convert into a linear configuration.
         o Proceed to Case 5.
   5. Case 5: Parent is Red & Uncle is Black (Line)
         o Perform rotation at the grandparent to resolve violation.
         o Recolor appropriately.
b) Create an R-B tree by inserting elements 10, 18, 7, 15, 16, 30, 25, 40, 60, 2, 1, 70.
Steps to construct the R-B Tree:
   1. Start with 10 (root, black).
   2. Insert 18 (red); parent is black, no fix-up (Case 2).
   3. Insert 7 (red); parent is black, no fix-up (Case 2).
   4. Insert 15 (red) → Case 3: parent 18 and uncle 7 are both red; recolor them black and 10 red, then recolor 10 black again since it is the root (Case 1).
   5. Continue inserting the remaining keys, applying the R-B rules (recolor or rotate) at each step.
Diagram: The resulting tree is balanced after the appropriate rotations and color adjustments.
c) Prove the height bound of a B-tree: h ≤ log_t((n+1)/2).
   1. Minimum number of keys in a B-tree of minimum degree t and height h: the root holds at least 1 key, every other node holds at least t-1 keys, and depth i (1 ≤ i ≤ h) contains at least 2t^(i-1) nodes, so
      n ≥ 1 + (t-1)·Σ_{i=1..h} 2t^(i-1) = 2t^h - 1.
   2. Solving for h: n ≥ 2t^h - 1 gives t^h ≤ (n+1)/2, hence h ≤ log_t((n+1)/2).
d) Delete E, F, M from a B-tree (minimum degree t = 3).
Steps:
   1. Locate the keys E, F, M.
   2. Adjust nodes using inorder predecessor or inorder successor.
   3. Rebalance by borrowing from siblings or merging nodes.
Result: A balanced B-tree without E, F, M.
e) Explain insertion in B-tree.
Steps:
   1. Traverse to the correct leaf position.
   2. Insert the key.
   3. If the node overflows:
           o Split the node into two.
           o Promote the middle key to the parent.
   4. Repeat until the tree is balanced.
f) Comparison: R-B Tree vs. B-Tree
Feature              Red-Black Tree             B-Tree
Structure            Binary search tree         Multi-way tree
Height               O(log n)                   O(log n)
Balancing Mechanism  Recoloring, rotations      Splitting, merging
Use Cases            In-memory data structures  Disk-based data storage
g) Union of Two Binomial Heaps
Algorithm:
   1. Merge root lists of both heaps.
   2. Pair trees of the same degree using linking.
   3. Ensure heap property by adjusting roots.
Time Complexity: O(log n), where n is the total number of nodes.
h) Algorithm for Extract-Min Key from Binomial Heap
Steps:
   1.    Identify the tree with the smallest root.
   2.    Remove the root and retrieve its children.
   3.    Merge the child trees with the original heap.
   4.    Reorganize to maintain binomial heap properties.
Time Complexity: O(log n).
i) Decrease Key in Fibonacci Heap
Steps:
   1. Decrease the key of the node.
   2. If heap property is violated:
          o Cut the node and its subtree.
          o Add the node to the root list.
   3. Perform cascading cuts: if a parent has already lost one child (it is marked) and now loses another, cut it as well, repeating up the tree.
Example: Decreasing a key in a Fibonacci heap with cascading cuts ensures the heap remains
valid.
j) Operations on Fibonacci Heap
   1. Insert: Add to the root list (O(1)).
   2. Minimum: Return the root with the smallest key (O(1)).
   3. Extract Min: Remove the smallest root and restructure (O(log n) amortized).
   4. Decrease Key: Reduce the value of a key (O(1) amortized).
   5. Delete: Decrease the key to -∞ and extract min (O(log n) amortized).
   6. Union: Merge two heaps (O(1)).
UNIT-III, SECTION-B questions:
a) Strassen’s Matrix Multiplication
Strassen’s Algorithm:
        Reduces the multiplication of two n×n matrices to seven recursive
         multiplications of half-size matrices (instead of eight in the standard algorithm).
Formula for Product:
   1. Divide A and B into four submatrices each.
   2. Compute the seven intermediate products M1, M2, …, M7.
   3. Combine the products to form the four quadrants of AB.
Example: Given matrices A and B, apply Strassen's formulas to compute AB (a 2×2 sketch follows).
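A sketch of the seven Strassen products in the base 2×2 case; for larger matrices the same formulas are applied recursively to half-size blocks:

```python
def strassen_2x2(A, B):
    """Strassen's seven products for 2x2 matrices (scalars here)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```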
b) Graham’s Scan Method for Convex Hull
Steps:
     1. Find the point with the lowest y-coordinate (pivot).
     2. Sort all points based on polar angle with respect to the pivot.
     3. Traverse sorted points:
           o Use the cross product to determine whether to turn left, right, or stay collinear.
           o Include points that make a left turn, discard others.
Output: The vertices of the Convex Hull in counterclockwise order.
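A compact sketch of the same left-turn idea; note this is Andrew's monotone chain, a variant that sorts by coordinates instead of polar angle but relies on the identical cross-product test:

```python
def cross(o, a, b):
    """Cross product of OA x OB; > 0 means a left (counterclockwise) turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        hull = []
        for p in seq:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()          # discard points that do not make a left turn
            hull.append(p)
        return hull
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]  # hull in counterclockwise order

print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```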
c) Optimization Problem and Greedy Method
Optimization Problem:
        A problem where the goal is to find the best solution among a set of feasible solutions.
Greedy Method:
   1. Make a locally optimal choice at each step.
   2. Rely on the greedy-choice property: for problems where it holds, the locally optimal choices compose into a globally optimal solution.
Example: Activity Selection, Knapsack Problem.
d) Activity Selection Problem
Activity Selection:
        Given n activities with start and finish times, maximize the number of non-overlapping
         activities.
Steps:
   1. Sort activities by finish time.
   2. Select the first activity, then iteratively choose the next activity whose start time is after
      the previous finish time.
e) Compute Activity Schedule
Given:
        S = {1, 2, 3, 4, 7, 8, 9, 9, 11, 12}.
        F = {3, 5, 4, 7, 10, 9, 11, 13, 12, 14}.
Solution:
   1. Sort activities by finish times.
   2. Apply the selection procedure.
Result: The greedy procedure selects {A1, A3, A4, A6, A7, A9, A10}, seven activities in total (verified by the sketch below).
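A minimal sketch of the greedy procedure on the given instance (activity numbers are 1-based):

```python
def select_activities(starts, finishes):
    """Greedy activity selection: sort by finish time, keep compatible ones."""
    order = sorted(range(len(starts)), key=lambda i: finishes[i])
    chosen, last_finish = [], float("-inf")
    for i in order:
        if starts[i] >= last_finish:       # compatible with the last chosen one
            chosen.append(i + 1)           # 1-based activity number
            last_finish = finishes[i]
    return chosen

S = [1, 2, 3, 4, 7, 8, 9, 9, 11, 12]
F = [3, 5, 4, 7, 10, 9, 11, 13, 12, 14]
print(select_activities(S, F))  # [1, 3, 4, 6, 7, 9, 10]
```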
f) Knapsack Problem
Given:
        Weights = {100, 50, 40, 20, 10, 10}.
        Values = {40, 35, 20, 4, 10, 6}.
        Capacity = 100.
Solution:
   1. Sort items by value/weight ratio: E (1.0), B (0.7), F (0.6), C (0.5), A (0.4), D (0.2).
   2. Add whole items to the knapsack while they fit (see the sketch below).
Greedy Solution: {E, B, F, D} with weight 10 + 50 + 10 + 20 = 90 and total value 10 + 35 + 6 + 4 = 55. (Greedy by ratio is not optimal for the 0/1 version; the exact optimum here is {B, C, E} with weight 100 and value 65.)
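A sketch of the greedy-by-ratio heuristic on this instance (the item labels A-F follow the given order and are my own):

```python
def greedy_01_knapsack(weights, values, capacity):
    """Greedy-by-ratio heuristic for 0/1 knapsack (not always optimal)."""
    items = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total, taken = 0, []
    for i in items:
        if weights[i] <= capacity:         # take the whole item if it fits
            capacity -= weights[i]
            total += values[i]
            taken.append("ABCDEF"[i])
    return taken, total

W = [100, 50, 40, 20, 10, 10]
V = [40, 35, 20, 4, 10, 6]
print(greedy_01_knapsack(W, V, 100))  # (['E', 'B', 'F', 'D'], 55)
```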
g) Prim’s Algorithm
Algorithm:
   1. Start with any node.
   2. Add the smallest edge connecting the tree to a new node.
   3. Repeat until all nodes are included.
Time Complexity: O(E log V) using a priority queue.
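A minimal heap-based sketch of Prim's algorithm (the adjacency-list format {u: [(weight, v), ...]} is my own choice for illustration):

```python
import heapq

def prim(graph, start):
    """Prim's MST: repeatedly take the smallest edge leaving the tree."""
    visited, mst_edges, total = {start}, [], 0
    heap = list(graph[start])               # candidate edges out of the tree
    heapq.heapify(heap)
    while heap and len(visited) < len(graph):
        w, v = heapq.heappop(heap)          # smallest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        total += w
        mst_edges.append((w, v))
        for edge in graph[v]:
            heapq.heappush(heap, edge)
    return total, mst_edges

G = {"A": [(1, "B"), (4, "C")], "B": [(1, "A"), (2, "C")], "C": [(4, "A"), (2, "B")]}
print(prim(G, "A"))  # (3, [(1, 'B'), (2, 'C')])
```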
h) Kruskal’s Algorithm
Steps:
   1. Sort edges by weight.
   2. Add the smallest edge to the tree, avoiding cycles.
   3. Repeat until all vertices are connected.
Time Complexity: O(E log E).
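A minimal sketch with a simple union-find (the edge format (weight, u, v) over vertices 0..n-1 is illustrative):

```python
def kruskal(n, edges):
    """Kruskal's MST: sorted edges + union-find cycle detection."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    mst, total = [], 0
    for w, u, v in sorted(edges):           # 1. sort edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # 2. skip edges that would form a cycle
            parent[ru] = rv
            mst.append((w, u, v))
            total += w
    return total, mst

edges = [(1, 0, 1), (2, 1, 2), (4, 0, 2)]
print(kruskal(3, edges))  # (3, [(1, 0, 1), (2, 1, 2)])
```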
i) Dijkstra’s Algorithm
Steps:
   1.    Set all distances to infinity, source distance to 0.
   2.    Use a priority queue to pick the node with the smallest distance.
   3.    Update distances to its neighbors.
   4.    Repeat until all nodes are processed.
Complexity: O(V^2) (or O(E + V log V) with a priority queue).
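A minimal heap-based sketch of the steps above (assumes non-negative edge weights; the tiny graph is illustrative):

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's algorithm; graph: {u: [(v, weight), ...]}, weights >= 0."""
    dist = {u: float("inf") for u in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)          # closest unsettled vertex
        if d > dist[u]:
            continue                        # stale heap entry
        for v, w in graph[u]:
            if d + w < dist[v]:             # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

G = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(G, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```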
j) Bellman-Ford Algorithm
Steps:
   1. Initialize distances: source = 0, others = infinity.
   2. Relax all edges (V - 1) times.
   3. Check for negative-weight cycles.
Given Graph: Apply the algorithm with source A; compute the distances and report a negative cycle if any edge can still be relaxed after V - 1 passes (a generic sketch follows).
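A generic sketch of the three steps (the small edge list at the bottom is only an illustration, not the graph from the question paper):

```python
def bellman_ford(n, edges, source):
    """edges: list of (u, v, w) directed edges over vertices 0..n-1.
    Returns (distances, has_negative_cycle)."""
    dist = [float("inf")] * n
    dist[source] = 0
    for _ in range(n - 1):                  # relax all edges V-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one more pass: any further improvement means a negative cycle
    negative = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, negative

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -2)]
print(bellman_ford(3, edges, 0))  # ([0, 4, 2], False)
```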
UNIT-IV, SECTION-B Answers:
a) 0/1 Knapsack Problem
The 0/1 Knapsack problem requires selecting items to maximize profit while staying within the
weight capacity. Items cannot be divided.
Dynamic Programming Solution: Given:
Knapsack Capacity = 10
Profit = {1, 6, 18, 22, 28}
Weight = {1, 2, 5, 6, 7}.
   1. Define dp[i][w]: maximum profit using the first i items within weight capacity w.
   2. Recurrence:
      dp[i][w] = max(dp[i-1][w], dp[i-1][w - wt[i]] + profit[i]).
   3. Build the table iteratively.
Result: dp[n][W] holds the optimal profit (here 35); trace back through the table to recover the chosen items (see the sketch below).
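A minimal sketch of the table-building recurrence on the given instance:

```python
def knapsack_01(weights, profits, capacity):
    n = len(weights)
    # dp[i][w] = best profit using the first i items within capacity w
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]                       # skip item i
            if weights[i - 1] <= w:                       # or take it
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + profits[i - 1])
    return dp[n][capacity]

print(knapsack_01([1, 2, 5, 6, 7], [1, 6, 18, 22, 28], 10))  # 35
```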
b) Warshall’s and Floyd’s Algorithms
Warshall’s Algorithm:
        Finds the transitive closure of a graph (reachability matrix).
        Iteratively updates the reachability matrix:
         R^(k)[i][j] = R^(k-1)[i][j] ∨ (R^(k-1)[i][k] ∧ R^(k-1)[k][j]).
Floyd’s Algorithm:
        Finds all pairs shortest paths.
        Updates distances iteratively:
         D^(k)[i][j] = min(D^(k-1)[i][j], D^(k-1)[i][k] + D^(k-1)[k][j]).
Time Complexity: Both algorithms run in O(V^3).
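A minimal sketch of Floyd's triple loop; as the comment notes, Warshall's closure is the same loop with min/+ replaced by or/and (the 3-vertex example is illustrative):

```python
INF = float("inf")

def floyd_warshall(D):
    """All-pairs shortest paths, in place; D[i][j] = edge weight or INF.
    Replacing min/+ with or/and on a boolean matrix gives Warshall's
    transitive closure."""
    n = len(D)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

D = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(D))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```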
c) Algorithm for Longest Common Subsequence (LCS)
Steps:
   1. Define dp[i][j]: length of the LCS of prefixes X[0..i-1] and Y[0..j-1].
   2. Recurrence:
      dp[i][j] = dp[i-1][j-1] + 1 if X[i-1] == Y[j-1],
      otherwise dp[i][j] = max(dp[i-1][j], dp[i][j-1]).
   3. Trace back to reconstruct the LCS.
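A minimal sketch of the recurrence plus traceback, run on the classic example pair:

```python
def lcs(X, Y):
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # trace back through the table to reconstruct the subsequence
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # "BCBA" (one LCS of length 4)
```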
d) 8-Queen Problem using Backtracking
The 8-Queen problem places 8 queens on a chessboard such that no two queens threaten each
other.
Backtracking Algorithm:
   1. Place a queen in the first row.
   2. Recursively place queens in subsequent rows, ensuring no threats.
   3. If a solution is found, backtrack to find others.
Solutions:
        Example placement (row, column):
            o (1,1), (2,5), (3,8), (4,6), (5,3), (6,7), (7,2), (8,4).
            o Additional solutions are found by continuing the backtracking search; there are 92 in total (see the sketch below).
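A minimal backtracking sketch; it confirms the placement listed above is the first solution found in column order, out of 92 in total:

```python
def solve_n_queens(n):
    """Backtracking: place one queen per row; sets track attacked lines."""
    solutions, cols, diag1, diag2, placement = [], set(), set(), set(), []
    def place(row):
        if row == n:
            solutions.append(placement.copy())
            return
        for col in range(n):
            if col in cols or row - col in diag1 or row + col in diag2:
                continue                      # square is attacked
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            placement.append(col + 1)         # 1-based column
            place(row + 1)
            placement.pop()
            cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)
    place(0)
    return solutions

sols = solve_n_queens(8)
print(len(sols), sols[0])  # 92 [1, 5, 8, 6, 3, 7, 2, 4]
```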
e) Optimal Coloring of a Graph
Graph Coloring:
        Assign colors to vertices such that no two adjacent vertices share the same color.
        Optimal Coloring minimizes the number of colors used.
Example:
Graph: V = {1, 2, 3}, E = {(1,2), (2,3), (1,3)} (a triangle).
Chromatic Number = 3.
f) Hamiltonian Cycle
A Hamiltonian Cycle visits each vertex exactly once and returns to the start.
Example: Graph: V = {A, B, C, D}, E = {(A,B), (B,C), (C,D), (D,A), (B,D)}.
Cycle: A → B → C → D → A (each vertex visited once; the closing edge (D,A) is in E).
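A minimal backtracking sketch that finds the cycle on the example graph:

```python
def hamiltonian_cycle(graph, start):
    """Backtracking search; graph: {vertex: [neighbours]} (undirected)."""
    n, path = len(graph), [start]
    def extend(v):
        if len(path) == n:
            return start in graph[v]          # can the cycle be closed?
        for w in graph[v]:
            if w not in path:
                path.append(w)
                if extend(w):
                    return True
                path.pop()                    # backtrack
        return False
    return path + [start] if extend(start) else None

G = {"A": ["B", "D"], "B": ["A", "C", "D"], "C": ["B", "D"], "D": ["A", "B", "C"]}
print(hamiltonian_cycle(G, "A"))  # ['A', 'B', 'C', 'D', 'A']
```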
g) Subset Sum Problem
Given:
        n = 4, Sum = 13, W = {3, 4, 5, 6}.
Backtracking:
   1. Explore subsets of weights.
   2. Add weights, backtracking when the running sum exceeds 13.
   3. Valid subset found: {3, 4, 6} (see the sketch below).
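A minimal backtracking sketch with the overshoot pruning described above:

```python
def subset_sum(weights, target):
    """Backtracking: include/exclude each weight, prune when the sum overshoots."""
    solutions = []
    def explore(i, chosen, total):
        if total == target:
            solutions.append(chosen.copy())
            return
        if i == len(weights) or total > target:
            return                            # dead end: prune this branch
        chosen.append(weights[i])
        explore(i + 1, chosen, total + weights[i])   # include weights[i]
        chosen.pop()
        explore(i + 1, chosen, total)                # exclude weights[i]
    explore(0, [], 0)
    return solutions

print(subset_sum([3, 4, 5, 6], 13))  # [[3, 4, 6]]
```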
h) Backtracking vs. Branch & Bound
Feature               Backtracking            Branch & Bound
Nature                Recursive exploration   Tree-based pruning
Optimality Guarantee  Not guaranteed          Guaranteed
Strategy              Depth-first search      Breadth-first (best-first) search
Use Cases             N-Queens, Subset Sum    TSP, Knapsack Problem
i) Traveling Salesman Problem (TSP)
TSP:
      Find the shortest route visiting all cities exactly once and returning to the start.
Approach:
       Dynamic Programming (Held-Karp Algorithm):
        Compute shortest tours over subsets of cities recursively.
        Complexity: O(n^2 · 2^n).
j) Fractional Knapsack Problem
Given:
Weights W = {100, 50, 40, 20, 10, 10},
Values V = {40, 35, 20, 4, 10, 6},
Capacity = 50.
Solution:
   1. Sort items by value/weight ratio: E (1.0), B (0.7), F (0.6), C (0.5), A (0.4), D (0.2).
   2. Add items fractionally until the capacity is filled (see the sketch below).
Optimal Solution: Take all of E (weight 10, value 10) and 40/50 of B (value 35 × 40/50 = 28). Total value = 38.
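A minimal sketch of the fractional greedy on the given instance:

```python
def fractional_knapsack(weights, values, capacity):
    """Greedy by value/weight ratio; items may be taken fractionally."""
    items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0], reverse=True)
    total = 0.0
    for w, v in items:
        if capacity == 0:
            break
        take = min(w, capacity)               # whole item, or the fitting fraction
        total += v * take / w
        capacity -= take
    return total

W = [100, 50, 40, 20, 10, 10]
V = [40, 35, 20, 4, 10, 6]
print(fractional_knapsack(W, V, 50))  # 38.0  (all of E, then 40/50 of B)
```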
UNIT-V, SECTION-B Answers:
a) Define P & NP Problems and Discuss Their Significance
      P (Polynomial time): Class of problems that can be solved in polynomial time by a
       deterministic machine.
      NP (Nondeterministic Polynomial time): Class of problems whose solutions can be
       verified in polynomial time.
Significance:
        Understanding P vs NP helps identify which problems are efficiently solvable.
        The question P = NP is fundamental in computer science: it asks whether every problem
         whose solution can be verified quickly can also be solved quickly.
b) Algorithms for String Matching
Common Algorithms:
   1.   Naïve String Matching
   2.   Rabin-Karp Algorithm
   3.   Knuth-Morris-Pratt (KMP) Algorithm
   4.   Boyer-Moore Algorithm
Example Algorithm: Naïve String Matching
Iteratively compare substrings of text with the pattern.
c) Naïve String Matching Algorithm
Algorithm:
   1. For each position i in the text:
           o Compare the substring T[i .. i+m-1] with the pattern P[0 .. m-1].
   2. If they match, report i as a valid shift.
Time Complexity:
        Best Case: O(n) (a mismatch at the first character of every alignment).
        Worst Case: O(nm) (many near-matches, e.g. T = aa…a and P = aa…ab).
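A minimal sketch of the naïve matcher (the text/pattern pair is illustrative):

```python
def naive_match(T, P):
    """Report every shift i where the pattern occurs in the text."""
    n, m = len(T), len(P)
    return [i for i in range(n - m + 1) if T[i:i + m] == P]

print(naive_match("abcabcab", "abc"))  # [0, 3]
```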
d) Rabin-Karp Algorithm
Key Idea:
       Use a hash function to match substrings efficiently.
Algorithm:
   1. Compute hash values for the pattern and initial substring of text.
   2. Slide the pattern across the text, recompute hash values, and compare.
Significance:
        Efficient for multiple pattern matching.
        Reduces unnecessary comparisons.
e) Rabin-Karp Algorithm Explanation and Complexity
Steps:
   1. Compute h(P) for the pattern P and h(T[0 .. m-1]) for the first window of the text.
   2. Slide the window: h(T[i .. i+m-1]) is obtained from h(T[i-1 .. i+m-2]) by a constant-time rolling-hash update.
   3. If the hashes match, compare the characters to confirm.
Time Complexity:
        Average: O(n + m).
        Worst: O(nm) (due to hash collisions).
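A minimal sketch with an explicit rolling hash (base 256 and modulus 101 are conventional illustrative choices):

```python
def rabin_karp(T, P, base=256, mod=101):
    """Rolling-hash matching; hash collisions are resolved by direct comparison."""
    n, m = len(T), len(P)
    if m > n:
        return []
    high = pow(base, m - 1, mod)              # weight of the leading character
    hp = ht = 0
    for i in range(m):                        # initial hashes of P and T[0..m-1]
        hp = (hp * base + ord(P[i])) % mod
        ht = (ht * base + ord(T[i])) % mod
    hits = []
    for i in range(n - m + 1):
        if hp == ht and T[i:i + m] == P:      # verify on hash match
            hits.append(i)
        if i < n - m:                         # roll the window one character
            ht = ((ht - ord(T[i]) * high) * base + ord(T[i + m])) % mod
    return hits

print(rabin_karp("abcabcab", "abc"))  # [0, 3]
```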
f) Knuth-Morris-Pratt (KMP) Algorithm
Key Idea:
        Preprocess the pattern to avoid redundant comparisons.
Steps:
   1. Construct a prefix table (lps[]) for the pattern.
   2. Match the pattern using the prefix table to skip unnecessary comparisons.
Time Complexity:
        Preprocessing: O(m).
        Matching: O(n).
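A minimal sketch of the prefix-table construction and the matching loop (the example strings are illustrative):

```python
def build_lps(P):
    """lps[i] = length of the longest proper prefix of P[0..i] that is also a suffix."""
    lps, k = [0] * len(P), 0
    for i in range(1, len(P)):
        while k > 0 and P[i] != P[k]:
            k = lps[k - 1]                    # fall back to a shorter border
        if P[i] == P[k]:
            k += 1
        lps[i] = k
    return lps

def kmp_search(T, P):
    lps, k, hits = build_lps(P), 0, []
    for i, ch in enumerate(T):
        while k > 0 and ch != P[k]:
            k = lps[k - 1]                    # skip redundant comparisons
        if ch == P[k]:
            k += 1
        if k == len(P):                       # full match ending at position i
            hits.append(i - len(P) + 1)
            k = lps[k - 1]
    return hits

print(build_lps("ababaca"))          # [0, 0, 1, 2, 3, 0, 1]
print(kmp_search("abababa", "aba"))  # [0, 2, 4]
```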
g) NP-Hard vs NP-Complete
Aspect      NP-Hard                                   NP-Complete
Definition  At least as hard as every problem in NP.  Problems in both NP and NP-Hard.
Membership  May not belong to NP.                     Must belong to NP.
Example     TSP (optimization version).               SAT problem.
h) Approximation Algorithm and Ratio
Approximation Algorithm:
      Provides near-optimal solutions to optimization problems in polynomial time.
Approximation Ratio:
        R = (Cost of Approximation Solution) / (Cost of Optimal Solution).
i) Introduction to Randomized Algorithms
Randomized Algorithm:
      Makes random choices during execution to achieve faster or simpler solutions.
       Types: Monte Carlo (may return a wrong answer with small probability) and Las Vegas (always correct; running time is random).
Example: Randomized Quick Sort, Primality Testing.
j) Randomized Quick Sort
Algorithm:
   1. Choose a pivot randomly.
   2. Partition the array into elements less than and greater than the pivot.
   3. Recursively sort the partitions.
Time Complexity:
       Best/Average: O(n log n).
       Worst: O(n^2) (rare with random pivot selection).
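A minimal sketch using a randomly chosen pivot (this out-of-place version is written for clarity; in-place partitioning is also common):

```python
import random

def randomized_quicksort(a):
    """Expected O(n log n): the pivot is chosen uniformly at random."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([36, 15, 40, 1, 15]))  # [1, 15, 15, 36, 40]
```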