Algorithm Analysis Framework
The Algorithm Analysis Framework is a structured approach used to evaluate the performance of algorithms
based on various factors such as execution time, memory usage, and scalability. The analysis helps developers
choose the most efficient algorithm for a given problem.
Key Aspects of Algorithm Analysis
1. Correctness
o Ensures the algorithm produces the correct output for all valid inputs.
o Typically argued using mathematical induction or loop invariants, and supported by test cases.
2. Efficiency
o Measures how well an algorithm uses resources such as time (CPU cycles) and space
(memory).
o Efficiency is evaluated using two primary metrics:
Time Complexity
Space Complexity
3. Scalability
o Examines how the algorithm performs as the input size grows. This ensures the algorithm can
handle large datasets.
4. Robustness
o Tests how the algorithm behaves with edge cases, unexpected inputs, or hardware constraints.
Types of Algorithm Analysis
1. A Priori Analysis (Theoretical Analysis)
Performed before implementation.
Focuses on:
o Time Complexity: Estimated number of operations.
o Space Complexity: Estimated memory usage.
2. A Posteriori Analysis (Empirical Analysis)
Performed after implementation.
Involves running the algorithm on a machine with specific inputs to measure actual performance
(execution time and memory usage).
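A minimal sketch of such an empirical measurement, assuming a Python environment (the helper name and the input sizes are illustrative only):
import random
import time

def measure_runtime(func, n):
    # Build a random input of size n and time a single run of func on it
    data = [random.randint(0, n) for _ in range(n)]
    start = time.perf_counter()
    func(data)
    return time.perf_counter() - start

# Observe how the measured time grows with the input size
for size in (1_000, 10_000, 100_000):
    print(size, measure_runtime(sorted, size))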
Steps in Algorithm Analysis Framework
1. Define Input Size
Input size (n) is the primary variable representing the number of elements in the input.
o Example: For sorting algorithms, n is the number of elements in the list.
2. Define Operations
Identify the key operations (dominant operations) that significantly impact execution time (see the comparison-counting sketch after this list).
o Example: Comparisons in sorting, multiplications in matrix operations.
3. Calculate Time Complexity
Time complexity is expressed using asymptotic notations:
o O-notation (Big-O): worst-case (upper-bound) time complexity.
o Ω-notation (Omega): best-case (lower-bound) time complexity.
o Θ-notation (Theta): tight bound, commonly used when stating the average case.
4. Analyze Space Complexity
Space complexity includes:
o Input data storage.
o Additional memory used during execution (e.g., recursion stack, auxiliary arrays).
5. Evaluate Edge Cases
Test the algorithm with:
o Minimum and maximum inputs.
o Empty inputs.
o Special cases (e.g., already sorted arrays for sorting algorithms).
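As a concrete illustration of step 2 (identifying dominant operations), the sketch below counts comparisons in a simple selection sort; the function and counter are hypothetical, introduced only for this example:
def selection_sort_counting(arr):
    # Sort a copy of arr and count comparisons, the dominant operation
    a = list(arr)
    comparisons = 0
    for i in range(len(a)):
        min_index = i
        for j in range(i + 1, len(a)):
            comparisons += 1              # one comparison per inner-loop step
            if a[j] < a[min_index]:
                min_index = j
        a[i], a[min_index] = a[min_index], a[i]
    return a, comparisons                 # comparisons = n(n-1)/2, i.e., O(n^2)

print(selection_sort_counting([5, 2, 4, 1, 3]))   # ([1, 2, 3, 4, 5], 10)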
Asymptotic Notations
1. Big-O Notation (O(f(n)))
o Describes the upper bound of the time complexity.
o Indicates the worst-case scenario.
o Example: O(n^2) means the algorithm’s running time grows at most quadratically with the input size.
2. Omega (Ω(f(n)))
o Describes the lower bound of time complexity.
o Indicates the best-case scenario.
3. Theta (Θ(f(n)))
o Describes the exact (tight) bound.
o Commonly used when stating the average-case scenario.
Common Time Complexities
Time Complexity | Example Algorithm | Performance
O(1) | Accessing an array element | Excellent (constant time)
O(log n) | Binary search | Very efficient (logarithmic time)
O(n) | Linear search | Good for small datasets
O(n log n) | Merge Sort, Quick Sort | Efficient for large datasets
O(n^2) | Bubble Sort, Selection Sort | Poor for large datasets
O(2^n) | Recursive Fibonacci calculation | Very inefficient (exponential time)
Example: Sorting Algorithm Analysis
Algorithm: Bubble Sort
1. Time Complexity
o Worst-case: O(n^2) (array sorted in reverse order).
o Best-case: O(n) (array already sorted, assuming the optimized version that stops when no swaps occur).
o Average-case: O(n^2).
2. Space Complexity
o Space usage: O(1) (in-place sorting).
3. Analysis
o Poor scalability for large inputs due to quadratic time complexity.
o Simple to implement but inefficient compared to algorithms like Merge Sort.
Practical Considerations in Algorithm Analysis
1. Real-world Constraints
o Hardware limitations (CPU, RAM).
o Input data characteristics (e.g., sparse matrices, sorted arrays).
2. Trade-offs
o Time vs. space trade-off: Faster algorithms often use more memory (e.g., Dynamic
Programming).
3. Implementation Overheads
o Recursive algorithms may introduce stack overhead.
o Constant factors hidden by O-notation can impact real-world performance.
Asymptotic Notations
Asymptotic notations are mathematical tools used to describe the running time or space requirements of an
algorithm in terms of input size (n) as it grows towards infinity. They provide a way to express the efficiency
of an algorithm without worrying about hardware or implementation details.
Types of Asymptotic Notations
1. Big-O Notation (O(f(n)))
Describes the upper bound of an algorithm's running time.
Represents the worst-case performance.
Definition: if T(n) ≤ c · f(n) for all n ≥ n0 and some constant c > 0, then T(n) = O(f(n)).
Example
Algorithm: Linear Search
o Worst-case: Searching for an element not in the array.
o Time Complexity: O(n), where n is the number of elements.
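A minimal linear search sketch consistent with this analysis (an assumed implementation, not from the original notes):
def linear_search(array, target):
    for index, value in enumerate(array):
        if value == target:
            return index      # best case: target is the first element (Ω(1))
    return -1                 # worst case: target absent, all n elements checked (O(n))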
2. Omega Notation (Ω(f(n)))
Describes the lower bound of an algorithm's running time.
Represents the best-case performance.
Definition: if T(n) ≥ c · f(n) for all n ≥ n0 and some constant c > 0, then T(n) = Ω(f(n)).
Example
Algorithm: Linear Search
o Best-case: The target element is the first element.
o Time Complexity: Ω(1).
3. Theta Notation (Θ(f(n)))
Describes the tight bound of an algorithm's running time.
Indicates that the running time grows asymptotically at the same rate in the best, worst, and average cases.
Definition: if c1 · f(n) ≤ T(n) ≤ c2 · f(n) for all n ≥ n0 and some constants c1, c2 > 0, then T(n) = Θ(f(n)).
Example
Algorithm: Traversing an array to calculate the sum of its elements.
o Time Complexity: Θ(n).
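A short sketch of that traversal (assumed implementation):
def array_sum(array):
    total = 0
    for value in array:       # each element is visited exactly once -> Θ(n)
        total += value
    return total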
4. Little-o Notation (o(f(n)))
Describes a strict upper bound.
Indicates that the growth rate of T(n) is strictly less than f(n) as n grows.
Example
T(n) = 2n is o(n^2) because 2n grows strictly slower than n^2.
5. Little-Omega Notation (ω(f(n)))
Describes a strict lower bound.
Indicates that the growth rate of T(n) is strictly greater than f(n) as n grows.
Example
T(n) = n^3 is ω(n^2) because n^3 grows strictly faster than n^2.
Basic Efficiency Classes
Efficiency classes categorize algorithms based on their growth rates. They are listed in order of increasing time
complexity:
1. Constant Time (O(1))
o The running time is independent of the input size.
o Example: Accessing an element in an array by index.
def constant_function(array, index):
    return array[index]  # O(1)
2. Logarithmic Time (O(log n))
o The running time grows logarithmically with the input size.
o Example: Binary search in a sorted array.
def binary_search(array, target):
    left, right = 0, len(array) - 1
    while left <= right:
        mid = (left + right) // 2
        if array[mid] == target:
            return mid
        elif array[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1  # O(log n)
3. Linear Time (O(n))
o The running time grows linearly with the input size.
o Example: Finding the maximum value in an array.
def find_max(array):
    max_value = array[0]
    for value in array:
        if value > max_value:
            max_value = value
    return max_value  # O(n)
4. Linearithmic Time (O(n log n))
o The running time grows as n log n.
o Example: Merge Sort, Quick Sort (average case).
5. Quadratic Time (O(n^2))
o The running time grows quadratically with the input size.
o Example: Bubble Sort, Selection Sort.
def bubble_sort(array):
    n = len(array)
    for i in range(n):
        for j in range(0, n - i - 1):
            if array[j] > array[j + 1]:
                array[j], array[j + 1] = array[j + 1], array[j]
    return array  # O(n^2)
6. Cubic Time (O(n^3))
o The running time grows cubically with the input size.
o Example: Standard (naive) matrix multiplication.
7. Exponential Time (O(2^n))
o The running time roughly doubles with each additional input element.
o Example: Solving the Tower of Hanoi problem recursively.
def tower_of_hanoi(n, source, target, auxiliary):
    if n == 1:
        print(f"Move disk from {source} to {target}")
        return
    tower_of_hanoi(n - 1, source, auxiliary, target)
    print(f"Move disk from {source} to {target}")
    tower_of_hanoi(n - 1, auxiliary, target, source)  # O(2^n)
8. Factorial Time (O(n!))
o The running time grows factorially with the input size.
o Example: Generating all permutations of a set (a sketch follows below).
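A small recursive sketch of permutation generation (an assumed implementation; it produces all n! orderings of the input list):
def permutations(items):
    if len(items) <= 1:
        return [items]
    result = []
    for i in range(len(items)):                 # n choices for the first position
        rest = items[:i] + items[i + 1:]
        for perm in permutations(rest):         # (n-1)! orderings of the remainder
            result.append([items[i]] + perm)
    return result                               # n! permutations in total -> O(n!)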
Example of Algorithm Analysis
Problem: Sorting a List
Algorithms
1. Bubble Sort: O(n^2)
o Compare adjacent pairs of elements and swap them if they are out of order.
o Inefficient for large datasets.
2. Merge Sort: O(n log n)
o Recursively divide the array and merge the sorted subarrays.
o More efficient for larger datasets.
Comparison
For n = 1000:
o Bubble Sort takes approximately 1,000,000 comparisons (n^2 = 1000^2).
o Merge Sort takes approximately 10,000 comparisons (n log2 n ≈ 1000 × 10).
Analysis of Non-Recursive and Recursive Algorithms
When analyzing algorithms, one must consider their structure, implementation, and performance. Recursive and
non-recursive algorithms differ fundamentally in their approach, which impacts their efficiency in terms of time
complexity, space complexity, and implementation.
Non-Recursive Algorithms
A non-recursive algorithm solves a problem iteratively, often using loops. It avoids function calls to itself,
relying instead on constructs like for or while loops for repetition.
Characteristics
1. Uses loops for repetition.
2. Requires less memory since it doesn't involve function call overhead.
3. Easier to debug and less prone to stack overflow errors.
4. May require additional data structures like stacks or queues to mimic recursion.
Example: Factorial Calculation (Non-Recursive)
def factorial_iterative(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result
Analysis
Time Complexity: O(n) (iterates n times).
Space Complexity: O(1) (no additional space required except for a few variables).
Recursive Algorithms
A recursive algorithm solves a problem by breaking it into smaller subproblems and calling itself on these
subproblems until a base case is reached.
Characteristics
1. Uses function calls to itself.
2. Requires more memory due to function call stack overhead.
3. Simplifies problems with divide-and-conquer or repetitive structures.
4. Base cases are critical to avoid infinite recursion.
Example: Factorial Calculation (Recursive)
def factorial_recursive(n):
    if n == 0 or n == 1:  # Base case
        return 1
    return n * factorial_recursive(n - 1)  # Recursive case
Analysis
Time Complexity: O(n) (makes n recursive calls).
Space Complexity: O(n) (the call stack holds up to n frames).
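A quick check, assuming the two factorial functions defined above, that both versions agree:
for n in range(6):
    assert factorial_iterative(n) == factorial_recursive(n)
print(factorial_iterative(5), factorial_recursive(5))   # 120 120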
Comparison of Non-Recursive and Recursive Algorithms
Aspect | Non-Recursive | Recursive
Structure | Iterative (loops) | Function calls itself
Memory Usage | Requires less memory | Stack space used for recursion
Ease of Implementation | Simple for straightforward problems | Elegant for divide-and-conquer problems
Debugging | Easier to debug | Can be harder due to stack frames
Efficiency | No overhead from function calls | Function calls add overhead
Detailed Analysis Example: Fibonacci Sequence
1. Recursive Algorithm
def fibonacci_recursive(n):
    if n <= 1:  # Base cases
        return n
    return fibonacci_recursive(n - 1) + fibonacci_recursive(n - 2)  # Recursive case
Analysis
Time Complexity: O(2^n)
o Every call splits into two more calls, leading to exponential growth.
Space Complexity: O(n)
o Requires stack space proportional to the depth of recursion.
2. Non-Recursive Algorithm
def fibonacci_iterative(n):
    if n <= 1:
        return n
    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b
Analysis
Time Complexity: O(n)
o Iterates n times.
Space Complexity: O(1)
o Only a few variables are used.
Optimization of Recursive Algorithm with Memoization
To improve the recursive Fibonacci algorithm, use memoization to avoid redundant calculations.
def fibonacci_memoized(n, memo={}):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fibonacci_memoized(n - 1, memo) + fibonacci_memoized(n - 2, memo)
    return memo[n]
Analysis
Time Complexity: O(n)
o Each Fibonacci number is calculated once.
Space Complexity: O(n)
o Space is used for the memoization table (plus the recursion stack).
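One Python-specific caveat: the default memo={} dictionary is created once and shared across calls (the mutable-default behavior), which here acts as a persistent cache. If independent calls are preferred, a fresh dictionary can be passed explicitly, as in this illustrative snippet:
print(fibonacci_memoized(30))            # 832040, using the shared default cache
print(fibonacci_memoized(30, memo={}))   # same result, computed with a fresh cache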
Choosing Between Recursive and Non-Recursive Algorithms
Use Recursive Algorithms When:
1. Problem naturally fits a divide-and-conquer approach (e.g., Merge Sort, Quick Sort).
2. Problem involves tree-like structures (e.g., Tree traversal, DFS).
Use Non-Recursive Algorithms When:
1. Space efficiency is critical.
2. Problem can be solved iteratively without additional overhead (e.g., array traversal, iterative
Fibonacci).
Divide and Conquer: Merge Sort
Merge Sort is a classic sorting algorithm that uses the divide and conquer strategy. It divides the problem into
smaller subproblems, solves them recursively, and then merges the sorted subarrays to produce the final sorted
array.
Steps of Merge Sort
1. Divide:
o Split the array into two halves until each subarray contains one element or is empty.
2. Conquer:
o Recursively sort each subarray.
3. Combine:
o Merge the sorted subarrays to produce a single sorted array.
Algorithm for Merge Sort
1. Recursive Divide
Divide the array into two halves:
o Left half: A[0 ... mid]
o Right half: A[mid+1 ... n-1]
2. Merge Function
Compare elements of the two sorted halves.
Place the smallest element in the merged array.
Continue until all elements are merged.
Merge Sort Implementation
def merge_sort(arr):
    if len(arr) <= 1:  # Base case: an array with 0 or 1 elements is already sorted
        return arr
    # Step 1: Divide the array into two halves
    mid = len(arr) // 2
    left_half = merge_sort(arr[:mid])    # Recursively sort the left half
    right_half = merge_sort(arr[mid:])   # Recursively sort the right half
    # Step 2: Merge the two sorted halves
    return merge(left_half, right_half)

def merge(left, right):
    sorted_array = []
    i = j = 0
    # Compare elements from both halves and append the smaller one
    # (<= keeps the sort stable: equal elements retain their original order)
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            sorted_array.append(left[i])
            i += 1
        else:
            sorted_array.append(right[j])
            j += 1
    # Append remaining elements from left (if any)
    while i < len(left):
        sorted_array.append(left[i])
        i += 1
    # Append remaining elements from right (if any)
    while j < len(right):
        sorted_array.append(right[j])
        j += 1
    return sorted_array
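A quick usage check, assuming the two functions above and using the same input as the walkthrough that follows:
print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]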
Example: Step-by-Step Execution
Input Array: [38, 27, 43, 3, 9, 82, 10]
Step 1: Divide
Split array: [38, 27, 43] and [3, 9, 82, 10].
Step 2: Recursive Sorting
Left half [38, 27, 43]:
o Split: [38] and [27, 43].
o Sort [27, 43]: split into [27] and [43] → merge to [27, 43].
o Merge [38] and [27, 43] → [27, 38, 43].
Right half [3, 9, 82, 10]:
o Split: [3, 9] and [82, 10].
o Sort [3, 9]: split into [3] and [9] → merge to [3, 9].
o Sort [82, 10]: split into [82] and [10] → merge to [10, 82].
o Merge [3, 9] and [10, 82] → [3, 9, 10, 82].
Step 3: Final Merge
Merge [27, 38, 43] and [3, 9, 10, 82]:
o Compare and merge → [3, 9, 10, 27, 38, 43, 82].
Time Complexity
1. Divide Step:
o Repeatedly splitting the array in half gives a recursion depth of O(log n).
2. Merge Step:
o Merging the sorted halves at each level takes O(n), because every element is processed once per level.
Overall Time Complexity: O(n log n), from the recurrence T(n) = 2T(n/2) + O(n).
Space Complexity
Auxiliary Space: Requires additional space to store temporary arrays during merging.
Overall Space Complexity: O(n)
Advantages of Merge Sort
1. Efficiency: Guaranteed O(n log n) time complexity in all cases (best, worst, average).
2. Stability: Maintains the relative order of equal elements.
3. Divide and Conquer: Easy to implement recursively.
Disadvantages of Merge Sort
1. Requires extra memory for temporary arrays (O(n)).
2. Can be slower than in-place sorting algorithms (e.g., Quick Sort) for smaller datasets due to overhead.
Quick Sort
Quick Sort is a divide-and-conquer algorithm for sorting arrays. It works by selecting a "pivot" element from
the array, partitioning the other elements into two subarrays (elements smaller than the pivot and elements
greater than the pivot), and then recursively sorting the subarrays.
Steps in Quick Sort
1. Divide:
o Select a pivot element.
o Partition the array into two parts:
Elements smaller than the pivot.
Elements greater than the pivot.
2. Conquer:
o Recursively apply Quick Sort to the subarrays.
3. Combine:
o The combination step is implicit because the sorted subarrays and pivot form the final sorted
array.
Algorithm for Quick Sort
Partition Function
The partition function is responsible for placing the pivot in its correct position and ensuring all smaller
elements are to its left and all larger elements are to its right.
Quick Sort Function
Recursively sorts the subarrays.
Implementation
def quick_sort(arr):
    if len(arr) <= 1:  # Base case: an array of 0 or 1 elements is already sorted
        return arr
    # Step 1: Choose a pivot (e.g., the last element)
    pivot = arr[-1]
    left = [x for x in arr[:-1] if x <= pivot]   # Elements smaller than or equal to the pivot
    right = [x for x in arr[:-1] if x > pivot]   # Elements greater than the pivot
    # Step 2: Recursively sort the left and right partitions
    return quick_sort(left) + [pivot] + quick_sort(right)
Example: Step-by-Step Execution
Input Array: [10, 80, 30, 90, 40, 50, 70]
1. Choose Pivot: Select the last element as the pivot (70).
2. Partition:
o Elements smaller than or equal to 70: [10, 30, 40, 50].
o Elements greater than 70: [80, 90].
o Combine: [10, 30, 40, 50] + [70] + [80, 90].
3. Recursive Sorting:
o Sort [10, 30, 40, 50]:
Pivot: 50.
Partition: [10, 30, 40] and [].
Result: [10, 30, 40, 50].
o Sort [80, 90]: Already sorted.
4. Final Result:
o Combine all: [10, 30, 40, 50, 70, 80, 90].
Time Complexity
1. Best-Case: O(n log n)
o Occurs when the pivot divides the array into two roughly equal halves at each step.
2. Average-Case: O(n log n)
o On average, the pivot divides the array reasonably well.
3. Worst-Case: O(n^2)
o Occurs when the pivot is always the smallest or largest element, leading to unbalanced partitions.
Space Complexity
In-Place Version: O(log n) (for the recursion stack).
Non-In-Place Version (above implementation): O(n) due to the additional lists for the left and right partitions.
Optimized Quick Sort (In-Place)
To improve space efficiency, Quick Sort can be implemented in-place using the Lomuto Partition Scheme or
Hoare Partition Scheme.
Lomuto Partition Scheme
def quick_sort_in_place(arr, low, high):
    if low < high:
        # Partition the array and get the pivot index
        pi = partition(arr, low, high)
        # Recursively sort elements before and after the pivot
        quick_sort_in_place(arr, low, pi - 1)
        quick_sort_in_place(arr, pi + 1, high)

def partition(arr, low, high):
    pivot = arr[high]  # Choose the last element as the pivot
    i = low - 1        # Index of the last element known to be <= pivot
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]  # Swap elements
    arr[i + 1], arr[high] = arr[high], arr[i + 1]  # Place the pivot in its correct position
    return i + 1
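Because the in-place version sorts between explicit indices, it is called with the full index range, as in this illustrative driver:
data = [10, 80, 30, 90, 40, 50, 70]
quick_sort_in_place(data, 0, len(data) - 1)   # sort the whole list in place
print(data)                                   # [10, 30, 40, 50, 70, 80, 90]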
Advantages of Quick Sort
1. Efficiency:
o Faster in practice than Merge Sort for many inputs due to lower overhead.
2. In-Place:
o Can be implemented without additional memory.
Disadvantages of Quick Sort
1. Worst-Case Performance:
o O(n^2) when the pivot selection is poor.
2. Unstable:
o Does not preserve the relative order of equal elements.
Comparison with Merge Sort
Aspect | Quick Sort | Merge Sort
Time Complexity | O(n^2) (worst), O(n log n) (average) | O(n log n) (always)
Space Complexity | O(log n) (in-place) | O(n) (auxiliary space)
Stability | Unstable | Stable
Best Use Case | Smaller datasets, when in-place sorting is needed | Larger datasets, guaranteed performance
Strassen’s Matrix Multiplication
Strassen’s Matrix Multiplication is an efficient algorithm for multiplying two square matrices. It uses the
divide-and-conquer approach to reduce the time complexity compared to the standard matrix multiplication.
Traditional Matrix Multiplication
For two n × n matrices A and B, the traditional method involves:
Multiplying the elements of the rows of A with the elements of the columns of B, resulting in:
o O(n^3) operations (scalar multiplications and additions).
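A sketch of this traditional triple-loop method (an assumed implementation), which performs n scalar multiplications for each of the n^2 output entries:
def matrix_multiply(A, B):
    # Standard O(n^3) multiplication of two n x n matrices given as lists of lists
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):               # n multiply-add operations per entry C[i][j]
                C[i][j] += A[i][k] * B[k][j]
    return C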
Strassen’s Improvement
Strassen’s algorithm reduces the number of multiplications required by dividing the matrices into smaller
submatrices. It achieves a time complexity of approximately O(n^2.81), which is faster than the traditional O(n^3).
Steps in Strassen’s Algorithm
1. Divide
Split A and B into four equal-sized submatrices, each of size n/2 × n/2:
A = [[A11, A12], [A21, A22]],   B = [[B11, B12], [B21, B22]]
2. Compute 7 Products
Using the submatrices, compute the following 7 products (M1 to M7):
M1 = (A11 + A22)(B11 + B22)
M2 = (A21 + A22) B11
M3 = A11 (B12 - B22)
M4 = A22 (B21 - B11)
M5 = (A11 + A12) B22
M6 = (A21 - A11)(B11 + B12)
M7 = (A12 - A22)(B21 + B22)
3. Combine Results
Using the 7 products, compute the final submatrices of the result matrix C:
C11 = M1 + M4 - M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 - M2 + M3 + M6
Algorithm
import numpy as np

def strassen_multiply(A, B):
    # Base case: if the matrix is 1x1, multiply directly
    if len(A) == 1:
        return A * B
    # Step 1: Divide matrices into quadrants (assumes square matrices whose size is a power of 2)
    n = len(A)
    mid = n // 2
    A11, A12, A21, A22 = A[:mid, :mid], A[:mid, mid:], A[mid:, :mid], A[mid:, mid:]
    B11, B12, B21, B22 = B[:mid, :mid], B[:mid, mid:], B[mid:, :mid], B[mid:, mid:]
    # Step 2: Compute the 7 products
    M1 = strassen_multiply(A11 + A22, B11 + B22)
    M2 = strassen_multiply(A21 + A22, B11)
    M3 = strassen_multiply(A11, B12 - B22)
    M4 = strassen_multiply(A22, B21 - B11)
    M5 = strassen_multiply(A11 + A12, B22)
    M6 = strassen_multiply(A21 - A11, B11 + B12)
    M7 = strassen_multiply(A12 - A22, B21 + B22)
    # Step 3: Combine results into the final quadrants
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    # Combine quadrants into a single matrix
    C = np.vstack((np.hstack((C11, C12)), np.hstack((C21, C22))))
    return C
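A quick usage check with NumPy arrays (the implementation above assumes square matrices whose size is a power of 2; other sizes would require padding):
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(strassen_multiply(A, B))   # [[19 22]
                                 #  [43 50]]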
Example
Input:
A = [[1, 2], [3, 4]],   B = [[5, 6], [7, 8]]
Step 1: Divide Matrices
A11 = 1, A12 = 2, A21 = 3, A22 = 4
B11 = 5, B12 = 6, B21 = 7, B22 = 8
Step 2: Compute Products
M1 = (A11 + A22)(B11 + B22) = (1 + 4)(5 + 8) = 65
M2 = (A21 + A22) B11 = (3 + 4)(5) = 35
M3 = A11 (B12 - B22) = (1)(6 - 8) = -2
M4 = A22 (B21 - B11) = (4)(7 - 5) = 8
M5 = (A11 + A12) B22 = (1 + 2)(8) = 24
M6 = (A21 - A11)(B11 + B12) = (3 - 1)(5 + 6) = 22
M7 = (A12 - A22)(B21 + B22) = (2 - 4)(7 + 8) = -30
Step 3: Compute Final Submatrices
C11 = M1 + M4 - M5 + M7 = 65 + 8 - 24 - 30 = 19
C12 = M3 + M5 = -2 + 24 = 22
C21 = M2 + M4 = 35 + 8 = 43
C22 = M1 - M2 + M3 + M6 = 65 - 35 - 2 + 22 = 50
Final Result:
C = [[19, 22], [43, 50]]
Time Complexity
Recursion Depth: O(log n) levels of division.
Number of Multiplications: reduced from 8 to 7 per level of recursion.
Time Complexity: the recurrence T(n) = 7T(n/2) + O(n^2) solves to O(n^(log2 7)) ≈ O(n^2.81).
Advantages of Strassen’s Algorithm
1. Faster than the traditional O(n^3) method for large matrices.
2. Useful in applications like computer graphics and scientific simulations.
Disadvantages
1. Overhead from recursive calls makes it slower for small matrices.
2. Requires additional memory for temporary matrices.
3. Numerical stability can be an issue due to extra additions and subtractions.