
DAA 1

UNIT 4

Dynamic Programming (DP) - Notes

Dynamic Programming (DP) is a problem-solving technique that involves breaking down a


complex problem into simpler subproblems and storing the solutions to these subproblems to avoid
redundant calculations. It is particularly useful for optimization problems where the solution can be
constructed incrementally.

Principle of Optimality

The Principle of Optimality states that:

"An optimal solution to the problem contains optimal solutions to the subproblems."

This means that an optimal solution to a problem can be derived from optimal solutions to its
subproblems. DP relies heavily on this principle, as it ensures that we can build up the solution of
the overall problem using the solutions to smaller subproblems.

In the context of DP:

1. Optimal substructure: An optimal solution to the problem can be built from optimal solutions to its subproblems.
2. Overlapping subproblems: The same subproblems are solved multiple times, so their results are worth storing.
The Knapsack Problem

The Knapsack Problem involves a set of items, each with a weight and a value, and a knapsack
with a weight capacity. The objective is to determine the maximum value of items that can be
placed in the knapsack without exceeding its weight limit.

There are two primary versions:

1. 0/1 Knapsack: Each item can either be included or excluded from the knapsack.
2. Fractional Knapsack: Items can be divided and fractions of them can be included.
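The 0/1 version can be solved bottom-up with a DP table. The sketch below uses a one-dimensional table indexed by remaining capacity; the function name knapsack_01 and this table layout are illustrative choices, not prescribed by these notes:

```python
def knapsack_01(weights, values, capacity):
    """0/1 knapsack by bottom-up DP. dp[w] is the best value
    achievable with total weight at most w."""
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # Iterate capacities downward so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]
```

For example, with weights [1, 3, 4, 5], values [1, 4, 5, 7], and capacity 7, the best choice is the items of weight 3 and 4 for a total value of 9.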
All-Pairs Shortest Path

The All-Pairs Shortest Path (APSP) problem involves finding the shortest paths between every
pair of vertices in a graph. This is particularly useful for weighted graphs, where edges have varying
weights.

Two algorithms commonly used for APSP:

1. Warshall's Algorithm (for finding the transitive closure of a directed graph):


◦ Used to determine the reachability between pairs of nodes, i.e., if there exists a path
from node i to node j.
◦ Uses a dynamic programming approach to iteratively compute reachability.
2. Floyd-Warshall Algorithm (for finding shortest paths):
◦ Solves the APSP problem for all pairs of nodes in a graph (can handle both negative
and positive edge weights but not negative cycles).
◦ Time complexity: O(V³), where V is the number of vertices.
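The triple loop of Floyd-Warshall can be sketched as below, assuming the graph is given as an adjacency matrix with float('inf') marking missing edges (one common representation among several):

```python
def floyd_warshall(dist):
    """All-pairs shortest paths. dist is an n x n matrix where
    dist[i][j] is the edge weight (float('inf') if no edge, 0 on the
    diagonal). Returns a new matrix of shortest-path lengths."""
    n = len(dist)
    d = [row[:] for row in dist]       # Copy so the input is not mutated
    for k in range(n):                 # Allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

Each pass over k asks: does going through vertex k shorten the i-to-j path found so far?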


Making Change Problem

The Making Change Problem involves finding the minimum number of coins needed to make a
given amount of change using a set of coin denominations.
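A bottom-up DP sketch for this problem, where dp[a] holds the fewest coins summing to a (the name min_coins and the -1 convention for impossible amounts are illustrative):

```python
def min_coins(coins, amount):
    """Minimum number of coins to make `amount`; -1 if impossible."""
    INF = float('inf')
    dp = [0] + [INF] * amount          # dp[0] = 0: zero coins make zero
    for a in range(1, amount + 1):
        for c in coins:
            # Extend a smaller solved amount by one coin of value c.
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1
```

Note that the greedy strategy of taking the largest coin first is not always optimal for arbitrary denominations, which is why the DP formulation is needed.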

Chained Matrix Multiplication

Matrix multiplication is associative, so a chain of matrices can be parenthesized in many ways without changing the result; the cost, however, depends on the parenthesization. The goal is to find the
optimal way to parenthesize a sequence of matrix multiplications to minimize the number of scalar
multiplications.
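The standard interval DP for this problem can be sketched as follows; dims[i-1] x dims[i] are the dimensions of the i-th matrix, and the function name is an illustrative choice:

```python
def matrix_chain_cost(dims):
    """Minimum scalar multiplications to compute A1*...*An, where
    matrix Ai has dimensions dims[i-1] x dims[i]."""
    n = len(dims) - 1                      # Number of matrices
    # cost[i][j] = min cost to multiply matrices i..j (1-indexed)
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):         # Increasing chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # Try every split point k between i and j.
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return cost[1][n]
```

For dims [10, 20, 30, 40], parenthesizing as (A1 A2) A3 costs 6000 + 12000 = 18000 scalar multiplications, which beats the 32000 of A1 (A2 A3).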

Longest Common Subsequence (LCS)

The Longest Common Subsequence problem is to find the longest subsequence common to two
sequences. A subsequence is derived by deleting some or no elements from a sequence without
changing the order of the remaining elements.
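The usual two-dimensional DP for LCS length can be sketched as below (lcs_length is an illustrative name; the table could equally be reduced to two rows):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of sequences a and b.
    dp[i][j] = LCS length of a[:i] and b[:j]."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # Extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

For "ABCBDAB" and "BDCABA" the LCS length is 4 (e.g., "BCAB").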

Summary of Common DP Techniques

• Memoization (Top-down): Recursively solve the problem and store the results to avoid
redundant calculations.
• Tabulation (Bottom-up): Iteratively solve subproblems, typically by filling a table, ensuring
all subproblems are solved before the main problem.
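The two techniques can be contrasted on the Fibonacci recurrence; fib_memo and fib_tab are illustrative names, and lru_cache is one convenient way to memoize in Python:

```python
from functools import lru_cache

# Top-down (memoization): recurse, caching each subproblem's result.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): solve the smallest subproblems first.
def fib_tab(n):
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr   # Only the last two values are kept
    return curr
```

Both compute the same values; memoization solves only the subproblems actually reached, while tabulation solves all of them in a fixed order.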

UNIT 5

Graph Theory and Algorithms

Graphs are fundamental data structures used in computer science to model relationships between
objects. A graph consists of nodes (or vertices) and edges (or arcs) that connect these nodes. Graphs
are versatile and can represent numerous real-world problems such as social networks,
transportation systems, and computational problems like scheduling and routing.

1. Introduction to Graphs and Games

Graphs are often used to represent complex systems and their relationships. In the context of
games, a graph can be used to model state spaces and transitions between states.

• Vertices (Nodes) represent different states in a game or problem.


• Edges represent the possible moves or transitions from one state to another.
Examples in Games:

• Chess: Each configuration of the board is a node, and an edge exists between two nodes if
one configuration can be reached from the other by a valid move.

• Maze Solving: A maze can be represented as a graph where each cell is a vertex, and
adjacent cells are connected by edges.
Graph traversal algorithms help navigate through such state spaces and are crucial in AI,
pathfinding, and problem-solving in games.

2. Types of Graphs

Graphs can be classified based on various characteristics such as the direction of edges, weight of
edges, and whether they contain cycles. Here are the main types:

Undirected Graph

• Definition: An undirected graph is a graph in which the edges have no direction. The edge
between any two vertices u and v is denoted as {u, v}, meaning that you can traverse
from u to v and vice versa.
• Properties:
◦ If there is an edge {u, v}, it implies that u is connected to v and v is connected
to u.
◦ Undirected graphs are commonly used to model symmetric relationships, such as
friendships or bidirectional communication networks.
Directed Graph (Digraph)

• Definition: A directed graph (or digraph) is a graph in which the edges have a direction.
Each edge is an ordered pair (u, v), which means there is a directed edge from vertex u
to vertex v.
• Properties:
◦ The edge (u, v) indicates a one-way relationship from u to v, but there is no
edge in the reverse direction unless explicitly defined.
◦ Directed graphs are used to model situations like webpages linking to each other or
one-way streets.
Weighted Graph

• Definition: In a weighted graph, each edge has a weight or cost associated with it. This is
useful in situations where edges represent costs, distances, or times (e.g., in road networks).
• Types:
◦ Weighted undirected graph: Each edge has a weight, and the relationship is
bidirectional.
◦ Weighted directed graph: Each directed edge has a weight, and the direction of the
edge matters.
Cyclic and Acyclic Graphs

• Cyclic Graph: A graph that contains at least one cycle (a path that starts and ends at the
same vertex).
• Acyclic Graph: A graph that does not contain any cycles.
◦ Directed Acyclic Graph (DAG): A directed graph with no cycles, commonly used
in scheduling, topological sorting, and data flow analysis.
3. Traversing Graphs

Graph traversal refers to the process of visiting all the vertices and edges in a graph. There are two
primary methods for traversing a graph: Depth First Search (DFS) and Breadth First Search
(BFS).

Depth First Search (DFS)

DFS is an algorithm for traversing or searching a graph or tree. The algorithm starts at the root (or
any arbitrary node), explores as far as possible along each branch before backtracking.

• Steps:

1. Start at the root node.


2. Visit an adjacent node, mark it as visited, and then explore its neighbors.
3. If a node has no unvisited adjacent nodes, backtrack to the last node with unvisited
neighbors.
4. Repeat the process until all nodes are visited.
• Implementation: DFS can be implemented using recursion or a stack data structure.

1. Recursive Approach:
def dfs(graph, node, visited):
    visited.add(node)                      # Mark the current node as visited
    for neighbor in graph[node]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)  # Recurse into unvisited neighbors
Non-recursive Approach (using Stack):

def dfs(graph, start):
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            # Assumes neighbor collections are sets, so set difference works
            stack.extend(graph[node] - visited)

• Properties:

◦ DFS can be used for cycle detection, pathfinding, and topological sorting in directed
acyclic graphs (DAGs).
◦ It uses O(V) space for recursion stack and can be slower compared to BFS in some
cases.
Breadth First Search (BFS)

BFS is an algorithm for traversing or searching a graph in which the exploration starts at the root
node and visits all of its neighbors first, then moves on to the next level neighbors.

• Steps:

1. Start at the root node and visit all its neighbors.
2. Enqueue all unvisited neighbors in a queue.
3. Dequeue the front node and repeat the process until all nodes are visited.

Implementation: BFS uses a queue to manage the nodes to be visited.

from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node not in visited:
            visited.add(node)
            # Assumes neighbor collections are sets, so set difference works
            queue.extend(graph[node] - visited)

• Properties:

◦ BFS is ideal for finding the shortest path in an unweighted graph because it explores
nodes level by level.
◦ Time complexity: O(V + E), where V is the number of vertices and E is the number
of edges.
4. Topological Sort

Topological sorting is an algorithm that arranges the vertices of a directed graph in a linear order,
such that for every directed edge (u, v), vertex u comes before vertex v in the ordering.

Topological sort is only possible in Directed Acyclic Graphs (DAGs), since a cycle would prevent
a linear ordering.

Algorithm (using DFS):

1. Perform a DFS on the graph.


2. Mark nodes as visited when explored.
3. After exploring all neighbors of a node, add the node to the topological order.
4. Reverse the topological order to get the final sorted order.
Steps:

1. Perform DFS traversal of the graph and keep track of the order in which nodes are fully
processed (when all their neighbors have been visited).
2. Reverse the order of processed nodes for the topological sort.
Time Complexity: The time complexity of topological sorting is O(V + E), where V is the number
of vertices and E is the number of edges.

Applications:

• Task scheduling (e.g., job scheduling where some tasks must be completed before others).
• Compiler design (ordering of compilation tasks).
• Dependency resolution (e.g., package managers).
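The DFS-based steps above can be sketched as follows, assuming the graph is a dict mapping each node to a list of its neighbors (an illustrative representation):

```python
def topological_sort(graph):
    """DFS-based topological sort of a DAG given as {node: [neighbors]}."""
    visited, order = set(), []

    def dfs(node):
        visited.add(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                dfs(nbr)
        order.append(node)   # Node is finished: all its descendants are done

    for node in graph:
        if node not in visited:
            dfs(node)
    return order[::-1]       # Reverse the finish order for the final ordering
```

Each node is appended only after everything reachable from it, so reversing the finish order places every u before the v of each edge (u, v).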
Summary of Graph Traversal Algorithms:


• DFS: any graph. Uses: pathfinding, topological sort, cycle detection. Time: O(V + E)
• BFS: any graph. Uses: shortest path (unweighted), level-wise traversal. Time: O(V + E)
• Topological Sort: DAGs only. Uses: task scheduling, dependency resolution. Time: O(V + E)

UNIT 6

Backtracking and Branch and Bound

Backtracking and Branch and Bound are two fundamental problem-solving techniques often used in
optimization problems, searching, and combinatorial problems. They involve exploring all possible
solutions systematically and pruning the search space to avoid unnecessary calculations.

1. Introduction to Backtracking

Backtracking is a general algorithmic technique for finding solutions to problems by incrementally


building candidates to the solution and abandoning (or "backtracking") as soon as it is determined
that the candidate cannot possibly be extended to a valid solution.

• Backtracking Algorithm: Involves trying different possibilities and "backtracking" when a


solution path is no longer viable.
• How it works:
1. Begin at an initial state (starting point).
2. Move step by step, exploring all possibilities.
3. When you reach a state where no further progress can be made (i.e., a dead end),
backtrack to the previous state and try a new path.
4. This continues until a valid solution is found or all possibilities are exhausted.
Backtracking is used in problems where the solution involves constructing a combination or
permutation of items, and it’s feasible to explore all possibilities systematically.

• Key Idea: It’s an optimization technique where the algorithm constructs a solution step by
step and prunes solutions that are not promising (no need to explore them further).
• Examples:
◦ N-Queens problem
◦ Sudoku Solver
◦ Subset Sum Problem

2. The Eight Queens Problem

The Eight Queens Problem is a classic backtracking problem where the task is to place 8 queens
on a chessboard such that no two queens threaten each other. A queen can attack another queen if
they are in the same row, column, or diagonal.

• Problem Setup:

◦ The goal is to place 8 queens on an 8x8 chessboard.


◦ The queens must not share the same row, column, or diagonal.

• Backtracking Approach:

◦ Place the first queen in the first row and move to the next row.
◦ Try placing a queen in each column of the next row.
◦ If placing a queen leads to a valid configuration (i.e., no two queens threaten each
other), move to the next row.
◦ If a conflict occurs, backtrack to the previous row and try a new position for the
queen.
◦ Repeat until all queens are placed or all configurations have been exhausted.
• Recursive Solution:

◦ Use recursion to place queens row by row.


◦ At each row, check if the current placement is safe (no conflicts).
◦ If placing a queen is safe, proceed to the next row.
◦ If no valid placement is found for a row, backtrack and try a different column in the
previous row.
Time Complexity: The worst-case time complexity for the 8-Queens problem is O(n!) because
there are n! ways to place the queens on the board.
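The recursive placement described above can be sketched for general n; tracking attacked columns and diagonals in sets is one common implementation choice, not prescribed by these notes:

```python
def solve_n_queens(n):
    """Count n-queens solutions by backtracking row by row."""
    count = 0
    cols, diag1, diag2 = set(), set(), set()   # Attacked columns/diagonals

    def place(row):
        nonlocal count
        if row == n:
            count += 1                          # All rows filled: one solution
            return
        for col in range(n):
            if col in cols or (row + col) in diag1 or (row - col) in diag2:
                continue                        # Square is attacked: prune
            cols.add(col); diag1.add(row + col); diag2.add(row - col)
            place(row + 1)                      # Recurse into the next row
            cols.discard(col); diag1.discard(row + col); diag2.discard(row - col)

    place(0)
    return count
```

For the classic 8x8 board this counts 92 solutions.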

3. Branch and Bound

Branch and Bound is an algorithmic method used for solving optimization problems where the
solution space is large and exhaustive search is computationally expensive. It is commonly used in
combinatorial optimization problems, like the Knapsack Problem and the Travelling Salesman
Problem (TSP).

• Key Concepts:
◦ Branching: The process of dividing the problem into subproblems.
◦ Bounding: The process of determining an upper or lower bound on the optimal
solution for the subproblem.
◦ Pruning: If a subproblem’s bound indicates it cannot yield a better solution than the
best solution found so far, it is discarded.
• How Branch and Bound works:
◦ Branch: Break down the problem into smaller subproblems.
◦ Bound: Calculate a bound (upper or lower) for the subproblem. If the bound
indicates that the subproblem cannot lead to a better solution, discard it.
◦ Prune: Eliminate subproblems that cannot provide a better solution.
◦ Search: Continue the process until all possible subproblems have been explored or
pruned.

4. Knapsack Problem (Branch and Bound)

The Knapsack Problem is a classic optimization problem in which you are given a set of items,
each with a weight and a value. The objective is to select a subset of these items such that the total
weight does not exceed a given limit (capacity of the knapsack), and the total value is maximized.

• Problem Definition:
◦ Given:
▪ A set of items, each with a weight w_i and a value v_i.
▪ A maximum weight capacity W.

◦ Find: The subset of items that maximizes the total value without exceeding the
capacity W.
• Branch and Bound Approach:
◦ Branching: In each node of the tree, decide whether to include the current item in
the knapsack or not.
◦ Bounding: For each node, calculate the upper bound on the maximum value that can
be achieved from the current subset of items. A common bounding technique is to
use a greedy approach, where we take as many high-value items as possible, filling
the knapsack.
◦ Pruning: If the bound of a node indicates that it is worse than the current best
solution, prune that branch.
• Bounding Function: A common bounding function involves using the fractional knapsack
solution (where you can take fractions of items) to estimate an upper bound.
Time Complexity: The worst-case time complexity for the Branch and Bound algorithm depends
on the problem, but typically, it reduces the search space compared to brute force approaches.

5. Travelling Salesman Problem (TSP)

The Travelling Salesman Problem (TSP) involves finding the shortest possible route that visits
every city in a given list exactly once and returns to the starting city.

• Problem Definition:

◦ Given: A set of cities and the distances between each pair of cities.
◦ Find: The shortest possible route that visits each city once and returns to the starting
point.
• Branch and Bound Approach:

◦ Branching: In each step, choose the next city to visit, and divide the problem into
subproblems based on possible paths.
◦ Bounding: Calculate a lower bound on the length of any complete tour through the
subproblem. A common bounding technique computes this lower bound
using a minimum spanning tree or nearest-neighbor heuristic.
◦ Pruning: If the bound indicates that a subproblem cannot produce a better solution
than the current best, discard that path.
• Optimizing the Bound: A common approach for bounding is to use minimum spanning
tree (MST), where the tree provides a lower bound on the length of the TSP tour.

Time Complexity:

• The exact solution for TSP has a factorial time complexity O(n!) for brute force.
• Branch and Bound improves performance but does not guarantee polynomial time in all
cases. However, for many practical instances, it can solve TSP problems efficiently by
pruning large parts of the search space.

Comparison of Backtracking vs. Branch and Bound

• Backtracking: explore all possible solutions and backtrack when a partial solution is not
feasible. Use case: constraint satisfaction problems (e.g., N-Queens, Sudoku). Time
complexity: exponential in the worst case.
• Branch and Bound: solve optimization problems by systematically pruning the search
space. Use case: optimization problems (e.g., Knapsack, TSP). Time complexity:
exponential in the worst case, but often more efficient in practice due to pruning.

UNIT 7

Notes on Complexity Theory and Advanced Algorithms

In computer science, understanding the complexity of problems and their solutions is fundamental.
The class P and NP, NP-Completeness, NP-Hard problems, as well as randomization and
approximation algorithms, are key topics in the theory of computational complexity. These
concepts are essential for classifying problems based on how difficult they are to solve and for
finding efficient solutions to otherwise intractable problems.

1. The Class P and NP

Class P

• Definition: The class P refers to the set of decision problems (problems with yes/no
answers) that can be solved in polynomial time.
◦ A problem is in P if there exists an algorithm that can solve it in O(n^k) time, where
n is the size of the input and k is a constant.
◦ Example:
▪ Sorting algorithms like Merge Sort and Quick Sort (O(n log n) time).
▪ Breadth-First Search (BFS) and Depth-First Search (DFS) for graph
traversal (O(V + E)).
Class NP

• Definition: The class NP (Nondeterministic Polynomial time) refers to the set of decision
problems for which a solution can be verified in polynomial time.
◦ A problem is in NP if, given a solution (or a proposed answer), it can be verified in
polynomial time whether the solution is correct.
◦ Note: A problem can belong to NP even if we don’t know how to find the solution
efficiently.
◦ Example:
▪ The Traveling Salesman Problem (TSP): Given a set of cities and a path,
you can verify in polynomial time if the path visits every city exactly once
and returns to the starting city.
Key Relationship: P vs NP

• If a problem is in P, it is also in NP, because any solution that can be computed in


polynomial time can also be verified in polynomial time.
• However, it is not known whether P = NP or P ≠ NP. This is one of the most famous
unsolved questions in computer science. If P = NP, it would imply that for every problem in

NP, there exists a polynomial-time algorithm to solve it, which would drastically change our
approach to computational problems.

2. Polynomial Reduction

• Definition: Polynomial-time reduction is a way of transforming one problem into another in


polynomial time.
◦ If you can reduce problem A to problem B in polynomial time, it means that if you
can solve B, you can solve A as well by transforming the input of A into the input of
B and solving B.
• Example:
◦ Suppose problem A can be reduced to problem B in polynomial time. If we know
that B is solvable in polynomial time (i.e., B is in P), then we can solve A in
polynomial time as well.
• Reduction in NP:
◦ To show that a problem A in NP is NP-complete, we reduce a known NP-complete
problem B to A in polynomial time; since every problem in NP reduces to B, every
problem in NP then also reduces to A, so A is NP-complete as well.

3. NP-Completeness Problem

NP-Complete

• Definition: A problem is NP-complete if:

1. It is in NP (its solution can be verified in polynomial time).

2. Every other problem in NP can be reduced to it in polynomial time (this is called
polynomial-time reducibility).
• In other words, NP-complete problems are the hardest problems in NP. If you can solve one
NP-complete problem in polynomial time, you can solve all NP problems in polynomial
time (i.e., P = NP).

• Significance of NP-Complete:

1. If a polynomial-time solution exists for an NP-complete problem, then all problems
in NP can be solved in polynomial time (this would imply P = NP).
2. If no polynomial-time solution exists for some NP-complete problem, then
P ≠ NP.
Famous NP-Complete Problems:

• Traveling Salesman Problem (TSP): Given a set of cities and distances between them,
determine the shortest possible route that visits each city exactly once and returns to the
starting city.
• Knapsack Problem: Given a set of items, each with a weight and a value, determine the
subset of items that maximizes value while staying within a given weight limit.
• Boolean Satisfiability Problem (SAT): Given a boolean formula, determine if there is an
assignment of truth values to the variables that makes the formula true.
• Vertex Cover: Given a graph, find the smallest set of vertices such that every edge in the
graph is incident to at least one vertex in the set.
Cook-Levin Theorem:

• In 1971, Stephen Cook proved that the Boolean Satisfiability Problem (SAT) was NP-
complete, meaning that SAT is both in NP and every problem in NP can be reduced to it in
polynomial time. This was the first NP-complete problem.

4. NP-Hard Problems

• Definition: A problem is NP-Hard if:

1. Every problem in NP can be reduced to it in polynomial time.

2. An NP-hard problem may or may not be in NP.
• NP-hard problems are at least as hard as the hardest problems in NP, but they are not
necessarily decision problems (they may not have a yes/no answer). Essentially, an NP-hard
problem may not even have a solution that can be verified in polynomial time.

• Difference between NP-Hard and NP-Complete:

1. NP-Complete problems are both in NP and NP-Hard.


2. NP-Hard problems might not belong to NP, meaning they may not have polynomial-
time verifiable solutions (e.g., optimization problems).
Example of NP-Hard Problems:

• TSP (Optimization version): The optimization version of the Traveling Salesman Problem,
which asks for the minimum cost of the tour, is NP-hard because it requires finding the
shortest path, not just verifying one.
• Halting Problem: Given a program and an input, decide whether the program will halt or
run forever. It is NP-hard but not in NP.

5. Introduction to Randomization and Approximation Algorithms

Randomized Algorithms

• Definition: A randomized algorithm uses random numbers to influence its behavior during
execution. The output of a randomized algorithm may not be the same every time it is run.
◦ Types:
▪ Las Vegas algorithms: Always produce a correct result but may take a
random amount of time (e.g., Randomized QuickSort).
▪ Monte Carlo algorithms: Run in polynomial time but may produce an
incorrect result with some probability (e.g., Randomized Primality
Testing).
• Applications:
◦ Randomized Quicksort: A random pivot is chosen to improve the average-case
performance.
◦ Probabilistic Graph Algorithms: For problems like Minimum Spanning Tree
(MST) or Max Cut in large graphs.
Approximation Algorithms

• Definition: An approximation algorithm is an algorithm used for NP-hard problems that
finds an approximate solution, typically in polynomial time, that is close to the optimal
solution.

◦ These algorithms are especially useful for optimization problems where finding the
exact solution is computationally infeasible.
• Approximation Ratio: The approximation algorithm guarantees that the solution found will
be within a certain factor of the optimal solution.

• Example:

◦ Greedy Approximation for Knapsack: For the Fractional Knapsack Problem, a
greedy algorithm that picks items with the highest value-to-weight ratio gives an
optimal solution.
◦ Vertex Cover: A 2-approximation algorithm for the vertex cover problem always
finds a solution that is at most twice the size of the optimal solution.
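The classic 2-approximation (take both endpoints of any uncovered edge) can be sketched as follows, assuming the graph is given as a list of edges, which is an illustrative representation:

```python
def vertex_cover_2approx(edges):
    """2-approximation for vertex cover: for each edge not yet covered,
    add both endpoints. At least one endpoint of each picked edge must
    be in any optimal cover, so the result is at most 2x optimal."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # Edge still uncovered
            cover.add(u)
            cover.add(v)
    return cover
```

On the path 1-2-3-4 the optimum cover is {2, 3} (size 2), while this sketch may return all four vertices, staying within the factor-2 guarantee.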

Summary of Key Concepts

• Class P: problems solvable in polynomial time.
• Class NP: problems whose solutions can be verified in polynomial time.
• NP-Complete: the hardest problems in NP; if one can be solved in polynomial time, so can
all NP problems.
• NP-Hard: problems at least as hard as the hardest NP problems, but not necessarily in NP.
• Polynomial Reduction: transforming one problem into another in polynomial time.
• Randomized Algorithms: algorithms that use randomness to solve problems, providing
approximate or probabilistic solutions.
• Approximation Algorithms: algorithms that find near-optimal solutions for NP-hard
problems in polynomial time.

UNIT 2

Notes on Divide-and-Conquer Algorithms

Divide and Conquer is a powerful algorithmic paradigm used to solve problems by breaking them
down into smaller subproblems, solving those subproblems independently, and combining the
results to solve the original problem. This approach is especially useful for problems that can be
recursively decomposed.

1. Structure of Divide-and-Conquer Algorithms

A typical divide-and-conquer algorithm follows three basic steps:

1. Divide: Break the problem into smaller subproblems. The division is typically done by
dividing the problem into two or more smaller subproblems of the same type.
2. Conquer: Recursively solve the subproblems. If the subproblems are small enough, they are
solved directly without further division.
3. Combine: Combine the solutions of the subproblems to form the solution to the original
problem. This step typically involves some additional work to merge the results.
General Structure:

• Base Case: When the problem is small enough (usually of size 1), solve it directly without
dividing further.
• Recursive Step: Break the problem into smaller subproblems, solve each recursively, and
combine the solutions.
2. Examples of Divide-and-Conquer Algorithms

1. Binary Search

Problem: Given a sorted array, find the position of a target element.

• Divide: Split the array into two halves and compare the target element with the middle
element.
• Conquer: If the target is smaller than the middle element, search the left half; if larger,
search the right half.
• Combine: Since each step reduces the size of the problem by half, no complex combining is
needed. The solution is obtained by recursive search.
Algorithm (Pseudocode):

def binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid           # Target found
        elif arr[mid] < target:
            low = mid + 1        # Search right half
        else:
            high = mid - 1       # Search left half
    return -1                    # Target not found
Time Complexity:

• The problem size is halved at each step, so the time complexity is O(log n).
2. Quick Sort

Problem: Sort an array of elements.

• Divide: Pick a "pivot" element and partition the array into two subarrays—one with
elements smaller than the pivot and the other with elements larger than the pivot.
• Conquer: Recursively sort the two subarrays.
• Combine: After sorting the two subarrays, combine the results to get the sorted array. The
merging step is implicit in the partitioning process.
Algorithm (Pseudocode):

def quick_sort(arr):
    if len(arr) <= 1:
        return arr               # Base case: a single element is already sorted
    pivot = arr[len(arr) // 2]   # Choose a pivot element
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
Time Complexity:

• Best case: O(n log n) when the pivot divides the array evenly.
• Worst case: O(n²) when the pivot is poorly chosen (e.g., always the smallest or largest
element).
• Average case: O(n log n) for most random distributions.
3. Merge Sort

Problem: Sort an array of elements.

• Divide: Split the array into two halves.


• Conquer: Recursively sort the two halves.
• Combine: Merge the two sorted halves into a single sorted array.
Algorithm (Pseudocode):

def merge_sort(arr):
    if len(arr) <= 1:
        return arr                # Base case: a one-element array is already sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])  # Sort the left half
    right = merge_sort(arr[mid:]) # Sort the right half
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    # Repeatedly take the smaller front element of the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])       # Append any remaining elements
    result.extend(right[j:])
    return result
Time Complexity:

• Best, Worst, and Average case: O(n log n), because the array is divided in half at each step
and each step takes linear time to merge.
4. Strassen’s Matrix Multiplication

Problem: Multiply two matrices A and B.

• Divide: Split each matrix into four smaller submatrices.


• Conquer: Use the divide-and-conquer approach to recursively compute products of
submatrices.
• Combine: Combine the submatrices to compute the final result.
Algorithm:

1. Split matrices into 4 submatrices.


2. Perform 7 matrix multiplications (instead of 8 in the naive approach) using clever additions
and subtractions.
3. Combine the results.
Time Complexity:

• Strassen’s algorithm reduces the time complexity from O(n³) to O(n^log₂7) ≈
O(n^2.81), which is faster than the standard matrix multiplication algorithm.

3. Analysis of Divide-and-Conquer Run-Time Recurrence Relations

The time complexity of divide-and-conquer algorithms can often be expressed using recurrence
relations. A recurrence relation describes the total time taken by an algorithm in terms of the time
taken for smaller subproblems.

