
Module 4

COMPUTATIONAL APPROACHES TO
PROBLEM-SOLVING

Badharudheen P
Assistant Professor,
Dept. of CSE, MESCE, Kuttippuram
Syllabus
Brute-force Approach - Example: Padlock, Password guessing.
Divide-and-conquer Approach - Example: The Merge Sort Algorithm, Advantages and
Disadvantages of Divide and Conquer Approach.
Dynamic Programming Approach - Example: Fibonacci series, Recursion vs Dynamic
Programming.
Greedy Algorithm Approach - Example: Given an array of positive integers each indicating
the completion time for a task, find the maximum number of tasks that can be completed in the
limited amount of time that you have. Motivations, Characteristics of the Greedy Algorithm.
Greedy Algorithms vs Dynamic Programming.
Randomized Approach - Example 1: A company selling jeans gives a coupon for each pair of
jeans. There are n different coupons. Collecting n different coupons would give you free jeans.
How many jeans do you expect to buy before getting a free one?
1. Brute-force Approach
◼ Brute Force is a straightforward method used in problem-solving that checks
every possible solution until the correct one is found.
◼ Brute Force Algorithms function by searching each element sequentially until
the desired result is found or all options are exhausted.
◼ Example: Padlock.
◼ Imagine a small padlock with 4 digits, each from 0-9. You forgot your
combination, but you don't want to buy another padlock. Since you can’t
remember any of the digits, you have to use a brute force method to open the
lock. So, you set all the numbers back to 0 and try them one by one: 0001,
0002, 0003, and so on until it opens. In the worst-case scenario, it would take
10⁴, or 10,000, tries to find your combination.
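The padlock search can be sketched in a few lines of Python (a toy illustration; the function name `crack_padlock` and the secret combination are made up for this example):

```python
def crack_padlock(secret):
    """Try every 4-digit combination from 0000 to 9999 in order."""
    for attempt in range(10_000):
        guess = f"{attempt:04d}"       # pad with leading zeros, e.g. 7 -> "0007"
        if guess == secret:
            return guess, attempt + 1  # combination found, number of tries used
    return None, 10_000

combo, tries = crack_padlock("0042")
print(combo, tries)  # 0042 43
```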
Brute-force Approach

◼ Example: Password Guessing.


◼ Some of the most commonly found passwords in brute force lists include: date
of birth, children's names, qwerty, 123456, abcdef123, a123456, abc123,
password, asdf, hello, welcome, zxcvbn, qazwsx, 654321, 123321, 000000,
111111, 987654321, 1q2w3e, 123qwe, qwertyuiop.
Characteristics of Brute-force Approach

◼ Exhaustive Search: Every possible solution is examined without any


optimization.
◼ Simplicity: Easy to understand and implement.
◼ Inefficiency: Often slow and resource-intensive due to the large number of
possibilities.
◼ Guaranteed Solution: If a solution exists, the brute-force method will
eventually find it.
Problem: String Matching
◼ Find all occurrences of a pattern within a text.
◼ The idea is to slide the pattern over the text one character at a time and
check if the pattern matches the substring of the text starting at the
current position.
◼ Steps to follow:
1. Start at the beginning of the text
2. Check for a match
3. Move to the next position
4. Repeat steps 2 and 3 till the end of the text.
5. Finish
String Matching: Implementation
def bf_match(text, pattern):
    n = len(text)
    m = len(pattern)
    for i in range(n - m + 1):
        substring = text[i:i + m]
        if substring == pattern:
            print("Pattern found at index", i)

text = "ABACDABABCABAB"
pattern = "ABABC"
bf_match(text, pattern)
Advantages

◼ Guaranteed Solution: ensure finding a solution if one exists within the


predefined constraints.

◼ Simplicity: The approach is straightforward to implement and understand,


requiring minimal algorithmic complexity.

◼ Versatility: Applicable across various domains where an exhaustive search is


feasible, such as puzzle solving, cryptography, etc.
Limitations

◼ Computational Intensity: It can be highly resource-intensive, especially for


problems with complex constraints.

◼ Time Complexity: Depending on the problem size, it may require long


execution times to find solutions.

◼ Scalability Issues: In scenarios with exponentially growing solution spaces,


this method may become impractical or infeasible to execute within
reasonable time constraints.
2. Divide-and-Conquer Approach
◼ This approach involves breaking a larger problem into smaller subproblems,
solving them independently, and then combining their solutions to solve the
original problem.
◼ Here, a problem is solved recursively by applying three steps at each level of
the recursion: Divide, Conquer, and Combine.

◼ Divide:
◼ Break down the original problem into smaller subproblems.

◼ Each subproblem should represent a part of the overall problem.

◼ The goal is to divide the problem until no further division is possible.


Divide-and-Conquer Approach
◼ Conquer:
◼ Solve each of the smaller subproblems individually.
◼ If a subproblem is small enough, we solve it directly without further recursion.
◼ The goal is to find solutions for these subproblems independently.

◼ Combine:
◼ Combine the sub-problems to get the final solution of the whole problem.
◼ Once the smaller subproblems are solved, we recursively combine their
solutions to get the solution of larger problem.
◼ The goal is to formulate a solution for the original problem by merging the
results from the subproblems.
Example: The Merge Sort Algorithm
(Figure: Merge Sort – Divide Phase)
(Figure: Merge Sort – Combine Phase)
Merge Sort
◼ Function mergeSort():
◼ Check if the array has one or zero elements. If true, return the array as it is
already sorted.
◼ Otherwise, find the middle index of the array.
◼ Split the array into two halves: from the beginning to the middle and from the
middle to the end.
◼ Recursively apply mergeSort() to the first half and the second half.

◼ Merge the two sorted halves using the merge() function.


◼ Return the merged and sorted array.
Merge Sort
◼ Function merge():
◼ Create an empty list called sorted_arr to store the sorted elements.
◼ While both halves have elements:
◼ Compare the first element of the left half with the first element of the right
half.
◼ Remove the smaller element and append it to the sorted_arr list.

◼ If the left half still has elements, append them all to the sorted_arr list.
◼ If the right half still has elements, append them all to the sorted_arr list.
◼ Return the sorted_arr list, which now contains the sorted elements from both
halves.
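Putting the mergeSort() and merge() descriptions above together, a minimal Python sketch might look like this (the sample input is illustrative):

```python
def merge(left, right):
    # Repeatedly take the smaller front element of the two sorted halves.
    sorted_arr = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            sorted_arr.append(left[i])
            i += 1
        else:
            sorted_arr.append(right[j])
            j += 1
    sorted_arr.extend(left[i:])   # leftover elements are already sorted
    sorted_arr.extend(right[j:])
    return sorted_arr

def merge_sort(arr):
    if len(arr) <= 1:             # base case: one or zero elements
        return arr
    mid = len(arr) // 2           # divide: split at the middle index
    # conquer each half recursively, then combine with merge()
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```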
Problem: Finding the Maximum Element in an Array
◼ Given an array of integers, find the maximum value in the array.
1. Initial Setup: Begin with the entire array. The range includes the entire array from the first
element to the last element.
2. Divide: If the array contains more than one element, split it into two approximately equal
halves. This splitting continues recursively until each subarray has only one element.
3. Conquer:
1. For subarrays with only one element, that element is trivially the maximum for that
subarray.
2. For larger subarrays, recursively apply the same process to each half.
4. Combine: After finding the maximum element in each of the smaller subarrays, combine
the results by comparing the maximum values from each half. Return the largest of these
values as the maximum for the original array.
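The four steps above can be sketched as a recursive Python function (the name `find_max` and the sample array are illustrative):

```python
def find_max(arr, lo, hi):
    # Conquer: a one-element range is trivially its own maximum.
    if lo == hi:
        return arr[lo]
    # Divide: split the range into two roughly equal halves.
    mid = (lo + hi) // 2
    left_max = find_max(arr, lo, mid)
    right_max = find_max(arr, mid + 1, hi)
    # Combine: the answer is the larger of the two halves' maxima.
    return max(left_max, right_max)

arr = [13, 7, 42, 5, 29]
print(find_max(arr, 0, len(arr) - 1))  # 42
```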
Advantages of Divide and Conquer Algorithms
◼ Simplicity: It is easy to understand and implement.
◼ It allows us to solve difficult problems.
◼ It plays a major role in other algorithms like quick sort and merge sort.
◼ Efficiency: Lower time complexities compared to iterative approaches
◼ Modularity: Promotes a modular approach to problem-solving.
◼ It uses memory caches effectively: when the sub-problems become simple enough,
they can be solved within a cache.
◼ Parallelism: Subproblems can be solved independently and simultaneously on
different processors
Limitations of Divide and Conquer Algorithms
◼ Recursion overhead: It uses recursion, which can lead to significant overhead in
terms of stack space and function calls.
◼ Not suitable for all problems: Effective for problems that can be recursively divided
into smaller subproblems.
◼ Increased Memory Usage: Require a significant amount of memory for storing
intermediate results.
◼ Difficult to analyze: The time and space complexity of divide and conquer algorithms
can be difficult to analyze, especially for complex problems.
3. Dynamic Programming Approach
◼ The dynamic programming approach solves complex problems by breaking them
down into simpler subproblems.
◼ By solving each subproblem only once and storing the results, it avoids
redundant computations.
◼ The underlying idea is to avoid calculating the same result twice. This is
usually accomplished by constructing a table in memory, and filling it with
known results as they are calculated (memoization).
◼ These results are then used to solve larger sub-problems.
◼ Note:- Retrieving a given result from this table takes Θ(1) time.
How Does Dynamic Programming (DP) Work?

◼ Identify Subproblems: Divide the main problem into smaller, independent


subproblems.
◼ Store Solutions: Solve each subproblem and store the solution in a table or
array.
◼ Build Up Solutions: Use the stored solutions to build up the solution to the
main problem.
◼ Avoid Redundancy: By storing solutions, DP ensures that each subproblem is
solved only once, reducing computation time.
Example: Shortest Path in a Grid
◼ Imagine you need to find the shortest path from the top-left corner to the
bottom-right corner of a grid. You can only move right or down. Each cell in
the grid has a certain cost associated with it, and your goal is to minimize the
total cost of the path.
◼ How it Works?
• You start by solving the problem for the smallest subproblems (the cells
directly above and to the left).
• You then build up solutions incrementally, using the results of the smaller
subproblems to solve larger parts of the grid.
• Finally, you combine the results to find the shortest path to the bottom-right
corner of the grid.
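A minimal sketch of this grid problem in Python, assuming each cell holds a non-negative cost and using a small hypothetical 3×3 grid (the numbers are made up for the example):

```python
def min_path_cost(grid):
    """Minimum total cost from top-left to bottom-right, moving only right or down."""
    rows, cols = len(grid), len(grid[0])
    dp = [[0] * cols for _ in range(rows)]
    dp[0][0] = grid[0][0]
    for c in range(1, cols):             # first row: can only arrive from the left
        dp[0][c] = dp[0][c - 1] + grid[0][c]
    for r in range(1, rows):             # first column: can only arrive from above
        dp[r][0] = dp[r - 1][0] + grid[r][0]
    for r in range(1, rows):
        for c in range(1, cols):
            # Each cell's best cost builds on the two smaller subproblems.
            dp[r][c] = grid[r][c] + min(dp[r - 1][c], dp[r][c - 1])
    return dp[rows - 1][cols - 1]

grid = [[1, 3, 1],
        [1, 5, 1],
        [4, 2, 1]]
print(min_path_cost(grid))  # 7
```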
Example: Fibonacci Series
◼ A Fibonacci series is the sequence of numbers in which each number is the sum
of the two preceding ones: for example, 0, 1, 1, 2, 3, 5, 8, …

• Subproblems: F(0), F(1), F(2), F(3), …


• Store Solutions: Create a table to store the
values of F(n) as they are calculated.
• Build Up Solutions: For F(n), look up F(n-1)
and F(n-2) in the table and add them.
• Avoid Redundancy: The table ensures that
each subproblem (e.g., F(2)) is solved only
once.
Approaches in Dynamic Programming

◼ Two main approaches:


◼ Memoization (top-down)

◼ Tabulation (bottom-up)
Memoization (top-down)

◼ Memoization involves solving the problem recursively and storing the results
of subproblems in a table (usually a dictionary or array).

◼ Steps:
1. Identify the base cases.
2. Define the recursive relation.
3. Store the results of subproblems in a table.
4. Use the stored results to solve larger subproblems.
Memoization (top-down)
def fib(n, memo={}):
    if n <= 1:
        return n
    if n in memo:
        return memo[n]
    memo[n] = fib(n-1, memo) + fib(n-2, memo)
    # conceptual table of stored results: {0:0, 1:1, 2:1, 3:2, 4:3, 5:5, 6:8, ...}
    return memo[n]

x = fib(6)
print(x)  # 8
Tabulation (bottom-up)

◼ Tabulation involves solving the problem iteratively and filling up a table


(usually an array) in a bottom-up manner.
◼ This approach starts with the smallest subproblems and uses their solutions to
construct solutions to larger subproblems.

◼ Steps:
1. Identify the base cases.
2. Define the table to store solutions to subproblems.
3. Fill the table iteratively using the recursive relation.
4. Extract the solution to the original problem from the table.
Tabulation (bottom-up)
def fib(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    # dp table for n = 8: values 0 1 1 2 3 5 8 13 21 at indices 0..8
    return dp[n]

x = fib(6)
print(x)  # 8
Recursion vs Dynamic Programming
◼ Recursion:
◼ A function calls itself until it reaches a base case, and the process is tree-like.
Recursion doesn't store values until the base case is reached. Recursion is
easier to understand and use, but it might not be as effective as dynamic
programming in some cases.

◼ Dynamic programming:
◼ A problem is broken down into smaller sub-problems, and the results are saved
in a table for later use. Dynamic programming is an optimization method that
can reduce time complexities. Dynamic programming can be harder to
implement, but it can lead to big performance improvements for some
problems.
Fundamental Principles of Dynamic Programming
◼ Overlapping Subproblems:
◼ Dynamic Programming is useful for problems with overlapping subproblems.
When solving a larger problem, you encounter smaller subproblems that are
repeated multiple times. Instead of recomputing these subproblems each time,
DP saves their solutions in a data structure, such as an array or hash table. This
avoids redundant calculations and significantly improves efficiency.

◼ Optimal Substructure:
◼ It means that an optimal solution to the larger problem can be constructed from
the optimal solutions to its smaller subproblems. In other words, if you can
determine the best solution for smaller problems, you can use these solutions to
build the best solution for the entire problem.
Advantages of Dynamic Programming

◼ Efficiency: DP reduces the time complexity of problems with overlapping


subproblems by storing solutions to subproblems and reusing them.
◼ Optimal Solutions: DP ensures that the solution to the problem is optimal by
solving each subproblem optimally and combining their solutions.
◼ Versatility: DP can be applied to a wide range of problems across different
domains.
Limitations of Dynamic Programming

◼ Space Complexity: DP often requires additional memory to store the results of


subproblems, which can be a limitation for problems with a large number of
subproblems.
◼ Complexity of Formulation: Developing a DP solution requires a deep
understanding of the problem’s structure and properties, which can be
challenging.
◼ Overhead of Table Management: Managing and maintaining the DP table or
memoization structure can add overhead to the algorithm.
4. Greedy Algorithm Approach
◼ A greedy algorithm is a problem-solving technique that makes the best local
choice at each step in the hope of finding the global optimum solution.
◼ It makes decisions based on the current situation without considering future
implications.
◼ These algorithms do not reconsider previous choices.
◼ Steps for Creating a Greedy Algorithm:
◼ Define the problem: Clearly state the problem to be solved
◼ Identify the greedy choice: Determine the locally optimal choice at each step
based on the current state.
◼ Make the greedy choice: Select the greedy choice and update the current state.
◼ Repeat: Continue making greedy choices until a solution is reached.
Examples of Greedy Algorithm
◼ Dijkstra’s Algorithm: It finds the shortest path between two nodes in a graph
by repeatedly choosing the shortest edge available from the current node.
◼ Scheduling and Resource Allocation: The greedy algorithm can be used to
schedule jobs or allocate resources in an efficient manner.
◼ Coin Change Problem: The greedy algorithm can be used to make change for a
given amount with the minimum number of coins, by always choosing the
coin with the highest value that is less than the remaining amount to be
changed.
Example 1
◼ Let’s say you have a set of coins with values {1, 2, 5, 10, 20, 50, 100} and you
need to give someone change for 36 using the minimum number of coins.
1. Start with the largest coin value that is less than or equal to the amount
to be changed. In this case, the largest such coin is 20.
2. Subtract that coin value from the amount to be changed, and add the coin
to the solution. Here, subtracting 20 from 36 gives 16, and we add a 20
coin to the solution.
3. Repeat steps 1 and 2 until the amount to be changed becomes 0.

So, using the greedy algorithm, the change for 36 is one 20 coin, one 10 coin,
one 5 coin, and one 1 coin: four coins in total.
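The procedure above can be sketched in Python (the function name `greedy_change` is made up for this example; the coin values must be tried in descending order):

```python
def greedy_change(amount, coins=(100, 50, 20, 10, 5, 2, 1)):
    """Always take the largest coin that still fits into the remaining amount."""
    used = []
    for coin in coins:          # coins are listed in descending order
        while amount >= coin:
            used.append(coin)   # greedy choice: take this coin
            amount -= coin      # update the remaining amount
    return used

print(greedy_change(36))  # [20, 10, 5, 1]
```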
Example 2
◼ Given an array of positive integers {8, 2, 10, 6, 3, 20, 7, 9, 15} each indicating
the completion time in seconds for a task, find the maximum number of tasks
that can be completed in one minute.

◼ Solution Steps:
◼ Sort the tasks by their completion times in ascending order
◼ Iterate through the sorted list of tasks and keep track of the total time and count
of tasks completed.
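These two steps can be sketched directly in Python, using the array from the problem and a 60-second limit (the function name `max_tasks` is illustrative):

```python
def max_tasks(times, limit):
    """Greedily pick the shortest tasks first until the time limit is reached."""
    count, total = 0, 0
    for t in sorted(times):      # greedy choice: shortest completion time first
        if total + t > limit:    # the next task would exceed the limit
            break
        total += t
        count += 1
    return count

times = [8, 2, 10, 6, 3, 20, 7, 9, 15]
print(max_tasks(times, 60))  # 8
```

With these numbers, the eight shortest tasks (2+3+6+7+8+9+10+15) fill the 60 seconds exactly.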
Key Features of the Greedy Approach

◼ Local Optimization: makes locally optimal decisions in the expectation that
they lead to a globally optimal solution.
◼ Irrevocable Decisions: Once a choice is made, it cannot be changed. The
algorithm proceeds to the next step, making another locally optimal choice.
◼ Efficiency: Greedy algorithms are typically easy to implement and run quickly,
as they make decisions based on local information and do not need to consider
all possible solutions.
Motivations for the Greedy Approach - Advantages

◼ Simplicity and Ease of Implementation


◼ Efficiency in Time and Space
◼ Optimal Solutions for Specific Problems
◼ Real-World Applicability: Useful in many real-world scenarios like
scheduling, network routing, and resource allocation.
Limitations

◼ Suboptimal Solutions: A greedy algorithm is only guaranteed to find the
optimum when the problem has the greedy-choice property, meaning a global
optimum can be reached by making locally optimal choices; otherwise it may
settle for a suboptimal result.
◼ Irrevocable Decisions: Once a choice is made, it cannot be changed.
◼ Lack of Backtracking: Greedy algorithms do not explore all possible solutions
or backtracks, which means they can miss better solutions.
Greedy Algorithm Vs Dynamic Programming

◼ Basic Idea: A greedy algorithm makes the locally optimal choice at each stage;
dynamic programming solves subproblems and builds up to the optimal solution.
◼ Optimal Solution: A greedy algorithm is not always guaranteed to provide the
globally optimal solution; dynamic programming guarantees the globally
optimal solution.
◼ Time Complexity: A greedy algorithm is typically faster; dynamic programming
is usually slower due to solving overlapping subproblems.
◼ Space Complexity: A greedy algorithm requires less memory; dynamic
programming requires more memory due to storing intermediate results.
Greedy Algorithm Vs Dynamic Programming

◼ Overlapping Subproblems: A greedy algorithm does not handle overlapping
subproblems; dynamic programming handles them efficiently.
◼ Applications: A greedy algorithm is used when a greedy choice at each step
leads to the globally optimal solution; dynamic programming is applied when
the problem can be broken down into overlapping subproblems.
Randomized Approach
◼ Randomized approach uses randomness to help solve a problem.
◼ Randomized algorithms are often used when it's difficult to find an exact
solution or when the problem involves uncertainty.
◼ They can provide probabilistic guarantees on the quality of their solutions.
◼ Randomized algorithms can often provide faster solutions or better
approximations.

◼ How they work:


◼ Randomized algorithms use random numbers to guide their behavior.
They can use a coin flip or a card drawn from a deck to make decisions.
Randomized Approach
◼ When to use them
◼ Randomized algorithms are often used when it's difficult to find an exact
solution or when the problem involves uncertainty.

◼ How they're implemented:


◼ Randomized algorithms often use a pseudorandom number generator.
Motivations for the Randomized Approach
◼ Complexity Reduction: A randomized approach often simplifies complex
problems by introducing probabilistic choices that lead to efficient solutions.
◼ Versatility: Applicable across diverse domains where deterministic solutions
may be impractical or infeasible.
◼ Performance: A randomized approach can offer significant performance
improvements, particularly when dealing with large datasets or complex
systems.
◼ For example, imagine a large library that wants to estimate how often books are
checked out. Instead of tracking every single book’s check-out frequency, which
would be a massive task, the library staff could randomly sample a selection of books
from different genres and record their check-out rates over a period of time. By
analyzing this sample, they can estimate the average check-out frequency for the
entire collection.
Example 1: Coupon Problem

◼ A company selling jeans gives a coupon for each pair of jeans. There are n
different coupons. Collecting n different coupons would give you free jeans.
How many jeans do you expect to buy before getting a free one?
Coupon Problem – Algorithmic Solution

1. Initialize Variables:
◼ Total Jeans Bought: Start with a counter set to zero to track how many
pairs of jeans you have bought.
◼ Coupons Collected: Use a set to keep track of the different types of
coupons you have received.
◼ Number of Coupons: The total number of different coupon types is n.
Coupon Problem – Algorithmic Solution

2. Buying Process:
◼ Loop Until All Coupons Are Collected: Continue buying jeans until you
have one of each type of coupon in your set.
◼ Each time you buy a pair of jeans, increase the counter for the total jeans
bought by one.
◼ When you buy a pair of jeans, you get a coupon. Add this coupon to your
set of collected coupons.
◼ Check if you have collected all n different types of coupons by comparing
the size of your set to n.
Coupon Problem – Algorithmic Solution

3. Repeat for Accuracy:


◼ To get a reliable estimate, repeat the entire buying process many times
(e.g., 100,000 times).
◼ Keep a running total of the number of jeans bought across all these
repetitions.
Coupon Problem – Algorithmic Solution

4. Calculate the Average:


◼ After completing all repetitions, calculate the average number of jeans
bought by dividing the total number of jeans bought by the number of
repetitions.
Coupon Problem – Algorithmic Solution

5. Output the Result:


◼ The average number of jeans bought from the repeated simulations gives
you a good estimate of how many pairs of jeans you would typically need
to buy before collecting all n coupons and getting a free pair.
Coupon Problem – Algorithmic Solution
import random

def expected_jeans(n, num_simulations=100000):
    total_jeans = 0
    for _ in range(num_simulations):
        coupons_collected = set()
        jeans_bought = 0
        while len(coupons_collected) < n:
            jeans_bought += 1
            coupon = random.randint(1, n)  # simulate getting a random coupon
            coupons_collected.add(coupon)  # add the coupon to the set
        total_jeans += jeans_bought
    return total_jeans / num_simulations

n = 10  # number of different coupons
expected_num_jeans = expected_jeans(n)
print("Expected number of jeans before getting a free one:", expected_num_jeans)
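The simulated average can be checked against the known closed form for the coupon-collector problem, E = n·H_n, where H_n is the n-th harmonic number (this analytic check is an addition, not part of the slides):

```python
def analytic_expected(n):
    # Coupon-collector expectation: n * (1/1 + 1/2 + ... + 1/n)
    return n * sum(1 / k for k in range(1, n + 1))

print(round(analytic_expected(10), 2))  # 29.29
```

For n = 10 this gives about 29.29, which the simulation above should approach as the number of repetitions grows.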
Example 2: (Hat Problem)

◼ n people go to a party and drop off their hats to a hat-check person. When the
party is over, a different hat-check person is on duty and returns the n hats
randomly back to each person. What is the expected number of people who get
back their own hats?
Hat Problem – Algorithmic Solution

1. Initialization:
◼ Set up variables to count the total number of correct matches across all
simulations.
◼ Define the number of simulations to ensure statistical reliability.

◼ Define the number of people n.


Hat Problem – Algorithmic Solution

2. Simulate the Process:


◼ For each simulation:

• Create a list of hats representing each person.


• Shuffle the list to simulate random distribution.
• Count how many people receive their own hat.
• Add this count to the total number of correct matches.
Hat Problem – Algorithmic Solution

3. Calculate the Expected Value:


◼ Divide the total number of correct matches by the number of simulations
to get the average.
Hat Problem – Algorithmic Solution

4. Output the Result:


◼ Print the expected number of people who get their own hats back.
Hat Problem – Algorithmic Solution
import random

def simulate_hat_problem(n, num_simulations):
    total_correct = 0
    for _ in range(num_simulations):
        hats = list(range(n))
        random.shuffle(hats)  # random return of the hats
        correct = sum(1 for i in range(n) if hats[i] == i)
        total_correct += correct
    return total_correct / num_simulations

n = 10                    # Number of people at the party
num_simulations = 100000  # Number of simulations to run
expected_hats_back = simulate_hat_problem(n, num_simulations)
print("The expected number of people who get their own hats back is:", expected_hats_back)
THANK YOU