
Design and Analysis of Algorithms
CHAPTER FOUR: DYNAMIC PROGRAMMING
Dynamic Programming
 Dynamic programming, like the divide-and-conquer method, solves problems by
combining the solutions to subproblems. (“Programming” in this context refers to a
tabular method, not to writing computer code.)
 As we saw in Chapter 2, divide-and-conquer algorithms partition the problem into
disjoint subproblems, solve the subproblems recursively, and then combine their
solutions to solve the original problem.
 In contrast, dynamic programming applies when the subproblems overlap—that is,
when subproblems share sub-subproblems. In this context, a divide-and-conquer
algorithm does more work than necessary, repeatedly solving the common sub-
subproblems.
 A dynamic-programming algorithm solves each sub-subproblem just once and then
saves its answer in a table, thereby avoiding the work of recomputing the answer
every time it solves each sub-subproblem.
DP…
 To save results of subproblems in dynamic programming either tabulation or memoization techniques
can be used.
 To grasp the concepts of tabulation and memoization, let us first consider the most common example
in programming, the Fibonacci series. (Try to draw the recursion tree.)

Fibonacci series
int fibo(int n){
    if (n <= 1)
        return n;
    else
        return fibo(n-2) + fibo(n-1);
}
Memoization
 Also known as the top-down approach: it implements the DP algorithm by solving the highest-level
subproblem first and then solving the lower-level sub-problems recursively, and so on.
 Suppose there are two sub-problems, A and B. When sub-problem B is called recursively, it can use
the solution of sub-problem A, which has already been computed. Since A and all the other sub-problems
are memoized, this avoids solving the entire recursion tree generated by B and saves computation time.
 Using a global array memo for memoization, the Fibonacci algorithm can be modified as:

int memo[n+1];
for i = 0 up to n:
    memo[i] = -∞

int fibo(int n){
    if (memo[n] != -∞)
        return memo[n];
    if (n <= 1)
        return n;
    else
        return (memo[n] = fibo(n-2) + fibo(n-1));
}
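For comparison, here is a runnable sketch of the memoized algorithm in Python, using a dictionary in place of the global array initialized to -∞:

```python
def fibo(n, memo=None):
    """Top-down (memoized) Fibonacci: each subproblem is computed once."""
    if memo is None:
        memo = {}              # stands in for the global array initialized to -∞
    if n in memo:
        return memo[n]         # answer already in the table: reuse it
    if n <= 1:
        return n               # base cases: fibo(0) = 0, fibo(1) = 1
    memo[n] = fibo(n - 2, memo) + fibo(n - 1, memo)
    return memo[n]
```

Without the table this recursion is exponential; with it, each value 0…n is computed once, so the running time is linear in n.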
Tabulation
 Also known as the bottom-up approach: it implements the DP algorithm by solving the lowest-level
sub-problem first. The solution to the lowest-level sub-problem helps to solve the next-level
sub-problem, and so forth.
 We solve the sub-problems iteratively, from the smallest upward, until the original problem is solved.
This approach saves time whenever a sub-problem needs the solution of a sub-problem that has been
solved before.
 Using a global array memo for tabulation, the Fibonacci algorithm can be modified as:

int memo[n+1];

int fibo(int n){
    memo[0] = 0, memo[1] = 1;
    for i = 2 up to n:
        memo[i] = memo[i-2] + memo[i-1];
    return memo[n];
}
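The tabulated version can be sketched in runnable Python as follows; note the loop fills memo[i] for each i from the bottom up:

```python
def fibo_tab(n):
    """Bottom-up (tabulated) Fibonacci: fill the table from the smallest subproblem up."""
    if n <= 1:
        return n
    memo = [0] * (n + 1)       # memo[i] holds fibo(i)
    memo[1] = 1
    for i in range(2, n + 1):  # every subproblem is solved exactly once, in order
        memo[i] = memo[i - 2] + memo[i - 1]
    return memo[n]
```

Unlike memoization, no recursion is involved, so there is no call-stack overhead.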
DP…
 We typically apply dynamic programming to optimization problems. Such problems can have many
possible solutions.
 Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum)
value. We call such a solution an optimal solution to the problem, as opposed to the optimal solution,
since there may be several solutions that achieve the optimal value.
 When developing a dynamic-programming algorithm, we follow a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from computed information.
Rod-cutting problem
 The rod-cutting problem is the following: given a rod of length n inches and a table of prices p_i for
i = 1, 2, …, n, determine the maximum revenue r_n obtainable by cutting up the rod and selling the pieces.
 Note that if the price p_n for a rod of length n is large enough, an optimal solution may require no
cutting at all.
 Given such a price table, let's examine the possible ways of cutting a rod of length 4.

Cont…
 The recurrence can be expressed as: r_n = max over 1 ≤ i ≤ n of (p_i + r_(n-i)), with r_0 = 0.
 Recursive algorithm for rod-cutting will be:
CUT-ROD(p[], n) // p is the price table and n is the rod length
    if (n == 0)
        return 0
    maxProfit = -∞
    for i = 1 up to n
        maxProfit = max(maxProfit, p[i] + CUT-ROD(p, n - i))
    return maxProfit
Memoized rod-cutting
 Memoized rod cutting in top-down approach
MEMOIZED-CR(p[], n)
    let r[0…n] be a new array
    for i = 0 up to n
        r[i] = -∞
    return MEMOIZED-CRA(p, n, r)

MEMOIZED-CRA(p[], n, r[])
    if r[n] ≥ 0
        return r[n]
    if n == 0
        mp = 0
    else
        mp = -∞
        for i = 1 up to n
            mp = max(mp, p[i] + MEMOIZED-CRA(p, n-i, r))
    r[n] = mp
    return mp
Tabulated rod-cutting
 Tabulated rod-cutting in bottom-up approach.
BOTTOM-UP-CUT-ROD(p[], n)
    let r[0…n] be a new array
    r[0] = 0
    for j = 1 up to n
        q = -∞
        for i = 1 up to j
            q = max(q, p[i] + r[j-i])
        r[j] = q
    return r[n]
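A runnable Python sketch of BOTTOM-UP-CUT-ROD. The price table used in the usage example is the standard ten-piece example from the literature, an assumption here since the slide's own table is not shown:

```python
def bottom_up_cut_rod(p, n):
    """p[i] is the price of a piece of length i (p[0] is unused); returns max revenue r[n]."""
    r = [0] * (n + 1)                   # r[j] = best revenue for a rod of length j
    for j in range(1, n + 1):
        q = float('-inf')
        for i in range(1, j + 1):       # try every length i for the first piece cut off
            q = max(q, p[i] + r[j - i])
        r[j] = q
    return r[n]

# Assumed example price table: p[1]=1, p[2]=5, …, p[10]=30
prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
```

With this table, a rod of length 4 yields maximum revenue 10 (two pieces of length 2, 5 + 5), matching the length-4 case examined earlier.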
0-1 knapsack problem
 The value of the solution for i items either includes the i-th item, in which case it is v_i plus a
subproblem solution for (i - 1) items and the remaining weight w - w_i, or it does not include the i-th
item, in which case it is the subproblem solution for (i - 1) items and the same weight.
 That is, if the thief picks item i, the thief takes value v_i and can choose from items 1, 2, …, i - 1
up to the weight limit w - w_i, gaining c[i - 1, w - w_i] additional value.
 On the other hand, if the thief decides not to take item i, the thief can choose from items 1, 2, …,
i - 1 up to the weight limit w, gaining c[i - 1, w] value. The better of these two choices should be made.
 We can express this in the following formula: define c[i, w] to be the value of an optimal solution
for items 1, 2, …, i and maximum weight w.
 c[i, w] = 0                                         if i = 0 or w = 0
 c[i, w] = c[i-1, w]                                 if w_i > w
 c[i, w] = max(v_i + c[i-1, w - w_i], c[i-1, w])     otherwise
for 1 ≤ i ≤ N and 1 ≤ w ≤ W, where N is the number of items and W the knapsack capacity.

0-1 knapsack…
 Tabulated 0-1 knapsack in bottom-up approach.
// v and w are arrays of the values and weights of the items, respectively
// n is the total number of items
// Mat is an (n+1) × (kc+1) 2D array (matrix) storing intermediate values

knapsack(v, w, n, kc) // kc is the knapsack capacity
    for ckc = 0 up to kc
        Mat[0, ckc] = 0
    for i = 1 up to n
        for ckc = 0 up to kc // ckc is the current knapsack capacity
            if (w[i] ≤ ckc)
                Mat[i, ckc] = max(Mat[i-1, ckc], v[i] + Mat[i-1, ckc - w[i]])
            else
                Mat[i, ckc] = Mat[i-1, ckc]
    return Mat[n, kc]
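The tabulated knapsack above can be sketched as runnable Python; the three-item instance in the usage line is an illustrative assumption, not from the slides:

```python
def knapsack(v, w, n, kc):
    """0-1 knapsack, bottom-up. v[i] and w[i] are the value and weight of item i
    (1-indexed; index 0 is unused). kc is the knapsack capacity."""
    # Mat[i][c]: best value achievable using items 1..i with capacity c
    Mat = [[0] * (kc + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for ckc in range(kc + 1):
            if w[i] <= ckc:    # item i fits: take the better of skipping it or taking it
                Mat[i][ckc] = max(Mat[i - 1][ckc], v[i] + Mat[i - 1][ckc - w[i]])
            else:              # item i does not fit at this capacity
                Mat[i][ckc] = Mat[i - 1][ckc]
    return Mat[n][kc]

# Assumed example: values 60/100/120, weights 10/20/30, capacity 50
best = knapsack([0, 60, 100, 120], [0, 10, 20, 30], 3, 50)
```

Here the optimum is 220 (items 2 and 3), illustrating that the greedy choice of the most valuable item first is not always optimal.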
All-pairs shortest path
 Find a shortest path from u to v for every pair of vertices u and v.
 Although we can solve this problem by running a single source algorithm once from each vertex, we
usually can solve it faster.
 In this section, we shall use a different dynamic-programming formulation to solve the all-pairs
shortest-paths problem on a directed graph G = (V, E). The resulting algorithm, known as the Floyd-
Warshall algorithm, runs in Θ(V³) time.
 In Floyd-Warshall algorithm, negative-weight edges may be present, but we assume that there are no
negative-weight cycles.
Cont…

 Let G = (V, E) be a directed graph with n vertices. The edge costs are represented by an
adjacency matrix such that, for 1 ≤ i ≤ n and 1 ≤ j ≤ n:
1. cost(i, i) = 0,
2. cost(i, j) is the length of the edge (i, j) if (i, j) is in E, and
3. cost(i, j) = ∞ if (i, j) is not in E.
 The all-pairs shortest path problem is to find a matrix A such that A[i][j] is the length of the shortest
path from i to j. Initially we set A[i][j] = cost(i, j), the adjacency-matrix representation of the graph.
 The algorithm makes n passes over A. Let A^0, A^1, …, A^n represent the matrix after each pass.
 Let A^(k-1)[i][j] represent the length of the shortest path from i to j passing through no intermediate
vertex greater than k-1. This is the result after k-1 iterations; hence the k-th iteration explores whether
vertex k lies on an optimal path.
Cont…

 Consider a shortest path from i to j that passes through no vertex greater than k: either it goes
through k or it does not.
1. If it does, A^k[i][j] = A^(k-1)[i][k] + A^(k-1)[k][j].
2. If not, then no intermediate vertex has index greater than k-1, so A^k[i][j] = A^(k-1)[i][j].
 Combining these two conditions we get
 A^k[i][j] = min{ A^(k-1)[i][j], A^(k-1)[i][k] + A^(k-1)[k][j] }, k ≥ 1, with A^0[i][j] = cost(i, j).
 Tabulated version of the Floyd-Warshall shortest path algorithm

Floyd-Warshall-SP(A[][], int n){ // A[][] is the adjacency-matrix representation of the input graph
    for (int k = 1; k <= n; k++)      // n is the number of vertices
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                A[i][j] = min(A[i][j], A[i][k] + A[k][j]);
}
Cont…

 Find all-pairs shortest path using Floyd-Warshall’s algorithm for the graph below.
