Daa Unit 3 Anits

The document discusses various algorithmic techniques, including presorting, Gaussian elimination, balanced search trees, and heapsort. It highlights how presorting can optimize algorithms by sorting input data, and explains Gaussian elimination for solving linear equations. Additionally, it covers the properties and operations of 2-3 trees and the efficiency of heapsort, concluding with Horner's rule for polynomial evaluation.


Design and Analysis of Algorithms
UNIT 3

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Transform and conquer



Presorting

Presorting is a technique used in computer science to simplify and optimize various algorithms by sorting the input data beforehand.

Introduction to Presorting:
1. Presorting involves sorting the input data before performing further operations.
2. Sorting makes it easier to solve certain problems and can lead to more efficient algorithms.



Example 1: Checking Element Uniqueness:
• Problem: Determine if there are any duplicate elements in an array.
• Brute-force approach: Compare every pair of elements in the array, which has a worst-case efficiency of O(n²).
• Presorting approach:
  • Sort the array first.
  • Check consecutive elements for equality.
• The efficiency of this approach is O(n log n) due to sorting.

ALGORITHM PresortElementUniqueness(A[0..n − 1])
sort the array A
for i ← 0 to n − 2 do
    if A[i] = A[i + 1] return false
return true
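The pseudocode above can be sketched in runnable Python (the function name is illustrative):

```python
def has_duplicates(a):
    """Presorting approach: sort a copy, then compare consecutive elements."""
    b = sorted(a)                      # O(n log n) sort dominates the cost
    for i in range(len(b) - 1):
        if b[i] == b[i + 1]:           # equal neighbors => a duplicate exists
            return True
    return False

print(has_duplicates([9, 5, 8, 3, 2, 4, 7]))  # False: all keys distinct
print(has_duplicates([9, 5, 8, 5]))           # True: 5 appears twice
```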



Example 2: Searching Problem:
• Problem: Search for a value in an array of sortable elements.
• Brute-force approach: Use sequential search, which has a worst-case efficiency of O(n).
• Presorting approach:
  • Sort the array first.
  • Use binary search, which has a worst-case efficiency of O(log n).
• The efficiency of this approach is O(n log n) due to sorting and searching.

Conclusion:
1. Presorting is a powerful technique that simplifies and optimizes algorithms by sorting input data.
2. It's commonly used in various algorithms, including those dealing with geometric problems and directed acyclic graphs.
3. Presorting can significantly improve the efficiency of algorithms, especially when combined with other techniques like binary search.
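A minimal sketch of the presort-then-binary-search idea, using Python's standard `bisect` module:

```python
import bisect

def presort_search(a, key):
    """Sort once (O(n log n)), then binary-search (O(log n))."""
    b = sorted(a)
    i = bisect.bisect_left(b, key)     # leftmost position where key would go
    return i < len(b) and b[i] == key  # present iff that slot holds the key

print(presort_search([9, 5, 8, 3, 2, 4, 7], 4))  # True
print(presort_search([9, 5, 8, 3, 2, 4, 7], 6))  # False
```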



Gaussian Elimination

Gaussian elimination is a method used to solve systems of linear equations by transforming the system into an equivalent upper-triangular form. Here's a step-by-step explanation of how Gaussian elimination works, along with an example.

Introduction to Gaussian Elimination:
1. Gaussian elimination transforms a system of linear equations into an equivalent system with an upper-triangular coefficient matrix.
2. The upper-triangular form simplifies the process of finding the solution by allowing for easy back substitution.
3. The method involves a series of elementary row operations.
Forward Elimination:
• The first step is to perform forward elimination to introduce zeros below the diagonal.
• We start with the first column as the pivot column and eliminate the coefficients below it.
• We eliminate the coefficients below the diagonal by subtracting multiples of the pivot row from the rows below.
• If the pivot element is zero, we perform partial pivoting by swapping rows to ensure a non-zero pivot.



Forward Elimination:

Example System of Equations: Consider the system of equations:
● 2x1 − x2 + x3 = 1
● 4x1 + x2 − x3 = 5
● x1 + x2 + x3 = 0

Step 1: Choose the Pivot Column
We start with the first column as the pivot column.

Step 2: Eliminate Coefficients Below the Pivot
Eliminate the x1 coefficients below the first equation: R2 ← R2 − (4/2)R1 and R3 ← R3 − (1/2)R1
● 2x1 − x2 + x3 = 1
● 0x1 + 3x2 − 3x3 = 3
● 0x1 + 3/2 x2 + 1/2 x3 = −1/2

Step 3: Move to the Next Pivot Column
Now, we move to the second column as the pivot column. Eliminate the x2 coefficient below the second equation: R3 ← R3 − (1/2)R2
● 2x1 − x2 + x3 = 1
● 0x1 + 3x2 − 3x3 = 3
● 0x1 + 0x2 + 2x3 = −2

Back Substitution:
• Once the system is in upper-triangular form, we can solve it by back substitution.
• We start with the last equation to find the value of the last variable, then substitute it back into the previous equations to find the values of the other variables.

Step 1: Solve for x3
From the last equation, we can directly solve for x3:
2x3 = −2 ⟹ x3 = −1

Step 2: Substitute x3 to Solve for x2
Substituting x3 = −1 into the second equation:
3x2 − 3(−1) = 3 ⟹ 3x2 + 3 = 3 ⟹ x2 = 0

Step 3: Substitute x3 and x2 to Solve for x1
Substituting x3 = −1 and x2 = 0 into the first equation:
2x1 − 0 + (−1) = 1 ⟹ 2x1 − 1 = 1 ⟹ 2x1 = 2 ⟹ x1 = 1

So, the solution to the system of equations is x1 = 1, x2 = 0 and x3 = −1.
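The forward-elimination and back-substitution steps above can be sketched as one function (a minimal illustration with partial pivoting; the function and variable names are illustrative):

```python
def gaussian_elimination(A, b):
    """Forward elimination with partial pivoting, then back substitution."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix [A | b]
    for k in range(n - 1):                             # pivot column k
        # Partial pivoting: bring the largest remaining pivot to row k.
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):                      # zero out entries below pivot
            t = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= t * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                     # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

x = gaussian_elimination([[2, -1, 1], [4, 1, -1], [1, 1, 1]], [1, 5, 0])
print([round(v) for v in x])  # [1, 0, -1], the solution found above
```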
Time Complexity Analysis:
• The number of operations performed during forward elimination grows with the size of the input (number of equations and variables).
• As each row requires operations proportional to the number of remaining rows, the total number of operations can be represented by a cubic function of the input size.
• Therefore, the time complexity of Gaussian elimination is typically expressed as O(n³), where 'n' represents the number of equations or variables in the system.

Implications of Cubic Time Complexity:

Performance Considerations:
1. Cubic time complexity implies that the algorithm's runtime increases significantly as the size of the input grows.
2. For large systems of equations, Gaussian elimination may become computationally expensive and impractical due to its cubic time complexity.



Balanced Search Trees

Introduction to Balanced Search Trees:
• Balanced search trees are data structures designed to maintain the logarithmic efficiency of dictionary operations (searching, insertion, deletion) while preventing worst-case degeneracy.
• They address the limitation of classical binary search trees, where unbalanced trees could lead to linear time complexity in worst-case scenarios.

Instance-Simplification Approach:
• This approach involves transforming an unbalanced binary search tree into a balanced one; such trees are hence termed self-balancing trees.
Introduction to 2-3 Trees:
• A 2-3 tree is a type of balanced search tree that allows nodes to contain either one or two keys.
• Introduced by John Hopcroft in 1970, it addresses the issue of unbalanced trees by maintaining perfect height balance.
• Nodes in a 2-3 tree can be of two types: 2-nodes containing one key and two children, and 3-nodes containing two keys and three children.

Node Structure and Property:
• In a 2-node, the left child contains keys smaller than the node's key, and the right child contains keys larger than the node's key.
• In a 3-node, the left child contains keys smaller than the first key, the middle child contains keys between the first and second keys, and the right child contains keys larger than the second key.



Height Balancing:
• A crucial property of 2-3 trees is that all leaves must be at the same level, ensuring perfect height balance.
• This property is maintained by allowing nodes to contain multiple keys, thus avoiding degenerate trees.

Searching in 2-3 Trees:
• Searching for a key in a 2-3 tree starts from the root and progresses down the tree.
• If the root is a 2-node, the search continues recursively based on key comparisons.
• If the root is a 3-node, the search narrows down to one of the three subtrees based on key comparisons.

Insertion Operation:
• Inserting a new key in a 2-3 tree involves finding the appropriate leaf node through a search operation.
• If the leaf is a 2-node, the new key is inserted to maintain order.
• If the leaf is a 3-node, it is split into two nodes, and the middle key is promoted to the parent node.



[Figure: a 2-node holds one key K, with left subtree keys < K and right subtree keys > K; a 3-node holds two keys K1, K2, with subtrees holding keys < K1, between (K1, K2), and > K2.]


Height Bounds and Efficiency:
1. The height of a 2-3 tree is bounded by a logarithmic function of the number of nodes.
2. Both upper and lower bounds on the height ensure that searching, insertion, and deletion operations have logarithmic Θ(log n) time complexity in both the worst and average cases.
3. Constructing a 2-3 tree for n elements therefore takes O(n log n) time, because for each of the n insertions you may need to traverse O(log n) levels.

Example of 2-3 Tree Construction:
• The figure illustrates the construction of a 2-3 tree for a given list of keys: 9, 5, 8, 3, 2, 4, 7.
• Each step in the construction involves inserting a key into the tree while maintaining balance and order.

Generalization to B-Trees:
1. B-trees are a significant generalization of 2-3 trees, allowing for more keys per node and greater flexibility in tree structure.
Heap Sort

1. Algorithm Overview:
   Heapsort is a sorting algorithm that consists of two stages:
   • Stage 1: Construct a heap from the given array.
   • Stage 2: Perform maximum deletions (extracting the maximum element from the heap) n−1 times, resulting in a sorted array.

2. Heap Construction:
   • In Stage 1, a heap is constructed from the given array using the bottom-up heap construction algorithm.
   • This stage has a time complexity of O(n), where n is the number of elements in the array.


3. Maximum Deletions:
   • In Stage 2, the root element (maximum element) is deleted from the heap n−1 times.
   • Each deletion operation rearranges the heap to maintain the heap property.
   • The deleted elements are placed at the end of the array.

Example: 2, 9, 7, 6, 5, 8

Efficiency Analysis:
• The time complexity of Stage 2 depends on the number of key comparisons needed to delete the root element each time.
• The number of key comparisons, C(n), can be bounded by 2⌊log2(n−1)⌋ + 2⌊log2(n−2)⌋ + ... + 2⌊log2 1⌋.
• This sum can be simplified and bounded by 2n log2 n.
• Thus, the time complexity of Stage 2 is O(n log n).
• Combining both stages, the overall time complexity of heapsort is O(n) + O(n log n) = O(n log n).
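The two stages can be sketched as a short in-place routine (a minimal illustration; `sift_down` is an assumed helper name):

```python
def heapsort(a):
    """In-place heapsort: bottom-up heap construction, then n-1 max deletions."""
    n = len(a)

    def sift_down(i, size):
        # Restore the max-heap property for the subtree rooted at index i.
        while 2 * i + 1 < size:
            j = 2 * i + 1                      # left child
            if j + 1 < size and a[j + 1] > a[j]:
                j += 1                         # right child is the larger one
            if a[i] >= a[j]:
                break
            a[i], a[j] = a[j], a[i]
            i = j

    for i in range(n // 2 - 1, -1, -1):        # Stage 1: O(n) bottom-up build
        sift_down(i, n)
    for end in range(n - 1, 0, -1):            # Stage 2: n-1 maximum deletions
        a[0], a[end] = a[end], a[0]            # move the max to its final slot
        sift_down(0, end)
    return a

print(heapsort([2, 9, 7, 6, 5, 8]))  # [2, 5, 6, 7, 8, 9]
```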



Horner's Rule

• Horner's rule is a technique used for efficiently evaluating polynomials.
• It's particularly useful because it reduces the number of arithmetic operations required to compute the value of a polynomial at a given point.
• This process allows us to avoid the need for explicit expansion of the polynomial, making it a powerful tool in polynomial evaluation.

ALGORITHM Horner(P[0..n], x)
// Evaluates a polynomial at a given point by Horner's rule
// Input: An array P[0..n] of coefficients of a polynomial of degree n,
//        stored from the lowest to the highest, and a number x
// Output: The value of the polynomial at x
p ← P[n]
for i ← n−1 downto 0 do
    p ← x * p + P[i]
return p
Explanation:
• Initialization: We initialize the coefficients of the polynomial and the value of x.
• Horner's Rule: We apply Horner's rule by successively multiplying the current result by x and adding the next coefficient until we reach the end.
• Interpretation: Each step in the calculation corresponds to evaluating a term of the polynomial, resulting in the final value of p(x) at the given point x.
• Efficiency: Horner's rule allows us to evaluate the polynomial with only n multiplications and n additions, making it highly efficient compared to brute-force methods.

Example:
Problem: p(x) = 2x⁴ − x³ + 3x² + x − 5 at x = 3.

Step 1: Initialize Coefficients and x
Coefficients: 2, −1, 3, 1, −5
x = 3

Step 2: Apply Horner's Rule
• We multiply the previous result by the value of x, which is 3.
• Initial result: 2
• Multiply by x = 3: 2 × 3 = 6
• We then add the next coefficient, which is −1.
• Add coefficient: 6 + (−1) = 5



• Next, we multiply the previous result (5) by x = 3 again.
• Multiply by x = 3: 5 × 3 = 15
• We add the next coefficient, which is 3.
• Add coefficient: 15 + 3 = 18
• Next, we multiply the previous result (18) by x = 3 again.
• Multiply by x = 3: 18 × 3 = 54
• We add the next coefficient, which is 1.
• Add coefficient: 54 + 1 = 55
• Next, we multiply the previous result (55) by x = 3 again.
• Multiply by x = 3: 55 × 3 = 165
• We add the next coefficient, which is −5.
• Add coefficient: 165 + (−5) = 160

Final Result:
• After performing the above iterations for all coefficients, the final result is 160.
• This final result represents the value of the polynomial p(x) at x = 3, which is p(3) = 160.
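A minimal Python sketch of the rule; here the coefficients are listed from the highest degree down, matching the worked example's order (2, −1, 3, 1, −5):

```python
def horner(p, x):
    """Evaluate the polynomial with coefficients p (highest degree first) at x."""
    result = p[0]
    for c in p[1:]:
        result = result * x + c    # one multiplication + one addition per step
    return result

print(horner([2, -1, 3, 1, -5], 3))  # 160
```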



Binary Exponentiation

Binary exponentiation is a highly efficient algorithm used for computing large powers of a number. It is particularly useful in scenarios where traditional exponentiation methods, such as repeated multiplication, become computationally expensive, especially for large exponents.

Efficiency:
• Binary exponentiation has a time complexity of O(log n), where n is the exponent.
• This makes it significantly faster than the naive approach of repeated multiplication, which has a time complexity of O(n), especially for large exponents.

Two Approaches: Left to Right, Right to Left
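A sketch of the right-to-left approach, which scans the exponent's binary digits from the least significant end (the left-to-right variant scans from the other end):

```python
def fast_power(a, n):
    """Right-to-left binary exponentiation: O(log n) multiplications."""
    result = 1
    term = a                 # holds a^(2^i) at step i
    while n > 0:
        if n & 1:            # current binary digit of n is 1
            result *= term
        term *= term         # square for the next binary digit
        n >>= 1
    return result

print(fast_power(3, 13))  # 1594323
print(fast_power(2, 10))  # 1024
```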
Problem Reduction

Problem reduction is a problem-solving strategy that involves simplifying a complex problem by reducing it to a known or easier problem. Here's a step-by-step explanation with an example:

1. Identify the Problem: Start by understanding the problem you need to solve. This could be a complex task or a challenge that you're facing.

2. Find a Known Problem: Look for a problem that you already know how to solve or one that is simpler to solve compared to the original problem. This known problem should be related to the original problem in some way.

3. Reduce the Original Problem: Develop a strategy to reduce the original problem to the known or simpler problem. This may involve transforming the original problem or breaking it down into smaller, more manageable parts.

4. Apply the Solution: Solve the known or simpler problem using an appropriate algorithm or method that you have available.

5. Translate the Solution: Once you have solved the simpler problem, translate the solution back to the original problem domain, if necessary.
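A classic illustration of this strategy reduces computing the least common multiple to the already-solved gcd problem, via lcm(m, n) = m·n / gcd(m, n):

```python
def gcd(m, n):
    """Euclid's algorithm: the known, easily solved problem."""
    while n:
        m, n = n, m % n
    return m

def lcm(m, n):
    """Reduce lcm to gcd, then translate the answer back."""
    return m * n // gcd(m, n)

print(lcm(24, 60))  # 120
```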
Dynamic Programming



● Dynamic programming (DP) is a powerful algorithmic technique used to solve problems by breaking them down into simpler subproblems and solving each subproblem only once.
● It is especially useful for optimization problems where the goal is to find the best solution among a set of feasible solutions.
● In the context of Design and Analysis of Algorithms (DAA), dynamic programming is often applied to problems that exhibit the following properties:

1. Optimal Substructure: The problem can be broken down into smaller, independent subproblems, and the optimal solution to the original problem can be constructed from the optimal solutions to its subproblems.

2. Overlapping Subproblems:
• The problem can be decomposed into subproblems that are reused multiple times during the computation.
• Rather than solving the same subproblem repeatedly, dynamic programming stores the solutions to subproblems in a table (often called a memoization table) and reuses them when needed.
Computing a Binomial Coefficient

To understand how dynamic programming optimizes the computation of binomial coefficients compared to brute force, let's first define the binomial coefficient C(n, r), which represents the number of ways to choose r elements from a set of n distinct elements without regard to order. It is calculated using the formula:

C(n, r) = n! / (r! (n − r)!)

The dynamic programming approach leverages the insights from Pascal's Triangle to efficiently compute binomial coefficients by iteratively filling in the memoization table, starting from the base cases and building up to the desired coefficient.
Moreover, this pattern can be derived from Pascal's triangle. Let's visualize the DP table for the above computation (rows are n, columns are r):

n\r	0	1	2	3
0	1	0	0	0
1	1	1	0	0
2	1	2	1	0
3	1	3	3	1
4	1	4	6	4
5	1	5	10	10


Dynamic Programming Approach:

• Dynamic programming optimizes the computation of binomial coefficients by exploiting the principle of overlapping subproblems and optimal substructure.
• It breaks down the problem into smaller subproblems and computes the binomial coefficient for each subproblem only once, storing the results in a memoization table.
• By memoizing the results of subproblems, dynamic programming avoids redundant computations and ensures that each subproblem is solved only once.

● The time complexity of the dynamic programming approach is O(n · r), as it computes each entry of the memoization table only once.
● The cost of the algorithm is that of filling out the table; addition is the basic operation.
● Because r ≤ n, the sum counting the additions splits into two parts: in the first r rows only the entries with j ≤ i need to be filled out, while each later row fills all r + 1 entries.
Warshall's Algorithm

● Warshall's algorithm converts a given adjacency matrix to a path matrix.
● An adjacency matrix gives the information of which vertices are adjacent to which other vertices.
● A path matrix gives the information of which vertices are reachable from which other vertices.


[Figure: the adjacency matrix is copied to the path matrix, and Warshall's algorithm then loops over each intermediate vertex k to update the path matrix.]

Warshall's algorithm says that if there has to be a path from vertex i to vertex j, there may be a direct edge from i to j, or there may be a path from i to j through k (k has to be varied from 1 to n, where there are n vertices in the graph).


Floyd's Algorithm

● It is also called the all-pairs shortest path algorithm.
● In a given connected, directed and weighted graph G, the objective of this algorithm is to find the minimum cost from any vertex to any other vertex in the graph G.
● From a vertex i to j there may be many paths, but we have to find the minimum value by applying the dynamic programming principle.
● A vertex k is considered as a vertex which comes in between the path from vertex i to vertex j; the path value (cost[i,k] + cost[k,j]) is calculated and checked to see whether it is the minimum.
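The update rule above can be sketched directly; the weighted digraph used here is an illustrative example, not one from the slides:

```python
INF = float('inf')

def floyd(cost):
    """All-pairs shortest paths: relax every pair (i, j) through each vertex k."""
    n = len(cost)
    d = [row[:] for row in cost]
    for k in range(n):                     # intermediate vertex k
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]   # a shorter path via k
    return d

cost = [[0, 3, INF],
        [2, 0, INF],
        [7, 1, 0]]
print(floyd(cost))  # [[0, 3, inf], [2, 0, inf], [3, 1, 0]]
```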
Optimal Binary Search Trees

● A binary search tree is a binary tree in which each node is an identifier.
● We have to arrange the keys/elements in the form of a tree such that the cost of searching the elements is minimum.
● Such a tree is called an Optimal Binary Search Tree.
● All the identifiers in the left subtree are less than the identifier at the root node.
● All the identifiers in the right subtree are greater than the identifier at the root node.


To find whether an identifier 'X' is present or not:
1. X is compared with the root node.
2. If X is less than the identifier at the root node, then the search continues in the left subtree.
3. If X is greater than the identifier at the root node, then the search continues in the right subtree.

Let us assume a given set of identifiers {a1, a2, a3, ..., an} where a1 < a2 < ... < an, and let
1. P[i] be the probability of a successful search for an identifier ai, 1 <= i <= n
2. Q[i] be the probability of an unsuccessful search for an identifier.


Example: Construct an optimal binary search tree for the given set of keys.

Key	1	2	3	4
Probability	0.1	0.2	0.4	0.3

The formula we use is

C[i, j] = min over i ≤ k ≤ j of { C[i, k−1] + C[k+1, j] } + Σ (s = i to j) Ps

• Here C(i, i−1) = 0 for 1 ≤ i ≤ n+1 [represents an empty tree]
• C(i, i) = Pi for 1 ≤ i ≤ n [represents a one-node tree]
• Root(i, i) = i
• We have to construct two tables: one is the cost table and the other is the root table.
• The root table records the value of k for which the cost is minimum.
Using C(i, i) = Pi, R(i, i) = i, and C(i, i−1) = 0:

Cost table C (i = 1 to n+1, j = 0 to n):

i\j	0	1	2	3	4
1	0	0.1	0.4	1.1	1.7
2		0	0.2	0.8	1.4
3			0	0.4	1.0
4				0	0.3
5					0

Root table R:

i\j	1	2	3	4
1	1	2	3	3
2		2	3	3
3			3	3
4				4


First find the min cost for
● C[1,2]
● C[2,3]
● C[3,4]
Next find the min cost for
● C[1,3]
● C[2,4]
Next find the min cost for
● C[1,4]

● Thus the average number of key comparisons in the OBST is equal to 1.7, i.e. from C(1,4).
● Since R(1,4) = 3, the root of the OBST contains the third key, i.e. key 3.
● Since it's a binary search tree, its left subtree is made up of keys 1 & 2, and the right subtree contains just key 4.
● Among keys 1 and 2, R(1,2) = 2, so 2 is the root of the left subtree and 1 is its leaf.

The resulting tree:

      3
     / \
    2   4
   /
  1
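The two-table computation can be sketched as follows (1-based indices to match the tables above; only the successful-search probabilities are used, as in the example):

```python
def optimal_bst(p):
    """Build the cost and root tables for an OBST over keys 1..n."""
    n = len(p)
    # C[i][j] and R[i][j] for 1 <= i <= n+1; C[i][i-1] = 0 is the empty tree.
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i - 1]              # one-node tree
        R[i][i] = i
    for d in range(1, n):               # d = j - i, growing subproblem size
        for i in range(1, n - d + 1):
            j = i + d
            best, best_k = min(
                (C[i][k - 1] + C[k + 1][j], k) for k in range(i, j + 1))
            C[i][j] = best + sum(p[i - 1:j])   # add probabilities of keys i..j
            R[i][j] = best_k
    return C[1][n], R[1][n]

cost, root = optimal_bst([0.1, 0.2, 0.4, 0.3])
print(round(cost, 2), root)  # 1.7 3
```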
The Knapsack Problem and Memory Functions

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Knapsack using Dynamic Programming
Bottom-Up Approach

1. Identify the Problem: We are given n items with known weights w1, w2, ..., wn and values v1, v2, ..., vn, along with a knapsack of capacity W. The goal is to find the most valuable subset of items that can fit into the knapsack without exceeding its capacity.

2. Define Subproblems: We define the subproblem F(i, j) as the maximum value that can be obtained with items 1, 2, ..., i and a knapsack capacity of j.

3. The principle of the knapsack is to fill the knapsack with minimum weight and maximum cost (profit).

4. Time and Space Complexity: The time complexity of this algorithm is O(nW), where n is the number of items and W is the capacity of the knapsack. The space complexity is also O(nW), since we need to store the values of F(i, j) in a table.
Example: Apply the dynamic programming technique to the following instance of the knapsack problem with capacity M = 5.

Items (N = 4):

Item i	Weight wi	Profit Pi
1	2	12$
2	1	10$
3	3	20$
4	2	15$

Profit table F(i, j), for capacity j = 0 to 5:

i\j	0	1	2	3	4	5
0	0	0	0	0	0	0
1	0	0	12	12	12	12
2	0	10	12	22	22	22
3	0	10	12	22	30	32
4	0	10	15	25	30	37

Optimal solution: (x1, x2, x3, x4) = (1, 1, 0, 1) with maximum profit 37.
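The profit table above can be reproduced with a short bottom-up sketch; the backtrace that recovers (x1, x2, x3, x4) is an added illustration:

```python
def knapsack(weights, profits, M):
    """Bottom-up table: F[i][j] = best profit using items 1..i with capacity j."""
    n = len(weights)
    F = [[0] * (M + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, p = weights[i - 1], profits[i - 1]
        for j in range(M + 1):
            F[i][j] = F[i - 1][j]                          # skip item i
            if w <= j:
                F[i][j] = max(F[i][j], p + F[i - 1][j - w])  # or take it
    # Backtrace to recover which items were taken.
    take, j = [0] * n, M
    for i in range(n, 0, -1):
        if F[i][j] != F[i - 1][j]:       # item i was needed for this profit
            take[i - 1] = 1
            j -= weights[i - 1]
    return F[n][M], take

print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # (37, [1, 1, 0, 1])
```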
Knapsack using Memory Functions
Top-Down Approach

● Dynamic programming often involves computing solutions to overlapping subproblems.
● In some cases, a significant portion of these subproblems may have already been solved and stored in the table.
● By using the memory function method, we can skip recalculating these solutions and instead retrieve them directly from the table.
● It combines the flexibility of the top-down approach with the efficiency of avoiding redundant calculations from the bottom-up approach.
● This saves computation time, especially for large instances of the problem where the number of subproblems can be immense.
Here's a clearer perspective:

Dynamic Programming:
• In dynamic programming, we solve subproblems bottom-up and fill in a table with solutions.
• This ensures that all required subproblems are solved, but it might involve solving more subproblems than necessary.

Memory Function Method:
• This method, also known as memoization, optimizes dynamic programming by storing the results of solved subproblems.
• When a subproblem needs to be solved, we first check if its solution is already stored in the table. If it is, we retrieve it; if not, we calculate it.
• This avoids redundant calculations for subproblems whose solutions are already known.

Table computed by the memory function method for the same instance ("-" marks entries that were never evaluated):

i\j	0	1	2	3	4	5
0	0	0	0	0	0	0
1	0	0	12	12	12	12
2	0	-	12	22	-	22
3	0	-	-	22	-	32
4	0	-	-	-	-	37
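A sketch of the memory-function version: the same recurrence, but an entry is computed only when first requested (unevaluated entries stay `None`, the dashes in the table above):

```python
def knapsack_memo(weights, profits, M):
    """Memory function: top-down recursion that caches F(i, j) in a table."""
    n = len(weights)
    F = [[None] * (M + 1) for _ in range(n + 1)]   # None = not yet evaluated

    def mf(i, j):
        if i == 0 or j == 0:
            return 0
        if F[i][j] is None:                        # compute only on first request
            without = mf(i - 1, j)
            if weights[i - 1] <= j:
                F[i][j] = max(without,
                              profits[i - 1] + mf(i - 1, j - weights[i - 1]))
            else:
                F[i][j] = without
        return F[i][j]                             # otherwise retrieve from table

    return mf(n, M)

print(knapsack_memo([2, 1, 3, 2], [12, 10, 20, 15], 5))  # 37
```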
