Dynamic Programming (DP) Introduction

Last Updated : 28 Oct, 2024

Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming. The idea is to simply store the results of subproblems so that we do not have to re-compute them when needed later. This simple optimization typically reduces time complexities from exponential to polynomial. Some popular problems solved using Dynamic Programming are Fibonacci Numbers, Diff Utility (Longest Common Subsequence), Bellman–Ford Shortest Path, Floyd Warshall, Edit Distance and Matrix Chain Multiplication.

How Does Dynamic Programming (DP) Work?

  • Identify Subproblems: Divide the main problem into smaller subproblems.
  • Store Solutions: Solve each subproblem and store the solution in a table or array.
  • Build Up Solutions: Use the stored solutions to build up the solution to the main problem.
  • Avoid Redundancy: By storing solutions, DP ensures that each subproblem is solved only once, reducing computation time.

Example of Dynamic Programming (DP)

Example: Consider the problem of computing the Fibonacci sequence:

Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …

Brute Force Approach:

To find the nth Fibonacci number with a brute force approach, you would recursively compute the (n-1)th and (n-2)th Fibonacci numbers and add them. This works, but it is inefficient for large values of n, because the same Fibonacci numbers end up being recomputed many times.

Dynamic Programming Approach:

  • Subproblems: F(0), F(1), F(2), F(3), …
  • Store Solutions: Create a table to store the values of F(n) as they are calculated.
  • Build Up Solutions: For F(n), look up F(n-1) and F(n-2) in the table and add them.
  • Avoid Redundancy: The table ensures that each subproblem (e.g., F(2)) is solved only once.

By using DP, we can efficiently calculate the Fibonacci sequence without having to recompute subproblems.
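
For instance, filling the table for n = 5 looks like this, where each entry is computed once from the two values already stored:

F(0) = 0
F(1) = 1
F(2) = F(1) + F(0) = 1
F(3) = F(2) + F(1) = 2
F(4) = F(3) + F(2) = 3
F(5) = F(4) + F(3) = 5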

When to Use Dynamic Programming (DP)?

Dynamic programming is an optimization technique used for problems that have the following two characteristics:

1. Optimal Substructure:

Optimal substructure means that the optimal solution to the bigger problem can be built by combining the optimal solutions of its subproblems.

Example:

Consider the problem of finding the minimum cost path in a weighted graph from a source node to a destination node. We can break this problem down into smaller subproblems:

  • Find the minimum cost path from the source node to each intermediate node.
  • Find the minimum cost path from each intermediate node to the destination node.

The solution to the larger problem (finding the minimum cost path from the source node to the destination node) can be constructed from the solutions to these smaller subproblems.
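
As a rough illustrative sketch, assume a small directed acyclic graph stored as an adjacency list; the hypothetical minCost function below builds the optimal cost from a node to the destination out of the optimal costs of its neighbours, which is exactly the optimal substructure described above:

C++
// Illustrative sketch: minimum cost path in a small weighted DAG
// Optimal substructure: minCost(u) = min over edges (u -> v, weight w) of w + minCost(v)
#include <algorithm>
#include <climits>
#include <iostream>
#include <utility>
#include <vector>
using namespace std;

int minCost(int u, int dest, vector<vector<pair<int, int> > >& adj, vector<int>& memo)
{
    // Reaching the destination costs nothing more
    if (u == dest)
        return 0;

    // Reuse an already computed optimal answer
    if (memo[u] != -1)
        return memo[u];

    int best = INT_MAX;
    for (size_t i = 0; i < adj[u].size(); i++) {
        int v = adj[u][i].first;
        int w = adj[u][i].second;
        int sub = minCost(v, dest, adj, memo);

        // Combine only with neighbours that can reach the destination
        if (sub != INT_MAX)
            best = min(best, w + sub);
    }
    return memo[u] = best;
}

int main()
{
    // Edges: 0->1 (cost 1), 0->2 (cost 4), 1->2 (cost 2), 1->3 (cost 5), 2->3 (cost 1)
    vector<vector<pair<int, int> > > adj(4);
    adj[0].push_back(make_pair(1, 1));
    adj[0].push_back(make_pair(2, 4));
    adj[1].push_back(make_pair(2, 2));
    adj[1].push_back(make_pair(3, 5));
    adj[2].push_back(make_pair(3, 1));

    vector<int> memo(4, -1);

    // Optimal path 0 -> 1 -> 2 -> 3 with cost 1 + 2 + 1 = 4
    cout << minCost(0, 3, adj, memo);
    return 0;
}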

2. Overlapping Subproblems:

The same subproblems are solved repeatedly in different parts of the problem.

Example:

Consider the problem of computing the Fibonacci series. To compute the Fibonacci number at index n, we need the Fibonacci numbers at indices n-1 and n-2. Notice that the subproblem at index n-2 is solved twice: once directly for index n and once again while computing the number at index n-1. The deeper the recursion goes, the more often the same subproblems reappear.
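
To see the overlap concretely, here is a small illustrative sketch (the calls counter and the choice of fib(2) and n = 10 are just for demonstration) that counts how often plain recursion recomputes fib(2) while evaluating fib(10):

C++
// Counting how many times the naive recursion solves the same subproblem
#include <iostream>
using namespace std;

int calls = 0; // number of times fib(2) is computed

int fib(int n)
{
    if (n == 2)
        calls++;
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}

int main()
{
    fib(10);

    // Plain recursion ends up solving the subproblem fib(2) 34 separate times
    cout << "fib(2) computed " << calls << " times";
    return 0;
}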

Approaches of Dynamic Programming (DP)

Dynamic programming can be achieved using two approaches:

1. Top-Down Approach (Memoization):

In the top-down approach, also known as memoization, we start from the original problem and recursively break it down into smaller subproblems. To avoid redundant calculations, we store the result of each solved subproblem in a memoization table and reuse it on later calls.

2. Bottom-Up Approach (Tabulation):

In the bottom-up approach, also known as tabulation, we start with the smallest subproblems and gradually build up to the final solution. We store the results of solved subproblems in a table to avoid redundant calculations.

Figure: The two approaches of dynamic programming, memoization (top-down) and tabulation (bottom-up)

Please refer Tabulation vs Memoization for the detailed differences.
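
As a quick side-by-side sketch of the two approaches on the Fibonacci example used throughout this article (the full, commented implementations follow in the sections below):

C++
#include <iostream>
#include <vector>
using namespace std;

// Top-down (memoization): recurse from n and cache each result
int fibMemo(int n, vector<int>& memo)
{
    if (n <= 1)
        return n;
    if (memo[n] != -1)
        return memo[n];
    return memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
}

// Bottom-up (tabulation): fill the table from the smallest subproblems upward
int fibTab(int n)
{
    if (n <= 1)
        return n;
    vector<int> dp(n + 1);
    dp[0] = 0;
    dp[1] = 1;
    for (int i = 2; i <= n; i++)
        dp[i] = dp[i - 1] + dp[i - 2];
    return dp[n];
}

int main()
{
    int n = 5;
    vector<int> memo(n + 1, -1);

    // Both approaches give the same answer: 5
    cout << fibMemo(n, memo) << " " << fibTab(n);
    return 0;
}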

How to solve a Dynamic Programming Problem?

Steps to solve a Dynamic programming problem:

  1. Identify if it is a Dynamic programming problem.
  2. Decide a state expression with the fewest parameters.
  3. Formulate state and transition relationships.
  4. Do tabulation (or memoization).

Please refer Steps for how to solve a Dynamic Programming Problem for details.
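
For the Fibonacci example used throughout this article, these steps look like:

  • State: dp[i] = the ith Fibonacci number (a single parameter, i).
  • Transition: dp[i] = dp[i - 1] + dp[i - 2].
  • Base cases: dp[0] = 0 and dp[1] = 1.
  • Answer: dp[n], computed either by memoized recursion or by tabulating i from 2 up to n.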

How to solve a Dynamic Programming problem through the Fibonacci example?

Problem: Let’s find the nth term of the Fibonacci sequence. The Fibonacci series is the sequence of numbers in which each number is the sum of the two preceding ones, for example 0, 1, 1, 2, 3, 5, and so on.

Naive Approach: The basic way to find the nth Fibonacci number is to use recursion.

Below is the implementation for the above approach:

C++
// C++ code for the above approach:
#include <iostream>
using namespace std;

// Function to find nth fibonacci number
int fib(int n)
{
    if (n <= 1) {
        return n;
    }
    int x = fib(n - 1);
    int y = fib(n - 2);

    return x + y;
}

// Driver code
int main()
{
    int n = 5;

    // Function Call
    cout << fib(n);
    return 0;
}
Java
// Java code for the above approach:
import java.io.*;

class GFG {
    // Function to find nth fibonacci number
    public static int fib(int n)
    {
        if (n <= 1) {
            return n;
        }
        int x = fib(n - 1);
        int y = fib(n - 2);

        return x + y;
    }

    // Driver Code
    public static void main(String[] args)
    {
        int n = 5;

        // Function Call
        System.out.print(fib(n));
    }
}

// This code is contributed by Rohit Pradhan
Python
# Function to find nth fibonacci number
def fib(n):
    if (n <= 1):
        return n
    x = fib(n - 1)
    y = fib(n - 2)

    return x + y

n = 5

# Function Call
print(fib(n))

#contributed by akashish__
C#
using System;

public class GFG {

    // Function to find nth fibonacci number
    public static int fib(int n)
    {
        if (n <= 1) {
            return n;
        }
        int x = fib(n - 1);
        int y = fib(n - 2);

        return x + y;
    }

    static public void Main()
    {

        int n = 5;

        // Function Call
        Console.WriteLine(fib(n));
    }
}
// contributed by akashish__
JavaScript
// Function to find nth fibonacci number
function fib(n)
{
    if (n <= 1) {
        return n;
    }
    let x = fib(n - 1);
    let y = fib(n - 2);

    return x + y;
}

// Driver code
let n = 5;

// Function Call
console.log(fib(n));

// This code is contributed by akashish__

Output
5

Complexity Analysis: 

Time Complexity: O(2^n)

  • For every n, we make two recursive calls, to fib(n - 1) and fib(n - 2). fib(n - 1) in turn calls fib(n - 2) and fib(n - 3), fib(n - 2) calls fib(n - 3) and fib(n - 4), and so on until the base case is reached.
  • Each call performs a constant amount of work k (adding the two previously computed outputs). Level i of the recursion tree contains at most 2^i calls, so it performs about 2^i * k work, and the last level performs about 2^(n-1) * k work. The total work is therefore k * (2^0 + 2^1 + … + 2^(n-1)) = k * (2^n - 1), which is O(2^n).
  • If we draw the recursion tree of the Fibonacci recursion, its maximum height is n, so the recursion stack, and hence the space complexity, is O(n).

Efficient approach (Memoization): Since the above solution has a terrible, exponential complexity, we need to optimize it with a more efficient method.

Let’s look at the example below for finding the 5th Fibonacci number.

Figure: Recursion tree for the 5th Fibonacci number

Observations:

  • The program keeps repeating the same recursive calls. As shown in the figure above, the value of fib(3) is needed while calculating fib(4) (the first recursive call to fib(3)) and again directly while calculating fib(5) (the second such recursive call to fib(3)).
  • Both of these repeated calls are highlighted by the circles in the figure.
  • Similarly, many other recursive calls are repeated in the same way.
  • Such repeated recursive calls are what blow up the program’s time complexity.
  • By storing the output of previously computed values (preferably in an array, since it can be indexed and read most efficiently), we can avoid this problem. The next time one of these values is needed, we use its already stored output instead of calculating it all over again.
  • In this way, we improve the performance of our code. Memoization is this process of storing each recursive call’s output for later use, preventing the code from calculating it again.

Way to memoize: To achieve this in our example, we take an answer array initialized to -1. Before computing a value, we check whether the entry stored at that position of the answer array is still -1; the value -1 indicates that we haven’t calculated it yet and have to compute it recursively. The computed output is then stored in the answer array so that, the next time the same value is encountered, it can be read directly from the array.

Now in this process of memoization, considering the above Fibonacci numbers example, it can be observed that the total number of unique calls will be at most (n + 1) only.

Below is the implementation for the above approach:

C++
// C++ code for the above approach:
#include <iostream>
using namespace std;

// Helper Function
int fibo_helper(int n, int* ans)
{

    // Base case
    if (n <= 1) {
        return n;
    }

    // To check if output already exists
    if (ans[n] != -1) {
        return ans[n];
    }

    // Calculate output
    int x = fibo_helper(n - 1, ans);
    int y = fibo_helper(n - 2, ans);

    // Saving the output for future use
    ans[n] = x + y;

    // Returning the final output
    return ans[n];
}

int fibo(int n)
{
    int* ans = new int[n + 1];

    // Initializing with -1
    for (int i = 0; i <= n; i++) {
        ans[i] = -1;
    }

    int res = fibo_helper(n, ans);

    // Freeing the allocated memory and
    // returning the final output
    delete[] ans;
    return res;
}

// Drivers code
int main()
{
    int n = 5;

    // Function Call
    cout << fibo(n);
    return 0;
}
Java
// Java code for the above approach:
import java.io.*;

class GFG {
    // Helper Function
    public static int fibo_helper(int n, int ans[])
    {

        // Base case
        if (n <= 1) {
            return n;
        }

        // To check if output already exists
        if (ans[n] != -1) {
            return ans[n];
        }

        // Calculate output
        int x = fibo_helper(n - 1, ans);
        int y = fibo_helper(n - 2, ans);

        // Saving the output for future use
        ans[n] = x + y;

        // Returning the final output
        return ans[n];
    }

    public static int fibo(int n)
    {
        int ans[] = new int[n + 1];

        // Initializing with -1
        for (int i = 0; i <= n; i++) {
            ans[i] = -1;
        }
        return fibo_helper(n, ans);
    }

    // Driver Code
    public static void main(String[] args)
    {
        int n = 5;

        // Function Call
        System.out.print(fibo(n));
    }
}

// This code is contributed by Rohit Pradhan
Python
# Helper Function
def fibo_helper(n, ans):
  # Base case
  if (n <= 1):
    return n

  # To check if output already exists
  if (ans[n] != -1):
    return ans[n]

  # Calculate output
  x = fibo_helper(n - 1, ans)
  y = fibo_helper(n - 2, ans)

  # Saving the output for future use
  ans[n] = x + y

  # Returning the final output
  return ans[n]


def fibo(n):
  # Initializing the answer array with -1
  ans = [-1]*(n+1)

  return fibo_helper(n, ans)


# Code
n = 5

# Function Call
print(fibo(n))
# contributed by akashish__
C#
using System;

public class GFG {

    // Helper Function
    public static int fibo_helper(int n, int[] ans)
    {

        // Base case
        if (n <= 1) {
            return n;
        }

        // To check if output already exists
        if (ans[n] != -1) {
            return ans[n];
        }

        // Calculate output
        int x = fibo_helper(n - 1, ans);
        int y = fibo_helper(n - 2, ans);

        // Saving the output for future use
        ans[n] = x + y;

        // Returning the final output
        return ans[n];
    }

    public static int fibo(int n)
    {
        int[] ans = new int[n + 1];

        // Initializing with -1
        for (int i = 0; i <= n; i++) {
            ans[i] = -1;
        }
        return fibo_helper(n, ans);
    }

    static public void Main()
    {

        // Code

        int n = 5;

        // Function Call
        Console.WriteLine(fibo(n));
    }
}
// contributed by akashish__
JavaScript

// Javascript code for the above approach:

// Helper Function
function fibo_helper(n, ans) {
    // Base case
    if (n <= 1) {
        return n;
    }

    // To check if output already exists
    if (ans[n] != -1) {
        return ans[n];
    }

    // Calculate output
    let x = fibo_helper(n - 1, ans);
    let y = fibo_helper(n - 2, ans);

    // Saving the output for future use
    ans[n] = x + y;

    // Returning the final output
    return ans[n];
}

function fibo(n) {
    let ans = [];

    // Initializing with -1
    for (let i = 0; i <= n; i++) {
        ans.push(-1);
    }
    return fibo_helper(n, ans);
}

// Driver code
let n = 5;

// Function Call
console.log(fibo(n));

// contributed by akashish__


Output
5

Complexity analysis:

  • Time complexity: O(n)
  • Auxiliary Space: O(n)

Optimized approach (Tabulation): We can instead follow a bottom-up approach, building the table from the smallest index up to the desired index. This way of converting the recursion into an iteration over a table is the tabulation form of Dynamic Programming (DP).

Observations:

  • In the memoized version, we still recursively visit each index of the answer array and calculate its value using previously saved outputs.
  • The recursion terminates at the base cases, which means we already know the answers that should be stored at the base-case indices.
  • For Fibonacci numbers these indices are 0 and 1, since fib(0) = 0 and fib(1) = 1. So we can directly assign these two values into our answer array and then use them to calculate fib(2), which is fib(1) + fib(0), and so on for every subsequent index.
  • This can easily be done iteratively by running a loop from i = 2 to n. In the end we read the answer from the nth index of the array, because the ith index always contains the answer for the ith value.
  • Put simply, we first work out how the current value depends on previous values and then use those previous values to compute the new one, starting from the values that do not depend on anything else (the base cases, the smallest subproblems, which we already know).

Below is the implementation for the above approach:

C++
// C++ code for the above approach:
#include <iostream>
using namespace std;

// Function for calculating the nth
// Fibonacci number
int fibo(int n)
{
    // Base cases (also avoids writing past
    // the array when n < 1)
    if (n <= 1) {
        return n;
    }

    int* ans = new int[n + 1];

    // Storing the independent values in the
    // answer array
    ans[0] = 0;
    ans[1] = 1;

    // Using the bottom-up approach
    for (int i = 2; i <= n; i++) {
        ans[i] = ans[i - 1] + ans[i - 2];
    }

    // Returning the final value
    int res = ans[n];
    delete[] ans;
    return res;
}

// Driver code
int main()
{
    int n = 5;

    // Function Call
    cout << fibo(n);
    return 0;
}
Java
// Java code for the above approach:
import java.io.*;

class GFG {
    // Function for calculating the nth
    // Fibonacci number
    public static int fibo(int n)
    {
        int ans[] = new int[n + 1];

        // Storing the independent values in the
        // answer array
        ans[0] = 0;
        ans[1] = 1;

        // Using the bottom-up approach
        for (int i = 2; i <= n; i++) {
            ans[i] = ans[i - 1] + ans[i - 2];
        }

        // Returning the final index
        return ans[n];
    }

    // Driver Code
    public static void main(String[] args)
    {
        int n = 5;

        // Function Call
        System.out.print(fibo(n));
    }
}

// This code is contributed by Rohit Pradhan
Python
# Python3 code for the above approach:

# Function for calculating the nth
# Fibonacci number
def fibo(n):
  ans = [None] * (n + 1)

  # Storing the independent values in the
  # answer array
  ans[0] = 0
  ans[1] = 1

  # Using the bottom-up approach
  for i in range(2,n+1):
    ans[i] = ans[i - 1] + ans[i - 2]

  # Returning the final index
  return ans[n]

# Driver code
n = 5

# Function Call
print(fibo(n))
#contributed by akashish__
C#
// C# code for the above approach:

using System;

public class GFG {

    // Function for calculating the nth
    // Fibonacci number
    public static int fibo(int n)
    {
        int[] ans = new int[n + 1];

        // Storing the independent values in the
        // answer array
        ans[0] = 0;
        ans[1] = 1;

        // Using the bottom-up approach
        for (int i = 2; i <= n; i++) {
            ans[i] = ans[i - 1] + ans[i - 2];
        }

        // Returning the final index
        return ans[n];
    }

    static public void Main()
    {

        // Code
        int n = 5;

        // Function Call
        Console.Write(fibo(n));
    }
}

// This code is contributed by lokeshmvs21.
JavaScript
// Javascript code for the above approach:
// Function for calculating the nth
// Fibonacci number
function fibo(n)
{
    let ans = new Array(n + 1).fill(0);

    // Storing the independent values in the
    // answer array
    ans[0] = 0;
    ans[1] = 1;

    // Using the bottom-up approach
    for (let i = 2; i <= n; i++) {
        ans[i] = ans[i - 1] + ans[i - 2];
    }

    // Returning the final index
    return ans[n];
}

// Driver code
let n = 5;

// Function Call
console.log(fibo(n));

// contributed by akashish__

Output
5

Complexity analysis: 

  • Time complexity: O(n)
  • Auxiliary Space: O(n)

Optimization of the above method

  • In the above code, we can see that the current state of any Fibonacci number depends only on the previous two numbers.
  • Using this observation, we can conclude that we do not need to store the whole table of size n; it is enough to keep only the previous two values.
  • This way we can optimize the auxiliary space of the above code from O(n) to O(1).
C++
// C++ code for the above approach:
#include <iostream>
using namespace std;

// Function for calculating the nth Fibonacci number
int fibo(int n)
{
    // Base cases
    if (n <= 1) {
        return n;
    }

    int prevPrev, prev, curr;

    // Storing the independent values
    prevPrev = 0;
    prev = 1;
    curr = 1;

    // Using the bottom-up approach, keeping
    // only the previous two values
    for (int i = 2; i <= n; i++) {
        curr = prev + prevPrev;
        prevPrev = prev;
        prev = curr;
    }

    // Returning the final answer
    return curr;
}

// Driver code
int main()
{
    int n = 5;

    // Function Call
    cout << fibo(n);
    return 0;
}
Java
// Java code for the above approach:
public class Main {
    // Function for calculating the nth Fibonacci number
    public static int fibo(int n)
    {
        int prevPrev, prev, curr;

        // Storing the independent values
        prevPrev = 0;
        prev = 1;
        curr = 1;

        // Using the bottom-up approach
        for (int i = 2; i <= n; i++) {
            curr = prev + prevPrev;
            prevPrev = prev;
            prev = curr;
        }

        // Returning the final answer
        return curr;
    }

    // Driver code
    public static void main(String[] args)
    {
        int n = 5;

        // Function Call
        System.out.println(fibo(n));
    }
}
Python
# Python code for the above approach

# Function for calculating the nth Fibonacci number
def fibo(n):
    prevPrev, prev, curr = 0, 1, 1
    # Using the bottom-up approach
    for i in range(2, n+1):
        curr = prev + prevPrev
        prevPrev = prev
        prev = curr
    # Returning the final answer
    return curr

# Driver code
n = 5
# Function Call
print(fibo(n))
C#
using System;

public class Program
{
    // Function for calculating the nth Fibonacci number
    static int Fibo(int n)
    {
        int prevPrev, prev, curr;

        // Storing the independent values
        prevPrev = 0;
        prev = 1;
        curr = 1;

        // Using the bottom-up approach
        for (int i = 2; i <= n; i++)
        {
            curr = prev + prevPrev;
            prevPrev = prev;
            prev = curr;
        }

        // Returning the final answer
        return curr;
    }

    // Driver code
    static void Main()
    {
        int n = 5;

        // Function Call
        Console.WriteLine(Fibo(n));
    }
}
// This code is contributed by divyansh2212
JavaScript
// Function for calculating the nth Fibonacci number
function fibo(n)
{
    let prevPrev = 0, prev = 1, curr = 1;

    // Using the bottom-up approach
    for (let i = 2; i <= n; i++) {
        curr = prev + prevPrev;
        prevPrev = prev;
        prev = curr;
    }

    // Returning the final answer
    return curr;
}

// Driver code
let n = 5;

// Function Call
console.log(fibo(n));

Output
5

Common Algorithms that Use DP:

  • Fibonacci Numbers
  • Longest Common Subsequence (used in diff utilities)
  • Edit Distance
  • Bellman–Ford and Floyd Warshall shortest path algorithms
  • Matrix Chain Multiplication
  • 0/1 Knapsack

Advantages of Dynamic Programming (DP)

Dynamic programming has a wide range of advantages, including:

  • Avoids recomputing the same subproblems multiple times, leading to significant time savings.
  • Ensures that the optimal solution is found by considering all possible combinations.
  • Breaks down complex problems into smaller, more manageable subproblems.

Applications of Dynamic Programming (DP)

Dynamic programming has a wide range of applications, including:

  • Optimization: Knapsack problem, shortest path problem, maximum subarray problem
  • Computer Science: Longest common subsequence, edit distance, string matching
  • Operations Research: Inventory management, scheduling, resource allocation

Characteristics of Dynamic Programming Algorithm

  • For any problem, if there is a simple recursive solution whose recursion tree contains the same recursive calls multiple times (overlapping subproblems), we use DP.
  • The solutions to the subproblems are stored in a table or array, either bottom-up (tabulation) or top-down (memoization), to avoid redundant computation.
  • The solution to the overall problem can be constructed from the solutions to the subproblems.
  • Dynamic programming can be implemented with a recursive algorithm, where the solutions to subproblems are found recursively, or with an iterative algorithm, where the subproblems are solved in a specific order.

Dynamic Programming (DP) Introduction – FAQs

Is dynamic programming just recursion?

Dynamic programming and recursion are not the same thing. While dynamic programming often uses recursion, recursion by itself is not dynamic programming. Dynamic programming involves breaking a problem down into smaller subproblems, storing the solutions to these subproblems to avoid redundant computation, and using those stored solutions to construct the overall solution. Recursion, on the other hand, is simply a technique for solving a problem by breaking it into smaller instances of the same problem and solving them recursively.

How does dynamic programming work?

Dynamic Programming (DP) is a technique that solves certain types of problems in polynomial time. Dynamic programming solutions are faster than the exponential brute-force method, and their correctness can be easily proved. Dynamic programming works by breaking a problem down into smaller subproblems, solving each subproblem once, and using the solutions to these subproblems to construct the overall solution. The solutions to the subproblems are stored in a table or array, either top-down (memoization) or bottom-up (tabulation), to avoid redundant computation.

How are greedy algorithms similar to dynamic programming?

Greedy algorithms are similar to dynamic programming in the sense that both are tools for optimization and both are applied to optimization problems. However, while dynamic programming solves every relevant subproblem and combines their results, greedy algorithms make a locally optimal choice at each step in the hope of reaching a globally optimal solution.

What are the basics of Dynamic programming?

Dynamic programming is, at its core, recursion plus memoization (or tabulation): by solving each subproblem only once, it reduces the complexity of your code and makes it faster. The basic points are:

  • Breaking down a problem into smaller subproblems.
  • Solving each subproblem independently.
  • Storing the solutions to subproblems to avoid redundant computation.
  • Using the solutions to the subproblems to construct the overall solution.
  • Using the principle of optimality to ensure that the solution is optimal.

What are the advantages of Dynamic programming?

Dynamic programming is guaranteed to find the globally optimal solution among a set of possible solutions, provided that the problem satisfies the principle of optimality (optimal substructure). The solutions to subproblems are stored in a table and reused, which avoids redundant work and improves efficiency. Dynamic programming can be applied to a wide range of problems, including optimization, sequence alignment, and resource allocation.


