Data Structures & Algorithms · March 31, 2026

Dynamic Programming with Python: Practical Examples Guide

Master dynamic programming through practical Python examples. This comprehensive guide walks you from basic recursion to optimized DP solutions, with real code examples for Fibonacci, knapsack, and grid problems. Perfect for coding interview preparation.

Dynamic Programming with Python: A Practical Guide

Dynamic programming is one of the most powerful techniques in a programmer’s toolkit, and one of the most intimidating for beginners. When I first encountered dynamic programming in Python, I struggled to see how breaking problems into overlapping subproblems could lead to elegant, efficient solutions.

But here’s the truth: once you understand the core patterns, dynamic programming becomes a systematic, almost mechanical process. In this guide, we’ll explore dynamic programming in Python through concrete examples that you can run, modify, and debug yourself. Whether you’re preparing for technical interviews or building your algorithmic intuition, these examples will transform DP from a mysterious concept into a practical tool.

If you’re completely new to DP, I recommend starting with our Introduction to Dynamic Programming: A Beginner’s Guide first. For everyone else, let’s dive into the code.

What Makes Dynamic Programming Different?

Dynamic programming solves problems by breaking them into smaller subproblems, but with a critical twist: it remembers past results to avoid redundant calculations. This memoization or tabulation approach transforms exponential-time recursive solutions into efficient polynomial-time algorithms.

Consider the difference:

 

Python

# Without DP: O(2^n) - Avoid this!
def fibonacci_naive(n):
    if n <= 1:
        return n
    return fibonacci_naive(n-1) + fibonacci_naive(n-2)

# With DP memoization: O(n)
def fibonacci_memo(n, memo=None):
    if memo is None:  # avoid a mutable default argument shared across calls
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fibonacci_memo(n-1, memo) + fibonacci_memo(n-2, memo)
    return memo[n]

 

The difference becomes dramatic as n grows. For n=40, the naive version makes over 300 million recursive calls, while the memoized version makes just 79. This is the power of dynamic programming in action.
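You can verify these counts yourself by instrumenting both versions with a call counter (a quick sketch; the helper names are mine):

```python
def count_naive(n):
    """Count recursive calls made by the naive Fibonacci."""
    calls = 0
    def fib(k):
        nonlocal calls
        calls += 1
        if k <= 1:
            return k
        return fib(k - 1) + fib(k - 2)
    fib(n)
    return calls

def count_memo(n):
    """Count recursive calls made by the memoized Fibonacci."""
    calls = 0
    def fib(k, memo):
        nonlocal calls
        calls += 1
        if k in memo:
            return memo[k]
        if k <= 1:
            return k
        memo[k] = fib(k - 1, memo) + fib(k - 2, memo)
        return memo[k]
    fib(n, {})
    return calls

print(count_naive(20))  # 21,891 calls already at n=20
print(count_memo(20))   # just 39 (and 79 at n=40)
```

Running the naive counter for n=40 itself takes a long time, which is rather the point.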

For a deeper understanding of algorithmic efficiency, check out Understanding Time Complexity in Python.

The Two Flavors of Dynamic Programming

Before diving into examples, understand that dynamic programming solutions typically follow one of two approaches:

Top-Down (Memoization)

Start with the original problem and recursively break it down, caching results as you go. This approach often feels more intuitive because it mirrors how we naturally think about problems.

Bottom-Up (Tabulation)

Start with the smallest subproblems and build up to the original. This approach is usually more space-efficient and avoids recursion depth limits.

Let’s see both in action:

Python

# Top-Down Memoization
def climb_stairs_memo(n, memo=None):
    if memo is None:  # fresh cache per top-level call
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 2:
        return n
    memo[n] = climb_stairs_memo(n-1, memo) + climb_stairs_memo(n-2, memo)
    return memo[n]

# Bottom-Up Tabulation  
def climb_stairs_tab(n):
    if n <= 2:
        return n
    dp = [0] * (n + 1)
    dp[1], dp[2] = 1, 2
    for i in range(3, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

 

Both solve the same problem—counting distinct ways to climb n stairs taking 1 or 2 steps at a time—but with different implementation strategies.

Essential Dynamic Programming Patterns

Through years of solving algorithmic problems, I’ve noticed that most DP problems fall into distinct patterns. Recognizing these patterns is key to mastering dynamic programming in Python.

1. Fibonacci Style (Linear DP)

These problems involve making decisions that affect one state variable. Classic examples include climbing stairs, house robber, and decoding ways.

 

Python

def house_robber(nums):
    """
    Problem: Rob houses without alerting police (no adjacent houses)
    Pattern: At each house, decide to rob or skip
    """
    if not nums:
        return 0
    if len(nums) <= 2:
        return max(nums)

    dp = [0] * len(nums)
    dp[0] = nums[0]
    dp[1] = max(nums[0], nums[1])

    for i in range(2, len(nums)):
        dp[i] = max(dp[i-1], dp[i-2] + nums[i])

    return dp[-1]

# Space-optimized version
def house_robber_optimized(nums):
    prev2, prev1 = 0, 0
    for num in nums:
        current = max(prev1, prev2 + num)
        prev2 = prev1
        prev1 = current
    return prev1

 

The space optimization here demonstrates a crucial insight: we only need the last two values. This kind of optimization is covered in our Algorithm Optimization Mistakes Beginners Must Avoid.

2. Grid Problems (2D DP)

When your problem involves navigating a grid or matrix, you’ll often need 2D DP tables.

 

Python

def unique_paths(m, n):
    """
    Problem: Count paths from top-left to bottom-right (only down/right moves)
    Pattern: Paths to cell = paths from above + paths from left
    """
    dp = [[1] * n for _ in range(m)]

    for i in range(1, m):
        for j in range(1, n):
            dp[i][j] = dp[i-1][j] + dp[i][j-1]

    return dp[m-1][n-1]

# With obstacles
def unique_paths_with_obstacles(obstacleGrid):
    if not obstacleGrid or obstacleGrid[0][0] == 1:
        return 0

    m, n = len(obstacleGrid), len(obstacleGrid[0])
    dp = [[0] * n for _ in range(m)]
    dp[0][0] = 1

    for i in range(m):
        for j in range(n):
            if obstacleGrid[i][j] == 1:
                dp[i][j] = 0
                continue
            if i > 0:
                dp[i][j] += dp[i-1][j]
            if j > 0:
                dp[i][j] += dp[i][j-1]

    return dp[m-1][n-1]

 

For more practice with 2D problems, our Two Pointer Technique | Master Array Problems in 8 Steps offers complementary strategies.

3. Knapsack Problems (Decision DP)

The knapsack family is where dynamic programming truly shines. These problems involve selecting items with given weights and values.

Python

def knapsack_01(weights, values, capacity):
    """
    0/1 Knapsack: Each item can be taken at most once
    """
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]

    for i in range(1, n + 1):
        for w in range(capacity + 1):
            if weights[i-1] <= w:
                dp[i][w] = max(
                    values[i-1] + dp[i-1][w - weights[i-1]],
                    dp[i-1][w]
                )
            else:
                dp[i][w] = dp[i-1][w]

    return dp[n][capacity]

# Space-optimized version
def knapsack_optimized(weights, values, capacity):
    n = len(weights)
    dp = [0] * (capacity + 1)

    for i in range(n):
        for w in range(capacity, weights[i] - 1, -1):
            dp[w] = max(dp[w], values[i] + dp[w - weights[i]])

    return dp[capacity]

 

The space-optimized version iterates capacity in reverse so that each item is used at most once. This subtle but crucial detail is covered in Algorithm Optimization Mistakes Beginners Must Avoid.
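To see why the loop direction matters, try a single item with both orders (a small demonstration I added; iterating capacity upward silently turns 0/1 knapsack into unbounded knapsack):

```python
def knapsack_1d(weights, values, capacity, forward=False):
    """1D knapsack; forward=True iterates capacity upward (buggy for 0/1)."""
    dp = [0] * (capacity + 1)
    for i in range(len(weights)):
        order = (range(weights[i], capacity + 1) if forward
                 else range(capacity, weights[i] - 1, -1))
        for w in order:
            dp[w] = max(dp[w], values[i] + dp[w - weights[i]])
    return dp[capacity]

# One item of weight 2 and value 3, capacity 4:
print(knapsack_1d([2], [3], 4))                # 3: the item is taken once
print(knapsack_1d([2], [3], 4, forward=True))  # 6: the item is reused - wrong for 0/1
```

Going backward, `dp[w - weights[i]]` still holds the value from the previous item's row, so the current item cannot combine with itself.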

4. Subsequence Problems (String DP)

String-based DP problems often involve building matrices to compare prefixes.

 

Python

def longest_common_subsequence(text1, text2):
    """
    Find the length of the longest common subsequence
    """
    m, n = len(text1), len(text2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]

    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if text1[i-1] == text2[j-1]:
                dp[i][j] = 1 + dp[i-1][j-1]
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])

    return dp[m][n]

# Reconstruct the actual subsequence
def lcs_with_path(text1, text2):
    m, n = len(text1), len(text2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]

    # Fill DP table
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if text1[i-1] == text2[j-1]:
                dp[i][j] = 1 + dp[i-1][j-1]
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])

    # Reconstruct LCS
    lcs = []
    i, j = m, n
    while i > 0 and j > 0:
        if text1[i-1] == text2[j-1]:
            lcs.append(text1[i-1])
            i -= 1
            j -= 1
        elif dp[i-1][j] > dp[i][j-1]:
            i -= 1
        else:
            j -= 1

    return ''.join(reversed(lcs))

 

For more string algorithm practice, see our Binary Search Explained: Algorithm, Examples, & Edge Cases guide.

Real-World Optimization: From Brute Force to Dynamic Programming

One of the best ways to understand dynamic programming in Python is to watch the optimization process unfold. Let’s solve a classic problem step by step.

Problem: Given an array of positive integers, find the maximum sum of non-adjacent elements.

Step 1: Brute Force Recursion

 

Python

def max_sum_naive(arr, i=0):
    """Try all combinations - O(2^n)"""
    if i >= len(arr):
        return 0

    # Either take current element (skip next) or skip current
    take = arr[i] + max_sum_naive(arr, i + 2)
    skip = max_sum_naive(arr, i + 1)

    return max(take, skip)

 

This works but becomes impossibly slow for arrays over size 30. The problem? Massive overlapping subproblems.

Step 2: Top-Down with Memoization

 

Python

def max_sum_memo(arr, i=0, memo=None):
    """Cache results - O(n) time and space"""
    if memo is None:
        memo = {}
    if i in memo:
        return memo[i]
    if i >= len(arr):
        return 0

    take = arr[i] + max_sum_memo(arr, i + 2, memo)
    skip = max_sum_memo(arr, i + 1, memo)

    memo[i] = max(take, skip)
    return memo[i]

 

Now we’re in O(n) territory. Each index is computed once and cached.

Step 3: Bottom-Up Tabulation

 

Python

def max_sum_tab(arr):
    """Build solution from ground up - O(n) time and space"""
    if not arr:
        return 0
    if len(arr) <= 2:
        return max(arr)

    dp = [0] * len(arr)
    dp[0] = arr[0]
    dp[1] = max(arr[0], arr[1])

    for i in range(2, len(arr)):
        dp[i] = max(dp[i-1], dp[i-2] + arr[i])

    return dp[-1]

 

Step 4: Space-Optimized

 

Python

def max_sum_optimized(arr):
    """Constant space - O(n) time, O(1) space"""
    if not arr:
        return 0

    prev2, prev1 = 0, arr[0]

    for i in range(1, len(arr)):
        current = max(prev1, prev2 + arr[i])
        prev2 = prev1
        prev1 = current

    return prev1

 

This progression from O(2^n) to O(n) time and O(1) space exemplifies the power of dynamic programming in Python. Each optimization step is covered in depth in our Brute Force vs Optimal Solutions | Algorithm Optimization Guide.

Common Dynamic Programming Interview Problems

Let’s tackle some frequently asked interview problems using dynamic programming in Python.

Problem 1: Coin Change

 

Python

def coin_change(coins, amount):
    """
    Find minimum coins needed to make amount
    """
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0

    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                dp[i] = min(dp[i], dp[i - coin] + 1)

    return dp[amount] if dp[amount] != float('inf') else -1

# Count number of ways
def coin_change_ways(coins, amount):
    dp = [0] * (amount + 1)
    dp[0] = 1

    for coin in coins:
        for i in range(coin, amount + 1):
            dp[i] += dp[i - coin]

    return dp[amount]

 

Problem 2: Edit Distance

 

Python

def edit_distance(word1, word2):
    """
    Minimum operations to convert word1 to word2
    Operations: insert, delete, replace
    """
    m, n = len(word1), len(word2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]

    # Base cases
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j

    # Fill DP table
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if word1[i-1] == word2[j-1]:
                dp[i][j] = dp[i-1][j-1]
            else:
                dp[i][j] = 1 + min(
                    dp[i-1][j],    # delete
                    dp[i][j-1],    # insert
                    dp[i-1][j-1]   # replace
                )

    return dp[m][n]

 

Problem 3: Palindrome Partitioning

 

Python

def min_cut_palindrome(s):
    """
    Minimum cuts needed for all substrings to be palindromes
    """
    n = len(s)

    # Precompute palindrome table
    is_pal = [[False] * n for _ in range(n)]

    # Single letters are palindromes
    for i in range(n):
        is_pal[i][i] = True

    # Check longer substrings
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            if length == 2:
                is_pal[i][j] = (s[i] == s[j])
            else:
                is_pal[i][j] = (s[i] == s[j] and is_pal[i+1][j-1])

    # DP for minimum cuts
    cuts = [float('inf')] * n
    for i in range(n):
        if is_pal[0][i]:
            cuts[i] = 0
        else:
            for j in range(i):
                if is_pal[j+1][i]:
                    cuts[i] = min(cuts[i], cuts[j] + 1)

    return cuts[n-1]

 

For more interview-focused problems, check How to Approach Hard LeetCode Problems | A Strategic Framework.

Debugging Dynamic Programming Solutions

Even experienced developers slip up when writing dynamic programming in Python. Here are common pitfalls and how to catch them:

1. Off-by-One Errors

 

Python

# Wrong - missing last element
def wrong_fib(n):
    dp = [0] * n  # Should be n+1
    dp[0], dp[1] = 0, 1
    for i in range(2, n):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n-1]  # Off-by-one

# Right
def correct_fib(n):
    if n <= 1:
        return n  # guard: dp[1] below would raise IndexError for n=0
    dp = [0] * (n + 1)
    dp[0], dp[1] = 0, 1
    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

 

2. Incorrect Base Cases

 

Python

# Wrong: doesn't handle n=0 or n=1
def climb_stairs_base_case(n):
    dp = [0] * (n + 1)
    dp[1] = 1  # IndexError for n=0
    dp[2] = 2  # IndexError for n=1
    for i in range(3, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

# Right
def climb_stairs_correct(n):
    if n <= 2:
        return max(1, n)  # Handles n=0,1,2
    dp = [0] * (n + 1)
    dp[1], dp[2] = 1, 2
    for i in range(3, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

 

For more debugging strategies, our Complete Python Debugging and Error Handling Series is invaluable.

Optimization Techniques Beyond Basic DP

Once you’ve mastered the basics of dynamic programming in Python, consider these advanced optimizations:

State Compression

When you only need previous rows, reduce space complexity:

 

Python

def min_path_sum(grid):
    """Minimum path sum from top-left to bottom-right"""
    m, n = len(grid), len(grid[0])

    # Use single array instead of 2D
    dp = [float('inf')] * n
    dp[0] = 0

    for i in range(m):
        dp[0] += grid[i][0]
        for j in range(1, n):
            dp[j] = min(dp[j], dp[j-1]) + grid[i][j]

    return dp[n-1]

 

Prefix Sums for Range Queries

 

Python

def range_sum_query(nums):
    """Precompute prefix sums for O(1) range sum queries"""
    prefix = [0]
    for num in nums:
        prefix.append(prefix[-1] + num)

    def query(left, right):
        return prefix[right + 1] - prefix[left]

    return query

 

These techniques are explored further in Mastering Optimization Techniques for Algorithmic Problems.

Building Your Dynamic Programming Intuition

Mastering dynamic programming in Python isn’t about memorizing solutions; it’s about pattern recognition. Here’s my recommended practice approach:

  1. Start with brute force - Always implement the naive recursive solution first
  2. Identify overlapping subproblems - Look for repeated calculations
  3. Add memoization - Cache results to eliminate redundancy
  4. Convert to tabulation - Build bottom-up solutions
  5. Optimize space - Reduce memory usage where possible

This systematic approach, detailed in Problem-Solving Strategies for Coding Interviews, will serve you well.

Frequently Asked Questions

Q: When should I use dynamic programming vs. greedy algorithms?

Dynamic programming is appropriate when the problem has overlapping subproblems and optimal substructure. Greedy algorithms work when local optimal choices lead to global optimal solutions. For example, coin change with unlimited coins of standard denominations (US coins) works greedily, but arbitrary denominations require DP.
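As a concrete illustration (my own example, with `coin_change` restated so the snippet is self-contained), greedy fails for coins {1, 3, 4} and amount 6, while DP finds the optimum:

```python
def greedy_coin_change(coins, amount):
    """Always take the largest coin that still fits."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count if amount == 0 else -1

def coin_change(coins, amount):
    """DP: minimum coins for each sub-amount up to the target."""
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                dp[i] = min(dp[i], dp[i - coin] + 1)
    return dp[amount] if dp[amount] != float('inf') else -1

print(greedy_coin_change([1, 3, 4], 6))  # 3 coins: 4 + 1 + 1
print(coin_change([1, 3, 4], 6))         # 2 coins: 3 + 3
```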

Q: How do I recognize dynamic programming problems in interviews?

Look for keywords like “maximum/minimum,” “number of ways,” “longest/shortest subsequence,” or constraints that suggest exponential solutions. Problems where you make sequential decisions that affect future choices are often DP problems.

Q: What’s the best way to practice dynamic programming with Python?

Start with classic problems (Fibonacci, climb stairs, house robber) before moving to medium and hard problems. Implement both top-down and bottom-up solutions. Use Python’s built-in @lru_cache decorator for quick memoization during practice, but learn to implement caching manually for interviews.
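For reference, the @lru_cache approach mentioned above takes a single decorator line:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache; functools memoizes by argument
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))      # 102334155, computed in O(n) thanks to the cache
fib.cache_clear()   # reset the cache between independent test cases
```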

Q: How do I handle large constraints in dynamic programming problems?

For large constraints, focus on:

  • Space optimization (rolling arrays)
  • Using appropriate data structures (deques for sliding windows)
  • Considering meet-in-the-middle for very large state spaces
  • Using iterative over recursive approaches to avoid stack limits
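The last point is easy to demonstrate: CPython’s default recursion limit is around 1000, so a memoized recursion over a few thousand states blows the stack where tabulation runs fine (a sketch I added; function names are mine):

```python
def fib_memo(n, memo=None):
    """Top-down: recursion depth grows linearly with n."""
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

def fib_tab(n):
    """Bottom-up: constant stack depth, no recursion at all."""
    prev2, prev1 = 0, 1
    for _ in range(n):
        prev2, prev1 = prev1, prev2 + prev1
    return prev2

try:
    fib_memo(5000)  # recursion depth ~5000 exceeds the default limit
except RecursionError:
    print("top-down blew the stack")

fib_tab(5000)       # the iterative version handles n=5000 without issue
print("bottom-up succeeded")
```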

Q: What are common mistakes in dynamic programming with Python?

Common mistakes include:

  • Incorrect base cases
  • Off-by-one errors in array indices
  • Not handling edge cases (empty input, single element)
  • Using mutable default arguments in memoization
  • Forgetting to reset memoization between test cases
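The last two pitfalls combine badly. With a mutable default like `memo={}`, the cache silently survives across top-level calls, so a second call with different input can return a stale answer (a contrived demo; the function name is mine):

```python
def max_sum_bad(arr, i=0, memo={}):  # BUG: memo persists across calls
    if i in memo:
        return memo[i]
    if i >= len(arr):
        return 0
    memo[i] = max(arr[i] + max_sum_bad(arr, i + 2, memo),
                  max_sum_bad(arr, i + 1, memo))
    return memo[i]

print(max_sum_bad([2, 7, 9]))  # 11 (2 + 9): correct on the first call
print(max_sum_bad([100, 1]))   # 11 again: stale cache; the right answer is 100
```

Using `memo=None` with `if memo is None: memo = {}` inside the function, as in the earlier examples, gives each top-level call a fresh cache.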

Q: Can dynamic programming solve all optimization problems?

No, dynamic programming only works for problems with optimal substructure and overlapping subproblems. Problems without these properties require different approaches like greedy algorithms, divide-and-conquer, or specialized techniques.

Q: How do I choose between memoization and tabulation?

Memoization is often easier to implement from a recursive solution and only computes needed states. Tabulation typically uses less space (no recursion stack) and can be more efficient when all states must be computed. Start with memoization, optimize to tabulation when needed.

Q: What Python features are most helpful for dynamic programming?

Python’s dictionary for memoization, @lru_cache decorator, list comprehensions for DP table initialization, and the ability to use negative indices are invaluable. Python’s readability also makes debugging DP solutions easier.
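One caveat on table initialization: multiplying a list of lists aliases the rows, which is a classic source of corrupted DP tables. Use a comprehension instead:

```python
bad = [[0] * 3] * 2             # both rows are the same list object
bad[0][0] = 1
print(bad)                      # [[1, 0, 0], [1, 0, 0]] - both rows changed!

good = [[0] * 3 for _ in range(2)]
good[0][0] = 1
print(good)                     # [[1, 0, 0], [0, 0, 0]] - independent rows
```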

Q: How many dynamic programming problems should I practice?

Aim for 30-50 diverse DP problems covering different patterns (Fibonacci, knapsack, LCS, etc.). Quality matters more than quantity—truly understanding why the solution works is crucial.

Q: Where can I find more dynamic programming practice problems?

Platforms like LeetCode (DP section), HackerRank (DP track), and Codeforces offer hundreds of problems. Our Building Problem-Solving Skills as a Developer | Engineering Mindset guide provides additional resources and strategies.

 

Conclusion

Dynamic programming transforms intimidating algorithmic challenges into solvable problems through systematic thinking and code. We’ve explored everything from basic memoization to complex optimizations, all with practical, runnable Python code.

The key takeaways:

  • Recognize DP problems by their optimal substructure and overlapping subproblems
  • Master both top-down (memoization) and bottom-up (tabulation) approaches
  • Practice pattern recognition across Fibonacci, grid, knapsack, and string problems
  • Debug systematically by checking base cases and indices
  • Optimize progressively from brute force to space-efficient solutions
     

Remember, becoming proficient in dynamic programming takes practice. Each problem you solve builds intuition for the next. Start with the examples here, then tackle problems on platforms like LeetCode, always asking yourself: “Can I break this into smaller subproblems? Will caching help?”

For a complete learning path, explore our Complete Data Structures & Algorithms Series and Mastering Data Structures for Coding Interviews | Step-by-Step Roadmap. With consistent practice and the systematic approach outlined here, you’ll be solving DP problems with confidence in no time.


Related Posts

Binary Search Explained: Algorithm, Examples, & Edge Cases

Master the binary search algorithm with clear, step-by-step examples. Learn how to implement efficient searches in sorted arrays, avoid common …

Mar 11, 2026
How to Approach Hard LeetCode Problems | A Strategic Framework

Master the mental framework and strategies to confidently break down and solve even the most challenging LeetCode problems.

Mar 06, 2026
Two Pointer Technique | Master Array Problems in 8 Steps

Master the two-pointer technique to solve complex array and string problems efficiently. This guide breaks down patterns, provides step-by-step examples, …

Mar 11, 2026
