Dynamic Programming Tutorial

I’ll give you a topic: Dynamic programming is neither dynamic nor programming. Discuss!

Actually, dynamic programming refers to a class of algorithms (on par with divide and conquer and greedy algorithms). A popular interview trick is to throw a candidate a problem that can easily be solved with dynamic programming. Those who come prepared will recognize the problem structure and ace the interview. Everyone else sweats and struggles to come up with a naive solution.

This article will show you how to create a dynamic programming algorithm using examples from actual interviews. Also, I will give you some tips for identifying problems that are best solved with dynamic programming in case they come up in your own interviews.

An Air of Mystery

My undergraduate computer science curriculum covered dynamic programming – just barely. It wasn’t in the textbook, so my professor handed out a flyer with a single example: matrix chain multiplication. Even after hand-simulating it multiple times, I still thought it looked like magic. One classmate called the example “as clear as mud”.

Then at my first job after college, one of our software products was essentially a video game that trained lumberjacks to carve up a tree to maximize its value. It was the lamest video game ever – unless you were a lumberjack. At the end of the game, the computer would show you the optimal solution. My boss had implemented that part with dynamic programming. I was impressed that he (or anyone else!) understood dynamic programming well enough to apply it.

Overlapping Subproblems Illustrated

I have since discovered an easy way to understand dynamic programming. It begins with a bad solution to a simple problem: the dreaded recursive Fibonacci. No doubt you’ve seen this before:

    int fibonacci(int n)
    {
        if (n <= 0)
            return 0;
        else if (n == 1)
            return 1;
        else
            return fibonacci(n-1) + fibonacci(n-2);
    }

I almost hate to use this example because everyone’s eyes glaze over as their minds race ahead to the iterative solution. Stay with me for a moment. It will be worth it.

Take a look at the call tree for the recursive fibonacci(5):
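In text form, that call tree looks like this:

```
                         F(5)
                        /    \
                   F(4)        F(3)
                  /    \       /   \
              F(3)    F(2)   F(2)  F(1)
             /   \    /  \    /  \
          F(2) F(1) F(1) F(0) F(1) F(0)
          /  \
       F(1) F(0)
```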

This thing is really a dog in both running time and stack space. The worst thing is that it repeatedly calls itself with the same argument to recompute a value that was already computed. Direct your attention to the center of the diagram. F(2) is called by both F(4) and F(3). If we could take advantage of that fact and only call it once, the diagram would sort of fold together like this:

That’s better, but there’s still room for improvement. F(2) is still called twice. So is F(3). If we keep folding the diagram until all duplicates are removed, we end up with the Christmas-tree-looking thing:

See how that works? The idea is that we only call fibonacci(2) one time and reuse the result as necessary. The same goes for fibonacci(3), fibonacci(1), etc. An easy way to get started is to keep a table of results for every possible argument. Initialize it with some sentinel value such as -1. Before computing a result, check the table to see if it has already been computed. If so, simply return the result from the table. Otherwise, calculate it from scratch, store it in the table and return it.

    const int MAXFIB = 1000;
    int table[MAXFIB];

    // Call init_fibonacci first to initialize table
    void init_fibonacci()
    {
        int i;
        for (i = 0; i < MAXFIB; ++i)
            table[i] = -1;
    }

    int fibonacci(int n)
    {
        if (n < 0 || n >= MAXFIB)
            return 0;                   // Input out of range
        else if (table[n] < 0) {
            if (n == 0)
                table[n] = 0;
            else if (n == 1)
                table[n] = 1;
            else
                table[n] = fibonacci(n-1) + fibonacci(n-2);
        }
        return table[n];
    }

This is called the top-down approach to dynamic programming. The results stored in the table are said to be “memoized” (not “memorized” but you can think of it that way). When faced with a difficult problem it’s often easiest to come up with a recursive solution and add memoization. That should be good enough for interview work. The process of initializing the table and checking it is clunky though. It would be better to solve the deepest subproblems first and build on them to reach the top level result. That’s the bottom-up approach:

    int fibonacci(int n)
    {
        if (n < 0 || n >= MAXFIB)
            return 0;
        else {
            int i;
            for (i = 0; i <= n; ++i) {
                if (i == 0)
                    table[i] = 0;
                else if (i == 1)
                    table[i] = 1;
                else
                    table[i] = table[i-1] + table[i-2];
            }
        }
        return table[n];
    }

No more initializing the table. No more recursive calls. At this point you may recognize that it isn’t necessary to keep all n values in the table, because only the results of the previous two iterations are used. That observation leads to the familiar implementation of a Fibonacci function.
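Here is a minimal sketch of that familiar version (the name fibonacci_iterative is mine, to distinguish it from the versions above):

```cpp
// Iterative Fibonacci in O(1) space: only the previous two
// results are kept, in a and b.
int fibonacci_iterative(int n)
{
    if (n <= 0)
        return 0;
    int a = 0;              // F(i-2), starting at F(0)
    int b = 1;              // F(i-1), starting at F(1)
    for (int i = 2; i <= n; ++i) {
        int next = a + b;   // F(i) = F(i-1) + F(i-2)
        a = b;
        b = next;
    }
    return b;
}
```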

Lessons Learned

Fibonacci isn’t a very good example of a dynamic programming problem but it does illustrate a couple of important concepts. The first is that of overlapping subproblems. F(2) was repeatedly called when evaluating F(3) and F(4). Dynamic programming optimizes away overlapping subproblems by evaluating each once and storing the results in a table.

The other concept illustrated by Fibonacci is related to mathematical induction. We start with a basis where the result is known for certain small arguments. Then we take an inductive step: if we know the result for i-1, we can use it to calculate the result for i. Fibonacci actually needs two basis results to get started: F(0)=0 and F(1)=1. This is because the inductive step needs the previous two results.

In recursion, the basis happens at the bottom of the call tree where the subproblem is so small that the answer is trivial and can be returned immediately. The inductive step is where we make a recursive call to a smaller subproblem.

In dynamic programming, the basis is used to seed the table with one or two initial values. Iteration is used to carry out the inductive step and fill out the rest of the table. When designing a dynamic programming algorithm, think clearly about the basis. Ask yourself what the smallest subproblem is and what the correct result would be. Then consider the inductive step. If you are given the answer for f(i-1), how can you use it to solve for f(i)?

Maximum Subarray Problem

Now let’s look at a couple of dynamic programming problems that were actually asked during software engineer interviews at Google.

“Given an array of both positive and negative real numbers, find the subsequence with the greatest sum.”

If only positive numbers were allowed this would be easy. The subsequence equal to the entire sequence would have the greatest sum. Negative numbers make it interesting. A small negative number between two positive subsequences might be worth including whereas a large negative number would ruin an otherwise positive run.

So we’re given an array a[n] containing n real numbers. What’s the shortest possible length of a subsequence? That’s a good question to ask the interviewer. It could be 0, 1 or 2. I think 1 makes the most sense so let’s go with that. The sum of the subsequence starting at a[0] of length 1 is simply a[0] (the sum of one element). That’s our basis.

Now for the induction step. My first instinct would be to assume we had the solution over the first i-1 elements. If s[i-1] is the sum of the greatest subsequence over a[0] through a[i-1], how could we use that to solve for s[i]? The answer is that we can’t. The maximum subsequence so far might be on the “left”, ending before a[i-1]. It would be a mistake to consider adding a[i] to this subsequence because that would make it non-contiguous. We need another approach.

Instead, let p[i-1] equal the maximum run ending at a[i-1]. It may or may not include a[i-2], etc. but it must include a[i-1]. Now the induction step is p[i] = max(p[i-1] + a[i], a[i]). At each step, we either choose to extend the current run or start a new one, whichever results in a greater value for p[i]. Here’s a rough draft:

    p[0] = a[0];                        // basis
    for (i = 1; i < n; ++i)
        if (p[i-1] + a[i] > a[i])       // let p[i] = maximum
            p[i] = p[i-1] + a[i];
        else
            p[i] = a[i];

The maximum subsequence must end somewhere. Now that we have the maximum sums for subsequences ending at each element, the solution is simply the maximum value in the array p. We could write a second loop to scan p and choose the highest value but it’s easy enough to do that in the loop we already have. Furthermore, by combining these two loops we can replace the p array with a single variable:

    float maxsubarray(int n, float a[])
    {
        float sum = a[0];
        float maxsum = sum;
        for (int i = 1; i < n; ++i) {
            if (sum > 0)        // Eliminated a[i] from both sides of inequality
                sum = sum + a[i];
            else
                sum = a[i];
            if (sum > maxsum)
                maxsum = sum;
        }
        return maxsum;
    }
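As a quick sanity check, the function can be exercised on a small sample (the array below is my own example, not from the interview):

```cpp
// Repeated from above so this snippet stands alone
float maxsubarray(int n, float a[])
{
    float sum = a[0];       // best run ending at the current element
    float maxsum = sum;     // best run seen anywhere so far
    for (int i = 1; i < n; ++i) {
        if (sum > 0)
            sum = sum + a[i];
        else
            sum = a[i];
        if (sum > maxsum)
            maxsum = sum;
    }
    return maxsum;
}

// The best run in {-2, 1, -3, 4, -1, 2, 1, -5, 4} is
// {4, -1, 2, 1}, whose sum is 6, so maxsubarray returns 6.
```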

This simple example illustrates the two most important clues that this is a dynamic programming question:

  1. You’re trying to maximize or minimize some value.
  2. The order of the elements is fixed (reordering is not allowed).

Minimum Palindrome Partitioning

Let’s finish with a more complex example. Again, this question is taken from an actual interview at Google:

“Given a string, split it into as few substrings as possible such that each substring is a palindrome.”

This is a really interesting problem. If you think there’s an easy solution then you may not understand the question. You may be tempted to write a greedy algorithm:

  • Find longest palindrome beginning at s[0]
  • Split it off
  • Repeat on remainder of string

That sounds good but consider this example: “ABAABABA”. The greedy algorithm will split it up like this: ABAABA + B + A. That’s three palindromes. The correct solution is two palindromes: ABA + ABABA.

You should recognize this as a dynamic programming problem because you’re trying to minimize some value (the number of palindromes) and the order is fixed (you may not rearrange the letters in the string).

Let’s use a one-dimensional table m[i] to record the minimum number of palindromes in the substring beginning at s[0] and ending at s[i]. The basis is m[0] = 1: the one-character substring s[0] is a palindrome by definition. When we’re done, the final result will be in m[n-1], the minimum number of palindromes in the whole string.

First we will find the minimum number of palindromes in the first two characters of the string, then three, and so on. Given m[i-1], the worst case scenario is m[i] = m[i-1] + 1. This means that we can do no better than splitting off s[i] into its own palindrome and adding it to the minimum number of palindromes that came before. To see if we can do better, consider substrings of all lengths that end at s[i]. For each one that is a palindrome, add 1 to the minimum number of palindromes that precede it and see if that’s a better result than what we already have.

This is the induction step. We use previously computed results of subproblems to solve larger and larger subproblems until we have a solution for the entire string.

    // return true if the substring of s from index left to right is a palindrome
    bool ispalindrome(string s, int left, int right)
    {
        // Ignore middle character in odd length strings
        while (left < right)
            if (s[left++] != s[right--])
                return false;
        return true;
    }

    // return the minimum number of palindromes into which s may be partitioned
    int minpalindromes(string s)
    {
        int n = s.size();

        // m[i] = minimum number of palindromes in s[0] through s[i]
        int m[n];               // n-element table (a VLA; standard C++ would use std::vector)
        m[0] = 1;               // Basis: s[0] is a single palindrome

        // Start at second character in string and work toward end
        for (int right = 1; right < n; ++right)
        {
            // Initialize m[right] to worst case
            // Try to improve upon it below
            m[right] = m[right-1] + 1;

            // Start at right-1 (substring of length 2) and work to front
            for (int left = right-1; left >= 0; --left)
                if (ispalindrome(s, left, right))
                {
                    // Substring from s[left] to s[right] is a palindrome
                    //
                    // If it starts from the beginning of the string then
                    // that's 1 palindrome. You can't do any better than that.
                    //
                    // Otherwise, add 1 for this palindrome to the minimum
                    // number of palindromes from s[0] to s[left-1]. If
                    // that's better than the current result, save it.
                    if (left == 0)
                        m[right] = 1;
                    else if (m[left-1] + 1 < m[right])
                        m[right] = m[left-1] + 1;
                }
        }

        return m[n-1];
    }
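To check the routine against the greedy counterexample from earlier, here is a condensed, self-contained restatement of the same algorithm (my own variant, with std::vector in place of the variable-length array; it assumes a non-empty string, like the original):

```cpp
#include <algorithm>
#include <string>
#include <vector>
using namespace std;

static bool is_palin(const string& s, int left, int right)
{
    while (left < right)
        if (s[left++] != s[right--])
            return false;
    return true;
}

// Same algorithm as minpalindromes() above, condensed
int min_palindromes(const string& s)
{
    int n = s.size();
    vector<int> m(n);
    m[0] = 1;                           // basis: s[0] alone is one palindrome
    for (int right = 1; right < n; ++right) {
        m[right] = m[right-1] + 1;      // worst case: split off s[right] by itself
        for (int left = right-1; left >= 0; --left)
            if (is_palin(s, left, right))
                m[right] = (left == 0) ? 1 : min(m[right], m[left-1] + 1);
    }
    return m[n-1];
}

// min_palindromes("ABAABABA") == 2: ABA + ABABA beats the greedy split.
```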

This takes O(n^3) time and O(n) space. Usually you can make a time-space tradeoff to get the running time of dynamic programming down to O(n^2). In this case, the trick is to replace the O(n) function ispalindrome() in the inner loop with a 2D lookup table ispalindrome[left][right]. We’ll only use the upper triangle of the matrix because left <= right (i.e., we won’t consider substrings that run backwards). In fact, we won’t bother to set the ispalindrome[i][i] diagonal either: substrings of length one are always palindromes, so you don’t need a table lookup to know that. The table itself can be initialized with dynamic programming from the bottom up. The key observation is that the substring from s[left] to s[right] is a palindrome if and only if s[left] == s[right] and the “interior” substring from s[left+1] to s[right-1] is also a palindrome. If there are only one or zero characters between left and right, we don’t need to check the table at all.

    int minpalindromes(string s)
    {
        int n = s.size();
        int m[n];
        m[0] = 1;

        // 2D table of booleans (again a VLA; standard C++ would use std::vector).
        // Entries are filled in on the fly below.
        bool ispalindrome[n][n];

        for (int right = 1; right < n; ++right)
        {
            m[right] = m[right-1] + 1;
            for (int left = right-1; left >= 0; --left)
                // If substring begins and ends with the same letter and
                //     the interior substring is 0 or 1 characters or
                //     the interior substring is a palindrome
                // then the substring is a palindrome.
                // In any event, initialize ispalindrome[left][right]
                if (s[left] == s[right] && (right - left <= 2 || ispalindrome[left+1][right-1])) {
                    ispalindrome[left][right] = true;
                    if (left == 0)
                        m[right] = 1;
                    else if (m[left-1] + 1 < m[right])
                        m[right] = m[left-1] + 1;
                }
                else
                    ispalindrome[left][right] = false;
        }

        return m[n-1];
    }

Now it takes O(n^2) time and O(n^2) space. Much better.

Conclusion

The most important skill for a programming interview is to recognize the class of a problem so you can quickly get on the right path to a solution. The earmarks of a dynamic programming problem are:

  1. Optimizing a function over a sequence, especially when you must partition the entire sequence.
  2. The elements have a fixed left-to-right order: characters in a string, leaves in a tree, etc.
  3. Overlapping subproblems that will be evaluated repeatedly by a recursive solution.

When faced with a dynamic programming problem:

  1. Think about your basis and induction steps.
  2. Create a table to hold results of subproblems.
  3. If a recursive solution jumps out at you, write it down and add memoization.
