Fibonacci Series and Dynamic Programming

This article has been published at Function Space.

(This is just my backup copy; do check it out on FS for a better experience.)

One of the most beautiful occurrences of mathematics in nature is the Fibonacci numbers [1]. Named after Leonardo Fibonacci, the series dates back to ancient history, with origins in Indian mathematics [2].

The beauty of the series is enhanced by its simple definition: x(n+2) = x(n+1) + x(n).

So starting from x(0) = 0 and x(1) = 1, the series progresses infinitely as: 0, 1, 1, 2, 3, 5, 8, 13, …

The problem we’ll be discussing in this article is: How do we go about finding the nth Fibonacci number?

The definition above leads to an intuitive recursive algorithm as follows:

long long Fib(int n)
{
    if (n == 0)
        return 0;

    if (n == 1)
        return 1;

    return Fib(n - 1) + Fib(n - 2);
}

Looks beautiful! But what’s the complexity?

Let T(n) be the running time of the above algorithm. Ignoring the constant work done in each call, T(n) can be written as a recurrence relation as follows:

T(n) = T(n-1) + T(n-2)

where T(1) and T(0) are O(1). To solve the above recurrence, we shall use the substitution method. We observe that the recursion grows exponentially: one problem creates two sub-problems, which in turn create two sub-problems each, and so on. So, for our guess in the substitution method, we assume T(n) to be of the exponential form λ^n. Substituting into the recurrence:

λ^n = λ^(n-1) + λ^(n-2)

Dividing throughout by λ^(n-2):

λ^2 = λ + 1

Solving this quadratic, we get λ = (1 + √5)/2, rejecting the negative root.

This number is in fact another mathematical jewel, known as the Golden Ratio. The ratio of successive Fibonacci numbers approaches the Golden Ratio! Coming back to our problem: we plug this value back into our assumption and get the complexity of the above algorithm as O(((1 + √5)/2)^n), which is indeed exponential. Hence, we found a simple way to find the nth Fibonacci number, but we paid a huge price for it. At this point, the usual question that any algorithm designer asks is “Can we do better?”.
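To see this exponential blowup concretely, here is a minimal instrumented sketch of the naive algorithm. The calls counter and the driver loop are my additions, purely for measurement; they are not part of the algorithm itself:

#include <stdio.h>

static long long calls = 0;      /* counts every invocation of Fib (added for measurement) */

long long Fib(int n)
{
    calls++;
    if (n <= 1)
        return n;
    return Fib(n - 1) + Fib(n - 2);
}

int main(void)
{
    for (int n = 10; n <= 40; n += 10) {
        calls = 0;
        long long f = Fib(n);
        printf("Fib(%d) = %lld computed with %lld calls\n", n, f, calls);
    }
    return 0;
}

The call count multiplies by roughly ((1 + √5)/2)^10 ≈ 123 every time n grows by 10, matching the bound we just derived.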

Indeed! Dynamic programming to the rescue!

Let’s make a couple of observations about the above algorithm. Below is the tree that will be formed while solving Fib(5).

[Figure: recursion tree formed while solving Fib(5)]

You can observe that to compute Fib(5), we calculate Fib(3) twice, Fib(2) thrice, and Fib(1) five times. And this is just the case for Fib(5); imagine calculating the 100th Fibonacci number, or the millionth! Repeating the exact same computation many times is a cry for help: we can save all this effort by computing each result only once and reusing it every other time. We achieve that by storing results the first time we compute them; instead of doing the same computation again, we simply look the result up from memory. This technique of avoiding repeated calculation of results by storing them is called memoization and is used in Dynamic Programming.

The resultant algorithm using memoization looks like this:

#define MAXN 1000                /* upper bound on n, assumed for this sketch */

long long M[MAXN];               /* memo table: M[i] = -1 means Fib(i) is not yet
                                    computed; M[0] = 0 and M[1] = 1 are set before
                                    the first call (see below) */

long long Fib(int n)
{
    if (n == 0)
        return M[0];

    if (n == 1)
        return M[1];

    if (M[n - 2] == -1)          /* Fib(n-2) is not already calculated */
        Fib(n - 2);

    if (M[n - 1] == -1)          /* Fib(n-1) is not already calculated */
        Fib(n - 1);

    /* Store the nth Fibonacci number in memory & use the previous results. */
    M[n] = M[n - 1] + M[n - 2];

    return M[n];
}

Here, instead of blindly calling Fib(n-1) and Fib(n-2), we first check whether their results are already present in memory, and each computation stores its result in memory for future reference.
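For completeness, the memo table must be filled with sentinels before the first call. Here is a minimal driver continuing the sketch above; the InitMemo helper, the -1 sentinel, and the MAXN bound are assumptions of this sketch, not part of the original algorithm:

#include <stdio.h>

void InitMemo(void)
{
    M[0] = 0;                    /* base cases stored up front */
    M[1] = 1;
    for (int i = 2; i < MAXN; i++)
        M[i] = -1;               /* -1 marks "not yet computed" */
}

int main(void)
{
    InitMemo();
    printf("Fib(50) = %lld\n", Fib(50));
    return 0;
}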

The tree for this new algorithm will look something like this:

[Figure: recursion tree of the memoized algorithm, with reused vs. computed results]

Here, blue boxes represent results reused from memory and pink boxes represent results being computed. To compute Fib(n), there are n−1 steps computing Fib(2), Fib(3), …, Fib(n) successively, each making use of the previous computations. Hence, the complexity comes down to O(n): from exponential to linear!

This is an example of a dynamic programming algorithm: the bigger problem is broken down into a series of sub-problems and their results are combined. A dynamic programming (DP) algorithm implicitly explores the space of all possible solutions by carefully decomposing things into a series of sub-problems and then building up correct solutions to larger and larger sub-problems [3]. Of course, this is not all: a problem needs to have certain characteristics for it to be solvable using DP, such as overlapping sub-problems and optimal substructure.
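In fact, once we see that the sub-problems build on one another in a fixed order, the same DP can be written bottom-up, with no recursion at all. Here is a minimal iterative sketch (the name FibIterative is mine); it also shows that only the last two values ever need to be remembered:

/* Bottom-up (tabulated) Fibonacci: O(n) time and O(1) space,
 * since only the previous two values are kept. */
long long FibIterative(int n)
{
    if (n <= 1)
        return n;

    long long prev = 0, curr = 1;       /* Fib(0) and Fib(1) */
    for (int i = 2; i <= n; i++) {
        long long next = prev + curr;   /* Fib(i) = Fib(i-1) + Fib(i-2) */
        prev = curr;
        curr = next;
    }
    return curr;
}

This bottom-up style, filling in results from the smallest sub-problem upward, is the form in which DP algorithms are most often written.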

If you are familiar with the Divide and Conquer paradigm of problem solving, you will notice the similarity in breaking problems down into sub-problems. The key difference is that Divide and Conquer requires the sub-problems to be non-overlapping in nature. For example, in the Merge Sort algorithm the list is successively divided into two non-overlapping halves, which are independently sorted, and the results are merged together (see the sketch below). Whereas in a typical dynamic programming problem such as ours, the sub-problems overlap: Fib(n) depends on Fib(n-1) and Fib(n-2); Fib(n-1) depends on Fib(n-2) along with Fib(n-3); and so on.
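For contrast, here is a minimal Merge Sort sketch (my own illustrative version, not from the original article). Note that the two recursive calls never touch the same elements, so no result is ever recomputed and there is nothing to memoize:

#include <string.h>

/* Merge the sorted halves a[lo..mid] and a[mid+1..hi] via a scratch buffer. */
static void Merge(int a[], int lo, int mid, int hi)
{
    int tmp[hi - lo + 1];
    int i = lo, j = mid + 1, k = 0;

    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid)
        tmp[k++] = a[i++];
    while (j <= hi)
        tmp[k++] = a[j++];

    memcpy(&a[lo], tmp, (hi - lo + 1) * sizeof(int));
}

void MergeSort(int a[], int lo, int hi)
{
    if (lo >= hi)                       /* 0 or 1 element: already sorted */
        return;

    int mid = lo + (hi - lo) / 2;
    MergeSort(a, lo, mid);              /* left half:  non-overlapping */
    MergeSort(a, mid + 1, hi);          /* right half: non-overlapping */
    Merge(a, lo, mid, hi);
}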

The other important property a problem must have to be solvable using dynamic programming is optimal substructure. If a problem can be broken down into sub-problems such that the optimal solutions of the sub-problems can be used to obtain optimal solutions of larger and larger sub-problems, then the problem is said to have the optimal substructure property. Our example above illustrates the benefit of memoization, but optimal substructure is better illustrated by taking up an optimization problem.

In future articles, we will explore the power of Dynamic Programming by taking up optimization problems. We will look into certain problems which can't be solved efficiently any other way, and into applications of DP. Stay tuned!

References:

[1] http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/fibnat.html

[2] Parmanand Singh, The so-called fibonacci numbers in ancient and medieval India, Historia Mathematica, Volume 12, Issue 3, August 1985, Pages 229-244, ISSN 0315-0860, http://dx.doi.org/10.1016/0315-0860(85)90021-7. (http://www.sciencedirect.com/science/article/pii/0315086085900217)

[3] Jon Kleinberg and Eva Tardos. 2005. Algorithm Design. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA.
