Recurrence Relations for Divide and Conquer


We looked at recursive algorithms where the smaller problem was just one unit smaller. For some algorithms, the smaller problems are a fraction of the original problem size.

Worst-case times

Merge Sort: T(n) = 2T(⌈n/2⌉) + Θ(n)

Binary search: T(n) = T(⌈n/2⌉) + Θ(1)


All have a form related to

T(n) = a T(⌈n/b⌉) + n^d
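For a quick sense of what these recurrences grow like, here is a minimal Python sketch (my own, not from the book) that evaluates the Merge Sort and Binary Search recurrences directly, assuming a base case of T(1) = 1 and taking the Θ(n) and Θ(1) work to be exactly n and 1:

```python
# Evaluate the two example recurrences numerically (assumed base case T(1) = 1,
# and the Theta(n) / Theta(1) terms taken to be exactly n and 1).
from functools import lru_cache
from math import ceil, log2

@lru_cache(maxsize=None)
def merge_sort_T(n):      # T(n) = 2 T(ceil(n/2)) + n
    return 1 if n <= 1 else 2 * merge_sort_T(ceil(n / 2)) + n

@lru_cache(maxsize=None)
def binary_search_T(n):   # T(n) = T(ceil(n/2)) + 1
    return 1 if n <= 1 else binary_search_T(ceil(n / 2)) + 1

for n in (2**10, 2**15, 2**20):
    # Both ratios head toward a constant, previewing Theta(n lg n) and Theta(lg n).
    print(n, merge_sort_T(n) / (n * log2(n)), binary_search_T(n) / log2(n))
```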

We can generalize from cases where n = b^k, avoiding floor and ceiling functions.  (The book develops this more in the appendix.  Their approach with eventually nondecreasing functions makes the analysis easier, but could be avoided with a fancier argument.  For the algorithms we consider, it makes heuristic sense that a bigger problem is harder to solve, but it is a bit unsatisfying to assume that before we work out the function!  Still, for simplicity we will stick with this approach, which does include the main idea.)


Note:  You will not need to reproduce the derivation of the Master Theorem, below.  You are responsible for being able to do the kind of algebra and logic in the individual steps, as they come up in other (much smaller!) problems.  Where I say X simplifies to Y, it may take one step, or it may take several basic algebraic steps.  Have a pencil handy.  We will use exponents and logs a lot in this course.  These are high school topics that many students have not kept up with.  Following these steps will be slow, but you will review essentially every kind of thing we do with exponents and logs, and also with geometric series.  Keep the summary of log and exponent rules handy.

T(b^k) = a T(b^(k-1)) + b^(kd)

= a[a T(b^(k-2)) + b^((k-1)d)] + b^(kd)

… much like the examples in the book. Another way is to think in terms of the number of problems at each level of recursion, and the size of all their inputs:

The work for a problem is the work outside the recursion plus the work from all the recursive calls.

Top level:

1 problem; input size for each: b^k; work outside the recursion for each: (b^k)^d = b^(kd); for the whole level, outside the recursion: (1)b^(kd)

1 down:

a problems; input size for each: b^(k-1); work outside the recursion for each: b^((k-1)d); for the whole level, outside the recursion: (a)b^((k-1)d)

2 down:

a^2 problems; input size for each: b^(k-2); work outside the recursion for each: b^((k-2)d); for the whole level, outside the recursion: (a^2)b^((k-2)d)

...

k-1 down:

a^(k-1) problems; input size for each: b^1; work outside the recursion for each: b^((1)d); for the whole level, outside the recursion: (a^(k-1))b^((1)d)

k down:

a^k problems; input size for each: b^0; total work for each (no recursion): b^((0)d); for the whole level, total: (a^k)b^((0)d)

In total from all levels of recursion:

(1)b^(kd) + (a)b^((k-1)d) + (a^2)b^((k-2)d) + ... + (a^(k-1))b^((1)d) + (a^k)b^((0)d)
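The level-by-level bookkeeping above is easy to cross-check. Here is a minimal Python sketch (my own check; the base case T(b^0) = T(1) = 1 is an assumption) confirming that summing the per-level contributions gives the same total as evaluating the recurrence directly:

```python
# Cross-check: the sum of per-level contributions a^i * b^((k-i)d) equals
# direct evaluation of T(b^k) = a*T(b^(k-1)) + (b^k)^d with T(1) = 1 (assumed).
def T_direct(a, b, d, k):
    return 1 if k == 0 else a * T_direct(a, b, d, k - 1) + (b ** k) ** d

def T_by_levels(a, b, d, k):
    # level i has a^i problems, each doing b^((k-i)*d) work outside the recursion
    return sum(a ** i * b ** ((k - i) * d) for i in range(k + 1))

for a, b, d, k in [(2, 2, 1, 10), (1, 2, 0, 12), (8, 2, 2, 6), (3, 4, 1, 5)]:
    assert T_direct(a, b, d, k) == T_by_levels(a, b, d, k)
print("level-by-level sums match the recurrence")
```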

The general term for the ith nested level of recursion is

(a^i)b^((k-i)d)
= (a^i)b^(kd-id)    doing exponent arithmetic
= (a^i)b^(kd)/b^(id)    using the exponent subtraction rule
= ((a/b^d)^i)b^(kd)    collecting the ith powers

so this is a term in a geometric series with ratio (a/b^d).
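That rewrite is also easy to spot-check numerically for sample values (my own choice of a, b, d, k; floating point, so an approximate comparison):

```python
# Spot-check: a^i * b^((k-i)d) equals (a/b^d)^i * b^(kd) for every level i.
from math import isclose

a, b, d, k = 3, 2, 2, 10
for i in range(k + 1):
    assert isclose(a**i * b**((k - i) * d), (a / b**d)**i * b**(k * d))
print("each term matches the geometric-series form")
```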

There are three cases, comparing the ratio to 1: it is equal to 1, less than 1, or greater than 1.

Equal to 1: the case a/b^d = 1, or equivalently a = b^d, or equivalently d = log_b a.

Then there are k+1 equal terms in the sum and

T(b^k) = b^(kd) (k+1)

If (a/b^d) is not 1:

Sum the geometric series:

T(b^k) = b^(kd) (1 - (a/b^d)^(k+1)) / (1 - a/b^d)

This version will be useful if a/b^d < 1.

We can get a version that is more useful if a/b^d > 1 by multiplying through by the b^(kd) and multiplying the numerator and denominator by -1:

T(b^k) = ((a/b^d)a^k - b^(kd)) / (a/b^d - 1)
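Before moving on, these exact formulas can be checked against the recurrence itself. A minimal Python sketch (assumed base case T(1) = 1; isclose allows for floating point rounding in the division form) covering one sample from each of the three cases:

```python
# Verify the exact closed forms for T(b^k) against direct evaluation of the
# recurrence, with T(1) = 1 assumed as in the level-by-level table above.
from math import isclose

def T_direct(a, b, d, k):
    return 1 if k == 0 else a * T_direct(a, b, d, k - 1) + (b ** k) ** d

def T_closed(a, b, d, k):
    r = a / b ** d
    if r == 1:                                    # a = b^d: k+1 equal terms
        return b ** (k * d) * (k + 1)
    return (r * a ** k - b ** (k * d)) / (r - 1)  # algebraically the same for r < 1 or r > 1

for a, b, d, k in [(2, 2, 1, 10),    # r = 1
                   (1, 2, 1, 10),    # r < 1
                   (8, 2, 2, 10)]:   # r > 1
    assert isclose(T_direct(a, b, d, k), T_closed(a, b, d, k))
print("closed forms agree with the recurrence")
```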

We can restate these in terms of n = b^k, using n^d = b^(kd) and k = log_b n = lg n / lg b. This means

T(n) = n^d (lg n / lg b + 1) if a = b^d.

T(n) = n^d (1 - (a/b^d)^((lg n/lg b) + 1)) / (1 - a/b^d) if a < b^d

T(n) = ((a/b^d)a^(lg n/lg b) - n^d) / (a/b^d - 1) if a > b^d


We usually work with orders and are not interested in these special exact results. If we only care about order, we would like to switch the starting recurrence relation to

T(n) = a T(⌊n/b⌋) + f(n), where f(n) ∈ Θ(n^d),

and the same orders for the answer will work out. (I will skip the details.) Let us simplify the order expressions for the three cases.


1. Case a/b^d = 1:

T(n) ∈ Θ(n^d (lg n / lg b + 1))

easily simplifies to

T(n) ∈ Θ(n^d log n)


2. Case a/b^d < 1:

T(n) ∈ Θ(n^d (1 - (a/b^d)^((lg n/lg b) + 1)) / (1 - a/b^d))

As n goes to infinity, the part (a/b^d)^((lg n/lg b) + 1)

approaches 0, since the ratio is less than 1, so

T(n) ∈ Θ(n^d)


3. Case a/b^d > 1:

Taking logs of a > b^d, this case means

lg a - d lg b > 0

so

(lg a)/(lg b) > d.

T(n) ∈ Θ(((a/b^d)a^(lg n/lg b) - n^d) / (a/b^d - 1)) if a > b^d

We need to analyze the part

a^(lg n/lg b)

Use log/exponent rules to see that a = b^(lg a/lg b) and n = b^(lg n/lg b), so

a^(lg n/lg b) = (b^(lg a/lg b))^(lg n/lg b)

= b^((lg n/lg b)(lg a/lg b))

= (b^(lg n/lg b))^(lg a/lg b) = n^(lg a/lg b)
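That identity, a^(lg n/lg b) = n^(lg a/lg b), comes up constantly, and a quick numeric spot-check may help it stick (sample values of my own choosing):

```python
# Spot-check the identity a^(lg n / lg b) = n^(lg a / lg b).
from math import log2, isclose

a, b, n = 8, 2, 4096
assert isclose(a ** (log2(n) / log2(b)), n ** (log2(a) / log2(b)))
print(a ** (log2(n) / log2(b)), n ** (log2(a) / log2(b)))  # both equal n^3 here
```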

Since in this case lg a / lg b > d, the order of n^(lg a/lg b) is higher than the order of n^d, and after simplifying we can ignore the lower-order term, so

T(n) ∈ Θ(n^(lg a/lg b))

- - - - - - - - -

We can put this all together now. It is somewhat shorter to state if we introduce the critical exponent,

E = lg a / lg b

In terms of E:

T(n) ∈ Θ(n^d) if d > E

T(n) ∈ Θ(n^d log n) if d = E

T(n) ∈ Θ(n^E) if d < E


We can combine the first and third cases to get

The Master Recursion Theorem

If T(n) is eventually nondecreasing and

T(n) = a T(⌊n/b⌋) + f(n), for all positive n, where f(n) ∈ Θ(n^d),

and we let

E = log_b a = lg a / lg b

then

T(n) ∈ Θ(n^d log n) if d = E

T(n) ∈ Θ(n^(max(d, E))) if d ≠ E


The Theorem is also true if the floor function is replaced by ceiling and/or if the class Θ is replaced uniformly by O or Ω.

Note: In the future I will assume integer division (the same as applying the floor function) in such recurrence relations if I leave out floor or ceiling operations around the division.

This may be the theorem we use most often in the class!  Preferably remember it.  At least have it handy and keep it in mind!
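If it helps to see the case analysis done mechanically, here is a minimal Python sketch (my own helper, not something from the book) that applies the theorem given a, b, and d, shown on the Merge Sort and Binary Search recurrences from the top of this page:

```python
# Apply the Master Theorem: for T(n) = a*T(n // b) + Theta(n^d),
# compute the critical exponent E = log_b(a) and report the Theta class.
from math import log, isclose

def master_theorem(a, b, d):
    E = log(a) / log(b)
    if isclose(d, E):          # d = E (isclose guards against float rounding)
        return f"Theta(n^{d} log n)"
    return f"Theta(n^{max(d, E):g})"

print(master_theorem(2, 2, 1))   # Merge Sort:    Theta(n^1 log n), i.e. Theta(n log n)
print(master_theorem(1, 2, 0))   # Binary search: Theta(n^0 log n), i.e. Theta(log n)
```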

Examples to do in class:

T(n) = 2T(⌊n/2⌋) + n^2
T(n) = T(⌊2n/3⌋) + 1
T(n) = 3T(⌈n/4⌉) + 5n + 7
T(n) = 8T(⌊n/2⌋) + n^2

Note: On exams, where I am not giving you a calculator, I am likely to keep the arithmetic simple with examples like the last one, but this is only "simple" if you remember your exponent rules!

The Θ version of the theorem does not work directly for
T(n) = 2T(⌈n/2⌉) + n lg n

What big Oh statement can you make?

On to More Sorts and Divide and Conquer.