# Recurrence Relations

## Table of contents

- Binary search (intuitive)
- Binary search
- Unrolling recurrences
- Merge operation
- Merge sort walkthrough
- Merge sort
- Merge sort analysis
- Recurrence diagrams
- Common recurrences
- method1
- g0
- help

## Binary search (intuitive)

## Binary search

Let’s re-analyze the runtime for binary search on a sorted array, but this time looking at the recursive implementation. The `binarySearch` method below returns the index of the search `target`, or -1 if the `target` is not in the `sorted` array.

```
static int binarySearch(int[] sorted, int target) {
    return binarySearch(sorted, target, 0, sorted.length - 1);
}

static int binarySearch(int[] sorted, int target, int low, int high) {
    if (low > high) {
        return -1;
    }
    int mid = low + (high - low) / 2;
    if (sorted[mid] < target) {
        return binarySearch(sorted, target, mid + 1, high);
    } else if (sorted[mid] > target) {
        return binarySearch(sorted, target, low, mid - 1);
    } else {
        return mid;
    }
}
```
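Here’s a self-contained usage sketch (the class name `BinarySearchDemo` is ours, not the text’s). The code above is reproduced so the example runs on its own; note the public overload passes `sorted.length - 1` so that `high` is an inclusive bound throughout.

```java
// Hypothetical demo class; binarySearch is reproduced here so the
// example is self-contained. high is an inclusive index, so the
// public overload passes sorted.length - 1.
public class BinarySearchDemo {
    static int binarySearch(int[] sorted, int target) {
        return binarySearch(sorted, target, 0, sorted.length - 1);
    }

    static int binarySearch(int[] sorted, int target, int low, int high) {
        if (low > high) {
            return -1; // empty range: target is not present
        }
        int mid = low + (high - low) / 2;
        if (sorted[mid] < target) {
            return binarySearch(sorted, target, mid + 1, high);
        } else if (sorted[mid] > target) {
            return binarySearch(sorted, target, low, mid - 1);
        } else {
            return mid;
        }
    }

    public static void main(String[] args) {
        int[] sorted = {1, 3, 5, 7, 9};
        System.out.println(binarySearch(sorted, 7));  // 3
        System.out.println(binarySearch(sorted, 4));  // -1
    }
}
```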

## What is the best-case order of growth of the runtime?

In the best case, the `target` is the exact middle item in the `sorted` array, regardless of the length of the array. Since it’s not necessary to recurse on either the left or right half, the runtime is constant.

It turns out that the worst-case order of growth of the runtime for `binarySearch` is logarithmic with respect to the length of the `sorted` array. Each recursive call rules out half of the remaining elements under consideration. In other words, the number of recursive calls is given by the answer to the question, “How many times do we need to divide *N* (the length of the `sorted` array) by 2 until only 1 element remains?” This is the definition of the **binary logarithm** (log_{2} or simply lg).

- Logarithm base
- Computer scientists often don’t specify the logarithm base. A common assumption is to work in base-2 (binary logarithm) since it corresponds to how bits are represented in a computer. For the purpose of asymptotic analysis, the log base doesn’t affect the result because all log bases are within a constant multiplicative factor of each other.
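One way to make the “how many halvings” framing concrete is to simply count them (a sketch; the class and method names are ours):

```java
// Counts how many times n can be halved before a single element
// remains: an integer approximation of the binary logarithm.
public class HalvingCount {
    static int halvings(int n) {
        int count = 0;
        while (n > 1) {
            n = n / 2;
            count += 1;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(halvings(64));    // 6, since 2^6 = 64
        System.out.println(halvings(1024));  // 10
    }
}
```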

How might we prove that the worst-case order of growth of the runtime of `binarySearch` is in *Θ*(log *N*)? We have a formal, mathematical definition for big-theta notation, so it’s nice to be able to use it!

**Recurrence relations** are recursive functions that model non-recursive work and recursive work.

- Recurrence relation for the worst-case runtime of `binarySearch`
- *T*(*N*) = *T*(*N*/2) + *c* for *N* > 1; *T*(1) = *d*

*c* represents the constant time spent on non-recursive work, such as comparing `low > high`, computing `mid`, and comparing the `target` with `sorted[mid]`. *T*(1) = *d* represents the base case, which takes a different amount of constant time to compare `low > high` and immediately `return -1`. The time spent on recursive work is modeled by *T*(*N*/2) because a recursive call to `binarySearch` will examine either the lower half or the upper half of the remaining *N* items.

Our goal is to rewrite this recurrence relation in a closed-form expression that’s compatible with asymptotic notation definitions. One approach is to **unroll the recurrence**: plug the recurrence back into itself until the recursion is done.

- *T*(*N*) = *T*(*N*/2) + *c*
- *T*(*N*) = *T*(*N*/4) + *c* + *c*
- *T*(*N*) = *T*(*N*/8) + *c*(3)
- …
- *T*(*N*) = *T*(*N*/*N*) + *c*(log *N*)
- *T*(*N*) = *d* + *c*(log *N*)

We can then apply big-theta notation to describe the order of growth of the closed-form expression for *T*(*N*), the worst-case runtime of `binarySearch`.

## Unrolling recurrences

While this course is not focused too heavily on the mathematics behind solving recurrences, it’s useful to have an intuitive understanding of a recurrence relation and be able to compute basic closed-form solutions. Recognize and apply these two identities to simplify a series.

1 + 2 + 3 + 4 + … is about half the square of the last term: 1 + 2 + 3 + 4 + ⋯ + *N* = *N*(*N* + 1) / 2. For a visualization of this identity, look back at our worst-case asymptotic runtime analysis of `dup1`.

1 + 2 + 4 + 8 + … is 1 less than double the last term: 2^{0} + 2^{1} + ⋯ + 2^{k} = 2^{k + 1} − 1.
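Both identities are easy to spot-check numerically; here’s a small sketch (class and method names are ours) that compares the brute-force sums against the closed forms:

```java
// Spot-checks the two series identities for small inputs.
public class IdentityCheck {
    // Sums 1 + 2 + ... + n by brute force.
    static long triangular(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i += 1) {
            sum += i;
        }
        return sum;
    }

    // Sums 2^0 + 2^1 + ... + 2^k by brute force.
    static long powerSum(int k) {
        long sum = 0;
        for (int i = 0; i <= k; i += 1) {
            sum += 1L << i;  // 2^i
        }
        return sum;
    }

    public static void main(String[] args) {
        // 1 + 2 + ... + N == N(N + 1) / 2
        System.out.println(triangular(100) == 100L * 101 / 2);  // true
        // 2^0 + 2^1 + ... + 2^k == 2^(k+1) - 1
        System.out.println(powerSum(10) == (1L << 11) - 1);     // true
    }
}
```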

WolframAlpha can compute the asymptotic bound for any recurrence relation, e.g. T(N) = T(N / 2) + 1. Did you recognize this recurrence as binary search? As you get more familiar with recurrences, you’ll start to see them more and more often.

For these questions, consider the recurrence relation *T*(*N*) = *T*(*N*/2) + *cN* and *T*(1) = *d*.

### Question 1

Which of the following variants of `binarySearch` could have a runtime represented by the recurrence relation?

- Recursive `binarySearch` as presented earlier.
- Recursive `binarySearch` but also printing out the value of `sorted[mid]`.
- Recursive `binarySearch` but also printing out the range of values `sorted[low]` to `sorted[high]`.
- Recursive `binarySearch` but also printing out the entire `sorted` array.

## Explanation

Options A and B are equivalent in terms of the asymptotic bound since adding another constant-time operation to print out the `sorted[mid]` value won’t affect the overall asymptotic analysis.

Option C represents a program that does *N* additional work in each recursive call, where *N* represents the size of the current subproblem. This matches the *cN* term in the recurrence.

Option D represents a program that does *N* additional work in each recursive call, where *N* represents the size of the original problem. *T*(*N*) describes the runtime for the current subproblem of size *N*, not the original problem! The overall asymptotic runtime bound for this would be *Θ*(*N*log *N*).

### Question 2

Unroll the recurrence and identify the big-theta asymptotic bound.

- *Θ*(1)
- *Θ*(log *N*)
- *Θ*(*N*)
- *Θ*(*N* log *N*)

## Explanation

- *T*(*N*) = *T*(*N*/2) + *cN*
- *T*(*N*) = *T*(*N*/4) + *c*(*N*/2) + *cN*
- *T*(*N*) = *T*(*N*/8) + *c*(*N*/4) + *c*(*N*/2) + *cN*
- …
- *T*(*N*) = *d* + *c*[*N* + *N*/2 + *N*/4 + ⋯ + 1]
- *T*(*N*) = *d* + *c*[2*N* − 1] (via identity)
- *T*(*N*) ∈ *Θ*(*N*)

## Merge operation

**Merge** is an algorithm building-block for merge sort, our next case study for recurrence relations.
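The text doesn’t show code for merge, so here’s a minimal out-of-place sketch (the class name, method name, and signature are ours). Notice that each loop iteration copies exactly one item into the result.

```java
// A minimal merge sketch: combines two sorted arrays into one sorted
// result. Each iteration of the loops below copies one item.
public class MergeDemo {
    static int[] merge(int[] a, int[] b) {
        int[] result = new int[a.length + b.length];
        int i = 0, j = 0, k = 0;
        // Repeatedly take the smaller front item of the two arrays.
        while (i < a.length && j < b.length) {
            if (a[i] <= b[j]) {
                result[k++] = a[i++];
            } else {
                result[k++] = b[j++];
            }
        }
        // Copy over whatever remains in either array.
        while (i < a.length) {
            result[k++] = a[i++];
        }
        while (j < b.length) {
            result[k++] = b[j++];
        }
        return result;
    }

    public static void main(String[] args) {
        int[] merged = merge(new int[]{1, 4, 6}, new int[]{2, 3, 5});
        System.out.println(java.util.Arrays.toString(merged));  // [1, 2, 3, 4, 5, 6]
    }
}
```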

### Question 1

What is the overall order of growth for the runtime of **merge** with respect to *N*, the total number of items in both of the sorted arrays?

- *Θ*(1)
- *Θ*(log *N*)
- *Θ*(*N*)
- *Θ*(*N* log *N*)

## Merge sort walkthrough

## Merge sort

Merge sort is a recursive sorting algorithm that we’ll use as a case study of recurrence relations. It relies on the **merge** operation, which takes two **sorted** arrays and returns a sorted result containing all of the items in both of the given arrays.

- Merge sort
- If the array is of size 1, return.
- Recursively merge sort the left half.
- Recursively merge sort the right half.
- Merge the two sorted halves.
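The steps above can be sketched in Java (class and variable names are ours; this version sorts out-of-place for clarity, with the merge written inline):

```java
import java.util.Arrays;

// A sketch of the merge sort steps above. Sorting is done
// out-of-place for clarity rather than efficiency.
public class MergeSortDemo {
    static int[] mergeSort(int[] items) {
        // If the array is of size 1 (or empty), it is already sorted.
        if (items.length <= 1) {
            return items;
        }
        // Recursively merge sort the left half and the right half.
        int mid = items.length / 2;
        int[] left = mergeSort(Arrays.copyOfRange(items, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(items, mid, items.length));
        // Merge the two sorted halves.
        int[] result = new int[items.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            result[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) {
            result[k++] = left[i++];
        }
        while (j < right.length) {
            result[k++] = right[j++];
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{5, 1, 4, 2, 3})));
        // [1, 2, 3, 4, 5]
    }
}
```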

- Recurrence relation for the runtime of merge sort
- *T*(*N*) = *T*(*N*/2) + *T*(*N*/2) + *c*_{1}*N* + *c*_{0} for *N* > 1; *T*(1) = *d*

Each call to merge sort makes two recursive calls to subproblems of half the size each and then spends linear time merging the sorted halves. Unrolling this recurrence is a bit trickier since there are two recursive branches. Another approach is to draw a **recurrence diagram** that represents the recursive work and non-recursive work in the recurrence relation!

The top level takes about 64 units of time merging two sorted halves of 32 items each. The second level also takes about 64 units of time merging four sorted halves of 16 items each. By identifying the pattern in the recurrence diagram, we can see that each level will take about 64 units of time.

- Level
- All of the nodes at the same depth (number of edges) from the overall root.

Since the entire runtime of merge sort is represented by this diagram, we can find the total time spent by multiplying the number of levels in the tree by the time spent on each level. Each level divides the input size in half until the base case of 1 item is reached, so there are log_{2}*N* levels. Each level takes about *N* total time to merge its many small sorted arrays. Overall, the order of growth of the runtime for merge sort is in *Θ*(*N* log *N*).

## Merge sort analysis

## Recurrence diagrams

Merge sort is an example of a **work-per-level recurrence**. We found that each level in the recurrence diagram performed roughly the same amount of (non-recursive) work. The sum of all of the levels resulted in the total amount of time merge sort takes to run, including all recursive work (each node in the diagram) and all of the non-recursive work (the merge operation). Two big assumptions enabled us to use multiplication as a shortcut to simplify the final result.

- What is the (non-recursive) work done by each node on level *i*? How many total nodes are on each level *i*?
- **Assumption 1**: Each node on level *i* performs the same amount of work, so we can multiply the work per node by the number of nodes on that level.
- What is the total (non-recursive) work done on level *i*? How many levels are in the tree?
- **Assumption 2**: Each level performs the same amount of work, so we can multiply the work per level by the number of levels to get the total runtime of the entire recurrence.

Even when these assumptions don’t hold, we can still go back to writing out the sum of the terms instead of using multiplication as a shortcut. **Work-per-node recurrences** represent a category of recurrences that don’t meet assumption 2. Consider `f3`.

```
static int f3(int n) {
    if (n <= 1) {
        return 1;
    }
    return f3(n - 1) + f3(n - 1);
}
```

### Question 1

Give a recurrence relation for the runtime of `f3`. Try drawing out the first couple levels of the recurrence diagram!

## Explanation

- *T*(*N*) = 2*T*(*N* − 1) + *c*
  - 2*T*(*N* − 1): two recursive calls to `f3`. Alternatively, *T*(*N* − 1) + *T*(*N* − 1).
  - *c*: non-recursive work like the comparison, addition, and return.
- *T*(1) = *d*
  - *d*: compare and return 1.

Optionally, we can solve the recurrence relation directly at this point and find the total runtime for `f3` with respect to *N*. In the questions below, we’ll walk through this logic from a different direction by relying on the simplifying assumptions above. You might prefer one approach over the other. Here, we’ll simplify *c* and *d* and just treat them as 1 since our asymptotic analysis doesn’t care about the difference between constants.

- *T*(*N*) = 2*T*(*N* − 1) + 1
- *T*(*N*) = 2[2*T*(*N* − 2) + 1] + 1
- *T*(*N*) = 2[2[2*T*(*N* − 3) + 1] + 1] + 1
- *T*(*N*) = 2^{N − 1} + 2^{N − 2} + 2^{N − 3} + ⋯ + 2^{0}
- *T*(*N*) = 1 + 2 + 4 + ⋯ + 2^{N − 1}
- *T*(*N*) = 2^{N} − 1
- *T*(*N*) ∈ *Θ*(2^{N})
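One way to corroborate the 2^{N} − 1 total is to count the calls directly (a sketch; the class and method names are ours). Each call to `f3` corresponds to one node in the recurrence diagram.

```java
// Counts the total number of calls f3(n) makes, including itself.
// Each call is one node in the recurrence diagram, so the count
// should match the 2^N - 1 total derived above.
public class F3Count {
    static long calls(int n) {
        if (n <= 1) {
            return 1;  // a single base-case node
        }
        return 1 + calls(n - 1) + calls(n - 1);  // this node plus two subtrees
    }

    public static void main(String[] args) {
        System.out.println(calls(5));   // 31 = 2^5 - 1
        System.out.println(calls(10));  // 1023 = 2^10 - 1
    }
}
```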

### Question 2

What is the (non-recursive) work done by each node on level *i*?

- Constant
- Logarithmic with respect to the size of the current subproblem.
- Linear with respect to the size of the current subproblem.

## Explanation

If we use our explanation from the recurrence relation, the non-recursive work for both *T*(*N*) and *T*(1) is represented by the constants *c* and *d*, respectively.

### Question 3

How many total nodes are on each level *i*?

Assume the overall root node is at level 0, its immediate children at level 1, and so forth.

- 1
- *i*
- 2*i*
- 2^{i}

## Explanation

Each call to `f3` creates two more calls to `f3`! Therefore, on level *i*, there will be 2^{i} nodes.

### Question 4

Although assumption 1 is true and we can multiply the answers to the above two questions to get the work-per-level, we’ll see that the work-per-level is not all the same.

Give a closed-form expression for *T*(*N*) as a sum of all of the levels in the recurrence diagram. For simplicity, replace the constants *c* and *d* with 1.

Then, apply a mathematical identity to find a simple big-theta runtime bound.

## Explanation

Although assumption 2 doesn’t hold, we can still use the sum of successive powers of 2 to solve the problem. This is a work-per-node recurrence analysis because what we’ve ultimately done is counted the number of nodes in the recurrence diagram and multiplied it by the (non-recursive) work done per node. There are 2^{N} − 1 nodes in the diagram and each node performs a constant amount of work.

## Common recurrences

Here’s a list of common recurrences and their simplified big-theta asymptotic bounds.

### Single recursion

- *T*(*N*) = *T*(*N*/2) + 1 ∈ *Θ*(log *N*)
- *T*(*N*) = *T*(*N* − 1) + 1 ∈ *Θ*(*N*)
- *T*(*N*) = *T*(*N*/2) + *N* ∈ *Θ*(*N*)
- *T*(*N*) = *T*(*N* − 1) + *N* ∈ *Θ*(*N*^{2})

### Multiple recursion

- *T*(*N*) = 2*T*(*N*/2) + 1 ∈ *Θ*(*N*)
- *T*(*N*) = 2*T*(*N* − 1) + 1 ∈ *Θ*(2^{N})
- *T*(*N*) = 2*T*(*N*/2) + *N* ∈ *Θ*(*N* log *N*)
- *T*(*N*) = 2*T*(*N* − 1) + *N* ∈ *Θ*(2^{N}), solved on the Math Stack Exchange

## method1

Consider the following method.

```
static int method1(int N) {
    if (N <= 1) {
        return 1;
    } else {
        int a = method1(N - 1);
        int b = method1(N - 1);
        int c = method1(N - 1);
        return a + b + c;
    }
}
```

### Question 1

Give a recurrence relation that describes the runtime of `method1`.

### Question 2

True/False: If we solved the recurrence diagram, we can use **assumption 1** in a work-per-level analysis.

- Assumption 1
- Each node on level *i* performs the same amount of work, so we can multiply the work per node by the number of nodes on that level.

## Explanation

Each node contributes the same amount of (non-recursive) work, so we can multiply by the number of nodes in the level.

### Question 3

True/False: If we solved the recurrence diagram, we can use **assumption 2** in a work-per-level analysis.

- Assumption 2
- Each level performs the same amount of work, so we can multiply the work per level by the number of levels to get the total runtime of the entire recurrence.

## Explanation

Each level contributes a different amount of work, so we can’t multiply the work-per-level by the number of levels.
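Concretely, since `method1` makes three recursive calls, level *i* of its recurrence diagram holds 3^{i} nodes, each doing constant non-recursive work. A small sketch (names are ours) tabulates this growing work per level:

```java
// method1 makes three recursive calls, so level i of its recurrence
// diagram has 3^i nodes, each doing constant non-recursive work.
public class LevelWork {
    static long nodesOnLevel(int i) {
        long nodes = 1;
        for (int level = 0; level < i; level += 1) {
            nodes *= 3;  // each node spawns three children
        }
        return nodes;
    }

    public static void main(String[] args) {
        for (int i = 0; i <= 4; i += 1) {
            System.out.println("level " + i + ": " + nodesOnLevel(i) + " nodes");
        }
        // The work per level grows by a factor of 3 each level,
        // so assumption 2 does not hold.
    }
}
```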

## g0

Consider the following method. Assume `k(N)` runs in constant time and returns a boolean.

```
static void g0(int N) {
    if (N == 0) {
        return;
    }
    g0(N / 2);
    if (k(N)) {
        g0(N / 2);
    }
}
```

### Question 1

True/False: In the best-case runtime analysis for `g0`, `k(N)` always returns false.

## Explanation

There are two factors that affect the runtime of `g0`: the value of `N` and the return value of `k(N)`.

Since we chose `N` as the asymptotic variable, that leaves the value of `k(N)` for case analysis. If `k(N)` always returns false, then each call to `g0` only makes one more recursive call to `g0`.

### Question 2

Identify the best-case order of growth for the runtime of `g0` with respect to `N`.

- *Θ*(1)
- *Θ*(log *N*)
- *Θ*(*N*)
- *Θ*(*N* log *N*)
- *Θ*(*N*^{2})
- *Θ*(*N*^{2} log *N*)
- *Θ*(*N*^{3})

## Explanation

In the best case, `k(N)` always returns false, so each recursive call only creates one more recursive call: *T*(*N*) = *T*(*N*/2) + 1 ∈ *Θ*(log *N*).

### Question 3

Identify the worst-case order of growth for the runtime of `g0` with respect to `N`.

- *Θ*(1)
- *Θ*(log *N*)
- *Θ*(*N*)
- *Θ*(*N* log *N*)
- *Θ*(*N*^{2})
- *Θ*(*N*^{2} log *N*)
- *Θ*(*N*^{3})

## Explanation

In the worst case, `k(N)` always returns true, so each recursive call creates two more recursive calls: *T*(*N*) = 2*T*(*N*/2) + 1 ∈ *Θ*(*N*).
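Both cases can be checked by counting `g0`’s recursive calls under each fixed behavior of `k(N)` (a sketch; the class and method names are ours, with `k`’s constant return value passed in as a flag):

```java
// Counts calls to g0, assuming k(N) always returns the same value.
// kAlwaysTrue = false models the best case (one recursive call per
// node); kAlwaysTrue = true models the worst case (two per node).
public class G0Count {
    static long calls(int n, boolean kAlwaysTrue) {
        if (n == 0) {
            return 1;  // base-case call
        }
        long count = 1 + calls(n / 2, kAlwaysTrue);
        if (kAlwaysTrue) {
            count += calls(n / 2, kAlwaysTrue);  // second recursive call
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(calls(64, false));  // 8: logarithmic growth
        System.out.println(calls(64, true));   // 255: linear growth
    }
}
```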

## help

Consider the following method.

```
static int help(int N) {
    if (N < 1000) {
        for (int i = 0; i < N * N * N; i += 1) {
            System.out.print("be");
        }
        return N;
    } else {
        for (int i = 0; i < N / 2; i += 1) {
            System.out.print("be");
        }
        return 2 * help(N / 2);
    }
}
```

### Question 1

True/False: In the best-case runtime analysis for `help`, `N` is less than 1000.

## Explanation

There’s only one factor that affects the runtime of the program: the value of `N`. Since we chose to perform asymptotic analysis with `N` as the asymptotic variable, `N` must be very large: “for all values of *N* greater than *N*_{0}.”

It’s also problematic to discuss case analysis for this problem since there is only one factor that can affect the program’s runtime. Since we already chose that factor as the asymptotic variable, there are no other factors to vary for best- or worst-case analysis.

### Question 2

Give the order of growth for the runtime of `help` with respect to `N`.

- *Θ*(1)
- *Θ*(log *N*)
- *Θ*(*N*)
- *Θ*(*N* log *N*)
- *Θ*(*N*^{2})
- *Θ*(*N*^{2} log *N*)
- *Θ*(*N*^{3})
- *Θ*(2^{N})

## Explanation

A recurrence relation that describes the runtime of `help` is *T*(*N*) = *T*(*N*/2) + *N*. Unrolling gives *N* + *N*/2 + *N*/4 + ⋯, which sums to about 2*N* by the powers-of-2 identity; the base case does a bounded (if large) constant amount of printing since *N* < 1000 there. So the runtime is in *Θ*(*N*).