# Recursion for (almost) non-programmers

> “Every man has a right to a glass of wine; but when you drink a glass of wine, you become another man.”
>
> Grigore C. Moisil

Recursion is a powerful yet not easy concept for beginning programmers to grasp, so I decided to give my wife (who had never before been exposed to technical/mathematical education or programming) a taste of what I was going to tell my students, to check whether I was moving in the right direction. After about half an hour of explanations, she could (with minor help from my side) solve some basic recursive problems without delving into implementation details.

So, overall, most computer programs are built up from the following structures: sequences of statements (blocks), decision points (conditionals) and loops. It is clear that conditionals combined with jumps are enough to represent loops of any complexity:

```
label1:
// code
if (condition) then jumpto label1
```


is equivalent to the loop

```
repeat
// code
until (not condition)
```


Since some code may be used repeatedly, it is easier to pack it into a function and call it in a single line rather than have the same block of code reappear every now and then. A typical function is defined by its signature,

```c
int f (int n){
    // code
}
```


i.e. its name (f), its return type (int) and its parameters (here, a single integer parameter n).

Now, a recursive function is a function which calls itself. While this definition does not seem to be particularly illuminating at this point, let’s consider the following mathematical function – the factorial:

$n! = n\cdot(n-1)\cdot(n-2)\cdot ... \cdot 2\cdot 1.$

That is, for example, if we want to find $5!$, then we would have to proceed as follows:

$5! = 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 = 120.$

Let’s take a closer look at those multiplications. Since $4 \cdot 3 \cdot 2 \cdot 1$ is actually the same as $4!$, we could rewrite $5!$ as:

$5! = 5 \cdot 4! .$

Following the same logic, $4! = 4 \cdot 3!$, $3! = 3 \cdot 2!$, $2! = 2 \cdot 1!$ and $1!$ is, well, simply 1. Now, this trivial rewriting has a huge implication for our discussion: to compute the factorial of some number, we just need to multiply that number by the factorial of the number minus 1. This is where recursion comes into play, since we have expressed a problem in terms of its subproblem.

Hence, generally, recursion is a concept which can be applied when some problem can be expressed in terms of its subproblems, where a subproblem should be similar to, and intuitively easier to solve than, the general problem. Four ingredients are needed: a division into subproblems, some way of combining the results of the subproblems into the solution of the general problem, a termination condition, and some sort of parameter (re)computation, so that with every unfolding of the recursive function we move closer to the termination condition. Graphically, this could be represented in the following way:

As we see, the general problem G is divided into three subproblems Sub1, Sub2 and Sub3 (there could be fewer or more) after the parameters of G are transformed so that they can be passed down to the recursive calls (Sub1-Sub3). Finally, after Sub1-Sub3 are solved, their results are combined by some abstract operation to form the solution of the general problem G. The parameter transformation and the termination condition together have to guarantee that the process of recursive unfolding, where the function repeatedly calls itself, terminates.

Put otherwise, a recursive process consists of two phases: reduction and construction. Parameter recomputation and division into subproblems correspond to the reduction phase, whereas combining the results of the recursive calls and propagating them back up is the construction phase. In the picture, reduction is a top-down process and construction a bottom-up process, respectively.

Let’s apply this abstract notion to the problem we discussed earlier, namely the computation of the factorial function. Generalizing the chain of equations above, where each factorial is defined as its argument multiplied by the factorial of the decremented argument, we can write it in recursive form:

$n! = n \cdot (n-1)!\\ 1! = 1.$

Here we have both the recursive step, for the case when the function still needs to transform the parameter, pass it down to a new instance of itself and then combine the results (n*f(n-1)), and the termination condition n = 1. Translating this into a C-like notation is then straightforward:

```c
int f (int n){
    if (n == 1)          // termination condition
        return 1;        // function terminates with the value of 1
    else
        return n*f(n-1); // parameter has not reached its terminal value,
                         // recursive step is taken
}
```


It is clear that the arithmetic expression $n*f(n-1)$ cannot be evaluated until the value of $f(n-1)$ is computed. This process of unfolding continues until the termination condition is reached:

$f(5) = 5*f(4) = 5*4*f(3) = 5*4*3*f(2) = 5*4*3*2*f(1) = 5*4*3*2*1,$

and then the results are multiplied on the way back, propagating consecutively to the calling functions: 1 is returned to the function $f(2)$, which multiplies it by 2 and returns the result to $f(3)$, which multiplies it by 3 and propagates it to $f(4)$, which multiplies it by 4 and… Well, you’ve got the idea!

Returning to our abstract picture, the combination operation is simply the multiplication of the result of the recursive call by the function’s parameter, since there is only one recursive call per function. The parameter preparation is the trivial decrement $n \rightarrow n-1$. Termination is clearly guaranteed, since n gets smaller with every step and will at some point reach the value of 1 (assuming, of course, that n was initially at least 1).

Let’s take another classical example: the Fibonacci sequence. By definition,

$f(0) = 0\\ f(1) = 1\\ f(n) = f(n-1) + f(n-2).$

The mathematical formula is already quite suggestive: we can immediately see two termination conditions (for parameter values 1 and 0), a recursive step which now consists of two recursive calls, with the parameters changed according to the rules $n\rightarrow n-1$ and $n\rightarrow n-2$, and a combination operation which is ordinary addition. Translating this into the C-like notation gives us:

```c
int f (int n){
    if (n == 1)                 // termination condition
        return 1;               // function terminates with the value of 1
    else if (n == 0)            // another termination condition
        return 0;               // function terminates with the value of 0
    else
        return f(n-1) + f(n-2); // parameter has not reached a terminal value,
                                // recursive step is taken
}
```


The third and final example demonstrates the case where the combination operation is empty and the result is simply propagated back to the first calling function. Let us write a function which decides whether a given number is prime (divisible only by itself and 1 without a remainder) or not.

Such a function has a slightly more complicated signature, since we need to pass down both the current divisor and the number being tested, and the return type is boolean (either true or false). Now, a number n is prime if, among all candidate divisors in the range from n-1 down to 1, it is divisible without a remainder only by 1. That is, the termination condition is the divisor reaching 1, starting from n-1. The recursive step checks whether the current divisor divides the number without a remainder. If it does, then n is not prime. Otherwise we have to check further by decrementing the divisor and passing it down recursively:

```c
bool isPrime (int divisor, int n){
    if (divisor == 1)                 // termination condition
        return true;                  // only the divisor 1 remained, so n is prime
    else if (n % divisor == 0)
        return false;                 // divisor divides n, so n is not prime
    else
        return isPrime(divisor-1, n); // divisor has not reached its terminal value,
                                      // recursive step is taken
}
```


Although no one would ever implement isPrime recursively in practice, it is done here merely for demonstration purposes.

As an exercise, the reader is invited to implement a recursive function computing the multiplicative persistence of a given number. The combination operation is somewhat similar to that of our factorial function; however, the parameter adaptation is slightly more complicated. The multiplicative persistence of a number n is the number of times you must multiply together the digits of n (repeating the process on each result) until you reach a single digit.

For example:

- persistence(93) = 3, because 9*3 = 27, 2*7 = 14, 1*4 = 4, and 4 has only one digit;
- persistence(5) = 0, because 5 is already a one-digit number.