# Recursive function theory

## 2 Plans towards a programming language

Well-known concepts are taken from [Mon:MathLog], but several new notations (only notations, not concepts) are introduced to reflect all the concepts described in [Mon:MathLog], and some simplifications are made (by allowing zero-arity generalizations). These are plans to achieve formalizations that may in the future allow us to incarnate the main concepts of recursive function theory in a programming language.

## 3 Primitive recursive functions

### 3.1 Type system

$\left\lfloor0\right\rfloor = \mathbb N$
$\left\lfloor n + 1\right\rfloor = \underbrace{\mathbb N\times\dots\times\mathbb N}_{n+1}\to\mathbb N$
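
As a first step toward the planned programming-language incarnation, the type family $\left\lfloor n\right\rfloor$ can be approximated in Haskell. The sketch below is an assumption of mine, not taken from [Mon:MathLog]: plain Haskell cannot index a type by arity, so all arities collapse into a single type, the argument count being tracked only by convention.

```haskell
import Numeric.Natural (Natural)

-- ⌊n⌋: an n-ary function over the naturals. The argument list is
-- assumed to have exactly n elements; ⌊0⌋ corresponds to applying
-- the function to the empty list, i.e. to a constant.
type Fn = [Natural] -> Natural

-- Example inhabitant of ⌊2⌋: addition (a hypothetical illustration,
-- not one of the base functions of the theory).
plus :: Fn
plus [x, y] = x + y
plus _      = error "plus: expected exactly 2 arguments"
```

The alias trades type safety for simplicity: a mismatch between the intended arity and the actual list length surfaces only at run time.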

### 3.2 Base functions

#### 3.2.1 Constant

$\mathbf 0 : \left\lfloor0\right\rfloor$
$\mathbf 0 = 0$

This allows us to deal with a concept of zero in recursive function theory. In the literature (in [Mon:MathLog], too) this aim is achieved in another way: a

$\mathbf{zero} : \left\lfloor1\right\rfloor$
$\mathbf{zero}\;x = 0$

is defined instead. Is that approach superfluous? Can we avoid it and use the simpler but more indirect-looking

$\mathbf 0 : \left\lfloor0\right\rfloor$
$\mathbf 0 = 0$

approach?

Are these approaches equivalent? Is the latter (simpler-looking) one as powerful as the former? Could we define a $\mathbf{zero}$ using the

$\mathbf 0 : \left\lfloor0\right\rfloor$
$\mathbf 0 = 0$

approach? Let us try:

$\mathbf{zero} = \underline\mathbf K^0_1 \mathbf0$

(see the definition of $\underline\mathbf K^m_n$ below). This seems to work, but it raises new questions: what about generalizing operations (here: composition) to deal with zero-arity cases in an appropriate way? E.g.

$\underline{\dot{\mathbf K}}^0_n\,c\left\langle\right\rangle$
$\underline{\mathbf K}^0_n\,c$

where $c : \left\lfloor0\right\rfloor$ and $n \in \mathbb N$, can be regarded as $n$-ary functions that throw all their $n$ arguments away and return $c$.

Does it take a generalization to allow such cases, or can it be inferred? A practical approach to solving such questions: let us write a Haskell program which implements (at least partially) recursive function theory. Then we can see clearly which things have to be defined and which are mere consequences. I think the $\underline\mathbf K^0_n c$ construct is a rather straightforward thing.
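
To make the suggestion concrete, here is a minimal Haskell sketch of $\underline{\mathbf K}^0_n\,c$. The names `constN` and `zero`, and the list-of-arguments representation of $n$-ary functions, are assumptions of mine: with an empty list of inner functions, composition degenerates to a constant function that discards its $n$ arguments.

```haskell
import Numeric.Natural (Natural)

-- Hypothetical representation: an n-ary function over the naturals,
-- the arity being the (conventional) length of the argument list.
type Fn = [Natural] -> Natural

-- K^0_n c: composing a 0-ary constant c with an empty list of inner
-- functions yields the n-ary function that discards its arguments.
constN :: Natural -> Fn
constN c _args = c

-- zero = K^0_1 0: the literature's unary zero, recovered for free.
zero :: Fn
zero = constN 0
```

Note that `constN` works uniformly for every arity $n$, which is exactly the point of the zero-arity generalization.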

Why all this may be important: it may be exactly $\underline\mathbf K^0_n c$ that saves us from defining the concept of zero in recursive function theory as

$\mathbf{zero} : \left\lfloor1\right\rfloor$
$\mathbf{zero}\;x = 0$

-- it may be superfluous: if we need functions that throw away some or all of their arguments and return a constant, then we can combine them from $\underline\mathbf K^m_n$, $\mathbf s$ and $\mathbf 0$, provided we allow concepts like $\underline\mathbf K^0_m$.

#### 3.2.2 Successor function

$\mathbf s : \left\lfloor1\right\rfloor$
$\mathbf s = \lambda x . x + 1$

#### 3.2.3 Projection functions

For all $0\leq i < m$:

$\mathbf U^m_i : \left\lfloor m\right\rfloor$
$\mathbf U^m_i x_0\dots x_i \dots x_{m-1} = x_i$
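
Under the same hypothetical list-of-arguments representation, the base functions translate almost verbatim (the names `s` and `u` are mine):

```haskell
import Numeric.Natural (Natural)

-- Hypothetical representation of an n-ary function over the naturals.
type Fn = [Natural] -> Natural

-- s: the successor function, an element of ⌊1⌋.
s :: Fn
s [x] = x + 1
s _   = error "s: expected exactly 1 argument"

-- U^m_i: the i-th projection (0 <= i < m). The arity m is implicit
-- in the length of the argument list, so only i appears explicitly.
u :: Int -> Fn
u i xs = xs !! i
```

That the superscript $m$ disappears in `u` is a first hint that the list representation silently infers some of the arity bookkeeping.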

### 3.3 Operations

#### 3.3.1 Composition

$\underline{\dot{\mathbf K}}^m_n : \left\lfloor m\right\rfloor \times \left\lfloor n\right\rfloor^m \to \left\lfloor n\right\rfloor$
$\underline{\dot{\mathbf K}}^m_n f \left\langle g_0,\dots, g_{m-1}\right\rangle x_0 \dots x_{n-1} = f \left(g_0 x_0 \dots x_{n-1}\right) \dots \left(g_{m-1} x_0 \dots x_{n-1}\right)$

This resembles the $\mathbf\Phi^n_m$ combinator of Combinatory logic (as described in [HasFeyCr:CombLog1, 171]). If we prefer to avoid the notion of the nested tuple and use a more homogeneous style (somewhat resembling currying):

$\underline\mathbf K^m_n : \left\lfloor m\right\rfloor \times \underbrace{\left\lfloor n\right\rfloor\times\dots\times\left\lfloor n\right\rfloor}_{m} \to \left\lfloor n\right\rfloor$

Let the underbrace not mislead us -- it does not denote any grouping.

$\underline\mathbf K^m_n f g_0\dots g_{m-1} x_0 \dots x_{n-1} = f \left(g_0 x_0 \dots x_{n-1}\right) \dots \left(g_{m-1} x_0 \dots x_{n-1}\right)$

reminding us of

$\underline\mathbf K^m_n f g_0\dots g_{m-1} x_0 \dots x_{n-1} = \mathbf \Phi^n_m f g_0 \dots g_{m-1} x_0 \dots x_{n-1}$
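
A Haskell sketch of the homogeneous composition $\underline{\mathbf K}^m_n$ (the name `compose` and the list-of-arguments representation are assumptions of mine). The indices $m$ and $n$ vanish, being implicit in the lengths of the two lists:

```haskell
import Numeric.Natural (Natural)

-- Hypothetical representation of an n-ary function over the naturals.
type Fn = [Natural] -> Natural

-- K^m_n f [g_0, ..., g_{m-1}]: apply each inner function g_i to the
-- shared argument list, then feed the m results to the outer f.
compose :: Fn -> [Fn] -> Fn
compose f gs xs = f [g xs | g <- gs]

-- Components for a small example, echoing the base functions.
succ1 :: Fn
succ1 [x] = x + 1
succ1 _   = error "succ1: expected exactly 1 argument"

proj :: Int -> Fn
proj i xs = xs !! i
```

For instance, `compose succ1 [proj 1]` is the binary function $(x_0, x_1) \mapsto x_1 + 1$.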

#### 3.3.2 Primitive recursion

$\underline\mathbf R^m : \left\lfloor m\right\rfloor \times \left\lfloor m+2\right\rfloor \to \left\lfloor m+1\right\rfloor$
$\underline\mathbf R^m f h = g\;\mathrm{where}$
$g x_0 \dots x_{m-1} 0 = f x_0 \dots x_{m-1}$
$g x_0 \dots x_{m-1} \left(\mathbf s y\right) = h x_0 \dots x_{m-1} y \left(g x_0\dots x_{m-1} y\right)$

The last equation resembles the $\mathbf S_n$ combinator of Combinatory logic (as described in [HasFeyCr:CombLog1, 169]):

$g x_0 \dots x_{m-1} \left(\mathbf s y\right) = \mathbf S_{m+1} h g x_0 \dots x_{m-1} y$
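
Primitive recursion $\underline{\mathbf R}^m$ also admits a direct Haskell sketch (again under the hypothetical list-of-arguments representation, with names of my choosing); recursion runs on the last argument, exactly as in the two equations defining $g$:

```haskell
import Numeric.Natural (Natural)

-- Hypothetical representation of an n-ary function over the naturals.
type Fn = [Natural] -> Natural

-- R^m f h: primitive recursion on the last argument.
--   g xs 0       = f xs
--   g xs (y + 1) = h xs y (g xs y)
primRec :: Fn -> Fn -> Fn
primRec f h args = go (init args) (last args)
  where
    go xs 0 = f xs
    go xs y = let y' = y - 1
              in h (xs ++ [y', go xs y'])

-- Example (an illustration of mine): addition, by recursion on the
-- second argument, with f = U^1_0 and h = s applied to the last slot.
add :: Fn
add = primRec head (\xs -> last xs + 1)
```

`add [3, 4]` unfolds to `head [3]` incremented four times.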

## 4 General recursive functions

Everything seen above, and the new concepts:

### 4.1 Type system

$\widehat{\,m\,} = \left\{ f : \left\lfloor m+1\right\rfloor\;\vert\;f \mathrm{\ is\ special}\right\}$

See the definition of being special in [Mon:MathLog, 45]. This property ensures that minimalization does not lead us out of the world of total functions. Its definition is a rather straightforward formalization of this expectation.

$\mathbf{special}^m f \equiv \forall x_0, \dots, x_{m-1} \in \mathbb N\;\;\exists y \in \mathbb N\;\;f x_0 \dots x_{m-1} y = 0$

It resembles the concept of an inverse -- more exactly, the existence part of it.

### 4.2 Operations

#### 4.2.1 Minimalization

$\underline\mu^m : \widehat m \to \left\lfloor m\right\rfloor$
$\underline\mu^m f x_0 \dots x_{m-1} = \min \left\{y\in\mathbb N\;\vert\;f x_0 \dots x_{m-1} y = 0\right\}$

Minimalization does not lead us out of the world of total functions if we use it only on special functions -- the property of being special is defined exactly for this purpose [Mon:MathLog, 45]. As we can see, minimalization is a concept somehow resembling the concept of an inverse.
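
A Haskell sketch of $\underline\mu^m$ (hypothetical names and list-of-arguments representation, as before). The code makes the role of being special tangible: on arguments where $f$ has no root, the unbounded search simply never terminates.

```haskell
import Numeric.Natural (Natural)

-- Hypothetical representation of an n-ary function over the naturals.
type Fn = [Natural] -> Natural

-- μ^m f: the least y with f x_0 ... x_{m-1} y = 0. Total exactly
-- when f is special; otherwise this search diverges.
mu :: Fn -> Fn
mu f xs = head [y | y <- [0 ..], f (xs ++ [y]) == 0]

-- Example of a special function: zero once y reaches x, so that
-- μ recovers the identity on the first argument.
geq :: Fn
geq [x, y] = if y >= x then 0 else 1
geq _      = error "geq: expected exactly 2 arguments"
```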

The minimum required in this definition exists exactly when the set is non-empty, since every non-empty subset of $\mathbb N$ has a least element. And non-emptiness for all arguments is equivalent to the statement

$\forall x_0, \dots, x_{m-1} \in \mathbb N\;\;\exists y \in \mathbb N\;\;f x_0 \dots x_{m-1} y = 0$

## 5 Partial recursive functions

Everything seen above, but new constructs are provided, too.

### 5.1 Type system

$\left\lceil n + 1\right\rceil = \underbrace{\mathbb N\times\dots\times\mathbb N}_{n+1}\supset\!\to\mathbb N$

Question: does it make sense to define $\left\lceil0\right\rceil$ in another way than simply $\left\lceil0\right\rceil = \left\lfloor0\right\rfloor = \mathbb N$? A partial constant? Is

$\left\lceil0\right\rceil = \mathbb N$

or

$\left\lceil0\right\rceil = \mathbb N\;\cup\;\left\{\bot\right\}$?

### 5.2 Operations

$\overline{\dot{\mathbf K}}^m_n : \left\lceil m\right\rceil \times \left\lceil n\right\rceil^m \to \left\lceil n\right\rceil$
$\overline\mathbf R^m : \left\lceil m\right\rceil \times \left\lceil m+2\right\rceil \to \left\lceil m+1\right\rceil$
$\overline\mu^m : \left\lceil m+1\right\rceil \to \left\lceil m\right\rceil$

Their definitions are straightforward extensions of the corresponding definitions for total functions.

Remark: these operations take partial functions as arguments, but they are total operations themselves in the sense that they always yield a result -- at worst, an empty function.
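
One possible Haskell sketch of the partial world (a modelling choice of mine, not prescribed by the source): the codomain becomes `Maybe Natural`, with `Nothing` standing for undefinedness, and the operations thread it through. Genuine non-termination is modelled only as far as Haskell's own divergence allows.

```haskell
import Numeric.Natural (Natural)

-- ⌈n⌉: a hypothetical rendering of a partial n-ary function.
type PFn = [Natural] -> Maybe Natural

-- Partial composition: undefined as soon as any sub-result is.
composeP :: PFn -> [PFn] -> PFn
composeP f gs xs = mapM ($ xs) gs >>= f

-- Partial minimalization: stop at the first y where f yields Just 0;
-- undefined if f is undefined before any root is found, and divergent
-- if f is defined and non-zero everywhere.
muP :: PFn -> PFn
muP f xs = go 0
  where
    go y = case f (xs ++ [y]) of
             Just 0  -> Just y
             Just _  -> go (y + 1)
             Nothing -> Nothing

-- Example partial function: predecessor, undefined at 0.
predP :: PFn
predP [0] = Nothing
predP [x] = Just (x - 1)
predP _   = Nothing
```

The operations themselves always return a value of type `PFn` -- in the worst case the everywhere-`Nothing` (empty) function -- which matches the remark above.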

## 6 Bibliography

[HasFeyCr:CombLog1]
Curry, Haskell B; Feys, Robert; Craig, William: Combinatory Logic. Volume I. North-Holland Publishing Company, Amsterdam, 1958.
[Mon:MathLog]
Monk, J. Donald: Mathematical Logic. Springer-Verlag, New York, Heidelberg, Berlin, 1976.