
Easy Foundations for Programming Languages III — The Language PCF & Its Syntax

In this article, we will present a language for Programming Computable Functions called PCF. This is a typed functional language based on lambda calculus, originally formulated by Dana Scott in a manuscript written in 1969 and eventually published in 1993. The language is designed to be easy to analyze, rather than to be a practical language for everyday programming.

Our main goals are to develop a feel for the programming capabilities of lambda calculus-based languages and summarize general properties and techniques that apply to a variety of languages. Later, we will use lambda calculus to prove both positive and negative results about the expressiveness of PCF.

Object Language and Meta-Language

When we study a language, we need to distinguish the language we study from the language we use to describe this language and its meaning.

The language we study is traditionally called the object language, since it is the object of our attention. The language we use to describe the object language is called the meta-language, because it transcends the object language in some way.

During our study of PCF, symbols and expressions of PCF belong to the object language, because PCF is the object of study. In addition, it is convenient to use extra symbols to stand for arbitrary symbols and expressions of the object language. These are said to be symbols of the meta-language, the language we use for the purposes of our study.

By tradition, we use letters $x,y,z,…$ (including $x’, x_1$) as meta-variables for arbitrary variables of PCF, and letters $M,N,P,…$ (including $M’, M_1$) as meta-variables for expressions of PCF.

It is possible for two meta-variables, say $x$ and $y$, to stand for the same object variable. We use the symbol $\equiv$ for syntactic equality of object expressions. Namely, we write $x \equiv y$ if $x$ and $y$ stand for the same variable of PCF and $x \not\equiv y$ if they are distinct variables.

Basic Syntax of PCF

Types Overview

We start the introduction of PCF by listing all the types of the language.

Every expression of PCF has a unique type. The basic values are natural numbers and booleans (the truth values $true$ and $false$), which have types $nat$ and $bool$, respectively.

PCF also has pairs, belonging to cartesian product types, and functions, belonging to function types. The PCF notation for the cartesian product of types $\sigma$ and $\tau$ is $\sigma \times \tau$. For example, the type of natural number pairs is written $nat \times nat$. The type of a function with domain $\sigma$ and range $\tau$ is written $\sigma \rightarrow \tau$.
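As a concrete aside, these types can be described by a small datatype. Below is a minimal OCaml sketch; the type name ty and its constructors are illustrative names of our own, not part of PCF.

(* PCF types: nat, bool, cartesian products, and function types *)
type ty =
  | Nat                 (* nat *)
  | Bool                (* bool *)
  | Prod of ty * ty     (* sigma x tau *)
  | Arrow of ty * ty    (* sigma -> tau *)

(* the type (nat x nat) -> nat: functions from pairs of naturals to naturals *)
let pair_to_nat = Arrow (Prod (Nat, Nat), Nat)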

One property of PCF is that only expressions that satisfy certain typing constraints are actually considered part of the language. For example, although PCF has addition, the expression $true + 1$ is not considered well formed. (It does not make sense to add a truth value to a natural number!) With variables, the typing conditions depend on the context in which an expression is used. For example, $x+5$ only makes sense if the variable $x$ is declared to have type $nat$.

Booleans and Natural Numbers

The following shows the basic grammar for boolean and natural number expressions.

\[\begin{split} \langle bool\_exp \rangle ::= \: &\langle bool\_var\rangle \mid true \mid false \mid Eq ? \: \langle nat\_exp \rangle \: \langle nat\_exp \rangle \mid \\ &\text{if } \langle bool\_exp \rangle \text{ then } \langle bool\_exp \rangle \text{ else } \langle bool\_exp \rangle \\ \\ \langle nat\_exp \rangle ::= \: &\langle nat\_var\rangle \mid 0 \mid 1 \mid 2 \mid ... \mid \langle nat\_exp \rangle + \langle nat\_exp \rangle \mid \\ &\text{if } \langle bool\_exp \rangle \text{ then } \langle nat\_exp \rangle \text{ else } \langle nat\_exp \rangle \\ \end{split}\]

The rule for boolean expressions provides four possible replacements:

  1. a variable of type $bool$;
  2. the constants $true$ and $false$;
  3. the equality test $Eq?$, which returns $true$ when two numbers are equal;
  4. and the boolean-valued conditional (if-then-else) expression.

The rule for natural number expressions has a similar structure to the rule for boolean expressions.
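To make the grammar concrete, here is one possible rendering of it as a pair of mutually recursive OCaml datatypes. This is only a sketch; the constructor names (BVar, Eq, Num, and so on) are our own choices for illustration.

(* an illustrative AST for the grammar above *)
type bool_exp =
  | BVar of string                          (* <bool_var> *)
  | True                                    (* true *)
  | False                                   (* false *)
  | Eq of nat_exp * nat_exp                 (* Eq? <nat_exp> <nat_exp> *)
  | BIf of bool_exp * bool_exp * bool_exp   (* if _ then _ else _, at type bool *)

and nat_exp =
  | NVar of string                          (* <nat_var> *)
  | Num of int                              (* numerals 0, 1, 2, ... *)
  | Add of nat_exp * nat_exp                (* _ + _ *)
  | NIf of bool_exp * nat_exp * nat_exp     (* if _ then _ else _, at type nat *)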

The equational axioms for natural number and boolean expressions are straightforward. We have an infinite collection of basic axioms for addition:

\[0+0=0, 0+1=1,..., 1+0=1, 1+1=2,...\]

And we have two axiom schemes for conditional expressions of each type:

\[\begin{split} (\text{if } true \text{ then } M \text{ else } N) = M \\ (\text{if } false \text{ then } M \text{ else } N) = N \end{split}\]

There are infinitely many axioms for the equality test, determined as follows:

\[\begin{split} &(Eq ? \: n \: n) = true, \quad \quad \text{each numeral } n,\\ &(Eq ? \: m \: n) = false, \quad \quad m,n \text{ distinct numerals,}\\ \end{split}\]

The operational semantics of natural number and boolean expressions is defined using a set of reduction axioms, each obtained by reading one of the equational axioms from left to right. In lambda calculus, these are called reduction rules.

In the operational semantics (reduction system), we may evaluate an expression by applying reduction axioms to any subexpression. To illustrate, let’s take a look at the following expression:

\[\text{if } Eq ? \: (6+5) \: 17 \text{ then } (1+1) \text{ else } 27\]

We begin the evaluation by applying the reduction rules $6+5 \rightarrow 11$ and $1+1 \rightarrow 2$. This gives us the expression

\[\text{if } Eq ? \: 11 \: 17 \text{ then } 2 \text{ else } 27\]

which is simplified using a reduction rule for $Eq?$ to

\[\text{if } false \text{ then } 2 \text{ else } 27\]

Finally, one of the rules for conditional applies, and we produce the numeral 27.
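The evaluation above can be mimicked by a small evaluator over the AST sketched earlier. This is a rough sketch for closed expressions only (it is an interpreter rather than the reduction relation itself), and the function names eval_bool and eval_nat are our own.

(* a naive evaluator for closed boolean and natural number expressions *)
let rec eval_bool (b : bool_exp) : bool =
  match b with
  | True -> true
  | False -> false
  | Eq (m, n) -> eval_nat m = eval_nat n
  | BIf (c, t, e) -> if eval_bool c then eval_bool t else eval_bool e
  | BVar x -> failwith ("free variable " ^ x)

and eval_nat (m : nat_exp) : int =
  match m with
  | Num n -> n
  | Add (p, q) -> eval_nat p + eval_nat q
  | NIf (c, t, e) -> if eval_bool c then eval_nat t else eval_nat e
  | NVar x -> failwith ("free variable " ^ x)

(* if Eq? (6+5) 17 then (1+1) else 27   evaluates to 27 *)
let _ = eval_nat (NIf (Eq (Add (Num 6, Num 5), Num 17),
                       Add (Num 1, Num 1),
                       Num 27))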

Pairing

In PCF, we can form ordered pairs and functions of any type.

If $M$ and $N$ are any PCF expressions, then $\langle M,N \rangle$ is an ordered pair. If $M$ has type $\sigma$ and $N$ has type $\tau$, then the pair $\langle M,N \rangle$ has type $\sigma \times \tau$.

In addition to forming pairs, we can also “take pairs apart” using projection operations. The projection operations $\mathbf{Proj}_1$ and $\mathbf{Proj}_2$ return the first and second components of a pair. This is formalized in the two equational axioms

\[\mathbf{Proj}_1 \langle M,N \rangle = M, \quad \mathbf{Proj}_2 \langle M,N \rangle = N \tag*{($proj$)}\]

Intuitively, we can reassemble a pair from the components returned by the projection operations. This gives us another equational axiom about a pair $P$, called surjective pairing:

\[\langle \mathbf{Proj}_1 P, \mathbf{Proj}_2 P \rangle = P \tag*{($sp$)}\]
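OCaml’s built-in pairs and projections behave the same way, so the two axioms can be illustrated directly (this is only an analogy with OCaml tuples, not PCF syntax):

let p = (3, true)

(* (proj): the projections return the components of the pair *)
let _ = assert (fst p = 3 && snd p = true)

(* (sp): re-pairing the projections gives back the original pair *)
let _ = assert ((fst p, snd p) = p)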

Functions

PCF functions are written using lambda abstraction. Most of the lambda expression examples given in Easy Foundations for Programming Languages I — Introduction are PCF expressions.

For example, \(\lambda x:nat.\:5\) is an acceptable PCF expression, with type $nat \rightarrow nat$.

In general, a PCF function may have any PCF type as domain or range, so we can have functions that take functions as arguments or return a function as the result. Such functions are called higher-order functions (HOF).

Higher-order functions take functions as arguments or return functions as results. All other functions are called first-order functions. See this wiki page for more.

A simple higher-order function is the composition of numeric functions:

\[comp \stackrel{\text{def}}{=} \lambda f: nat \rightarrow nat. \: \lambda g: nat \rightarrow nat.\: \lambda x: nat . \: f(gx)\]

To see how this works, suppose $y$ and $z$ are numeric functions of type $nat \rightarrow nat$. Then \(comp\:\: y \: z\) is the function \(\lambda x: nat . \: y(zx)\); the function which, on argument $x$, returns $y(zx)$. Thus \(comp\:\: y \: z\) defines $y \circ z $.

$y \circ z $ denotes the composition of the two functions: applied to an argument $x$, it returns $y(z(x))$.
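The same composition function is easy to write in OCaml; the sketch below fixes the type to int -> int to mirror nat -> nat, and the names comp and double are ours.

(* comp f g x = f (g x) *)
let comp (f : int -> int) (g : int -> int) (x : int) : int = f (g x)

let double n = 2 * n

(* comp succ double 3 = succ (double 3) = 7 *)
let _ = assert (comp succ double 3 = 7)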

For substitution, we have the equational axioms as we described in a previous article:

\[\lambda x: \sigma.\:M = \lambda y: \sigma.\:[y/x]M, \quad y \text{ not free in }M \tag*{($\alpha$)}\] \[(\lambda x: \sigma.\:M)N = [N/x]M \tag*{($\beta$)}\]

Moreover, substitution is extended to PCF by adding cases for addition, conditionals, and so on. These are all straightforward: we apply the substitution $[N/x]$ to a compound expression by applying it to each of the parts. In the case of addition, for example, we have

\[[N/x](P+Q) = [N/x]P+[N/x]Q\]

The final equational axiom for functions is based on this idea: two functions are equal if they produce equal values on all arguments. This is formalized by the $(\eta)$ axiom

\[\lambda x: \sigma.\:Mx = M, \quad x\text{ not free in }M \tag*{($\eta$)}\]
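In OCaml terms, $(\eta)$ says that wrapping a function in a trivial abstraction does not change its behavior. A small illustration (the names f and f_eta are ours):

let f : int -> int = fun n -> n + 1

(* eta-expanded form of f; note that x is not free in f *)
let f_eta = fun x -> f x

(* the two functions agree on every argument *)
let _ = assert (f 10 = f_eta 10)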

Currying

A traditional topic in lambda calculus is the relationship between multi-argument and higher-order functions.

If we have a mathematical function of two arguments, such as the addition function $add(x,y:nat)=x+y$, then we generally think of this as a function on ordered pairs $add:(nat \times nat) \rightarrow nat$. This addition function may be written as the PCF expression

\[add = \lambda p :nat \times nat. \: (\mathbf{Proj}_1p) + (\mathbf{Proj}_2p).\]

In other words, this $add$ is the function which takes a pair of natural numbers, and returns the sum of its two components.

We can also represent this multi-argument addition function in the “curried” style, introduced by lambda calculus pioneer Haskell Curry, as follows.

\[Curry(add) = \lambda x:nat .\: \lambda y:nat. \: x+y\]

This curried addition function yields $x+y$ when applied to $x$ and $y$, but the form of parameterization is different.

Like many other lambda expressions, $Curry(add)$ is easier to read from right to left. To make the structure explicit, we add parentheses:

\[Curry(add) = \lambda x:nat .(\: \lambda y:nat. \: (x+y))\]

First, let’s look at the body of the outer function, i.e., \(\lambda y:nat. \: (x+y)\). This function adds $x$, whose value is not determined within this expression, to the function argument $y$. So we might call it the “add $x$” function. It has type $nat \rightarrow nat$. Namely,

\[\lambda y:nat. \: (x+y):nat \rightarrow nat.\]

Now let’s look back at the whole function $Curry(add)$. On argument $x$, the function $Curry(add)$ returns the “add $x$” function. This gives us the type of the function

\[Curry(add): nat \rightarrow ( nat \rightarrow nat).\]

This type indicates that $Curry(add)$ takes two natural number arguments one at a time, rather than all at once.

As suggested by the notation $Curry(add)$, there is a higher-order transformation $Curry$ which maps any function $f$ on pairs of natural numbers to its curried form $Curry(f)$. This map can actually be written in PCF, as the lambda expression

\[Curry = \lambda f:(nat \times nat) \rightarrow nat. \:\lambda x:nat.\: \lambda y:nat.\: f\langle x,y \rangle.\]

Using the axiom of $\beta$-equivalence, we can check that, when applied to $add$, the function $Curry$ produces the lambda expression written above.
\(\begin{split} Curry(add) &= (\lambda f:(nat \times nat) \rightarrow nat. \:\lambda x:nat.\: \lambda y:nat.\: f\langle x,y \rangle)\: add\\ &= \lambda x:nat .\:\lambda y:nat.\: add\langle x,y \rangle \\ &= \lambda x:nat .\:\lambda y:nat.\:(\lambda p:nat \times nat .\: (\mathbf{Proj}_1p) + (\mathbf{Proj}_2p)) \langle x,y \rangle \\ &= \lambda x:nat .\:\lambda y:nat.\:\mathbf{Proj}_1\langle x,y \rangle + \mathbf{Proj}_2\langle x,y \rangle \\ &= \lambda x:nat .\:\lambda y:nat.\: x+y \end{split}\)
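The $Curry$ transformation is just as easy to express in OCaml, here specialized to the type of the addition example; curry, add, and curried_add are illustrative names of our own.

(* curry maps a function on pairs to its curried form *)
let curry (f : int * int -> int) : int -> int -> int =
  fun x y -> f (x, y)

let add (p : int * int) = fst p + snd p
let curried_add = curry add

(* curried_add 2 3 = add (2, 3) = 5 *)
let _ = assert (curried_add 2 3 = 5)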

Advanced Syntax of PCF

In this section, we will introduce more syntax of PCF that can be treated as syntactic sugar.

Syntactic sugar is syntax within a programming language that is designed to make things easier to read or to express. It makes the language “sweeter” for human use: things can be expressed more clearly, more concisely, or in an alternative style that some may prefer.
For example, in the C language, the array index notation a[i] is syntactic sugar for the pointer operation *(a+i). Likewise, the a->x notation is syntactic sugar for (*a).x, which dereferences the pointer and then accesses the member.

Declarations

One of the appeals of lambda calculus, as a model programming language, is the way that lambda abstraction corresponds to variable binding.

To illustrate variable binding, we use OCaml, whose syntax is close to lambda notation, as our example. The following OCaml snippet prints the product of two integer variables.

let x = 10 in (
  let y = 2 in (
    print_int (x * y)
  )
)

OCaml uses the keyword let to bind a variable to a value, and the keyword in to specify the binding scope. A name defined via a let is visible only within the attached expression after in. In the above snippet, the binding x=10 is only valid inside the expression let y = 2 in ( print_int(x * y) ), and the binding y=2 is only valid inside print_int(x * y).

Similar to OCaml, the declaration syntax in PCF is

\[\text{let } x:\sigma = M \text{ in } N,\]

which binds $x$ to $M$ within the declaration body $N$. In other words, the value of $ \text{let } x:\sigma = M \text{ in } N$ is the value of $N$ with $x$ set to $M$. We consider this well-typed only if $M$ has type $\sigma$.

Instead of adding $\text{let}$ declarations to PCF directly, we treat $\text{let}$ as an abbreviation, i.e., syntactic sugar. This means that we will feel free to use $\text{let}$ in writing PCF expressions, but we do not consider $\text{let}$ part of the actual syntax of PCF.

Concretely speaking, we think of $\text{let}$ as an abbreviation for a lambda expression according to the rule

\[\text{let } x:\sigma = M \text{ in } N \quad \stackrel{\text{def}}{=} \quad (\lambda x : \sigma . N)M.\]

In general, we use $M \stackrel{\text{def}}{=} N$ to mean that whenever we write a term of the form $M$, this is shorthand for a term of form $N$.
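The desugaring of $\text{let}$ can be observed directly in OCaml: a let binding and the corresponding application of an anonymous function evaluate to the same result (v1 and v2 are names chosen for this sketch).

(* let x = M in N   versus   (fun x -> N) M *)
let v1 = let x = 10 in x * 2
let v2 = (fun x -> x * 2) 10

(* both evaluate to 20 *)
let _ = assert (v1 = v2)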

Syntactic sugar is very useful in programming language analysis. By treating certain constructs as syntactic sugar for others, we can write program examples in a familiar notation, yet analyze these programs as if they were written using simpler or less troublesome primitives.

Recursion and Fixed-point Operators

The final construct of PCF is definition by recursion. Rather than adding a new primitive declaration form, we will treat recursive declarations as a combination of $\text{let}$ and one new basic function, a fixed-point operator $fix_{\sigma}$ for each type $\sigma$.

Recursion makes it possible to write expressions with no normal form. This changes our basic intuition about the way that an expression defines a mathematical value. Consider the following C program, which declares a nonterminating recursive function; the call rec(0) never evaluates to a concrete value.

int rec(int x) {
    return rec(x + 1);
}

int main(void) {
    return rec(0);  /* this call never produces a value */
}

In addition, once recursion is taken into consideration, when there are several possible reductions that could be applied to a term, the choice between them becomes important.

We will see how fixed-point operators provide recursion, beginning with the declaration form

\[\text{letrec } f:\sigma = M \text{ in } N.\]

This means that within $N$, the variable $f$ denotes a solution to the equation $f = M$. In general, $f$ may occur in $M$. (Of course, $M$ must have type $\sigma$, otherwise the equation $f = M$ does not make sense.)

The keyword $\text{letrec}$ is syntactic sugar for a combination of an ordinary $\text{let}$ and a fixed-point operator.

To show how $\text{letrec}$ works, we use it to define a factorial function and compute $5!$ as follows:

\[\text{letrec } f:nat \rightarrow nat = \lambda y:nat .\: (\text{if } Eq?\:y \: 0 \text{ then } 1 \text{ else } y*f(y-1)) \text{ in } f \: 5\]

For easier understanding, we provide equivalent programs written in C and OCaml.

int f(int y) {
    if (y == 0) return 1;
    else return y * f(y - 1);
}

int main(void) {
    return f(5);
}

let rec f y =
  if y = 0 then
    1
  else
    y * f (y - 1)
in
  f 5

You may find that the syntax of OCaml is close to PCF: OCaml also explicitly uses the keyword rec in declarations of recursive functions.

We cannot simply take $f$ to refer to the expression on the right-hand side of the equals sign. Instead, $f$ must be a solution to the equation

\[f = \lambda y:nat .\: (\text{if } Eq?\:y \: 0 \text{ then } 1 \text{ else } y*f(y-1))\]

in which the function variable $f$ occurs on both the left-hand and right-hand sides.

From a mathematical point of view, it is not clear that every equation $f=M$ should have a solution, or which solution to choose if several exist. However, recursive function declarations have a clear computational interpretation. Therefore, we will assume that every defining equation has some solution and add syntax to PCF for expressing this.

Using lambda abstraction, we can convert any recursive definition $f:\sigma = M$ into a function $\lambda f :\sigma.M$. Rather than directly looking for a solution to a recursive defining equation, we will produce a fixed point of its associated function.

In general, if $F : \sigma \rightarrow \sigma$ is a function from some type to itself, a fixed point of $F$ is a value $x:\sigma$ such that $F(x) = x$.

Here we can finally introduce the last basic construct of PCF, which is a family of functions

\[fix_{\sigma}: (\sigma \rightarrow \sigma) \rightarrow \sigma ,\]

one for each type $\sigma$. The function $fix_{\sigma}$ is intended to produce a fixed point of any function from $\sigma$ to $\sigma$. For instance, \(fix_{\sigma} \: F\) is a fixed point of the function $F:\sigma \rightarrow \sigma$, so we have \(F\:(fix_{\sigma} \: F) = fix_{\sigma} \: F\).

Using lambda abstraction and $fix_{\sigma}$, we can regard the recursive $\text{letrec}$ declaration form as an abbreviation:

\[\text{letrec } f:\sigma = M \text{ in } N \quad \stackrel{\text{def}}{=} \quad \text{let } f:\sigma = (fix_{\sigma} \: \lambda f:\sigma.M) \text{ in } N.\]

Since we often use $\text{letrec}$ to define functions, we also adopt the syntactic sugar

\[\text{letrec } f(x:\tau):\sigma = M \text{ in } N \quad \stackrel{\text{def}}{=} \quad \text{letrec } f: \tau \rightarrow \sigma = \lambda x:\tau .\:M \text{ in } N.\]

Based on the definition of a fixed point (a fixed point of $F$ is a value $x:\sigma$ such that $F(x) = x$), it is easy to derive the equational axiom for $fix_{\sigma}$

\[M \: (fix_{\sigma} \: M) = fix_{\sigma} \: M , \tag*{($fix$)}\]

where $M$ is an arbitrary function with type $\sigma \rightarrow \sigma$.

In this equation, the expression $fix_{\sigma} \: M$ corresponds to the “fixed point $x$” in our definition.

To see how $(fix)$ works, let’s continue the factorial example. Using $fix_{nat \rightarrow nat}$, the factorial function may be written $fact \stackrel{\text{def}}{=} fix_{nat \rightarrow nat} F$, where $F$ is written out below.

\[F \: \stackrel{\text{def}}{=} \: \lambda f:nat \rightarrow nat .\: \lambda y:nat .\: (\text{if } Eq?\:y \: 0 \text{ then } 1 \text{ else } y*f\:(y-1))\]

$F$ is obtained by converting the recursive definition $f:\sigma = M$ into the corresponding function $\lambda f :\sigma.M$, as we described before introducing the definition of a fixed point.

To simplify the writing, we will drop the subscript of $fix$ in the calculation below. To compute \(fact \: n\), we may expand the definition, and use reduction to obtain the following:

\[\begin{split} fact \: n \: &\equiv \: (fix \: F)\:n\\ & \twoheadrightarrow \: F \: (fix \: F)\:n\\ &\equiv \: (\lambda f:nat \rightarrow nat .\: \lambda y:nat .\: (\text{if } Eq?\:y \: 0 \text{ then } 1 \text{ else } y*f\:(y-1)))\: (fix \: F)\:n\\ &\twoheadrightarrow \: \text{if } Eq?\:n \: 0 \text{ then } 1 \text{ else } n*(fix \: F)\:(n-1)\\ \end{split}\]

When $n=0$, we can use the axiom for conditional to simplify \(fact \: 0\) to $1$. For $n > 0$, we can simplify the test to obtain \(n*(fix \: F)(n-1)\) and continue as above. For any natural number $n$, it is clear that we will eventually compute \(fact \: n = n!\).
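The same construction can be carried out in OCaml as a rough sketch. Since OCaml is call-by-value, the fixed-point operator below is restricted to function types and uses an extra argument to delay the recursive unfolding; fix and f_functional are illustrative names of our own, not PCF or standard-library primitives.

(* a fixed-point operator for function types; the argument x delays evaluation *)
let rec fix f = fun x -> f (fix f) x

(* the functional F from the text, with "self" playing the role of f *)
let f_functional self y = if y = 0 then 1 else y * self (y - 1)

let fact = fix f_functional

(* fact 5 = 120, as expected *)
let _ = assert (fact 5 = 120)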

This post is licensed under CC BY 4.0 by the author.
