In mathematical logic and theoretical computer science, a rewrite system has the (strong) normalization property, or is terminating, if every term is strongly normalizing; that is, if every sequence of rewrites starting from a term eventually terminates in an irreducible term, also called a normal form. A rewrite system has the weak normalization property if for every term there exists at least one sequence of rewrites that eventually yields a normal form. For example, the system with the two rules [math]\displaystyle{ a \rightarrow b }[/math] and [math]\displaystyle{ a \rightarrow a }[/math] is weakly normalizing (the normal form [math]\displaystyle{ b }[/math] can always be reached) but not strongly normalizing (the rule [math]\displaystyle{ a \rightarrow a }[/math] can be applied forever).
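To make the distinction concrete, the following Haskell sketch (a toy system of our own choosing, matching the two rules above; the names Obj, rewrites, and sequencesUpTo are hypothetical) enumerates bounded rewrite sequences and exhibits both behaviours:

```haskell
-- Toy abstract rewrite system: objects A and B, rules A -> A and A -> B.
data Obj = A | B deriving (Eq, Show)

-- All one-step rewrites of an object.  B has no rewrites, so B is a normal form.
rewrites :: Obj -> [Obj]
rewrites A = [A, B]
rewrites B = []

-- All rewrite sequences of length at most n starting from a given object.
sequencesUpTo :: Int -> Obj -> [[Obj]]
sequencesUpTo 0 x = [[x]]
sequencesUpTo n x =
  [x] : [ x : rest | y <- rewrites x, rest <- sequencesUpTo (n - 1) y ]

main :: IO ()
main = do
  -- Weak normalization holds: some sequence from A ends in the normal form B.
  print (any (\s -> last s == B) (sequencesUpTo 3 A))   -- True
  -- Strong normalization fails: the sequence A -> A -> A -> A never
  -- reaches a normal form, and can be extended indefinitely.
  print ([A, A, A, A] `elem` sequencesUpTo 3 A)         -- True
```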
The pure untyped lambda calculus does not satisfy the strong normalization property, nor even the weak normalization property. Consider the term [math]\displaystyle{ \lambda x . x x x }[/math]. It has the following rewrite rule: for any term [math]\displaystyle{ t }[/math],

[math]\displaystyle{ (\lambda x . x x x) t \rightarrow t t t. }[/math]
But consider what happens when we apply [math]\displaystyle{ \lambda x . x x x }[/math] to itself:

[math]\displaystyle{ \begin{align} (\lambda x . x x x) (\lambda x . x x x) & \rightarrow (\lambda x . x x x) (\lambda x . x x x) (\lambda x . x x x) \\ & \rightarrow (\lambda x . x x x) (\lambda x . x x x) (\lambda x . x x x) (\lambda x . x x x) \\ & \rightarrow \cdots \end{align} }[/math]
Each step adds another copy of [math]\displaystyle{ \lambda x . x x x }[/math], and since this leftmost application is the only redex in each resulting term, the rewrite sequence is forced and never reaches a normal form. Therefore the term [math]\displaystyle{ (\lambda x . x x x) (\lambda x . x x x) }[/math] is neither strongly nor weakly normalizing.
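This divergence can be checked mechanically. The following minimal Haskell sketch (the data type and function names are hypothetical; the naive substitution is safe here only because the example term is closed, so no variable capture can occur) performs one leftmost-outermost β-reduction step at a time and prints the size of the term, which grows without bound:

```haskell
-- Untyped lambda terms with string-named variables.
data Term = Var String | Abs String Term | Ap Term Term

-- Naive substitution (no capture avoidance): adequate only because the
-- term substituted below, omega3, is closed.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Abs y b) = if x == y then Abs y b else Abs y (subst x s b)
subst x s (Ap f a)  = Ap (subst x s f) (subst x s a)

-- One leftmost-outermost beta-reduction step, if a redex exists.
step :: Term -> Maybe Term
step (Ap (Abs x b) a) = Just (subst x a b)
step (Ap f a)         = case step f of
                          Just f' -> Just (Ap f' a)
                          Nothing -> Ap f <$> step a
step (Abs x b)        = Abs x <$> step b
step (Var _)          = Nothing

-- Number of syntax-tree nodes in a term.
size :: Term -> Int
size (Var _)   = 1
size (Abs _ b) = 1 + size b
size (Ap f a)  = 1 + size f + size a

-- omega3 is \x. x x x; we reduce omega3 applied to itself.
omega3 :: Term
omega3 = Abs "x" (Ap (Ap (Var "x") (Var "x")) (Var "x"))

main :: IO ()
main = mapM_ (print . size) (take 5 (iterate reduceOnce (Ap omega3 omega3)))
  where reduceOnce t = maybe t id (step t)
```

Running this prints 13, 20, 27, 34, 41: each reduction step adds another copy of [math]\displaystyle{ \lambda x . x x x }[/math].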
Various systems of typed lambda calculus, including the simply typed lambda calculus, Jean-Yves Girard's System F, and Thierry Coquand's calculus of constructions, are strongly normalizing.
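For the simply typed case, one can see directly why the counterexample above is ruled out: [math]\displaystyle{ \lambda x . x x x }[/math] cannot be assigned a simple type. The following Haskell fragment illustrates this (plain Hindley–Milner inference plays the role of simple typing here; note that System F is more permissive and can type some self-applications, yet is still strongly normalizing):

```haskell
-- To type (x x x), x would need a type t with t = t -> t -> b, which the
-- occurs check rejects.  Uncommenting the next line makes GHC fail with
-- an "occurs check: cannot construct the infinite type" error.
--
-- omega3 = \x -> x x x

-- By contrast, simply typable terms such as ordinary composition are
-- strongly normalizing.
compose :: (b -> c) -> (a -> b) -> a -> c
compose f g x = f (g x)

main :: IO ()
main = print (compose (+ 1) (* 2) 20)   -- terminates with 41
```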
A lambda calculus system with the normalization property can be viewed as a programming language in which every program terminates. Although this is a very useful property, it has a drawback: such a language cannot be Turing complete,[1] since a Turing-complete language must be able to express non-terminating computations. That means that there are computable functions that cannot be defined in the simply typed lambda calculus (and similarly there are computable functions that cannot be computed in the calculus of constructions or System F).
As an example, it is impossible to define a self-interpreter in any of the calculi cited above.[2][3]
Here, by "self-interpreter" we mean a program that takes a representation of a source term in some plain format (such as a string of characters) and returns a representation of the corresponding normalized term. This impossibility result does not hold for other definitions of "self-interpreter". For example, some authors have referred to functions of type [math]\displaystyle{ \pi\,\tau \to \tau }[/math] as self-interpreters, where [math]\displaystyle{ \pi\,\tau }[/math] is the type of representations of [math]\displaystyle{ \tau }[/math]-typed terms. To avoid confusion, we will refer to these functions as self-recognizers. Brown and Palsberg showed that self-recognizers can be defined in several strongly normalizing languages, including System F and System Fω.[4] This is possible because the type of an encoded term is reflected in the type of its representation, which blocks the construction of a diagonal argument. In their paper, Brown and Palsberg claim to disprove the "conventional wisdom" that self-interpretation is impossible (and they refer to this very Wikipedia page as an example of that conventional wisdom), but what they actually disprove is the impossibility of self-recognizers, a different concept. In their follow-up work, they switch to the more specific "self-recognizer" terminology used here, notably distinguishing self-recognizers from "self-evaluators", of type [math]\displaystyle{ \pi\,\tau \to \pi\,\tau }[/math].[5] They also recognize that implementing self-evaluation seems harder than self-recognition, and leave its implementation in a strongly normalizing language as an open problem.
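As an illustration of what a self-recognizer can look like, here is a minimal Haskell sketch using a GADT (the names Rep, Lift, Lam, App, and unquote are hypothetical; this is a Haskell analogue in the spirit of Brown and Palsberg's typed higher-order abstract syntax, not their actual System F code). The type Rep t plays the role of [math]\displaystyle{ \pi\,\tau }[/math], and unquote has the self-recognizer type [math]\displaystyle{ \pi\,\tau \to \tau }[/math]:

```haskell
{-# LANGUAGE GADTs #-}

-- Rep t: a typed representation of a term of type t.  The type of the
-- encoded term is reflected in the type of its representation.
data Rep t where
  Lift :: t -> Rep t                       -- embed a host value (for variables)
  Lam  :: (Rep a -> Rep b) -> Rep (a -> b) -- abstraction, higher-order style
  App  :: Rep (a -> b) -> Rep a -> Rep b   -- application

-- The self-recognizer: maps a representation to the value it represents.
unquote :: Rep t -> t
unquote (Lift v)  = v
unquote (Lam f)   = \a -> unquote (f (Lift a))
unquote (App f a) = unquote f (unquote a)

-- Example: the representation of (\x -> x) applied to 42 unquotes to 42.
main :: IO ()
main = print (unquote (App (Lam (\x -> x)) (Lift (42 :: Int))))
```

Because a value of type Rep t can only be unquoted at the type t recorded in its index, a representation cannot be applied to itself at a mismatched type, which is the step a diagonal argument would require.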