
On Sun, Jun 21, 2009 at 05:53:04PM +0100, Andrew Coppin wrote:
> OK, so I'm guessing there might be one or two (!) people around here who know something about the lambda calculus.
>
> I've written a simple interpreter that takes any valid lambda expression and performs as many beta reductions as possible. When the input is first received, all the variables are renamed to be unique.
>
> Question: Does this guarantee that the reduction sequence will never contain name collisions?
>
> I have a sinking feeling that it does not. However, I can find no counter-example as yet. If somebody here can provide either a proof or a counter-example, that would be helpful.
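
Short answer: if your substitution doesn't rename binders as it goes, then no, a one-time renaming is not enough. Beta reduction can duplicate a subterm; after duplication the two copies share bound-variable names, and a later substitution can capture. Here is a minimal Haskell sketch of the problem (Term, subst and step are illustrative names, not your actual code):

    data Term = Var String | Lam String Term | App Term Term
      deriving Show

    -- Naive substitution: replace free occurrences of x with s.  It never
    -- renames a binder, so a free variable of s can be captured.
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)
      | x == y    = s
      | otherwise = Var y
    subst x s (Lam y b)
      | x == y    = Lam y b                 -- x is shadowed; stop here
      | otherwise = Lam y (subst x s b)
    subst x s (App f a) = App (subst x s f) (subst x s a)

    -- One normal-order beta step, if there is one.
    step :: Term -> Maybe Term
    step (App (Lam x b) a) = Just (subst x a b)
    step (App f a) = case step f of
      Just f' -> Just (App f' a)
      Nothing -> fmap (App f) (step a)
    step (Lam x b) = fmap (Lam x) (step b)
    step (Var _)   = Nothing

    -- (\x. x x) (\a. \b. a b): every binder has a distinct name.
    ex :: Term
    ex = App (Lam "x" (App (Var "x") (Var "x")))
             (Lam "a" (Lam "b" (App (Var "a") (Var "b"))))

Every binder in ex has a distinct name (x, a, b), yet after three steps this reducer produces \b. \b. b b, where the correct answer is (up to alpha) \b. \b'. b b': the free b has been captured by the inner binder. So you need capture-avoiding substitution, which renames (or otherwise avoids) clashing binders at each substitution, not just once at the start. The representations below are the standard ways to make that painless.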
Lambda calculus is one of those things that seems simple in theory (because it IS simple in theory) but can be tricky to implement correctly. I recommend reading the first few chapters of Benjamin Pierce's "Types and Programming Languages" for a good discussion of the issues involved, including explanations and implementations of both a "named" style and a "de Bruijn index" style. After that, "I am not a number, I am a free variable" by McBride and McKinna explains a "locally nameless" style which mixes the best of both worlds.
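
If you do try the de Bruijn style, the core is pleasantly small. Here's a self-contained sketch (the names DB, shift, substDB and beta are mine, following the standard shift/substitute development from TAPL's chapter on nameless terms):

    data DB = V Int | L DB | A DB DB
      deriving (Eq, Show)

    -- Add d to every variable index >= c (the free ones); c counts
    -- how many binders we have passed under.
    shift :: Int -> Int -> DB -> DB
    shift d c (V k)
      | k < c     = V k
      | otherwise = V (k + d)
    shift d c (L b)   = L (shift d (c + 1) b)
    shift d c (A f a) = A (shift d c f) (shift d c a)

    -- Replace variable j with s, shifting s as we go under binders.
    substDB :: Int -> DB -> DB -> DB
    substDB j s (V k)
      | k == j    = s
      | otherwise = V k
    substDB j s (L b)   = L (substDB (j + 1) (shift 1 0 s) b)
    substDB j s (A f a) = A (substDB j s f) (substDB j s a)

    -- Beta: (\. b) a  -->  substitute a shifted copy of a for index 0,
    -- then shift the result back down by one.
    beta :: DB -> DB
    beta (A (L b) a) = shift (-1) 0 (substDB 0 (shift 1 0 a) b)
    beta t           = t

With indices there are no names to collide: alpha-equivalent terms are syntactically equal, so capture cannot arise at all. The price is the shift bookkeeping and terms you can't read, which is what the locally nameless style addresses by using indices for bound variables and ordinary names for free ones.

-Brent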