Detecting Cycles in Data Structures

In a pure language, is it possible to detect cycles in recursive data structures? For example, is it possible to determine that "cyclic" has a loop? ...

data Expr = Constant Int | Addition Expr Expr

cyclic :: Expr
cyclic = Addition (Constant 1) cyclic

Or phrased differently, is it possible to make "Expr" an instance of "Eq" such that cyclic == cyclic is smart enough to avoid a recursive descent?

-Tom

Tom Hawkins wrote:
In a pure language, is it possible to detect cycles in recursive data structures? For example, is it possible to determine that "cyclic" has a loop? ...
data Expr = Constant Int | Addition Expr Expr
cyclic :: Expr
cyclic = Addition (Constant 1) cyclic
Or phrased differently, is it possible to make "Expr" an instance of "Eq" such that cyclic == cyclic is smart enough to avoid a recursive descent?
No. And there is nothing that says that your definition of cyclic will actually have a cycle in the implementation. -- Lennart

Lennart Augustsson wrote:
Tom Hawkins wrote:
In a pure language, is it possible to detect cycles in recursive data structures? For example, is it possible to determine that "cyclic" has a loop? ...
data Expr = Constant Int | Addition Expr Expr
cyclic :: Expr
cyclic = Addition (Constant 1) cyclic
Or phrased differently, is it possible to make "Expr" an instance of "Eq" such that cyclic == cyclic is smart enough to avoid a recursive descent?
No.
Bummer -- but as I suspected.
And there is nothing that says that your definition of cyclic will actually have a cycle in the implementation.
Would you elaborate? Are you referring to the possibility that "cyclic", or at least the second Addition operand, may not be evaluated? Thanks! -Tom
-- Lennart

Hello,
On 10/27/05, Tom Hawkins
...
data Expr = Constant Int | Addition Expr Expr
cyclic :: Expr
cyclic = Addition (Constant 1) cyclic
...
And there is nothing that says that your definition of cyclic will actually have a cycle in the implementation.
Would you elaborate? Are you referring to the possibility that "cyclic", or at least the second Addition operand, may not be evaluated?
An implementation that does not do updating of values (i.e. uses "call by name" instead of "call by need") will not generate a cyclic structure; instead every time you ask for the value of "cyclic" it will create a new expression. -Iavor
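To see Iavor's point in action under GHC, here is a small sketch (mine, not from the thread; depthTo is a made-up helper) that uses Debug.Trace to watch how often the right-hand side of "cyclic" is actually constructed:

import Debug.Trace (trace)

data Expr = Constant Int | Addition Expr Expr

cyclic :: Expr
cyclic = trace "building cyclic" (Addition (Constant 1) cyclic)

-- Follow the second operand of each Addition down to a fixed depth.
depthTo :: Int -> Expr -> Int
depthTo 0 _              = 0
depthTo n (Addition _ e) = 1 + depthTo (n - 1) e
depthTo _ (Constant _)   = 0

-- Under call-by-need (GHC), "building cyclic" is printed only once: the
-- thunk is updated in place and then shared. Under call-by-name it would
-- be printed at every step, since each use of the name re-expands.
main :: IO ()
main = print (depthTo 10 cyclic)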

Tom Hawkins wrote:
Lennart Augustsson wrote:
Tom Hawkins wrote:
In a pure language, is it possible to detect cycles in recursive data structures? For example, is it possible to determine that "cyclic" has a loop? ...
data Expr = Constant Int | Addition Expr Expr
cyclic :: Expr
cyclic = Addition (Constant 1) cyclic
Or phrased differently, is it possible to make "Expr" an instance of "Eq" such that cyclic == cyclic is smart enough to avoid a recursive descent?
No.
Bummer -- but as I suspected.
And there is nothing that says that your definition of cyclic
will actually have a cycle in the implementation.
Would you elaborate? Are you referring to the possibility that "cyclic", or at least the second Addition operand, may not be evaluated?
No, what I'm referring to is that the Haskell definition says very, very little about implementation details. A valid Haskell implementation might use some kind of call-by-name where each use of a name expands to its definition, which in your case would give an infinite tree for cyclic. Since these kinds of things are actually up to the implementation, there is no way you can detect a cycle, because there might not be one. -- Lennart

On Thu, 27 Oct 2005, Lennart Augustsson
Tom Hawkins wrote:
Or phrased differently, is it possible to make "Expr" an instance of "Eq" such that cyclic == cyclic is smart enough to avoid a recursive descent?
No. And there is nothing that says that your definition of cyclic will actually have a cycle in the implementation.
On the other hand, other people have argued that "observable sharing" might be nice to have. See the following paper:

  Observable Sharing for Functional Circuit Description
  Koen Claessen, David Sands

(Some version of it should be available online.) Adding observable sharing to Haskell would imply the loss of full beta equality. If I remember correctly, the authors suggest that we only need the restricted form of beta equality where the argument is a defined value (for some notion of definedness). See the paper for their arguments. -- /NAD

Hello Tom,

Thursday, October 27, 2005, 6:46:41 PM, you wrote:

TH> In a pure language, is it possible to detect cycles in recursive data
TH> structures?

No, but it is possible in GHC. Einar Karttunen, in his "SerTH - Binary Serialization library for Haskell", used makeStableName and unsafeCoerce# to get a number which characterizes the uniqueness of a Haskell value.

-- Best regards, Bulat mailto:bulatz@HotPOP.com
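For the curious, here is a minimal sketch of the technique Bulat mentions (my code, not Einar's SerTH; since Expr is monomorphic, only makeStableName is needed here). It detects sharing that the runtime happens to create, which, as Lennart points out above, the language itself does not guarantee:

import System.Mem.StableName (StableName, makeStableName)

data Expr = Constant Int | Addition Expr Expr

cyclic :: Expr
cyclic = Addition (Constant 1) cyclic

-- Walk the second operand of each Addition, remembering the StableName of
-- every node seen; a repeated StableName means the same heap object recurs.
hasCycle :: Expr -> IO Bool
hasCycle = go []
  where
    go seen e = do
      name <- makeStableName $! e   -- force to WHNF before naming
      if name `elem` seen
        then return True
        else case e of
               Addition _ e2 -> go (name : seen) e2
               Constant _    -> return False

main :: IO ()
main = hasCycle cyclic >>= print   -- prints True under GHC's call-by-need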

This is a very late response to an old thread... Tom Hawkins wrote:
In a pure language, is it possible to detect cycles in recursive data structures? For example, is it possible to determine that "cyclic" has a loop? ...
data Expr = Constant Int | Addition Expr Expr
cyclic :: Expr
cyclic = Addition (Constant 1) cyclic
Or phrased differently, is it possible to make "Expr" an instance of "Eq" such that cyclic == cyclic is smart enough to avoid a recursive descent?
-Tom
--- Perhaps it's obvious, but one way to do this is to make the cycle *implicit* via some kind of a fixed-point operator, which is a trick I used recently in a DSL application. For example, you could define:
data Expr = Const Int
          | Add Expr Expr
          | Loop            -- not exported
  deriving (Eq, Show)
The purpose of Loop is to "mark" the point where a cycle exists. Instead of working with values of type Expr, you work with values of type Expr -> Expr, or Fix Expr:
type Fix a = a -> a
For example:
fe1,fe2 :: Fix Expr
fe1 e = Add (Const 1) (Const 1)  -- non-recursive
fe2 e = Add (Const 1) e          -- recursive
You can always get the value of an Expr from a Fix Expr whenever you need it, by computing the fixed point:
fix f = x where x = f x
e1,e2 :: Expr
e1 = fix fe1  -- finite
e2 = fix fe2  -- infinite
Note that e2 is equivalent to your "cyclic". But now we can also test for equality:
instance Eq (Fix Expr) where
  fe1 == fe2 = fe1 Loop == fe2 Loop
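(A small practical note of mine, not from the thread: since Fix Expr is a synonym for Expr -> Expr, a modern GHC needs two extensions to accept this instance head:)

{-# LANGUAGE TypeSynonymInstances, FlexibleInstances #-}
-- TypeSynonymInstances allows the synonym in the instance head;
-- FlexibleInstances allows the expanded head Eq (Expr -> Expr).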
For example, fe2 == fe2 returns "True", whereas e2 == e2 (i.e. your cyclic == cyclic) diverges.

Of course, note that, although fe1 == fe2 implies that fix fe1 == fix fe2, the reverse is not true, since there will always be loops of varying degrees of unwinding that are semantically equivalent, but not "syntactically", or structurally, equivalent. Thus this definition of equality is only approximate, but still, it's useful.

If you want to have multiple loops, or a loop not at the top level, then you need to add some kind of a loop constructor to Expr. I've sketched that idea below, and pointed out a couple of other useful ideas, such as showing a loop, and mapping a function over a loop without unwinding it.

I hope this helps,

-Paul

----------------------------------------------------------------------
module Cyclic where
Loop now needs an ID because there may be more than one of them.
data Expr = Const Int
          | Add Expr Expr
          | Rec (Fix Expr)  -- implicit loop
          | Loop ID         -- not exported
type Fix a = a -> a
type ID = Int
To check equality of Exprs, and to show them, we need to supply unique IDs to each recursive loop, which we do via a simple counter.
instance Eq Expr where
  e1 == e2 =
    let eq (Const x)   (Const y)     n = x == y
        eq (Loop i1)   (Loop i2)     n = i1 == i2
        eq (Add e1 e2) (Add e1' e2') n = eq e1 e1' n && eq e2 e2' n
        eq (Rec fe1)   (Rec fe2)     n = eq (fe1 (Loop n)) (fe2 (Loop n)) (n+1)
        eq _ _ n = False
    in eq e1 e2 0
instance Show Expr where
  show e =
    let show' (Const x)   n = "(Const " ++ show x ++ ")"
        show' (Add e1 e2) n = "(Add " ++ show' e1 n ++ " " ++ show' e2 n ++ ")"
        show' (Loop i)    n = "(Loop " ++ show i ++ ")"
        show' (Rec fe)    n = "(Rec " ++ show n ++ " " ++ show' (fe (Loop n)) (n+1) ++ ")"
    in show' e 0
Unwinding (i.e. computing fixpoints). Note: unwind should never see a Loop constructor.
unwind :: Expr -> Expr
unwind (Add e1 e2) = Add (unwind e1) (unwind e2)
unwind (Rec fe)    = x where x = unwind (fe x)
unwind e           = e
The 2nd equation above is analogous to:

fix f = x where x = f x

And these two equations could also be written as:

fix f = f (fix f)
unwind (Rec fe) = unwind (fe (Rec fe))

Examples:
fe1,fe2 :: Fix Expr
fe1 e = Add (Const 1) (Const 1)  -- non-recursive
fe2 e = Add (Const 1) e          -- recursive
e1,e2,e3 :: Expr
e1 = Rec fe1           -- no real loop
e2 = Rec fe2           -- top-level loop
e3 = Add e2 (Const 0)  -- lower-level loop
e4 = Rec (\e1-> Add (Const 1) (Rec (\e2-> Add e1 e2)))  -- nested loop
b1,b2 :: Bool
b1 = e3==e3  -- returns True
b2 = e3==e2  -- returns False
e1u,e2u,e3u :: Expr
e1u = unwind e1  -- finite
e2u = unwind e2  -- infinite
e3u = unwind e3  -- infinite
e4u = unwind e4  -- infinite
Now suppose we define a function, say mapE, that applies a function to the leaves (in this case the Const Int's) of an Expr. For example:

mapE succ (Add (Const 1) (Const 2)) => Add (Const 2) (Const 3)

Then if we define something like:

cyclic1 = Add (Const 1) cyclic1

and do "mapE succ cyclic1", we'd like to get:

cyclic2 = Add (Const 2) cyclic2

However, most implementations will unwind the loop -- i.e. we don't get the sharing implied by the explicit loop. However, by using Rec to express loops implicitly, we can get around this as follows:
mapE :: (Int->Int) -> Expr -> Expr
mapE f e = mapE' f e 0
  where mapE' f (Const i)   n = Const (f i)
        mapE' f (Add e1 e2) n = Add (mapE' f e1 n) (mapE' f e2 n)
        mapE' f (Rec fe)    n = Rec (absLoop n (mapE' f (fe (Loop n)) (n+1)))
        mapE' f (Loop i)    n = Loop i
absLoop n e turns e :: Expr into a Fix Expr by "abstracting out" Loop n:
absLoop :: Int -> Expr -> Fix Expr
absLoop n e = \e' ->
  let abs (Loop n') | n==n' = e'
      abs (Add e1 e2) = Add (abs e1) (abs e2)
      abs e = e
  in abs e
Examples:
e5 = mapE succ e2
e6 = Rec (\e -> Add (Const 2) e)
b3 = e5==e6  -- returns True!
e7 = mapE succ e4
e8 = Rec (\e1-> Add (Const 2) (Rec (\e2-> Add e1 e2)))
b4 = e7==e8  -- returns True!

Henning Thielemann wrote:
On Fri, 18 Nov 2005, Paul Hudak wrote:
For example:
fe1,fe2 :: Fix Expr
fe1 e = Add (Const 1) (Const 1)  -- non-recursive
fe2 e = Add (Const 1) e          -- recursive
Do you mean fe1 _ = Add (Const 1) Loop ?
No, I really meant it as written. I included this example just to point out that an expression didn't have to be infinite to represent it as the fixpoint of a function. Note that the least fixpoint of fe1 is "Add (Const 1) (Const 1)". Loop shouldn't ever be used by the user -- that's why I added the comment that it was "not exported". It's just there to "open" the function for inspection. In this sense, the trick is analogous to the use of higher-order abstract syntax -- i.e. using the meta-language (Haskell) to represent functions in the object language (expressions). -Paul
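(To make that unfolding explicit -- my elaboration, using the definitions above:)

fix fe1 = fe1 (fix fe1)            -- one unfolding of fix
        = Add (Const 1) (Const 1)  -- fe1 ignores its argument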

--- Paul Hudak
This is a very late response to an old thread...
Tom Hawkins wrote:
In a pure language, is it possible to detect cycles in recursive data structures? For example, is it possible to determine that "cyclic" has a loop? ...
data Expr = Constant Int | Addition Expr Expr
cyclic :: Expr
cyclic = Addition (Constant 1) cyclic
Or phrased differently, is it possible to make "Expr" an instance of "Eq" such that cyclic == cyclic is smart enough to avoid a recursive descent?
I'm a little confused. It's one thing to demonstrate that a (possibly infinite) digraph contains a loop, but quite another to show that it does not. Carrying around a copy of the subgraph consisting of nodes actually visited is workable in a pure language, but there's no guarantee of termination.
-Tom
---
Perhaps it's obvious, but one way to do this is to make the cycle *implicit* via some kind of a fixed-point operator, which is a trick I used recently in a DSL application.
So, you imagine f to be a function that (non-deterministically) follows an edge in a digraph. The idea being that G is a (possibly infinite) digraph and F is a subgraph. The "function" f would non-deterministically select a vertex of F, follow an edge in G (again chosen non-deterministically), add the (possibly new) edge to F, and return the resulting graph. Then, you are asking if f has a fixpoint in this broader context?
For example, you could define:
data Expr = Const Int
          | Add Expr Expr
          | Loop            -- not exported
  deriving (Eq, Show)
The purpose of Loop is to "mark" the point where a cycle exists. Instead of working with values of type Expr, you work with values of type Expr -> Expr, or Fix Expr:
type Fix a = a -> a
For example:
fe1,fe2 :: Fix Expr
fe1 e = Add (Const 1) (Const 1)  -- non-recursive
fe2 e = Add (Const 1) e          -- recursive
You can always get the value of an Expr from a Fix Expr whenever you need it, by computing the fixed point:
fix f = x where x = f x
If it can be computed. Maybe I'm off-track here, but it seems to me that when you introduce laziness into the equation, you lose any guarantee of there even being a fixpoint, much less one that can be computed.
e1,e2 :: Expr
e1 = fix fe1  -- finite
e2 = fix fe2  -- infinite
Note that e2 is equivalent to your "cyclic". But now we can also test for equality:
instance Eq (Fix Expr) where
  fe1 == fe2 = fe1 Loop == fe2 Loop
For example, fe2 == fe2 returns "True", whereas e2 == e2 (i.e. your cyclic == cyclic) diverges.
Of course, note that, although fe1 == fe2 implies that fix fe1 == fix fe2, the reverse is not true, since there will always be loops of varying degrees of unwinding that are semantically equivalent, but not "syntactically", or structurally, equivalent. Thus this definition of equality is only approximate, but still, it's useful.
I'm lost. Are you saying bottom is bottom?
If you want to have multiple loops, or a loop not at the top level, then you need to add some kind of a loop constructor to Expr. I've sketched that idea below, and pointed out a couple of other useful ideas, such as showing a loop, and mapping a function over a loop without unwinding it.
I hope this helps,
-Paul
===
Gregory Woodhouse

Greg Woodhouse wrote:
--- Paul Hudak
wrote: Tom Hawkins wrote:
In a pure language, is it possible to detect cycles in recursive data structures? For example, is it possible to determine that "cyclic" has a loop? ...
data Expr = Constant Int | Addition Expr Expr
cyclic :: Expr
cyclic = Addition (Constant 1) cyclic
Or phrased differently, is it possible to make "Expr" an instance of "Eq" such that cyclic == cyclic is smart enough to avoid a recursive descent?
I'm a little confused. It's one thing to demonstrate that a (possibly infinite) digraph contains a loop, but quite another to show that it does not. Carrying around a copy of the subgraph consisting of nodes actually visited is workable in a pure language, but there's no guarantee of termination.
The question that was originally posted assumed that the graph was represented using direct recursion in Haskell. In which case you cannot detect a cycle, as previous replies pointed out. Of course, if you represented the graph in some more concrete form -- such as an explicit list of nodes and edges -- then you could detect the cycle using a standard graph algorithm, at the expense of losing the elegance of the Haskell recursion. My post was just pointing out that there is a third possibility, one that retains most of the elegance and abstractness of Haskell, yet still allows detecting cycles.
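To illustrate that second possibility, here is a sketch (mine, not from the thread; Graph and hasCycleG are made-up names) of the concrete representation with a standard depth-first cycle check:

import qualified Data.Set as Set

-- A concrete graph: each node is an Int paired with its successor list.
type Graph = [(Int, [Int])]

successors :: Graph -> Int -> [Int]
successors g n = maybe [] id (lookup n g)

-- Standard DFS: a cycle exists iff we revisit a node on the current path.
hasCycleG :: Graph -> Bool
hasCycleG g = any (go Set.empty . fst) g
  where
    go path n
      | n `Set.member` path = True
      | otherwise           = any (go (Set.insert n path)) (successors g n)

-- "cyclic" as a graph: node 0 is the Addition, node 1 is Constant 1.
-- hasCycleG [(0, [1, 0]), (1, [])]  ==  True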
Perhaps it's obvious, but one way to do this is to make the cycle *implicit* via some kind of a fixed-point operator, which is a trick I used recently in a DSL application.
So, you imagine f to be a function that (non-deterministically) follows an edge in a digraph. The idea being that G is a (possibly infinite) digraph and F is a subgraph. The "function" f would non-deterministically select a vertex of F, follow an edge in G (again chosen non-deterministically), add the (possibly new) edge to F, and return the resulting graph. Then, you are asking if f has a fixpoint in this broader context?
I don't understand this... and therefore it's probably not what I imagine! I'm saying simply that a cyclic graph can be represented as the fixed point of a function.
For example, you could define:
data Expr = Const Int
          | Add Expr Expr
          | Loop            -- not exported
  deriving (Eq, Show)
The purpose of Loop is to "mark" the point where a cycle exists. Instead of working with values of type Expr, you work with values of type Expr -> Expr, or Fix Expr:
type Fix a = a -> a
For example:
fe1,fe2 :: Fix Expr
fe1 e = Add (Const 1) (Const 1)  -- non-recursive
fe2 e = Add (Const 1) e          -- recursive
You can always get the value of an Expr from a Fix Expr whenever you need it, by computing the fixed point:
fix f = x where x = f x
If it can be computed. Maybe I'm off-track here, but it seems to me that when you introduce laziness into the equation, you lose any guarantee of there even being a fixpoint, much less one that can be computed.
Actually it's the other way around -- laziness is what makes this work! The least fixpoint of fe2 in a strict language, for example, is bottom.
e1,e2 :: Expr
e1 = fix fe1  -- finite
e2 = fix fe2  -- infinite
Note that e2 is equivalent to your "cyclic". But now we can also test for equality:
instance Eq (Fix Expr) where
  fe1 == fe2 = fe1 Loop == fe2 Loop
For example, fe2 == fe2 returns "True", whereas e2 == e2 (i.e. your cyclic == cyclic) diverges.
Of course, note that, although fe1 == fe2 implies that fix fe1 == fix fe2, the reverse is not true, since there will always be loops of varying degrees of unwinding that are semantically equivalent, but not "syntactically", or structurally, equivalent. Thus this definition of equality is only approximate, but still, it's useful.
I'm lost. Are you saying bottom is bottom?
I suspect from your other post that you haven't seen the standard trick of encoding infinite data structures as fixpoints. Suppose you have a lambda calculus term for cons, as well as for the numeral 1. Then the infinite list of ones is just:

Y (\ones. cons 1 ones)

where Y (aka the "paradoxical combinator" or "fixed point combinator") is defined as:

\f. (\x. f (x x)) (\x. f (x x))

To see this, try unfolding the above term a few iterations using normal-order reduction. Note that the term has no recursion in it whatsoever.

Now, my earlier point above is that:

Y (\ones. cons 1 ones)

unwinds to the same term as:

Y (\ones. cons 1 (cons 1 ones))

yet the two terms (\ones. cons 1 ones) and (\ones. cons 1 (cons 1 ones)) are not the same (in fact they're not lambda convertible, either).

-Paul

--- Paul Hudak
I suspect from your other post that you haven't seen the standard trick of encoding infinite data structures as fixpoints. Suppose you have a lambda calculus term for cons, as well as for the numeral 1. Then the infinite list of ones is just:
Not in quite those terms. Intuitively, I think of the infinite data structure here as being a kind of a "completion" of the space of ordinary (finite) graphs to a space in which the given function actually does have a fixed point.
Y (\ones. cons 1 ones)
where Y (aka the "paradoxical combinator" or "fixed point combinator") is defined as:
\f. (\x. f (x x)) (\x. f (x x))
Now, this I have seen, but it frankly strikes me as a "formal trick". I've never felt that this definition of Y gave me much insight into the computation you describe below.
To see this, try unfolding the above term a few iterations using normal-order reduction. Note that the term has no recursion in it whatsoever.
Now, my earlier point above is that:
Y (\ones. cons 1 ones)
unwinds to the same term as:
Y (\ones. cons 1 (cons 1 ones))
yet the two terms (\ones. cons 1 ones) and (\ones. cons 1 (cons 1 ones)) are not the same (in fact they're not lambda convertible, either).
-Paul
And this is what leaves me scratching my head. If you leave off the "ones", then you get a sequence of finite lists [1], [1, 1], ... that seems to approximate the infinite list, but I find it difficult to imagine how you might pluck a formula like \i. 1 (the ith term) out of the air.
===
Gregory Woodhouse

Greg Woodhouse wrote:
--- Paul Hudak
wrote: Y (\ones. cons 1 ones)
where Y (aka the "paradoxical combinator" or "fixed point combinator") is defined as:
\f. (\x. f (x x)) (\x. f (x x))
Now, this I have seen, but it frankly strikes me as a "formal trick". I've never felt that this definition of Y gave me much insight into the computation you describe below.
To see this, try unfolding the above term a few iterations using normal-order reduction. Note that the term has no recursion in it whatsoever.
Now, my earlier point above is that: Y (\ones. cons 1 ones) unwinds to the same term as: Y (\ones. cons 1 (cons 1 ones))
yet the two terms (\ones. cons 1 ones) and (\ones. cons 1 (cons 1 ones)) are not the same (in fact they're not lambda convertible, either). -Paul
And this is what leaves me scratching my head. If you leave off the "ones", then you get a sequence of finite lists [1], [1, 1], ... that seems to approximate the infinite list, but I find it difficult to imagine how you might pluck a formula like \i. 1 (the ith term) out of the air.
The important property of Y is this:

Y f = f (Y f)

In this way you can see it as "unwinding" the function, one step at a time. If we define f as (\ones. cons 1 ones), then here is one step of unwinding:

Y f ==> f (Y f) ==> cons 1 (Y f)

If you do this again you get:

cons 1 (cons 1 (Y f))

and so on. As you can see, continuing this process yields the infinite list of ones. Now note that if we define g as (\ones. cons 1 (cons 1 ones)), we get:

Y g ==> g (Y g) ==> cons 1 (cons 1 (Y g))

which obviously also yields the infinite list of ones. Yet f is not equal to g. Does this help? -Paul

--- Paul Hudak
The important property of Y is this:
Y f = f (Y f)
Right. This is just a formal statement of the property that f fixes Y f. I'm with you so far.
In this way you can see it as "unwinding" the function, one step at a
time. If we define f as (\ones. cons 1 ones), then here is one step of unwinding:
Y f ==> f (Y f) ==> cons 1 (Y f)
I'm with you here, too, except that we are *assuming* that Y has the stated property. If it does, it's absolutely clear that the above should hold.
If you do this again you get:
cons 1 (cons 1 (Y f))
and so on. As you can see, continuing this process yields the infinite list of ones.
Right.
Now note that if we define g as (\ones. cons 1 (cons 1 ones)), we get:
Y g ==> g (Y g) ==> cons 1 (cons 1 (Y g))
Now, this is where things get a little mysterious. Where did g come from? I understand that the "x x" in the formal definition of Y is what makes everything work (or, at least I think I do). When I try and think this all through, I wind up with something like this: First of all, I'm after a "list" (named ones) with the property that ones = cons 1 ones. How can I express the existence of such a thing as a fixed point? Well, if it does exist, it must be a fixed point of f = \x. cons 1 x, because

f ones ==> (\x. cons 1 x) ones ==> cons 1 ones ==? ones

Now, if I write

Y = \y. (\x. y (x x)) (\x. y (x x))

then

Y f ==> cons 1 ((\x. cons 1 (x x)) (\x. cons 1 (x x)))

and I'm left to ponder what that might mean. Formally, I can see that this pulls an extra cons 1 out to the left, and so we're led to your g
which obviously also yields the infinite list of ones. Yet f is not equal to g.
To me, all this really seems to be saying is that for any n, there's a lambda expression explicitly involving [1, 1, ..., 1] (n times) that is a "solution" to the above problem, which I interpret (perhaps incorrectly) to mean that *if* there were a "real" list that were a solution to the above recurrence, then for any n, there would be an initial sublist consisting of just 1's. Now, I know that lambda expressions are not lists, though lists can be encoded as lambda expressions. For finite lists, this all seems simple enough, but in this case we seem to have a lambda expression that is not a (finite) list, but that can nevertheless be "approximated" in some sense by finite lists. But I can't really put my finger on just what that sense might be.
Does this help?
Yes, it does. Very much so.
-Paul
===
Gregory Woodhouse

Greg Woodhouse wrote:
--- Paul Hudak
wrote: The important property of Y is this:
Y f = f (Y f)
Right. This is just a formal statement of the property that f fixes Y f. I'm with you so far.
In this way you can see it as "unwinding" the function, one step at a
time. If we define f as (\ones. cons 1 ones), then here is one step of unwinding:
Y f ==> f (Y f) ==> cons 1 (Y f)
I'm with you here, too, except that we are *assuming* that Y has the stated property. If it does, it's absolutely clear that the above should hold.
You can easily verify that Y f = f (Y f):

LHS = Y f
    = (\f. (\x. f (x x)) (\x. f (x x))) f
    = (\x. f (x x)) (\x. f (x x))
    = f ((\x. f (x x)) (\x. f (x x)))

RHS = f (Y f)
    = f ((\f. (\x. f (x x)) (\x. f (x x))) f)
    = f ((\x. f (x x)) (\x. f (x x)))

So (\f. (\x. f (x x)) (\x. f (x x))) is a fixpoint combinator (one of infinitely many).

-- Lennart

--- Lennart Augustsson
You can easily verify that Y f = f (Y f)
LHS = Y f = (\f. (\x. f (x x)) (\x. f (x x))) f = (\x. f (x x)) (\x. f (x x)) = f ((\x. f (x x)) (\x. f (x x)))
RHS = f (Y f) = f ((\f. (\x. f (x x)) (\x. f (x x))) f) = f ((\x. f (x x)) (\x. f (x x)))
So (\f. (\x. f (x x)) (\x. f (x x))) is a fixpoint combinator (one of infinitely many).
-- Lennart
===
Gregory Woodhouse

Greg Woodhouse wrote:

Perhaps the issue is that the manipulations below are purely syntactic,

But all the computation rules of the lambda calculus are "syntactic" in that sense. When you can prove things by symbol pushing, it's the easiest way. But as Paul Hudak mentioned, there are definitions that are equal, but you need something stronger to prove it, like fixpoint induction.

and though they work, it is not at all clear how to make contact with other notions of computability. Perhaps it is that
Y = (\f. (\x. f (x x)) (\x. f (x x)))
is a perfectly sensible definition, it's still just a recipe for a computation. Maybe I'm thinking too hard, but it reminds me of the mu operator. Primitive recursive functions are nice and constructive, but minimization is basically a "search"; there's no guarantee it will work. If I write
g(x) = mu x (f(x) - x)
then I've basically said "look at every x and stop when you find a fixed point". Likewise, given a candidate for f, it's easy to verify that Y f = f (Y f), as you've shown, but can the f be found without some kind of "search"?

There is no search with this definition (as you can see), it's very constructive.
It computes the fixed point, which you can also define as

  fix f = lub { f^i(_|_) | i = 0, 1, 2, ... }

where f^i is f iterated i times. Is that a definition of fixpoint that makes you happier?
Since there are recursive functions that aren't primitive recursive, the answer has to be "no".

Where did primitive recursion come from? Y is used to express general recursion.
Finally, you've exhibited a sequence of fixed points, and in this case it's intuitively clear how these correspond to something we might call an infinite list. But is there an interpretation that makes this precise? The notation
ones = cons 1 ones
ones = cons 1 (cons 1 ones)
...
The list ones is the least upper bound, in the domain of lazy (integer) lists, of the following chain:

  _|_,  cons 1 _|_,  cons 1 (cons 1 _|_),  ...

How much domain theory have you studied? Maybe that's where you can find the connection you're missing? -- Lennart
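That chain can be written out directly in Haskell -- a small sketch of mine, not from the original message; the name approx is made up:

-- approx i f is the i-th element of the ascending chain
--   _|_, f _|_, f (f _|_), ...
approx :: Int -> (a -> a) -> a
approx 0 _ = undefined             -- undefined plays the role of _|_
approx n f = f (approx (n - 1) f)

-- Each step determines one more element of ones:
--   take 3 (approx 4 (\xs -> 1 : xs))  ==  [1,1,1]
-- and the infinite list ones is the least upper bound of the whole chain.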

--- Lennart Augustsson
It computes the fixed point, which you can also define as

  fix f = lub { f^i(_|_) | i = 0, 1, 2, ... }

where f^i is f iterated i times. Is that a definition of fixpoint that makes you happier?
Believe it or not, yes. My background is in mathematics, so something looking more like a fixed point of a continuous map seems a lot more intuitive.
How much domain theory have you studied? Maybe that's where you can find the connection you're missing?
None. But it seems clear that this is a deficit in my education I need to address.
-- Lennart
===
Gregory Woodhouse

On 18/11/05, Greg Woodhouse
--- Lennart Augustsson
wrote: I guess I'm not doing a very good job of expressing myself. I see that if you define Y as you do, then the various functions you list have the property that Y f = f (Y f).
I don't want to draw out a boring discussion that is basically about my own qualms here, especially since I haven't really been able to articulate what it is that bothers me.
Perhaps the issue is that the manipulations below are purely syntactic, and though they work, it is not at all clear how to make contact with other notions of computability. Perhaps it is that
Y = (\f. (\x. f (x x)) (\x. f (x x)))
In a sense, the real definition of Y is Y f = f (Y f), this lambda term just happens to have that property, but such functions aren't rare. One fun one is: Y = (L L L L L L L L L L L L L L L L L L L L L L L L L L L L) where L = λabcdefghijklmnopqstuvwxyzr. (r (t h i s i s a f i x e d p o i n t c o m b i n a t o r))
is a perfectly sensible definition, it's still just a recipe for a computation. Maybe I'm thinking too hard, but it reminds me of the mu operator. Primitive recursive functions are nice and constructive, but minimization is basically a "search"; there's no guarantee it will work. If I write
g(x) = mu x (f(x) - x)
then I've basically said "look at every x and stop when you find a fixed point". Likewise, given a candidate for f, it's easy to verify that Y f = f (Y f), as you've shown, but can the f be found without some kind of "search"? Since there are recursive functions that aren't primitive recursive, the answer has to be "no".
Finally, you've exhibited a sequence of fixed points, and in this case it's intuitively clear how these correspond to something we might call an infinite list. But is there an interpretation that makes this precise? The notation
ones = cons 1 ones ones = cons 1 (cons 1 ones) ...
is suggestive, but only suggestive (at least as far as I can see). Is there a model in which [1, 1 ...] is a real "thing" that is somehow "approached" by the finite lists?
You can easily verify that Y f = f (Y f)
LHS = Y f = (\f. (\x. f (x x)) (\x. f (x x))) f = (\x. f (x x)) (\x. f (x x)) = f ((\x. f (x x)) (\x. f (x x)))
RHS = f (Y f) = f ((\f. (\x. f (x x)) (\x. f (x x))) f) = f ((\x. f (x x)) (\x. f (x x)))
So (\f. (\x. f (x x)) (\x. f (x x))) is a fixpoint combinator (one of infinitely many).
-- Lennart
=== Gregory Woodhouse
"Interaction is the mind-body problem of computing."
--Philip Wadler

Cale Gibbard wrote:
Y = (\f. (\x. f (x x)) (\x. f (x x)))
In a sense, the real definition of Y is Y f = f (Y f), this lambda term just happens to have that property, but such functions aren't rare.
Actually no, the "real" definition is the top one, because the other one isn't even a valid lambda term, since it's recursive! There is no such thing as recursion in the pure lambda calculus. Of course, there are many valid definitions of Y, as you point out, both recursive and non-recursive. But I do believe that the first one above is the most concise non-recursive definition. -Paul

On 18/11/05, Paul Hudak
Cale Gibbard wrote:
Y = (\f. (\x. f (x x)) (\x. f (x x)))
In a sense, the real definition of Y is Y f = f (Y f), this lambda term just happens to have that property, but such functions aren't rare.
Actually no, the "real" definition is the top one, because the other one isn't even a valid lambda term, since it's recursive! There is no such thing as recursion in the pure lambda calculus.
I meant in the sense that Y f = f (Y f) is an axiomatic definition of Y. We're satisfied with any concrete definition of Y which meets that criterion. Of course it's not actually a lambda term, but it is an equation which a lambda term Y can satisfy. Similarly in, say, set theory, we don't say what sets *are* so much as we say what it is that they satisfy, and there are many models of set theory which meet those axioms. Y f = f (Y f) is really the only important property of Y, and any model of it will do as a concrete definition. - Cale
Of course, there are many valid definitions of Y, as you point out, both recursive and non-recursive. But I do believe that the first one above is the most concise non-recursive definition.
-Paul

My simple-mindedness and naïveté begin to bother me. I begin to get lost, and I don't know anymore what the problem is... Greg Woodhouse wrote:
--- Paul Hudak
wrote: ... The important property of Y is this:
Y f = f (Y f)
Right. This is just a formal statement of the property that f fixes Y f. I'm with you so far.
No, not exactly. This *may be* the *definition* of Y in a typed system (Haskell). On the other hand you cannot make Y as a self-applying lambda construct for obvious reasons, and still you can sleep well... ...
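(For the record, self-application can be smuggled past the type checker with a recursive newtype -- a standard trick, sketched here with made-up names Self, unfold, and y; it is not code from this thread:)

newtype Self a = Fold { unfold :: Self a -> a }

-- The untyped \f. (\x. f (x x)) (\x. f (x x)), with each self-application
-- x x written as unfold x x so that it typechecks.
y :: (a -> a) -> a
y f = (\x -> f (unfold x x)) (Fold (\x -> f (unfold x x)))

-- take 5 (y (\ones -> 1 : ones))  ==  [1,1,1,1,1]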
Now note that if we define g as (\ones. cons 1 (cons 1 ones)), we get:
Y g ==> g (Y g) ==> cons 1 (cons 1 (Y g))
Now, this is where things get a little mysterious. Where did g come from?
I understand that the "x x" om formal definition of Y is what makes everything work (or, at least I think I do).
Not exactly. Let's do it in Haskell, without this (x x) stuff... My `g` means something different, though.

fix f = f (fix f)  -- Here you have your Y. No typeless lambda.
g f n = n : f n    -- This is a generic *non-recursive* `repeat`
ones = fix g 1     -- Guess what.

Now (take 30 ones) works perfectly well. 'ones' is a piece of co-data, or a co-recursive stream as many people, e.g., David Turner, would say. It has no finite form. Yet we live happily with it, provided we have a normal reduction scheme (laziness). Why do you want (in a further posting) to have a "real thing", which for you means a finite list? Are you sure that everybody needs the LEAST fixed point? The co-streams give you something different... Frankly, I don't know what the issue is... Jerzy Karczmarczuk

--- jerzy.karczmarczuk@info.unicaen.fr wrote:
fix f = f (fix f) -- Here you have your Y. No typeless lambda.
g f n = n : f n -- This is a generic *non-recursive* `repeat`
ones = fix g 1 -- Guess what.
Very nice! I honestly would not have expected this to work. Simple
cases of lazy evaluation are one thing, but this is impressive.
Incidentally, I only(!) have access to Hugs here in the office, but
defining
fix f = f (fix f)
g f n = n : f n
h f = 1 : map (+1) f
I see that the number of additional reductions required for each successive
term of (fix h) actually seems to go down.
Main> take 1 (fix h)
[1]
(65 reductions, 95 cells)
Main> take 2 (fix h)
[1,2]
(96 reductions, 138 cells)
Main> take 3 (fix h)
[1,2,3]
(123 reductions, 178 cells)
Main> take 4 (fix h)
[1,2,3,4]
(150 reductions, 218 cells)
Main> take 5 (fix h)
[1,2,3,4,5]
(177 reductions, 258 cells)
Main> take 6 (fix h)
[1,2,3,4,5,6]
(204 reductions, 298 cells)
(In case it's not obvious, I'm a complete Haskell newbie, and find this
rather impressive.)
Main>
===
Gregory Woodhouse

On Saturday 19 November 2005 02:16, Greg Woodhouse wrote:
--- jerzy.karczmarczuk@info.unicaen.fr wrote:
fix f = f (fix f) -- Here you have your Y. No typeless lambda.
g f n = n : f n -- This is a generic *non-recursive* `repeat` ones = fix g 1 -- Guess what.
Very nice! I honestly would not have expected this to work. Simple cases of lazy evaluation are one thing, but this is impressive.
[You should read some of his papers, for instance "the most unreliable technique in the world to compute pi". I was ROTFL when I saw the title, and reading it was an eye-opener and fun too.] Ben

2005/11/19, Benjamin Franksen
[You should read some of his papers, for instance "the most unreliable technique in the world to compute pi". I was ROTFL when I saw the title and reading it was an eye-opener and fun too.]
Care to post a link? Seems interesting, but unknown to google :) -- Gracjan

http://users.info.unicaen.fr/~karczma/arpap/

On Sat, 19 Nov 2005, Gracjan Polak wrote:
2005/11/19, Benjamin Franksen
[You should read some of his papers, for instance "the most unreliable technique in the world to compute pi". I was ROTFL when I saw the title and reading it was an eye-opener and fun too.]
Care to post a link? Seems interesting, but unknown to google :)
-- Gracjan

On Saturday 19 November 2005 11:56, Gracjan Polak wrote:
2005/11/19, Benjamin Franksen
[You should read some of his papers, for instance "the most unreliable technique in the world to compute pi". I was ROTFL when I saw the title and reading it was an eye-opener and fun too.]
Care to post a link? Seems interesting, but unknown to google :)
Some are available here: http://users.info.unicaen.fr/~karczma/arpap/ Ben

On Sat, Nov 19, 2005 at 02:27:06AM +0100, Benjamin Franksen wrote:
[You should read some of his papers, for instance "the most unreliable technique in the world to compute pi". I was ROTFL when I saw the title and reading it was an eye-opener and fun too.]
In addition to being clever and terribly funny, the conclusion foreshadows (inspired?) later work on Enron [1]. Andrew [1] http://www.haskell.org/humor/enron.html

Andrew Pimlott: //about my highly spiritual essay on lazy computing of PI//:
In addition to being clever and terribly funny, the conclusion foreshadows (inspired?) later work on Enron [1].
Come on, it is improbable that Master Simon ever read my essay... No,... no comparison. His work on contracts and the usage of FP for this funny branch of math which serves to generate (and to destroy...) *real* money is based on very serious formal research. (So serious that when Simon PJ presented it, people looked at him so attentively and with such concentration that I saw true physical pain on their faces, and I wondered whether it would be a bad idea to call some ambulances...) Jerzy Karczmarczuk

On Sunday 27 November 2005 11:28, jerzy.karczmarczuk@info.unicaen.fr wrote:
Andrew Pimlott:
//about my highly spiritual essay on lazy computing of PI//:
In addition to being clever and terribly funny, the conclusion foreshadows (inspired?) later work on Enron [1].
Come on, it is improbable that Master Simon ever read my essay...
No,... no comparison. His work on contracts and the usage of FP for this funny branch of math which serves to generate (and to destroy...) *real* money, is based on a very serious formal research.
Hmm, do you mean the 'real' money that gets destroyed (for instance) every time a loan is paid back? (like in: bank assets minus outstanding loan; bank liabilities minus amount paid back from checking account (= M1 money).) These are the wonders of double-entry bookkeeping... 'real' money gone with the push of a button... ;) Cheers, Ben

On Fri, Nov 18, 2005 at 11:37:40AM -0500, Paul Hudak wrote:
This is a very late response to an old thread...
ditto :-)
unwind :: Expr -> Expr
unwind (Add e1 e2) = Add (unwind e1) (unwind e2)
unwind (Rec fe)    = x where x = unwind (fe x)
unwind e           = e
Since this discussion started around observing sharing in the implementation, I wanted to see whether, by the time we convert your cunning representation back into an infinite data structure, we have the sharing we hoped to observe in the first place.
fe2 e = Add (Const 1) e  -- recursive
e2 = Rec fe2             -- top-level loop
e2u = unwind e2          -- infinite
main = walk e4u where walk (Add e1 e2) = walk e2

blows up, showing that we do not. The problem with unwind seems to be that the computation of the fixed point keeps calling unwind, which keeps reconstructing the fixed point:

e2u = unwind e2
    = x where x = unwind ((\e -> Add (Const 1) e) x)
    = Add (Const 1) (unwind x)
    = Add (Const 1) (Add (Const 1) (unwind (unwind x)))
    = ...

I then tried

unwind (Rec fe) = unwind (fix fe)
fix f = x where x = f x

even though I didn't think it would work: (fix fe) would create a properly sharing structure, but unwind would unshare it:

e2u = unwind (x where x = ((\e -> Add (Const 1) e) x))   ( = Add (Const 1) x)
    = Add (Const 1) (unwind (x where x = Add (Const 1) x))
    = Add (Const 1) (Add (Const 1) (unwind (x where x = Add (Const 1) x)))
    = ...

This does blow up in ghci, but not in ghc (6.4.1), even without optimization. I'm not quite sure why, but anyway I want a version that exhibits sharing even in any reasonable implementation. Your message gives the technique (in mapE); we only have to apply it to unwind. But there is a problem--your code has a bug!
mapE :: (Int->Int) -> Expr -> Expr
mapE f e = mapE' f e 0
  where mapE' f (Const i)   n = Const (f i)
        mapE' f (Add e1 e2) n = Add (mapE' f e1 n) (mapE' f e2 n)
        mapE' f (Rec fe)    n = Rec (absLoop n (mapE' f (fe (Loop n)) (n+1)))
        mapE' f (Loop i)    n = Loop i
absLoop :: Int -> Expr -> Fix Expr
absLoop n e = \e' ->
  let abs (Loop n') | n==n' = e'
      abs (Add e1 e2) = Add (abs e1) (abs e2)
      abs e = e
  in abs e
e4 = Rec (\e1-> Add (Const 1) (Rec (\e2-> Add e1 e2)))  -- nested loop
e7 = mapE succ e4
e8 = Rec (\e1-> Add (Const 2) (Rec (\e2-> Add e1 e2)))
b4 = e7==e8  -- returns True!
Notice that absLoop does not look inside Rec. But there could be a Loop (with the right n) there! e7 is actually

Rec (\e1-> Add (Const 2) (Rec (\e2-> Add (Loop 0) e2)))

We might also cast a suspicious eye at (==), which spuriously returned True!

Really, we want absLoop to eliminate all the Loops it can find. But it can't, because it only knows the replacement expression for one Loop. It would be simpler for the Loop just to contain the expression. To enable that, I added a constructor Stop that is like Loop, except it takes an Expr instead of an Int. I use this constructor for my sharing unwind as well; Loop is only needed for (==). (It would probably be even better to add an annotation type argument to Expr; this could enable stricter typechecking that would have caught the bug.)

Complete code:
{-# OPTIONS -fglasgow-exts #-}
data Expr = Const Int
          | Add Expr Expr
          | Rec (Fix Expr)  -- implicit loop
          | Loop ID         -- not exported
          | Stop Expr
type Fix a = a -> a
type ID = Int
instance Eq Expr where
  e1 == e2 =
    let eq (Const x)   (Const y)     n = x == y
        eq (Loop i1)   (Loop i2)     n = i1 == i2
        eq (Add e1 e2) (Add e1' e2') n = eq e1 e1' n && eq e2 e2' n
        eq (Rec fe1)   (Rec fe2)     n = eq (fe1 (Loop n)) (fe2 (Loop n)) (n+1)
        eq _ _ n = False
    in eq e1 e2 0
unwind :: Expr -> Expr
unwind e = absStop (unwind' e)
  where unwind' (Add e1 e2) = Add (unwind' e1) (unwind' e2)
        unwind' (Rec fe) = Stop e
          where e = absStop (unwind' (fe (Stop e)))
        unwind' e = e
mapE :: (Int->Int) -> Expr -> Expr
mapE f e = mapE' e
  where mapE' (Const i)   = Const (f i)
        mapE' (Add e1 e2) = Add (mapE' e1) (mapE' e2)
        mapE' (Rec fe)    = Rec (\e -> absStop (mapE' (fe (Stop e))))
        mapE' e@(Stop _)  = e
Replacement for absLoop that removes all Stops, unlike absLoop which only removed the Loops that its caller owned.
absStop (Stop e)    = e
absStop (Add e1 e2) = Add (absStop e1) (absStop e2)
absStop e           = e
The mapE examples still work ...
e4 = Rec (\e1-> Add (Const 1) (Rec (\e2-> Add e1 e2)))  -- nested loop
e4u = unwind e4  -- infinite
e7 = mapE succ e4
e8 = Rec (\e1-> Add (Const 2) (Rec (\e2-> Add e1 e2)))
b4 = e7==e8  -- returns True!
... and we verify that we didn't leave behind any Loops/Stops.
hasLoop (Const _)   = False
hasLoop (Add e1 e2) = hasLoop e1 || hasLoop e2
hasLoop (Rec fe)    = hasLoop (fe (Const 1))
hasLoop (Loop _)    = True
hasLoop (Stop _)    = True
hasLoop e7 -- False
Unwound infinite data structures (even with nested loops) exhibit sharing: main runs in constant space.
main = walk e4u where walk (Add e1 e2) = walk e2
Andrew

Another belated reply to an old thread... Andrew Pimlott wrote:
On Fri, Nov 18, 2005, Paul Hudak wrote:
unwind :: Expr -> Expr
unwind (Add e1 e2) = Add (unwind e1) (unwind e2)
unwind (Rec fe)    = x where x = unwind (fe x)
unwind e           = e
Since this discussion started around observing sharing in the implementation, I wanted to see whether, by the time we convert your cunning representation back into an infinite data structure, we have the sharing we hoped to observe in the first place.
fe2 e = Add (Const 1) e  -- recursive
e2 = Rec fe2             -- top-level loop
e2u = unwind e2          -- infinite
main = walk e4u where walk (Add e1 e2) = walk e2
blows up, showing that we do not. The problem with unwind seems to be that the computation of the fixed point keeps calling unwind, which keeps reconstructing the fixed point: ...
The first solution I gave to the original email from Tom Hawkins doesn't blow up, but you are right that the expanded version (to deal with nested loops) does blow up. Actually, my original definition of unwind was this:
unwind :: Expr -> Expr
unwind (Add e1 e2) = Add (unwind e1) (unwind e2)
unwind (Rec fe)    = x where x = fe x
unwind e           = e
Indeed this works fine on the above example -- i.e. it doesn't blow up (in Hugs, at least). The only reason I added the extra call to unwind is to handle nested loops, but sadly that causes loss of sharing, as you have pointed out.
I then tried
unwind (Rec fe) = unwind (fix fe)
fix f = x where x = f x
even though I didn't think it would work: (fix fe) would create a properly sharing structure, but unwind would unshare it:
e2u = unwind (x where x = ((\e -> Add (Const 1) e) x))   ( = Add (Const 1) x)
    = Add (Const 1) (unwind (x where x = Add (Const 1) x))
    = Add (Const 1) (Add (Const 1) (unwind (x where x = Add (Const 1) x)))
    = ...
This does blow up in ghci, but not in ghc (6.4.1), even without optimization. I'm not quite sure why, ...
Interesting... This is arguably the most elegant solution, since it just replaces the constructor "Rec" with the function "fix". Perhaps surprisingly, it doesn't blow up in Hugs either! If we unwind e2 we get (writing "one" for Const 1):

unwind e2
=> unwind (Rec (\e-> Add one e))
=> unwind (fix (\e-> Add one e))
=> unwind (x where x = (\e-> Add one e) x)
=> unwind (x where x = Add one x)

The question is, what happens next? It seems to me that to prevent blow-up the interpreter would have to do something like this:

=> x where x = Add one (unwind x)

but I don't know how this happens. I would have expected the unwinding that you showed above. Perhaps someone who understands either GHC or Hugs can explain this -- I suspect it has something to do with the interpretation of "where". It would be great if there were a simple technique that one could expect every reasonable implementation to use.
... but anyway I want a version that exhibits sharing even in any reasonable implementation. Your message gives the technique (in mapE); we only have to apply it to unwind. But there is a problem--your code has a bug!
mapE :: (Int->Int) -> Expr -> Expr
mapE f e = mapE' f e 0
  where mapE' f (Const i)   n = Const (f i)
        mapE' f (Add e1 e2) n = Add (mapE' f e1 n) (mapE' f e2 n)
        mapE' f (Rec fe)    n = Rec (absLoop n (mapE' f (fe (Loop n)) (n+1)))
        mapE' f (Loop i)    n = Loop i
absLoop :: Int -> Expr -> Fix Expr
absLoop n e = \e' ->
  let abs (Loop n') | n==n' = e'
      abs (Add e1 e2) = Add (abs e1) (abs e2)
      abs e = e
  in abs e
e4 = Rec (\e1-> Add (Const 1) (Rec (\e2-> Add e1 e2)))  -- nested loop
e7 = mapE succ e4
e8 = Rec (\e1-> Add (Const 2) (Rec (\e2-> Add e1 e2)))
b4 = e7==e8  -- returns True!
Notice that absLoop does not look inside Rec. But there could be a Loop (with the right n) there!
Thanks for catching this, and your solution seems right to me. See further comments below.
Really, we want absLoop to eliminate all the Loops it can find. But it can't, because it only knows the replacement expression for one Loop. It would be simpler for the Loop just to contain the expression. To enable that, I added a constructor Stop that is like Loop, except it takes an Expr instead of an Int. I use this constructor for my sharing unwind as well; Loop is only needed for (==). (It would probably be even better to add an annotation type argument to Expr; this could enable stricter typechecking that would have caught the bug.)
Complete code:
data Expr = Const Int
          | Add Expr Expr
          | Rec (Fix Expr)  -- implicit loop
          | Loop ID         -- not exported
          | Stop Expr
...
It's interesting to note that with this datatype, the original example:
cyclic = Add (Const 1) cyclic
could be written as:
cyclic = Add (Const 1) (Stop cyclic)
instead of:
cyclic e = Add (Const 1) (Rec cyclic)
which raises the question of why we're bothering with Rec and fixpoints to begin with... Indeed, if we did just this:
data Expr = Const Int | Add Expr Expr | Tag ID Expr | Loop ID
Then we could encode loops *explicitly*, and also do equality checks, proper mapping, etc. In a sense, what your and my earlier solutions are doing is just turning a "Rec (\x-> e)" encoding into a "Tag i e / Loop i" encoding (and back again), where the id "i" serves the purpose of the variable "x". I've attached some more code below, in which I use an environment to "unwind" expressions, making the correspondence between ids and variables even stronger.

Indeed, the idea of applying a function to "open up" its body, and then doing an abstraction to recover the functional form, reminds me of higher-order abstract syntax, where one of the difficult issues is inspecting and manipulating terms "under a lambda". I think that we're treading on the same ground here.

-Paul

---------------------------------------------------------------------
module CyclicE where
data Expr = Const Int
          | Add Expr Expr
          | Tag ID Expr
          | Loop ID
type ID = Int
Eq and Show
instance Eq Expr where
  Const x   == Const y     = x == y
  Add e1 e2 == Add e1' e2' = e1 == e1' && e2 == e2'
  Tag i e   == Tag i' e'   = i == i'
  Loop i    == Loop i'     = i == i'
  _         == _           = False
instance Show Expr where
  show (Const x)   = "(Const " ++ show x ++ ")"
  show (Add e1 e2) = "(Add " ++ show e1 ++ " " ++ show e2 ++ ")"
  show (Tag i e)   = "(Tag " ++ show i ++ " " ++ show e ++ ")"
  show (Loop i)    = "(Loop " ++ show i ++ ")"
Examples:
e2,e3,e4 :: Expr
e2 = Tag 1 (Add (Const 1) (Loop 1))  -- recursive
e3 = Add (Const 0) e2                -- lower-level loop
e4 = Tag 1 (Add (Const 1) (Tag 2 (Add (Loop 1) (Loop 2))))  -- nested loop
b1,b2,b3 :: Bool
b1 = e3==e3  -- True
b2 = e3==e2  -- False
b3 = e4==e4  -- True
MapE:
mapE :: (Int->Int) -> Expr -> Expr
mapE f (Const i)   = Const (f i)
mapE f (Add e1 e2) = Add (mapE f e1) (mapE f e2)
mapE f (Tag i e)   = Tag i (mapE f e)
mapE f (Loop i)    = Loop i
e5 = mapE succ e2
e6 = Tag 1 (Add (Const 2) (Loop 1))
b4 = e5==e6  -- True
e7 = mapE succ e4
e8 = Tag 1 (Add (Const 2) (Tag 2 (Add (Loop 1) (Loop 2))))
b5 = e7==e8  -- True
Unwind:
unwind :: Expr -> Expr
unwind e = unw empty e
  where unw env (Const i)   = Const i
        unw env (Add e1 e2) = Add (unw env e1) (unw env e2)
        unw env (Tag i e) = e'
          where e' = unw (upd i e' env) e
        unw env (Loop i)    = fnd i env
Simple implementation of environments:
empty i     = error ("unknown ID: " ++ show i)
upd i e env = \i' -> if i==i' then e else env i'
fnd i env   = env i
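(Functions make a pleasantly tiny environment implementation; as an alternative sketch of mine, not in Paul's message, the same three operations could be backed by Data.Map -- the names emptyM, updM, and fndM are made up:)

import qualified Data.Map as Map

type Env = Map.Map ID Expr

emptyM :: Env
emptyM = Map.empty

updM :: ID -> Expr -> Env -> Env
updM = Map.insert

fndM :: ID -> Env -> Expr
fndM i env = Map.findWithDefault (error ("unknown ID: " ++ show i)) i env

-- Note: unw ties a knot through the environment (e' is inserted while it is
-- still being computed); this works because Data.Map is lazy in its values.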
e2u,e3u,e4u :: Expr
e2u = unwind e2  -- infinite
e3u = unwind e3  -- infinite
e4u = unwind e4  -- infinite
walk (Add e1 e2) = walk e2
w2 = walk e2u  -- no blow-up
w3 = walk e3u  -- no blow-up
w4 = walk e4u  -- no blow-up
participants (14)
- Andrew Pimlott
- Benjamin Franksen
- Bulat Ziganshin
- Cale Gibbard
- Gracjan Polak
- Greg Woodhouse
- Henning Thielemann
- Iavor Diatchki
- Jacob Nelson
- jerzy.karczmarczuk@info.unicaen.fr
- Lennart Augustsson
- Nils Anders Danielsson
- Paul Hudak
- Tom Hawkins