
Hi, as pointed out on this list, it seems that a "tying the knot" example would be the one best explaining the power of Haskell's lazy-by-default approach against Python's iterator-based approach to laziness. So these days I'm trying to grasp this "tying the knot" concept, which seems to be quite hard for me to understand (at least as much as monads and delimited continuations were).

Specifically, I was looking for a very basic example of TTK and came up with this implementation of Fibonacci (again!), which might possibly be a TTK-flavored way of generation:

    fib = 1:1:fib `plus` (tail fib)
      where plus = zipWith (+)

Of course, this was something I had already encountered when exploring the Y combinator. Anyhow, I tried to translate this implementation to Python using iterators, and this is what I wrote:

    def fib():
        yield 1
        yield 1
        p = plus(fib(), tail(fib()))
        while True:
            yield p.next()

    def plus(x, y):
        while True:
            yield x.next() + y.next()

    print take(5, fib())

I've omitted the implementations of tail() and take() for brevity. Apart from the iterator machinery, this is a direct translation of the Haskell fib implementation. Moreover, it's quite modular, because fib lives by itself and is composed with take to get the final result. The only caveat in the Python code is that it maintains O(n^2) iterator states, making it a very poor implementation.

So my considerations are:

1 - The Fibonacci generator is not an example of TTK at all, and that is why it can be translated to Python.
2 - The Fibonacci generator is too simple an example of TTK, easy to translate to Python.
3 - The O(n^2) state caveat is THE point making the difference between Haskell and Python, for which Haskell is much more efficient than Python while remaining very expressive and idiomatic (but that may not be the case for other TTK examples).
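[For reference, the omitted helpers are straightforward. A minimal sketch of the whole program, ported to Python 3 (so next(x) replaces the x.next() used in the message above), might look like this:]

```python
from itertools import islice

def tail(it):
    next(it)              # drop the first element, keep the rest
    return it

def take(n, it):
    return list(islice(it, n))

def plus(x, y):
    # pairwise sum of two infinite iterators, like zipWith (+)
    while True:
        yield next(x) + next(y)

def fib():
    yield 1
    yield 1
    # each recursive fib() call spawns a fresh generator chain,
    # which is exactly the O(n^2)-states caveat described above
    p = plus(fib(), tail(fib()))
    while True:
        yield next(p)

print(take(5, fib()))  # [1, 1, 2, 3, 5]
```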
I'm trying to implement the doubly linked list example from the wiki page myself, which is "certified" to be a TTK example, but I'd like to have your comments on this.

Thank you. Cristiano

On Wed, Jul 15, 2009 at 7:33 PM, Cristiano Paris wrote:
    fib = 1:1:fib `plus` (tail fib)
      where plus = zipWith (+)
Of course, this was something I already encountered when exploring the Y-combinator. Anyhow, I tried to translate this implementation to Python using Iterators and this is what I wrote:
    def fib():
        yield 1
        yield 1
        p = plus(fib(), tail(fib()))
        while True:
            yield p.next()

    def plus(x, y):
        while True:
            yield x.next() + y.next()

    print take(5, fib())
I'm not convinced that this is the "same" program as the Haskell version. No knot is tied in the Python version. To tie the knot, a data structure must contain or refer to itself. In the Python version, the function which creates the data structure refers to itself, but many copies of the data structure are created.
So my considerations are:
1 - The Fibonacci generator is not an example of TTK at all, and that is why it can be translated to Python.
This indicates that you think tying the knot should be impossible in Python. In my opinion this is not the case. By my definition of tying the knot, one needs *either* mutable variables or laziness (at least in simple cases). Since Python has the former, it is possible to tie the knot in Python.

To me, the simplest example of TTK in Python (not possible in Haskell because it has an infinite type):

    x = []
    x.append(x)

(If you try to print this list, you get [[...]] in the standard REPL and [<Recursion on list with id=173852620>] in IPython.)

--Max
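[A quick check of the self-reference, as a sketch: the mutation creates a genuine cycle, so the same list object is reachable at every depth.]

```python
# tie the knot via mutation: the list's only element is the list itself
x = []
x.append(x)

# the self-reference holds at every depth
assert x[0] is x
assert x[0][0][0] is x

print(x)  # the REPL detects the cycle and prints [[...]]
```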

On Wed, Jul 15, 2009 at 2:18 PM, Max Rabkin wrote:
On Wed, Jul 15, 2009 at 7:33 PM, Cristiano Paris wrote:

    fib = 1:1:fib `plus` (tail fib)
      where plus = zipWith (+)

...

This indicates that you think tying the knot should be impossible in Python. In my opinion this is not the case. By my definition of tying the knot, one needs *either* mutable variables or laziness (at least in simple cases). Since Python has the former, it is possible to tie the knot in Python.
Isn't tying the knot (in the way 'fib' does) straightforward with closures a la Python/Ruby/Smalltalk (without mutation)? Even in a syntactically clumsy language like Java, a tying-the-knot implementation equivalent to the canonical Haskell one is not difficult, e.g.

    static L fibs = new L() {
        public int head() { return 1; }
        public L tail() {
            return new L() {
                public int head() { return 1; }
                public L tail() {
                    return new L() {
                        public int head() { return fibs.head() + fibs.tail().head(); }
                        public L tail() { return zip(fibs.tail(), fibs.tail().tail()); }
                    };
                }
            };
        }
    };

Given a definition of list L and zip...

    interface L {
        int head();
        L tail();
    }

    static L zip(final L l0, final L l1) {
        return new L() {
            public int head() { return l0.head() + l1.head(); }
            public L tail() { return zip(l0.tail(), l1.tail()); }
        };
    }

Robert Greayer wrote:
Isn't tying the knot (in the way 'fib' does) straightforward with closures a la Python/Ruby/Smalltalk (without mutation)? Even in a syntactically clumsy language like Java, a tying-the-knot implementation equivalent to the canonical Haskell one is not difficult, e.g.
    static L fibs = new L() {
        public int head() { return 1; }
        public L tail() {
            return new L() {
                public int head() { return 1; }
                public L tail() {
                    return new L() {
                        public int head() { return fibs.head() + fibs.tail().head(); }
                        public L tail() { return zip(fibs.tail(), fibs.tail().tail()); }
                    };
                }
            };
        }
    };
Given a definition of list L and zip...
    interface L {
        int head();
        L tail();
    }

    static L zip(final L l0, final L l1) {
        return new L() {
            public int head() { return l0.head() + l1.head(); }
            public L tail() { return zip(l0.tail(), l1.tail()); }
        };
    }
Are you sure there's not a super-linear time complexity hidden in there? Unless Java compilers are clever enough to memoize this kind of code, I think each subsequent call to head() will just recurse all the way down to the bottom an exponentially increasing number of times.

To simulate laziness, I think you would need to actually store fibs *as data* somewhere, and you'd presumably need to simulate the process of replacing a thunk with its value on its first evaluation. In Python, for example:

    class value:
        def __init__(self, *value):
            self.value = value
        def __call__(self):
            return self.value

    class thunk:
        def __init__(self, susp):
            self.susp = susp
        def __call__(self):
            try:
                return self.result
            except:
                self.result = self.susp()
                del self.susp
                return self.result

    def tail(xs):
        x, r = xs()
        return r

    def zipWithPlus(xs, ys):
        x, xr = xs()
        y, yr = ys()
        return x + y, thunk(lambda: zipWithPlus(xr, yr))

    fibs = value(0, value(1, thunk(lambda: zipWithPlus(fibs, tail(fibs)))))

    def fibgen():
        g = fibs
        while True:
            x, g = g()
            yield x

Needless to say: I prefer the Haskell!
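[As an aside, sharing can also be arranged without hand-rolled thunks: itertools.tee can feed a generator's own output back into it. This is a sketch of mine, not from the thread, assuming Python 3; whether it truly ties the knot is debatable, since tee buffers values rather than sharing one self-referential structure.]

```python
from itertools import tee, islice

def _fibs():
    yield 1
    yield 1
    # read from a tee branch of this generator's own output
    xs, ys = tee(shared)
    next(ys)  # drop one element to get the "tail"
    for a, b in zip(xs, ys):
        yield a + b

# one tee branch feeds the generator back into itself,
# the other is what consumers actually read
shared, out = tee(_fibs())

first = list(islice(out, 8))
print(first)  # [1, 1, 2, 3, 5, 8, 13, 21]
```

Because the consumer is always ahead of the internal branch, the buffered value it needs is always available, so the generator never deadlocks pulling on itself.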

Thank you all for your answers, and sorry for the delay in writing this message, but before replying I wanted to be sure I understood your arguments!

Now I'm starting to get into this "tying the knot" thing and to understand why the Haskell version of fib ties the knot while my Python version does not. It seems all related to the thunk thing, i.e. in the Haskell version the subsequent calls to fib are not actual calls, because they all refer to the same thunk, which is evaluated "on demand".

Now, to confirm my hypothesis, I wrote a slightly different version of fib, as follows:

    fib' n = 1:1:(fib' n) `plus` (tail $ fib' n)
      where plus = zipWith (+)

i.e. I inserted a fictitious argument n in the definition of fib'. Now, if I try "take 30 $ fib' 100", it takes significantly longer than "take 30 fib": specifically, the latter is instantaneous, while the former takes about 5 seconds to complete on my MacBook Pro. Is this evidence that the "tying the knot" process is going on in the first version?

Moreover, I've read that a fully lazy language would memoize all functions by default: in that case, even fib' would have been tying the knot. But this is not the case for Haskell. Am I wrong?

Thank you, Cristiano

On 17 Jul 2009, at 12:41, Cristiano Paris wrote:
Thank you all for your answers, and sorry for the delay in writing this message, but before replying I wanted to be sure I understood your arguments!
Now I'm starting to get into this "tying the knot" thing and to understand why the Haskell version of fib ties the knot while my Python version does not. It seems all related to the thunk thing, i.e. in the Haskell version the subsequent calls to fib are not actual calls, because they all refer to the same thunk, which is evaluated "on demand".
Now, to confirm my hypothesis, I wrote a slight different version of fib, like follows:
    fib' n = 1:1:(fib' n) `plus` (tail $ fib' n)
      where plus = zipWith (+)
i.e. I inserted a fictitious argument n in the definition of fib'.
Now, if I try "take 30 $ fib' 100", it takes significantly longer than "take 30 fib": specifically, the latter is instantaneous, while the former takes about 5 seconds to complete on my MacBook Pro. Is this evidence that the "tying the knot" process is going on in the first version?
That's correct.
More, I've read that a fully lazy language would memoize all functions by default: in this case, even fib' would have been tying the knot. But this is not the case of Haskell. Am I wrong?
Memoization is not a feature of laziness. If you can do it in such a way that you don't waste a significant amount of RAM, then it may be a nice optimisation, and an alternative evaluation strategy, but it would not be lazy.

Bob
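[To make the distinction concrete: memoization can be bolted onto a strict language explicitly, independent of evaluation order. A sketch in Python using functools.lru_cache — roughly analogous to what a fully lazy evaluator would give for free, but as Bob says it is an optimisation, not lazy evaluation:]

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # naive doubly-recursive definition; the cache makes each fib(k)
    # compute only once, turning exponential time into linear time
    if n < 2:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 1346269, instant; uncached this takes millions of calls
```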

On Fri, Jul 17, 2009 at 12:46 PM, Thomas Davie wrote:
Memoization is not a feature of laziness. If you can do it in such a way that you don't waste a significant amount of RAM, then it may be a nice optimisation, and an alternative evaluation strategy, but it would not be lazy.
Thank you for pointing that out. I'd like to share this link to a useful article about "circular programming", which helped me a lot: http://www.csse.monash.edu.au/~lloyd/tildeFP/1989SPE/

Thank you again. Cristiano
participants (6)
- Cristiano Paris
- Cristiano Paris
- Matthew Brecknell
- Max Rabkin
- Robert Greayer
- Thomas Davie