
The exercise is short.

First, he defines a "fix" function:

    fix f = f (fix f)

which has the type (a -> a) -> a.

Then the function "remainder":

    remainder :: Integer -> Integer -> Integer
    remainder a b = if a < b then a else remainder (a - b) b

The function fix, as far as I understand the matter, has no base case. So if someone tries to evaluate it on its own, it will apply the function f to itself indefinitely, bringing the Hugs interpreter down. That's OK. But next he asks the reader to rewrite the function remainder using fix, and I cannot find out how. I tried something like:

    remainder2 a b = fix (\x -> x - b)

but there is no base case there either. As soon as the lambda expression wants to evaluate its argument, the interpreter crashes.

Can someone help me solve the problem?

TIA

Ludovic Kuty

Ludovic Kuty wrote:
The exercise is short.
First, he defines a "fix" function:

    fix f = f (fix f)

which has the type (a -> a) -> a.

Then the function "remainder":

    remainder :: Integer -> Integer -> Integer
    remainder a b = if a < b then a else remainder (a - b) b

The function fix, as far as I understand the matter, has no base case. So if someone tries to evaluate it on its own, it will apply the function f to itself indefinitely, bringing the Hugs interpreter down.
As a hint (but not a solution), attached are my own notes from when I was puzzling through fixed points myself a few years ago. This is a literate Haskell script that you can load into Hugs and run. Note that my 'y' is the same as your 'fix'. I use y to derive a factorial function (fact) in terms of y that does terminate.

("Malkovich" is an obscure reference to the scene in the film "Being John Malkovich" where Malkovich goes through his own portal... :-))

   -antony

--
Antony Courtney
Grad. Student, Dept. of Computer Science, Yale University
antony@apocalypse.org          http://www.apocalypse.org/pub/u/antony

A definition of the Y combinator in Haskell.

Valery calls this the "stupid" definition of the Y combinator since it uses Haskell's recursion. Of course, if it's the "stupid" definition, and it's not immediately obvious to me why it works, well... there's only one conclusion. Thankfully, however, I have other redeeming qualities. :-)

Here was Valery's original formulation:

    y = \f -> let x = f x in x

But, of course, Paul pointed out that this definition of y can be simplified to:
    y f = f (y f)
Now let's define a generator to test out y:
    factGen = \f -> \n -> if n==0 then 1 else n * f (n-1)
    fact    = y factGen
Sure enough, if we run this under Hugs, it works:

    Main> fact 3
    6
    Main>

Wow. Mind-blowing. Unfortunately, I don't have a "stepper" for Haskell, so I better walk through this myself to figure out what's going on.

First, some types:

    y       :: (a -> a) -> a
    factGen :: (Integer -> Integer) -> Integer -> Integer
    fact    :: Integer -> Integer

(Note, though, that since -> associates to the right, the type of factGen is the same as (Integer -> Integer) -> (Integer -> Integer).) This all makes sense. Now let's trace through an evaluation like:

    fact 3
    ==> (y factGen) 3
    ==> ((\f -> let x = f x in x) factGen) 3
    ==> (let x = factGen x in x) 3
        (OK, I think this is the crucial step:)
    ==> (let x = factGen x in (factGen x)) 3
    ==> (let x = factGen x in ((\f -> \n -> if n==0 then 1 else n * f (n-1)) x)) 3
    ==> (let x = factGen x in (\n -> if n==0 then 1 else n * x (n-1))) 3
    ==> let x = factGen x in (if 3==0 then 1 else 3 * x (3-1))
    ==> let x = factGen x in (3 * x (3-1))
    ==> let x = factGen x in (3 * x 2)
    ==> let x = factGen x in (3 * (factGen x) 2)
    ...

The above was with the original definition of y. Of course, this should be easier to follow with Paul's definition:

    fact 3
    ==> (y factGen) 3
    ==> (factGen (y factGen)) 3
    ==> ((\f -> \n -> if n==0 then 1 else n * f (n-1)) (y factGen)) 3
    ==> if 3==0 then 1 else 3 * ((y factGen) 2)
    ==> 3 * ((y factGen) 2)

which obviously works.

OK, so what's going on is this: x is bound to (factGen x). (factGen x) unfolds to:

    ((\f -> \n -> if n==0 then 1 else n * f (n-1)) x)

which reduces to:

    \n -> if n==0 then 1 else n * x (n-1)

but the "x" in the above will itself just unfold to (factGen x), yielding yet another recursive call! Note, too, that x has "type" Integer -> Integer, although it is really a name for a *computation* that produces such a function when evaluated. Mind-blowing.

Another way to think of it: if f is Malkovich, then:

    x <--> (Malkovich x)
      <--> (Malkovich (Malkovich x))
      <--> (Malkovich (Malkovich (Malkovich x)))
      <--> (Malkovich (Malkovich (Malkovich (Malkovich ... (Malkovich x)))))

i.e. x unfolds to as many applications of "Malkovich" as we need to eventually reach Malkovich's fixed point. Of course, I used <--> just to indicate equivalence, not an actual reduction sequence. In normal order evaluation, we only unfold x when its value is needed. But we'll keep doing that until we hit a fixed point (or diverge).
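For readers who want to replay the trace, the definitions above collect into a small script that loads directly into Hugs or GHCi. This is only a sketch; the type signatures are added here for clarity and are not part of the original notes.

    -- The "stupid" Y combinator and the factorial generator from the notes.
    y :: (a -> a) -> a
    y f = f (y f)

    factGen :: (Integer -> Integer) -> Integer -> Integer
    factGen = \f -> \n -> if n == 0 then 1 else n * f (n - 1)

    -- Tie the knot: fact is the fixpoint of factGen.
    fact :: Integer -> Integer
    fact = y factGen

    -- At the prompt:
    --   > fact 3
    --   6
    --   > fact 5
    --   120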

At 16:23 31/03/2002 -0500, Antony Courtney wrote:
Ludovic Kuty wrote:
The exercise is short. First, he defines a "fix" function:

    fix f = f (fix f)

which has the type (a -> a) -> a.

Then the function "remainder":

    remainder :: Integer -> Integer -> Integer
    remainder a b = if a < b then a else remainder (a - b) b

The function fix, as far as I understand the matter, has no base case. So if someone tries to evaluate it on its own, it will apply the function f to itself indefinitely, bringing the Hugs interpreter down.
... Now let's define a generator to test out y:
    factGen = \f -> \n -> if n==0 then 1 else n * f (n-1)
    fact    = y factGen
Thanks for the combinator. That's really obscure :) But it works:

    remainder2 :: Integer -> Integer -> Integer
    remainder2 = fix remainderGen
      where remainderGen f a b = if a < b then a else f (a - b) b
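Spelled out as a loadable script, with a couple of check values that are mine rather than from the original message, that solution looks like this:

    fix :: (a -> a) -> a
    fix f = f (fix f)

    -- remainderGen abstracts over the recursive call, exactly as above.
    remainder2 :: Integer -> Integer -> Integer
    remainder2 = fix remainderGen
      where remainderGen f a b = if a < b then a else f (a - b) b

    -- Illustrative checks (values chosen arbitrarily):
    --   > remainder2 17 5
    --   2
    --   > remainder2 9 3
    --   0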

It's really not as obscure as it first seems. A fixpoint of a function foo is a value x such that foo x = x. The fix operator tells us one way (not the only way) to generate such a fixpoint:

    fix foo = foo (fix foo)

Note from this equation that (fix foo) is, by definition (!), a fixpoint of foo, since foo (fix foo) is just (fix foo).

Consider now any recursive definition:

    f = ... f ...

This can be rewritten as:

    f = (\f -> ... f ...) f

Note that the inner f is bound locally, so we can change names (which I only do for clarity):

    f = (\g -> ... g ...) f

Now note that f is a fixpoint of \g -> ... g ...  So all we need is a way to find this function's fixpoint. But from the above we know how to do that, and we are done:

    f = fix (\g -> ... g ...)

   -Paul
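To see the recipe at work on something other than remainder, here is the same rewrite applied to list length. This example is mine, not Paul's, and the name len is chosen only for illustration.

    fix :: (a -> a) -> a
    fix f = f (fix f)

    -- Ordinary recursion:   len []     = 0
    --                       len (x:xs) = 1 + len xs
    -- Abstract the recursive call into a lambda-bound g, then take the fixpoint:
    len :: [a] -> Integer
    len = fix (\g xs -> case xs of
                          []     -> 0
                          (_:ys) -> 1 + g ys)

    --   > len "haskell"
    --   7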

At 10:41 1/04/2002 -0500, Paul Hudak wrote:
It's really not as obscure as it first seems. A fixpoint of a function foo is a value x such that foo x = x. The fix operator tells us one way (not the only way) to generate such a fixpoint:
fix foo = foo (fix foo)
Note from this equation that (fix foo) is, by definition (!), a fixpoint of foo, since foo (fix foo) is just (fix foo).
Thanks for the explanations. I still have difficulty understanding the equation, but I thought about it for a long time and here are my thoughts.

I consider the following line as a function definition in Haskell:

    fix foo = foo (fix foo)

That is, a simple rule for rewriting the term to the left of the equals sign. It is Haskell (syntactic), not mathematics (from my point of view, which may be wrong).

Not all mathematical functions have a fixpoint. So the "fix" function can only be used with Haskell functions that have a fixpoint, and I think those are the ones that have a base case (for the recursive ones). So one can think of the fix function as a fixpoint finder (or just a searcher, if there is no base case in the recursion).

Generally speaking, if you take any mathematical function that has a fixpoint, the function "fix" will only lead to the fixpoint if the function converges to the fixpoint from the chosen start point. But thanks to the mechanism of lazy evaluation, the argument "(fix foo)" in "foo (fix foo)" is not evaluated before it is needed, that is, before we know we are in the inductive case.

The fix function can also be viewed as a way to provide as many functions as needed, infinitely many in fact.

I was wondering whether I am right or not?

Ludovic
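One concrete way to see the role laziness plays here (my own illustration, not from the thread): fix terminates exactly when the functional can produce part of its result without first forcing the recursive argument.

    fix :: (a -> a) -> a
    fix f = f (fix f)

    -- A function that ignores its argument never forces the recursion:
    --   > fix (\_ -> 42)
    --   42
    -- A strict function forces its argument immediately, so the unfolding
    -- never ends (the value is _|_):
    --   > fix (\x -> x - (1 :: Integer))   -- loops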

I've been watching the discussion of the fixed point function with interest. In Antony's example,
factGen = \f -> \n -> if n==0 then 1 else n * f (n-1)
The fixed point of factGen is clearly the factorial function, because 0! = 1 and

    n! = n * (n-1) * (n-2) * ... * 1 = n * ((n-1)!) = n * f (n-1)

so the factGen transformation simply maps the factorial function back onto itself.

What I would like to know is how the 'fix' function could be used to find the fixed point of a function like (\n -> 2*n). If it isn't possible, can someone explain the crucial difference between n! and (\n -> 2*n) which allows n factorial to be found via the fix point method and not zero?

Tom

Tom Bevan wrote:
What I would like to know is how the 'fix' function could be used to find the fixed point of a function like ( \n -> 2*n ).
If it isn't possible, can someone explain the crucial difference between n! and (\n -> 2*n ) which allows n factorial to be found via the fix point method and not zero.
The 'fix' function does not yield *the* fix point of its argument. A function can have many fix points; for example the identity function 'id' has infinitely many fix points. Other functions such as (\n -> n+1) do not seem to have any fix point.

I say "seem not to have any", because it depends which domain you regard. (\n -> n+1) does not have any fix point in the real numbers. However, '+infinity' is a fix point of it. So (\n -> n+1) has a fix point in the real numbers plus '+infinity'.

In functional languages every type has one element that is often not mentioned: _|_ (bottom). For example, 'Bool' has the three elements True, False and _|_. _|_ represents undefinedness, in particular non-termination. The values of 'undefined' and 'error "something"' are _|_. _|_ might look like a slightly strange value, but it is very useful for understanding the semantics of functional languages. Because of _|_, all functions defined by a functional program are total functions; talking about partial functions is often rather awkward. With _|_ the semantics of non-strict functional languages is easy to describe. _|_ helps in understanding infinite data structures (see Chapter 9 of Bird's Introduction to Functional Programming using Haskell). And finally, _|_ assures that every function defined by a functional program has at least one fix point (this property is not trivial to prove).

Evaluation of 'fix (\n -> 2*n)' aborts with a stack overflow. This is not because the stack is too small, but because the value is _|_: _|_ = 2 * _|_.

The 'fix' function yields the *least* fix point of its argument. *Least* with respect to a certain partial order on the elements of a type. In this order _|_ is the least element. Also, for example, (_|_,2) < (3,2).

    factGen = \f -> \n -> if n==0 then 1 else n * f (n-1)

Note that factGen _|_ = \n -> if n == 0 then 1 else _|_. That is, _|_ is not a fix point of factGen. \n -> n! is really the least (and only) fix point of factGen.

To learn more about this, denotational semantics, cpos and basic domain theory are good key words.

Olaf

--
OLAF CHITIL,
Dept. of Computer Science, The University of York, York YO10 5DD, UK.
URL: http://www.cs.york.ac.uk/~olaf/
Tel: +44 1904 434756; Fax: +44 1904 432767
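Two quick checks of this, using the fix defined earlier in the thread (the examples are mine): id has every value as a fixpoint, and doubling has 0 as a fixpoint, yet fix returns the least fixpoint, which in both cases is _|_.

    fix :: (a -> a) -> a
    fix f = f (fix f)

    -- Every value is a fixpoint of id, but the least one is _|_:
    --   > fix id                           -- loops: the value is _|_
    -- 0 is a fixpoint of doubling, but it is not the least one:
    --   > fix (\n -> 2 * n) :: Integer     -- loops: _|_ = 2 * _|_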

Thanks Olaf, I'll follow up your references.

Tom

On Wed, 3 Apr 2002 21:40, Olaf Chitil wrote:
Tom Bevan wrote:
What I would like to know is how the 'fix' function could be used to find the fixed point of a function like ( \n -> 2*n ).
If it isn't possible, can someone explain the crucial difference between n! and (\n -> 2*n ) which allows n factorial to be found via the fix point method and not zero.
The 'fix' function does not yield *the* fix point of its argument. A function can have many fix points, for example the identity function 'id' has infinitely many fix points. Other functions such as (\n -> n+1) do not seem to have any fix point.
I say "seem not to have any", because it depends which domain you regard. (\n -> n+1) does not have any fix point in the real numbers. However, '+ infinity' is a fix point of it. So (\n -> n+1) has a fix point in the real numbers plus '+ infinity'.
In functional languages every type has one element that is often not mentioned: _|_ (bottom). For example 'Bool' has the three elements True,False and _|_.
_|_ represents undefinedness, in particular non-termination. The value of 'undefined' and 'error "something"' are _|_. _|_ might look like a slightly strange value, but it is very useful for understanding the semantics of functional languages. Because of _|_, all functions defined by a functional program are total functions; talking about partial functions is often rather awkward. With _|_ the semantics of non-strict functional languages is easy to describe. _|_ helps understanding infinite data structures (see Chapter 9 of Bird's Introduction to Functional Programming using Haskell). And finally _|_ assures that every function defined by a functional program has at least one fix point (this property is not trivial to prove).
Evaluation of 'fix (\n -> 2*n)' aborts with a stack overflow. This is not because the stack is too small, but because the value is _|_. _|_ = 2 * _|_.
The 'fix' function yields the *least* fix point of its argument. *Least* with respect to a certain partial order on the elements of a type. In this order _|_ is the least element. Also for example (_|_,2) < (3,2).
factGen = \f -> \n -> if n==0 then 1 else n * f (n-1)
Note that factGen _|_ = \n -> if n == 0 then 1 else _|_. That is, _|_ is not a fix point of factGen. \n->n! is really the least (and only) fix point of factGen.
To learn more about this, denotational semantics, cpos and basic domain theory are good key words.
Olaf

What I would like to know is how the 'fix' function could be used to find the fixed point of a function like ( \n -> 2*n ).
Olaf and Lennart said a little about least fixpoints, but little about what makes a fixpoint least. Olaf suggested looking up papers on domain theory or denotational semantics to understand it better. But the idea is really simple: just think of it as an ordering on information content. Bottom (_|_) contains NO information, and is thus the least value in any domain. The integers 1, 2, 3, etc. contain the same "amount" of information, but the information in each case is different -- thus these values are incomparable in the information ordering. So even though \n -> n has all these values as fixpoints, the LEAST one is _|_. As for \n -> 2n, even though 0 is a fixpoint, the LEAST one is _|_. And fix will find these for you, in the sense that fix applied to each of these functions leads to non-termination, and non-termination is equivalent to bottom, since it yields NO information. (The fact that in some implementations the stack or heap will blow up is an implementation artifact.)
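A small example of that information ordering in action (mine, not Paul's): a functional that contributes a bit of information at every unfolding has a least fixpoint that is more defined than bottom.

    fix :: (a -> a) -> a
    fix f = f (fix f)

    -- Each unfolding refines the result:  _|_  then  (1, _|_)  then  (1, 1),
    -- so the least fixpoint is the fully defined pair:
    --   > fix (\p -> (1, fst p))
    --   (1,1)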
If it isn't possible, can someone explain the crucial difference between n! and (\n -> 2*n) which allows n factorial to be found via the fix point method and not zero.
fix is able to find the fixpoint of:

    \f -> \n -> if (n==0) then 1 else n * f (n-1)

This fixpoint is a FUNCTION, not the factorial value itself. So they are very different.

Which raises the question: can fix find fixpoints that are not functions, and not bottom? Consider this well-known example:

    ones = 1 : ones

We can write this as:

    ones = (\wons -> 1 : wons) ones

and thus:

    ones = fix (\wons -> 1 : wons)

Try this in Hugs, and see what happens when you type "take 10 ones" at the prompt.

   -Paul
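Doing exactly that (the session below is mine; Hugs and GHCi behave the same way on it):

    fix :: (a -> a) -> a
    fix f = f (fix f)

    ones :: [Integer]
    ones = fix (\wons -> 1 : wons)

    --   > take 10 ones
    --   [1,1,1,1,1,1,1,1,1,1]
    -- The least fixpoint here is an infinite list: neither bottom nor a function.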
participants (5)
- Antony Courtney
- Ludovic Kuty
- Olaf Chitil
- Paul Hudak
- Tom Bevan