
<troll>
Prelude> let f .! g = ((.) $! f) $! g
Prelude> let f = undefined :: Int -> IO Int
Prelude> f `seq` 42
*** Exception: Prelude.undefined
Prelude> ((>>= f) . return) `seq` 42
42
Prelude> ((>>= f) .! return) `seq` 42
42
</troll>
Regards, Yitz

On Tue, 2007-01-23 at 13:35 +0200, Yitzchak Gale wrote:
<troll>
Prelude> let f .! g = ((.) $! f) $! g
Prelude> let f = undefined :: Int -> IO Int
Prelude> f `seq` 42
*** Exception: Prelude.undefined
Prelude> ((>>= f) . return) `seq` 42
42
Prelude> ((>>= f) .! return) `seq` 42
42
</troll>
Perhaps I'm missing something, but I don't see what's wrong. I think what you're saying is that you want (>>=) to be strict in its second argument. I don't see that this is a requirement of the monad laws. You'll note that you get the same behaviour for other monads like Maybe and []. I recall that there is some infidelity in standard implementations of the IO monad, but I don't think this is it. Duncan

I wrote:
Prelude> let f .! g = ((.) $! f) $! g
Prelude> let f = undefined :: Int -> IO Int
Prelude> f `seq` 42
*** Exception: Prelude.undefined
Prelude> ((>>= f) . return) `seq` 42
42
Prelude> ((>>= f) .! return) `seq` 42
42
Duncan Coutts wrote:
Perhaps I'm missing something but I don't see what's wrong.
The monad laws say that (>>= f) . return must be identical to f. The above shows that they are not identical for IO. Therefore, IO is not a monad.
I think what you're saying is that you want (>>=) to be strict in its second argument. I don't see that this is a requirement of the monad laws.
Oh, no, I don't want that at all! Especially not for []! Where would we be then?
You'll note that you get the same behaviour for other monads like Maybe and [].
Yes.

I am starting, as a programmer, from the practical problem that strictness properties are badly broken in MTL. But no one seems to want to fix it. Although it is clear that the current behavior is very wrong, no one seems to be able to define exactly what the correct behavior should be.

To understand the problem better myself, I want to understand better the relationship between monads in category theory and strictness.

The most common approach seems to be: Make believe that seq does not exist, and use the usual Haskell notions of functions to form a category. Then try to fix up strictness issues as an afterthought, without regard to category theory. The result is a mess. It would be disappointing to me if that is the best we can do.

Another approach that we came up with recently on this list is that you can allow seq - in its current form - as a morphism, but use .! instead of . as composition in the category. I find that somewhat attractive, because .! essentially means "compose functions while preserving strictness/laziness". Unfortunately, the above paradox shows that this is not the complete answer either.

A related issue is the claim that the current behavior of seq is wrong in some way. I am not convinced that there is any problem with the current behavior that \_->_|_ /= _|_, nor that changing it would solve any problems.

This paradox also shows that an idea mentioned here a few days ago by Neil Mitchell:

class Seq a where
  seq :: a -> b -> b

is also not sufficient.

Thanks, Yitz
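For concreteness, here is a sketch of how such a class might be populated (only the class head above appeared in the thread; the instances are illustrative, and seq' is used to avoid clashing with Prelude.seq):

<code>
class Seq a where
  seq' :: a -> b -> b

instance Seq Int where
  seq' = seq

instance Seq Bool where
  seq' = seq

-- Deliberately no instance for (a -> b): restricting which types can
-- be forced is exactly the knob Lennart turns in the next message.
</code>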

Could you explain why a class Seq would not be sufficient? If there were a class Seq, I'd not want functions to be in that class. -- Lennart On Jan 23, 2007, at 08:57 , Yitzchak Gale wrote:
I wrote:
Prelude> let f .! g = ((.) $! f) $! g
Prelude> let f = undefined :: Int -> IO Int
Prelude> f `seq` 42
*** Exception: Prelude.undefined
Prelude> ((>>= f) . return) `seq` 42
42
Prelude> ((>>= f) .! return) `seq` 42
42
Duncan Coutts wrote:
Perhaps I'm missing something but I don't see what's wrong.
The monad laws say that (>>= f) . return must be identical to f. The above shows that they are not identical for IO. Therefore, IO is not a monad.
I think what you're saying is that you want (>>=) to be strict in its second argument. I don't see that this is a requirement of the monad laws.
Oh, no, I don't want that at all! Especially not for []! Where would we be then?
You'll note that you get the same behaviour for other monads like Maybe and [].
Yes.
I am starting, as a programmer, from the practical problem that strictness properties are badly broken in MTL. But no one seems to want to fix it. Although it is clear that the current behavior is very wrong, no one seems to be able to define exactly what the correct behavior should be.
To understand the problem better myself, I want to understand better the relationship between monads in category theory and strictness.
The most common approach seems to be: Make believe that seq does not exist, and use the usual Haskell notions of functions to form a category. Then try to fix up strictness issues as an afterthought, without regard to category theory. The result is a mess. It would be disappointing to me if that is the best we can do.
Another approach that we came up with recently on this list is that you can allow seq - in its current form - as a morphism, but use .! instead of . as composition in the category. I find that somewhat attractive, because .! essentially means "compose functions while preserving strictness/laziness".
Unfortunately, the above paradox shows that this is not the complete answer either.
A related issue is the claim that the current behavior of seq is wrong in some way. I am not convinced that there is any problem with the current behavior that \_->_|_ /= _|_, nor that changing it would solve any problems.
This paradox also shows that an idea mentioned here a few days ago by Neil Mitchell:
class Seq a where
  seq :: a -> b -> b
is also not sufficient.
Thanks, Yitz

Hi, Lennart Augustsson wrote:
Could you explain why a class Seq would not be sufficient? If there were a class Seq, I'd not want functions to be in that class.
Oh, I see. Well that is pretty much the same as ignoring seq altogether. I am hoping to get a better answer than that - where we can see how strictness questions fit in with the theory of monads.

Anyway - why do you not want seq for functions? Are you making functions second-class citizens in Haskell, so that function-valued expressions can no longer be lazy? If not, then how do you force strictness in a function-valued expression? For example:

Prelude Data.List> let fs = [const 0, const 1]::[Int->Int]
Prelude Data.List> let nextf f g = fs !! ((f (0::Int) + g (0::Int)) `mod` 2)
Prelude Data.List> :t nextf
nextf :: (Int -> Int) -> (Int -> Int) -> Int -> Int
Prelude Data.List> foldl' nextf id (concat $ replicate 100000 fs) 5
0
Prelude Data.List> foldl nextf id (concat $ replicate 100000 fs) 5
*** Exception: stack overflow

Regards, Yitz

I don't think disallowing seq for functions makes them any more second class than not allowing == for functions. I'm willing to sacrifice seq on functions to get parametricity back. There is a good reason seq cannot be defined for functions in the pure lambda calculus... It doesn't belong there. :) -- Lennart On Jan 23, 2007, at 11:06 , Yitzchak Gale wrote:
Hi,
Lennart Augustsson wrote:
Could you explain why a class Seq would not be sufficient? If there were a class Seq, I'd not want functions to be in that class.
Oh, I see. Well that is pretty much the same as ignoring seq altogether. I am hoping to get a better answer than that - where we can see how strictness questions fit in with the theory of monads.
Anyway - why do you not want seq for functions? Are you making functions second-class citizens in Haskell, so that function-valued expressions can no longer be lazy? If not, then how do you force strictness in a function-valued expression?
For example:
Prelude Data.List> let fs = [const 0, const 1]::[Int->Int]
Prelude Data.List> let nextf f g = fs !! ((f (0::Int) + g (0::Int)) `mod` 2)
Prelude Data.List> :t nextf
nextf :: (Int -> Int) -> (Int -> Int) -> Int -> Int
Prelude Data.List> foldl' nextf id (concat $ replicate 100000 fs) 5
0
Prelude Data.List> foldl nextf id (concat $ replicate 100000 fs) 5
*** Exception: stack overflow
Regards, Yitz

Lennart Augustsson wrote:
I don't think disallowing seq for functions makes them any more second class than not allowing == for functions. I'm willing to sacrifice seq on functions to get parametricity back.
The underlying assumption that having seq makes us lose parametricity is a (widely spread) misconception, I think.

Compare to the situation with fix (or, equivalently, general recursive definitions). The original notion of parametricity was developed for a calculus without fixpoints. If one allows fixpoints, then the original notion suddenly does not work anymore (see the last section of Wadler's original paper). So one might say: fixpoints break parametricity, so they are evil and should not be there. Of course, this is not what was done. Rather, the notion of parametricity was revised by changing the definition of the underlying logical relation to be more restrictive in the choice of relations to quantify over.

Now, seq enters the picture. It turns out that even the revised notion of parametricity does not work for it. So one could come to the conclusion: seq breaks parametricity, so it is evil and should not be there. But this is a prejudice, because one can alternatively proceed just as for fixpoints, namely revise the notion of parametricity so that it works for both fixpoints and seq. There are more details on how to do this (and the consequences of doing so) than you probably want to see in the following papers:

http://wwwtcs.inf.tu-dresden.de/~voigt/p76-voigtlaender.pdf
http://wwwtcs.inf.tu-dresden.de/~voigt/seqFinal.pdf
http://wwwtcs.inf.tu-dresden.de/~voigt/TUD-FI06-02.pdf

Whether one likes the resulting notion of parametricity or not, one thing becomes clear: it is not any more justified to say that introducing seq "destroys" parametricity than to say that already introducing fix "destroyed" parametricity. In both cases, it was "just" necessary to adapt the concept of parametricity. Doing so then allows the same kind of reasoning and results as before, but for richer calculi.
There is a good reason seq cannot be defined for functions in the pure lambda calculus... It doesn't belong there. :)
How about the same argument for general recursion? As in: There is a good reason (typability) that fixpoint combinators cannot be defined in the pure lambda calculus... They don't belong there.

Would anyone conclude from this that we should throw general recursive definitions out of Haskell? I doubt it.

Note that nothing of the above implies that I think fully polymorphic seq is nice and cosy and should be in Haskell forever (even though it provided a lot of nice research food for me so far ;-). My only point is that it is unfair to cite seq's supposed destruction of parametricity (as in "all or nothing") as an argument in weighing this decision.

Ciao, Janis.

-- Dr. Janis Voigtlaender http://wwwtcs.inf.tu-dresden.de/~voigt/ mailto:voigt@tcs.inf.tu-dresden.de

Janis Voigtlaender wrote:
Lennart Augustsson wrote:
There is a good reason seq cannot be defined for functions in the pure lambda calculus... It doesn't belong there. :)
How about the same argument for general recursion? As in: There is a good reason (typability) that fixpoint combinators cannot be defined in the pure lambda calculus... They don't belong there.
Maybe by "pure lambda calculus" you meant the untyped one. Then my comparative argument does not work anymore, because fixpoints are definable in the untyped setting, whereas seq isn't. Still, it remains questionable whether this provides an argument for removing seq but retaining fixpoints in Haskell. Why is the untyped setting the point of reference? In any case, the argument via losing parametricity does not hold. It is not necessary to sacrifice seq to get parametricity back. Not any more than it was necessary to sacrifice seq to get parametricity back. Ciao, Janis. -- Dr. Janis Voigtlaender http://wwwtcs.inf.tu-dresden.de/~voigt/ mailto:voigt@tcs.inf.tu-dresden.de

Janis Voigtlaender wrote:
Janis Voigtlaender wrote:
Lennart Augustsson wrote:
There is a good reason seq cannot be defined for functions in the pure lambda calculus... It doesn't belong there. :)
How about the same argument for general recursion? As in: There is a good reason (typability) that fixpoint combinators cannot be defined in the pure lambda calculus... They don't belong there.
Maybe by "pure lambda calculus" you meant the untyped one. Then my comparative argument does not work anymore, because fixpoints are definable in the untyped setting, whereas seq isn't. Still, it remains questionable whether this provides an argument for removing seq but retaining fixpoints in Haskell. Why is the untyped setting the point of reference?
In any case, the argument via losing parametricity does not hold. It is not necessary to sacrifice seq to get parametricity back. Not any more than it was necessary to sacrifice seq to get parametricity back.
Replace "seq" by "fix" in the last sentence ;-) -- Dr. Janis Voigtlaender http://wwwtcs.inf.tu-dresden.de/~voigt/ mailto:voigt@tcs.inf.tu-dresden.de

Well, I think fix destroys parametricity too, and it would be better to get rid of fix. But I'm not proposing to do that for Haskell, because I don't have a viable proposal to do so. (But I think the proposal would be along the same lines as the seq one; provide fix in a type class so we can keep tabs on it.)

BTW, fix can be defined in the pure lambda calculus, just not in simply typed pure lambda calculus (when not qualified by "typed", the term "lambda calculus" usually refers to the untyped version).

OK, so I was a bit harsh when I said seq destroys parametricity; it doesn't destroy it completely, just enough to ruin the properties used by, e.g., foldr/build. The problem with the current seq can also be seen by the fact that it doesn't work properly with the IO type.

-- Lennart

On Jan 24, 2007, at 04:11 , Janis Voigtlaender wrote:
Lennart Augustsson wrote:
I don't think disallowing seq for functions makes them any more second class than not allow == for functions. I'm willing to sacrifice seq on functions to get parametricity back.
The underlying assumption that having seq makes us lose parametricity is a (widely spread) misconception, I think.

Compare to the situation with fix (or, equivalently, general recursive definitions). The original notion of parametricity was developed for a calculus without fixpoints. If one allows fixpoints, then the original notion suddenly does not work anymore (see the last section of Wadler's original paper). So one might say: fixpoints break parametricity, so they are evil and should not be there. Of course, this is not what was done. Rather, the notion of parametricity was revised by changing the definition of the underlying logical relation to be more restrictive in the choice of relations to quantify over.

Now, seq enters the picture. It turns out that even the revised notion of parametricity does not work for it. So one could come to the conclusion: seq breaks parametricity, so it is evil and should not be there. But this is a prejudice, because one can alternatively proceed just as for fixpoints, namely revise the notion of parametricity so that it works for both fixpoints and seq. There are more details on how to do this (and the consequences of doing so) than you probably want to see in the following papers:
http://wwwtcs.inf.tu-dresden.de/~voigt/p76-voigtlaender.pdf
http://wwwtcs.inf.tu-dresden.de/~voigt/seqFinal.pdf
http://wwwtcs.inf.tu-dresden.de/~voigt/TUD-FI06-02.pdf
Whether one likes the resulting notion of parametricity or not, one thing becomes clear: it is not any more justified to say that introducing seq "destroys" parametricity than to say that already introducing fix "destroyed" parametricity. In both cases, it was "just" necessary to adapt the concept of parametricity. Doing so then allows the same kind of reasoning and results as before, but for richer calculi.
There is a good reason seq cannot be defined for functions in the pure lambda calculus... It doesn't belong there. :)
How about the same argument for general recursion? As in: There is a good reason (typability) that fixpoint combinators cannot be defined in the pure lambda calculus... They don't belong there.
Would anyone conclude from this that we should throw general recursive definitions out of Haskell? I doubt it.
Note that nothing of the above implies that I think fully polymorphic seq is nice and cosy and should be in Haskell forever (even though it provided a lot of nice research food for me so far ;-). My only point is that it is unfair to cite seq's supposed destruction of parametricity (as in "all or nothing") as an argument in weighing this decision.
Ciao, Janis.
-- Dr. Janis Voigtlaender http://wwwtcs.inf.tu-dresden.de/~voigt/ mailto:voigt@tcs.inf.tu-dresden.de

Lennart Augustsson wrote:
Well, I think fix destroys parametricity too, and it would be better to get rid of fix. But I'm not proposing to do that for Haskell, because I don't have a viable proposal to do so. (But I think the proposal would be along the same lines as the seq one; provide fix in a type class so we can keep tabs on it.)
This idea has actually been investigated (as you might know): http://portal.acm.org/citation.cfm?coll=GUIDE&dl=GUIDE&id=651452 That paper explains exactly how fix's impact on parametricity can be tamed. I don't think it ever went into a practical system, though.
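One hypothetical shape such a "keep tabs on fix" system might take (an editorial sketch of the idea only, not the cited paper's actual mechanism): recursion is available only at types with an instance, and the instances are where recursion lives.

<code>
class Fix a where
  fix :: (a -> a) -> a

instance Fix [b] where
  fix f = let xs = f xs in xs
</code>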
BTW, fix can be defined in the pure lambda calculus, just not in simply typed pure lambda calculus (when not qualified by "typed" the term "lambda calculus" usually refers to the untyped version).
Yup, see my self-correction earlier on the list.
OK, so I was a bit harsh when I said seq destroys parametricity; it doesn't destroy it completely, just enough to ruin the properties used by, e.g., foldr/build.
But not even completely so. Partial correctness is still given without restrictions. See http://haskell.org/haskellwiki/Correctness_of_short_cut_fusion (also explaining that the nowadays maybe even more popular destroy/unfoldr dual of foldr/build does not even need seq in order to fail to be correct).
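For concreteness, here is the folklore counterexample in code form (my own rendering; it did not appear in the thread). The foldr/build rule says foldr k z (build g) = g k z; a producer that seq's its first argument breaks it:

<code>
-- g has the polymorphic type build expects of a producer.
g :: (Int -> b -> b) -> b -> b
g c n = c `seq` n

-- Inlining build: build g = g (:) [] = (:) `seq` [] = [].
lhs, rhs :: Int
lhs = foldr undefined 0 (g (:) [])   -- foldr undefined 0 [] = 0
rhs = g undefined 0                  -- undefined `seq` 0    = _|_

-- The rule rewrites lhs (which is 0) into rhs (which diverges),
-- matching the remark that only partial correctness survives.
</code>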
The problem with the current seq can also be seen by the fact that it doesn't work properly with the IO type.
Yes, seq is from the dark side in many respects. Ciao, Janis. -- Dr. Janis Voigtlaender http://wwwtcs.inf.tu-dresden.de/~voigt/ mailto:voigt@tcs.inf.tu-dresden.de

On Jan 24, 2007, at 8:27 AM, Lennart Augustsson wrote:
Well, I think fix destroys parametricity too, and it would be better to get rid of fix. But I'm not proposing to do that for Haskell, because I don't have a viable proposal to do so. (But I think the proposal would be along the same lines as the seq one; provide fix in a type class so we can keep tabs on it.) BTW, fix can be defined in the pure lambda calculus, just not in simply typed pure lambda calculus (when not qualified by "typed" the term "lambda calculus" usually refers to the untyped version).
I think it's important to point out here that fix _can_ be defined in sufficiently rich typed lambda-calculi. You just need unrestricted recursive types (iso-recursive is sufficient). Since Haskell has those, you can't get rid of fix using typeclasses. You would also need something like the strict positivity restriction, which is a pretty heavyweight restriction.

<code>
newtype Mu a = Roll { unroll :: Mu a -> a }

omega :: a
omega = (\x -> (unroll x) x) (Roll (\x -> (unroll x) x))

fix :: (a -> a) -> a
fix f = (\x -> f . (unroll x) x) (Roll (\x -> f . (unroll x) x)) omega

ones :: [Int]
ones = fix (1:)

fibs :: [Int]
fibs = fix (\f a b -> a : f b (a+b)) 0 1

main = print $ take 20 fibs
</code>

FYI, don't try to run this in GHC, because it gives the simplifier fits.

[snip]

Rob Dockins

Speak softly and drive a Sherman tank. Laugh hard; it's a long way to the bank. -- TMBG

Limiting the value recursion is not enough, of course. You'd have to limit the type recursion as well. -- Lennart On Jan 24, 2007, at 10:41 , Robert Dockins wrote:
On Jan 24, 2007, at 8:27 AM, Lennart Augustsson wrote:
Well, I think fix destroys parametricity too, and it would be better to get rid of fix. But I'm not proposing to do that for Haskell, because I don't have a viable proposal to do so. (But I think the proposal would be along the same lines as the seq one; provide fix in a type class so we can keep tabs on it.) BTW, fix can be defined in the pure lambda calculus, just not in simply typed pure lambda calculus (when not qualified by "typed" the term "lambda calculus" usually refers to the untyped version).
I think it's important to point out here that fix _can_ be defined in sufficiently rich typed lambda-calculi. You just need unrestricted recursive types (iso-recursive is sufficient). Since Haskell has those, you can't get rid of fix using typeclasses. You would also need something like the strict positivity restriction, which is a pretty heavyweight restriction.
<code>
newtype Mu a = Roll { unroll :: Mu a -> a }
omega :: a
omega = (\x -> (unroll x) x) (Roll (\x -> (unroll x) x))

fix :: (a -> a) -> a
fix f = (\x -> f . (unroll x) x) (Roll (\x -> f . (unroll x) x)) omega

ones :: [Int]
ones = fix (1:)

fibs :: [Int]
fibs = fix (\f a b -> a : f b (a+b)) 0 1
main = print $ take 20 fibs
</code>
FYI, don't try to run this in GHC, because it gives the simplifier fits.
[snip]
Rob Dockins
Speak softly and drive a Sherman tank. Laugh hard; it's a long way to the bank. -- TMBG

On Wednesday 24 January 2007 20:20, Stefan Monnier wrote:
FYI, don't try to run this in GHC, because it gives the simplifier fits.
You mean it triggers a bug in the inliner?
http://www.haskell.org/ghc/docs/latest/html/users_guide/bugs.html Third bullet in section 12.2.1. I gather that GHC HQ has decided that the problem is pathological enough to sweep under the rug. I can't say I blame them. Really, the only reason to construct custom fixpoint combinators is to show that it can be done :-) Using the built-in facilities for recursion is far easier and almost certainly results in better code.
Stefan

On Wed, 24 Jan 2007 10:41:09 -0500, "Robert Dockins" wrote:
newtype Mu a = Roll { unroll :: Mu a -> a }
omega :: a
omega = (\x -> (unroll x) x) (Roll (\x -> (unroll x) x))

fix :: (a -> a) -> a
fix f = (\x -> f . (unroll x) x) (Roll (\x -> f . (unroll x) x)) omega

ones :: [Int]
ones = fix (1:)

fibs :: [Int]
fibs = fix (\f a b -> a : f b (a+b)) 0 1
That's an interesting definition of fix that I haven't seen before, though I am a little puzzled by omega. Since I have an irrational fear of recursion, and I like to take every opportunity I get to cure it, I decided to take a closer look... I figure omega is just a way to write _|_ as an anonymous lambda expression. But that made me wonder what it's doing in the definition of fix. I can see that without it, fix would have the wrong type, since type inference gives the x parameters the type (Mu(b->a)):
-- A bit like fix, except it's, erm...
broke :: (a -> a) -> b -> a
broke f = (\x -> f . unroll x x) (Roll (\x -> f . unroll x x))
So omega consumes an argument that has unconstrained type, and which appears to be unused. It's perhaps easier to see the unused argument with fix rewritten in a more point-full style:
fix' f = (\x y -> f (unroll x x y)) (Roll (\x y -> f (unroll x x y))) omega
Performing the application, (fix' f) becomes (f(fix' f)), and so on. So, I think I follow how this fixed-point operator works, and it seems reasonable to use _|_ to consume an unused non-strict argument. But I find it mildly disturbing that this unused argument seems to appear out of nowhere. Then I noticed that rewriting fix without (.) seems to work just as well (modulo non-termination of the GHC inliner), and without the unused argument:
fix :: (a -> a) -> a
fix f = (\x -> f (unroll x x)) (Roll (\x -> f (unroll x x)))
Of course, the corollary is that I can introduce as many unused arguments as I want:
fix'' f = (\x -> (f.).(unroll x x)) (Roll (\x -> (f.).(unroll x x))) omega omega
fix''' f = (\x -> ((f.).).(unroll x x)) (Roll (\x -> ((f.).).(unroll x x))) omega omega omega
-- etc...
This gave me a new way to approach the question of where the unused argument came from. Given a function (f) of appropriate type, I can write:
f :: a -> a
(f.) :: (b -> a) -> (b -> a)
((f.).) :: (c -> b -> a) -> (c -> b -> a)
And so on. Nothing strange here. But all of these functions can inhabit the argument type of fix, so:
fix :: (a -> a) -> a
fix f :: a
fix (f.) :: b -> a
fix ((f.).) :: c -> b -> a
Those are some strange types, and I have found those unused arguments without reference to any particular implementation of fix. Thinking about it, (forall a b . b -> a) is no stranger than (forall a . a). Indeed, I think the only thing that can have type (forall a . a) is _|_. Likewise, I can't imagine anything other than the identity function having the type (forall a . a -> a), and it's not too hard to see where (fix id) would lead.

So perhaps it's not the appearance of the unused argument in the above definition of the fixed-point operator, but the type of the fixed-point operator in general that is a bit strange. Certainly, I have always found fix to be mysterious, even though I am getting quite comfortable with using it.

I'm wondering: Is any of this related to the preceding discussion about how fix affects parametricity? Can anyone recommend some (preferably entry-level) readings?
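As a quick sanity check that these "strange" types really are inhabited in the expected way (a GHCi session of my own, assuming Data.Function's fix), fix (f.) is a function that ignores its extra argument and unfolds f forever:

<code>
Prelude> :m +Data.Function
Prelude Data.Function> take 3 (fix ((1:) .) undefined)
[1,1,1]
</code>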

On Sunday 28 January 2007 23:19, Matthew Brecknell wrote:
On Wed, 24 Jan 2007 10:41:09 -0500, "Robert Dockins" wrote:
newtype Mu a = Roll { unroll :: Mu a -> a }
omega :: a
omega = (\x -> (unroll x) x) (Roll (\x -> (unroll x) x))

fix :: (a -> a) -> a
fix f = (\x -> f . (unroll x) x) (Roll (\x -> f . (unroll x) x)) omega

ones :: [Int]
ones = fix (1:)

fibs :: [Int]
fibs = fix (\f a b -> a : f b (a+b)) 0 1
That's an interesting definition of fix that I haven't seen before, though I am a little puzzled by omega. Since I have an irrational fear of recursion, and I like to take every opportunity I get to cure it, I decided to take a closer look...
I figure omega is just a way to write _|_ as an anonymous lambda expression.
Yup. If you type-erase it, you get the very familiar term:

(\x -> x x) (\x -> x x)

which is the canonical non-terminating untyped lambda term.
But that made me wonder what it's doing in the definition of fix.
I like to think of fix as implementing the semantics of recursion via the ascending Kleene chain. Kleene's fixpoint theorem says that:

least_fixpoint( f ) = least_upper_bound (f^i _|_ | i in N)

where f^i means f composed together i times. If you run it out, you'll see that my definition of fix calculates something like:

(f . f . f . f ... ) _|_ === f (f (f (f ( ... _|_))))
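A finite rendering of that chain (my own illustration, not from the thread): the i-th approximant applies f i times to _|_, and fix f is the limit of the approximants.

<code>
-- approx i f is the i-th element of the ascending Kleene chain.
approx :: Int -> (a -> a) -> a
approx 0 _ = undefined              -- undefined stands in for _|_
approx n f = f (approx (n - 1) f)

-- take 5 (approx 6 (1:)) == [1,1,1,1,1]: each approximant is a
-- better-defined prefix of fix (1:).
</code>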
I can see that without it, fix would have the wrong type, since
type inference gives the x parameters the type (Mu(b->a)):
-- A bit like fix, except it's, erm...
broke :: (a -> a) -> b -> a
broke f = (\x -> f . unroll x x) (Roll (\x -> f . unroll x x))
So omega consumes an argument that has unconstrained type, and which appears to be unused. It's perhaps easier to see the unused argument
with fix rewritten in a more point-full style:
fix' f = (\x y -> f (unroll x x y)) (Roll (\x y -> f (unroll x x y))) omega
Performing the application, (fix' f) becomes (f(fix' f)), and so on.
So, I think I follow how this fixed-point operator works, and it seems reasonable to use _|_ to consume an unused non-strict argument. But I find it mildly disturbing that this unused argument seems to appear out of nowhere.
Then I noticed that rewriting fix without (.) seems to work just as well (modulo non-termination of the GHC inliner), and without the unused
argument:
fix :: (a -> a) -> a
fix f = (\x -> f (unroll x x)) (Roll (\x -> f (unroll x x)))
This is another fine way to write it.
Of course, the corollary is that I can introduce as many unused
arguments as I want:
fix'' f = (\x -> (f.).(unroll x x)) (Roll (\x -> (f.).(unroll x x))) omega omega
fix''' f = (\x -> ((f.).).(unroll x x)) (Roll (\x -> ((f.).).(unroll x x))) omega omega omega
-- etc...
This gave me a new way to approach the question of where the unused argument came from. Given a function (f) of appropriate type, I can
write:
f :: a -> a
(f.) :: (b -> a) -> (b -> a)
((f.).) :: (c -> b -> a) -> (c -> b -> a)
And so on. Nothing strange here. But all of these functions can inhabit
the argument type of fix, so:
fix :: (a -> a) -> a
fix f :: a
fix (f.) :: b -> a
fix ((f.).) :: c -> b -> a
Those are some strange types, and I have found those unused arguments without reference to any particular implementation of fix. Thinking about it, (forall a b . b -> a) is no stranger than (forall a . a). Indeed, I think the only thing that can have type (forall a . a) is _|_. Likewise, I can't imagine anything other than the identity function having the type (forall a . a -> a), and it's not too hard to see where (fix id) would lead.
So perhaps it's not the appearance of the unused argument in the above definition of the fixed-point operator, but the type of the fixed-point operator in general that is a bit strange. Certainly, I have always found fix to be mysterious, even though I am getting quite comfortable with using it.
I'm wondering: Is any of this related to the preceding discussion about how fix affects parametricity? Can anyone recommend some (preferably entry-level) readings?

Yitzchak Gale wrote:
I wrote:
Prelude> let f .! g = ((.) $! f) $! g
Prelude> let f = undefined :: Int -> IO Int
Prelude> f `seq` 42
*** Exception: Prelude.undefined
Prelude> ((>>= f) . return) `seq` 42
42
Prelude> ((>>= f) .! return) `seq` 42
42
Duncan Coutts wrote:
Perhaps I'm missing something but I don't see what's wrong.
The monad laws say that (>>= f) . return must be identical to f.
I thought it was:

return x >>= f = f x

so here the lhs is saturated, so will hit _|_ when the action is executed, just as the rhs will. I think the problem you're encountering is just that the above law doesn't imply:

(>>= f) . return = f

in Haskell, because the lhs is of the form \x -> _|_ whereas the rhs, which should really be of the form \x -> _|_, is actually _|_ already(!), so the _|_ has effectively been allowed to jump to the left of the arrow, hence the "inequality" detected by seq.

Perhaps a solution would be to force _|_ to respect the shape of the type, thus non-terminating values of type a -> b would be given the value \_ -> _|_ instead of _|_ ?

Regards, Brian.
-- http://www.metamilk.com
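A quick GHCi check of Brian's point (my own session, in the style of the ones above): the pointwise form of the law is immune to seq, because an explicit lambda is already in WHNF; only the point-free form exposes the difference.

<code>
Prelude> let f = undefined :: Int -> IO Int
Prelude> (\x -> return x >>= f) `seq` 42   -- eta-expanded: a lambda, so WHNF
42
Prelude> f `seq` 42
*** Exception: Prelude.undefined
</code>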

Brian Hulley wrote:
Yitzchak Gale wrote:
I wrote:
Prelude> let f = undefined :: Int -> IO Int
Prelude> f `seq` 42
*** Exception: Prelude.undefined
Prelude> ((>>= f) . return) `seq` 42
42

The monad laws say that (>>= f) . return must be identical to f.
I thought it was:
return x >>= f = f x
so here the lhs is saturated, so will hit _|_ when the action is executed just as the rhs will. I think the problem you're encountering is just that the above law doesn't imply:
(>>= f) . return = f
in Haskell ok so far...
because the lhs is of the form \x -> _|_
No, I got this wrong. Evaluating the lhs to WHNF doesn't hit the _|_. (Incidentally, the version using .! evaluates to exactly the same thing, since (>>= f) `seq` ((>>= f) . return) = ((>>= f) . return), since (\x -> x >>= f) is already in WHNF afaiu.) Brian.
-- http://www.metamilk.com

Brian Hulley wrote:
Brian Hulley wrote:
Yitzchak Gale wrote:
I wrote:
Prelude> let f = undefined :: Int -> IO Int
Prelude> f `seq` 42
*** Exception: Prelude.undefined
Prelude> ((>>= f) . return) `seq` 42
42

The monad laws say that (>>= f) . return must be identical to f.
I thought it was:
return x >>= f = f x
so here the lhs is saturated, so will hit _|_ when the action is executed just as the rhs will.
Oops! But that does not mean the equation holds, because for example:

Prelude> (return 3 >>= f) `seq` 42
42
Prelude> (f 3) `seq` 42
*** Exception: Prelude.undefined

In the lhs you only hit _|_ when the composite (>>=) action is actually being executed, whereas in the rhs you hit _|_ when computing the function which will return the action to execute, so there is a difference. Brian.
-- http://www.metamilk.com

Can someone explain to me, given that (a) I'm not particularly expert at maths, (b) I'm not particularly expert at Haskell, and (c) I'm a bit fuzzybrained of late: Given that _|_ represents in some sense any computation not representable in and/or not consistent with Haskell, why/how is reasoning about Haskell program behavior in the presence of _|_ *not* like reasoning about logic behavior in the presence of (p^~p)->q? -- brandon s. allbery [linux,solaris,freebsd,perl] allbery@kf8nh.com system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH

On Jan 23, 2007, at 2:09 PM, Brandon S. Allbery KF8NH wrote:
Can someone explain to me, given that (a) I'm not particularly expert at maths, (b) I'm not particularly expert at Haskell, and (c) I'm a bit fuzzybrained of late:
Given that _|_ represents in some sense any computation not representable in and/or not consistent with Haskell,
I'm not sure you've got quite the right notion of what bottom "means." There are lots of computations that are representable in Haskell that are equivalent to _|_. _|_ is just a name we give to the class of computations that don't act right (terminate).
why/how is reasoning about Haskell program behavior in the presence of _|_ *not* like reasoning about logic behavior in the presence of (p^~p)->q?
You seem to be talking around the edges of the Curry-Howard isomorphism. C-H basically says that there is a correspondence between typed lambda calculi and some logical system. Types correspond to logical formulas and lambda terms correspond to proofs. However, a system like Haskell's (where every type is inhabited) corresponds to an inconsistent logic (one where every well-formed statement is provable). That just means that the logic system corresponding to Haskell's type system isn't a very useful one. However, we don't reason _about_ Haskell using that logic, so it's not really a problem.

It's possible, however, that I don't understand your question. The formula (p^~p)->q (AKA, proof by contradiction) is valid in most classical and constructive logics that I know of, so I'm not quite sure what you're getting at.

Rob Dockins

Speak softly and drive a Sherman tank. Laugh hard; it's a long way to the bank. -- TMBG

On Jan 23, 2007, at 14:48 , Robert Dockins wrote:
It's possible, however, that I don't understand your question. The formula (p^~p)->q (AKA, proof by contradiction) is valid in most classical and constructive logics that I know of, so I'm not quite sure what you're getting at.
I'm not expressing myself well, and I don't have the terminology or enough understanding of things like category theory or formal logic to express myself at all clearly. :(

I am modeling _|_ in Haskell as a "failed" computation: either a nonterminating computation, or an impossible operation (e.g. head []). Once you have an actualized _|_ (or a possible one, as when you apply seq to a computation which may be _|_), your entire computation is suspect. I am modeling (p^~p)->q in logic as a "failed" production: once you have produced it, your entire logical production is suspect.

It seems to me that the recent discussions about forcing strictness are deliberately introducing the equivalent of (p^~p)->q, and it's not clear to me that you can really reason about the resulting behavior. The recent discussions which have seq (or alternatives) leading to theoretical difficulties seem to bear this out.

-- brandon s. allbery [linux,solaris,freebsd,perl] allbery@kf8nh.com system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH

Brandon S. Allbery KF8NH wrote:
Can someone explain to me, given that (a) I'm not particularly expert at maths, (b) I'm not particularly expert at Haskell, and (c) I'm a bit fuzzybrained of late:
Me too...
Given that _|_ represents in some sense any computation not representable in and/or not consistent with Haskell, why/how is reasoning about Haskell program behavior in the presence of _|_ *not* like reasoning about logic behavior in the presence of (p^~p)->q?
The only catch I see to that POV is that the way `seq` is defined, "undefined `seq` 42" *must* return an error. If this were analogous to "(p^~p)->q", then "undefined `seq` 42" would be allowed to return any value whatsoever. (Imagine an "unsafeSeq" operator, such that "undefined `unsafeSeq` 42" could *either* raise an exception or return 42, and implementations would be free to replace "a `unsafeSeq` b" with "b" when optimizing. Would this be useful?) Can we treat "a `seq` b" as a sort of pragma and not a "real" function? Does Haskell semantics become tractable again if we treat Haskell excluding "seq" (and excluding exceptions? and excluding threads?) as a category, and the exceptions as operating on some kind of meta-level?

On Jan 23, 2007, at 14:58 , Seth Gordon wrote:
The only catch I see to that POV is that the way `seq` is defined, "undefined `seq` 42" *must* return an error. If this were analogous to "(p^~p)->q", then "undefined `seq` 42" would be allowed to return any value whatsoever.
That's not quite what I was trying to say. (p^~p)->q is equivalent to _|_ in the sense that once you derive/compute (respectively) it, the "world" in which it exists breaks. (I don't think formal logic can have a Haskell-like _|_, but deriving (p^~p)->q is close to it in effect.) -- brandon s. allbery [linux,solaris,freebsd,perl] allbery@kf8nh.com system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH

Hi
That's not quite what I was trying to say. (p^~p)->q is equivalent to _|_ in the sense that once you derive/compute (respectively) it, the "world" in which it exists breaks.
I think that's an overly harsh view of _|_ to take. The world does not break once you compute _|_ - a _|_ value doesn't allow you to prove/compute anything you couldn't before. While removing _|_ from the language does make some things nicer to reason about, there aren't many corners where _|_ really gets in the way that much - seq being one of those few corners. Thanks Neil

On Jan 23, 2007, at 15:34 , Neil Mitchell wrote:
prove/compute anything you couldn't before. While removing _|_ from the language does make some things nicer to reason about, there aren't many corners where _|_ really gets in the way that much - seq being one of those few corners.
But that is exactly the problem: `seq` forces _|_ to get in the way, when it normally doesn't. So I'm not clear that trying to fit `seq` into a formalization of Haskell's semantics is the way to go. -- brandon s. allbery [linux,solaris,freebsd,perl] allbery@kf8nh.com system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH

Hi
prove/compute anything you couldn't before. While removing _|_ from the language does make some things nicer to reason about, there aren't many corners where _|_ really gets in the way that much - seq being one of those few corners.
But that is exactly the problem: `seq` forces _|_ to get in the way, when it normally doesn't. So I'm not clear that trying to fit `seq` into a formalization of Haskell's semantics is the way to go.
Agreed, that was the point I was trying to make :) You seemed to be suggesting _|_ was "evil" (for want of a more precise term) in its behaviour with Haskell. As you seem to say now (and I agree), _|_ is a perfectly useful value, just seq gets in the way. Thanks Neil

On Jan 23, 2007, at 15:50 , Neil Mitchell wrote:
prove/compute anything you couldn't before. While removing _|_ from the language does make some things nicer to reason about, there aren't many corners where _|_ really gets in the way that much - seq being one of those few corners.
But that is exactly the problem: `seq` forces _|_ to get into the way, when it normally doesn't. So I'm not clear that trying to fit `seq` into a formalization of Haskell's semantics is the way to go.
Agreed, that was the point I was trying to make :)
You seemed to be suggesting _|_ was "evil" (for want of a more precise term) in its behaviour with Haskell. As you seem to say now (and I agree), _|_ is a perfectly useful value, just seq gets in the way.
I think at this point I can finally say what I was trying to grasp at (and missing, although I was in the right ballpark at least): It seems to me that the notion of reasoning about Haskell programs in terms of category theory works because category theory is in some sense inherently lazy, whereas (for example) formal logic is inherently strict. So the problem with reasoning about `seq` is that it changes/breaks (depending on how you look at it) the model. Now: either one can come up with a way to model strictness in category theory (compare, for example, modeling I/O in pure functional languages by means of monads), or one can accept that forcing strictness requires reasoning via a different model. Either way, `seq` (or strictness in general) is not really "evil". (And neither is _|_.) -- brandon s. allbery [linux,solaris,freebsd,perl] allbery@kf8nh.com system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH

Brandon S. Allbery KF8NH wrote:
That's not quite what I was trying to say. (p^~p)->q is equivalent to _|_ in the sense that once you derive/compute (respectively) it, the "world" in which it exists breaks. (I don't think formal logic can have a Haskell-like _|_, but deriving (p^~p)->q is close to it in effect.)
Here's a couple of papers you might like...

"Fast and Loose Reasoning is Morally Correct"
http://web.comlab.ox.ac.uk/oucl/work/jeremy.gibbons/publications/fast+loose....

"Precise Reasoning About Non-strict Functional Programs: How to Chase Bottoms, and How to Ignore Them"
http://www.cs.chalmers.se/~nad/publications/danielsson-lic.pdf

"Total Functional Programming"
http://www.jucs.org/jucs_10_7/total_functional_programming/jucs_10_07_0751_0...

Greg Buchholz

Hi Brian, Brian Hulley wrote:
I thought it was:

return x >>= f = f x

...I think the problem you're encountering is just that the above law doesn't imply:

(>>= f) . return = f
Sorry, I was not clear. For the purposes of this thread, I am using the word "monad" in the category-theoretic sense. The monad axioms are stated in terms of morphisms and composition in the category.

If we then define - in the category - for each morphism f, a morphism

(>>= f) = join . fmap f

then we can derive the identities

(>>= return) = id
(>>= f) . return = f
(>>= g) . (>>= f) = (>>= (>>= g) . f)

directly from the monad axioms.

Assume, for now, that morphisms in the category will exactly correspond to functions in Haskell. Then the above identities will be the conditions for a monad in Haskell. The conditions for a monad in Haskell are usually stated in the non-points-free form, presumably for clarity. But as you pointed out, that is _not_ equivalent to the above when we are being careful about strictness.

My challenge is:

1. Find a way to model strictness/laziness properties of Haskell functions in a category in a way that is reasonably rich.
2. Map monads in that category to Haskell, and see what we get.
3. Compare that to the traditional concept of a monad in Haskell.

Is this possible? Any more ideas how to proceed? I told you this was a troll. :)

Thanks, Yitz
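As a concrete reading of the definition above (a sketch of my own; note that at the time of this thread Monad did not imply Functor, hence the extra constraint):

<code>
import Control.Monad (join)

-- (>>= f) defined as join . fmap f, as in the derivation above.
bindVia :: (Functor m, Monad m) => (a -> m b) -> m a -> m b
bindVia f = join . fmap f

-- e.g. bindVia (\x -> [x, x + 1]) [1, 10] == [1, 2, 10, 11]
</code>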

On Wed, 2007-01-24 at 00:33 +0200, Yitzchak Gale wrote:
My challenge is:
1. Find a way to model strictness/laziness properties of Haskell functions in a category in a way that is reasonably rich.
The reason it's not obvious for categories is because the semantics for Haskell comes from domain theory (CPOs etc) not categories. Domains can describe the issues of strictness in data types quite precisely, including the lifted function space. Duncan

I wrote:
1. Find a way to model strictness/laziness properties of Haskell functions in a category in a way that is reasonably rich.
Duncan Coutts wrote:
The reason it's not obvious for categories is because the semantics for Haskell comes from domain theory (CPOs etc) not categories.
The category we have been discussing seems to do the trick quite easily for (1):

Let Empty be the empty type. Let undef = \x -> undefined, and as before,

f .! g = f `seq` g `seq` (\x -> f (g x))

Then we can take HaskL - Haskell with Laziness - to be the category of Haskell types and Haskell functions (including seq), with .! as composition. It is easy to check that this is a category, using the left-monoid laws for seq:

(x `seq` y) `seq` z = x `seq` (y `seq` z)
(x `seq` y) `seq` z = (y `seq` x) `seq` z
(\x -> expr) `seq` y = y

Empty is bi-universal in HaskL, with undef the unique morphism in either direction. We have undef .! f = undef for any three types and any morphism f. We say that a morphism f is strict iff f .! undef = undef.

We will only consider functors (F, fmap) on HaskL that are natural, in the sense that fmap is a morphism from (A -> B) to (F(A) -> F(B)) for all A, B. Similarly for return and join with monads. It is easy to show that a functor (F, fmap) preserves strictness if fmap is strict as a morphism.

-Yitz
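A quick GHCi check (my own session) of how .! differs observationally from ordinary composition: (.) merely builds a lambda, which is already in WHNF, while .! forces both of its arguments first.

<code>
Prelude> let f .! g = f `seq` g `seq` (\x -> f (g x))
Prelude> ((undefined . id) `seq` ())    -- (.) yields a lambda: WHNF
()
Prelude> ((undefined .! id) `seq` ())   -- .! forces undefined
*** Exception: Prelude.undefined
</code>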

On 2007 January 23 Tuesday 17:33, Yitzchak Gale wrote:
1. Find a way to model strictness/laziness properties of Haskell functions in a category in a way that is reasonably rich.
2. Map monads in that category to Haskell, and see what we get.
3. Compare that to the traditional concept of a monad in Haskell.
Is this possible? Any more ideas how to proceed?
Paul B. Levy's studies of "call-by-push-value" model strictness/laziness using a category theoretic approach. He considers evaluation as an effect, such that if you brought it into Haskell you would expect evaluation to take the form of a monad transformer. There would be difficulty, though, in the same areas which have been discussed in this thread, because his morphisms are functions on something distinct from Haskell values. The corresponding monads could return data in WHNF and thunks of functions, but not functions themselves.

Just for the record, I think this completes the requirements of my challenge. Please comment! Is this correct? Thanks.
1. Find a way to model strictness/laziness properties of Haskell functions in a category in a way that is reasonably rich.
We use HaskL, the category of Haskell types, Haskell functions, and strict composition:

f .! g = f `seq` g `seq` (f . g)

Let undef = \_ -> undefined. A function f is strict iff f .! undef = undef, lazy iff f .! undef /= undef, and convergent iff f .! g /= undef for all g /= undef.

We consider only functors for which fmap is a morphism. A functor preserves strictness iff fmap is strict. A functor preserves laziness iff fmap is convergent.

Note that with these definitions, undefined is lazy.
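A GHCi illustration (my own) of that last remark: undefined .! undef is _|_ outright, while undef itself is a lambda that seq accepts; since _|_ /= undef, we have undefined .! undef /= undef, which is exactly the definition of undefined being lazy.

<code>
Prelude> let f .! g = f `seq` g `seq` (f . g)
Prelude> let undef = (\_ -> undefined) :: Int -> Int
Prelude> (undef `seq` ())                -- undef is a lambda, not _|_
()
Prelude> (((undefined :: Int -> Int) .! undef) `seq` ())
*** Exception: Prelude.undefined
</code>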
2. Map monads in that category to Haskell, and see what we get.
Assume that return /= undef, and that >>= is convergent in its second argument. The monad laws are:

1. (>>= return) = id
2. (>>= f) . return = f
3. (>>= g) . (>>= f) = (>>= (>>= g) . f)
4. >>= is strict in its second argument.
3. Compare that to the traditional concept of a monad in Haskell.
As long as we are careful to use the points-free version, the laws are the same as the traditional monad laws. In particular, we can use the usual composition for these laws. But we must add the strictness law.

Could seq be changed so that it will not give an error if it finds undefined? Am I right in thinking that seq is supposed to theoretically do nothing, but simply give a hint to the compiler, so to speak? If that is true, it should merely attempt to evaluate it, but ignore it if it cannot evaluate it. Is it realistic or desirable to change seq like this?

Anyway, as far as I can see it is already true that

(>>= f) . return = f

because 'equality' for Monads simply means they do the same thing when 'executed' or whatever. The only thing that can currently find a difference between the above monads is seq, and seq is a funny thing.

Aaron
On 2/7/07, Yitzchak Gale wrote:
Just for the record, I think this completes the requirements of my challenge. Please comment! Is this correct? Thanks.
1. Find a way to model strictness/laziness properties of Haskell functions in a category in a way that is reasonably rich.
We use HaskL, the category of Haskell types, Haskell functions, and strict composition:
f .! g = f `seq` g `seq` (f . g)
Let undef = \_ -> undefined. A function f is strict iff f .! undef = undef, lazy iff f .! undef /= undef, and convergent iff f .! g /= undef for all g /= undef.
We consider only functors for which fmap is a morphism. A functor preserves strictness iff fmap is strict. A functor preserves laziness iff fmap is convergent.
Note that with these definitions, undefined is lazy.
2. Map monads in that category to Haskell, and see what we get.
Assume that return /= undef, and that >>= is convergent in its second argument.
The monad laws are:
1. (>>= return) = id
2. (>>= f) . return = f
3. (>>= g) . (>>= f) = (>>= (>>= g) . f)
4. >>= is strict in its second argument.
3. Compare that to the traditional concept of a monad in Haskell.
As long as we are careful to use the points-free version, the laws are the same as the traditional monad laws. In particular, we can use the usual composition for these laws. But we must add the strictness law.

Aaron McDaid wrote:
Could seq be changed so that it will not give an error if it finds undefined?
The definition of seq is that seq _|_ x = _|_. That is what it is supposed to do. Actually, the behavior of seq on undefined is very tame - it raises an exception which can be caught. Sometimes seq hangs. Unfortunately, getting around that would require solving the halting problem.
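A small sketch (mine, assuming a reasonably recent GHC's Control.Exception API) of "it raises an exception which can be caught":

<code>
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  r <- try (evaluate (undefined `seq` (42 :: Int)))
  case r of
    Left e  -> putStrLn ("caught: " ++ show (e :: SomeException))
    Right n -> print n
</code>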
Am I right in thinking that seq is supposed to theoretically do nothing, but simply give a hint to the compiler so to speak?
I see it the other way around. Theoretically, seq does a lot. In practice, it is up to each compiler to decide on the implementation details.
Anyway, as far as I can see it is already true that (>>= f) . return = f because 'equality' for Monads simply means they do that same thing when 'executed' or whatever. The only thing that can currently find a difference between the above monads is seq and seq is a funny thing.
That does seem to be a pretty widespread view. I am a relative newcomer to Haskell, so I guess I am naive. But, so far, I am not convinced. Haskell is a non-strict language, so non-strict values are legitimate, and so is seq. I don't see how seq is any more funny than irrational numbers are irrational or imaginary numbers are imaginary. Regards, Yitz

I think seq is funny because it is not lambda definable. All other constructs in H98 are (except for ! on data types which is defined in terms of seq). -- Lennart On Feb 8, 2007, at 01:12 , Yitzchak Gale wrote:
Haskell is a non-strict language, so non-strict values are legitimate, and so is seq. I don't see how seq is any more funny than irrational numbers are irrational or imaginary numbers are imaginary.
Regards, Yitz

Lennart Augustsson wrote:
I think seq is funny because it is not lambda definable.
Does the set of computable functions on the natural numbers defined by the lambda calculus augmented with seq have higher Turing degree than the set of classical computable functions? -Yitz

Lennart Augustsson wrote:
I think seq is funny because it is not lambda definable.
I wrote:
Does the set of computable functions on the natural numbers defined by the lambda calculus augmented with seq have higher Turing degree than the set of classical computable functions?
I guess I was not so clear here. What I mean is: Define a lazy lambda calculus, e.g., by restricting beta-reduction suitably. Does this still produce all general recursive functions? Now augment the lazy lambda calculus with seq. What functions does it produce now? I am pretty sure people have done this, but I am not up on the literature. My presumption is that lazy lambda calculus still produces all general recursive functions, and that adding seq makes no difference. In any case, what exactly is bothering you about seq not being a lambda? Regards, Yitz

I'm not sure what you're asking. The (untyped) lambda calculus is Turing complete. How could seq improve that? On Feb 8, 2007, at 11:18 , Yitzchak Gale wrote:
Lennart Augustsson wrote:
I think seq is funny because it is not lambda definable.
Does the set of computable functions on the natural numbers defined by the lambda calculus augmented with seq have higher Turing degree than the set of classical computable functions?
-Yitz

Lennart Augustsson wrote:
I'm not sure what you're asking. The (untyped) lambda calculus is Turing complete. How could seq improve that?
Obviously, it can't. But how can it hurt? Classical lambda calculus does not model the semantics of laziness, so seq is equivalent to flip const there, just like foldl' is equivalent to foldl. If we modify the lambda calculus to model laziness - let's say, by restricting beta-reduction - then the interesting properties of seq are revealed. Why should we treat seq differently in Haskell just because its interesting properties are not modeled in the classical lambda calculus? Haskell is not a classical language, it is non-strict (among other differences). Regards, Yitz

Adding seq ruins eta reduction. For normal order lambda calculus we have '\x.f x = f' (x not free in f). If we add seq this is no longer true. I'm not sure why you bring up lazy evaluation (I presume you mean lazy evaluation as in call-by-need). Having call-by-need or not is unobservable, with or without seq. I'm a fan of eta, it makes reasoning easier. It also means the compiler can do more transformations. -- Lennart On Feb 12, 2007, at 10:22 , Yitzchak Gale wrote:
Lennart Augustsson wrote:
I'm not sure what you're asking. The (untyped) lambda calculus is Turing complete. How could seq improve that?
Obviously, it can't. But how can it hurt?
Classical lambda calculus does not model the semantics of laziness, so seq is equivalent to flip const there, just like foldl' is equivalent to foldl. If we modify the lambda calculus to model laziness - let's say, by restricting beta-reduction - then the interesting properties of seq are revealed.
Why should we treat seq differently in Haskell just because its interesting properties are not modeled in the classical lambda calculus? Haskell is not a classical language, it is non-strict (among other differences).
Regards, Yitz

Adding seq ruins eta reduction. For normal order lambda calculus we have '\x.f x = f' (x not free in f). If we add seq this is no longer true.
isn't that a problem of seq (and evaluation in Haskell generally) not being strict enough (ie, forcing evaluation only to weak head normal form rather than to head normal form)? for instance:

seq (\x -> _|_ x) () = () =/= _|_ = seq _|_ ()

but (assuming a variant, seq-hnf, forcing to hnf instead):

seq-hnf (\x -> _|_ x) () = _|_ = seq-hnf _|_ ()

I don't have my "phone book" here, but didn't Barendregt have a discussion of what kind of normal form would be an appropriate choice for the meaning of lambda terms, with hnf being "good" and whnf or nf being "bad"? reduction to whnf is a pragmatic choice, with a long history, and its own calculus, which is not quite the ordinary lambda calculus. but it's been ages since I learned these things, so I might be misremembering.

Claus
I'm a fan of eta, it makes reasoning easier. It also means the compiler can do more transformations.
I also like eta conversion, as well as its variations for non-function types. what they have in common is that the expanded form provides syntactic/structural evidence for properties that are only semantically present in the reduced form. for instance, if we add non-functions to the calculus, eta has to be constrained with type information for f - as the expanded form promises that we have a function, holding more information than the reduced form.

variations of eta for non-function types allow us to make functions/contexts less strict (the kind of "borrowing information from the future" so often needed in cyclic programming, or in dealing with other potentially infinite structures):

lazy_id_pair x = (fst x,snd x) -- lazy_id_pair _|_ = (_|_,_|_)

vs

strict_id_pair (a,b) = (a,b) -- strict_id_pair _|_ = _|_

at the expense of not having laws like:

x = (fst x,snd x) -- not true in general, even if we know that x::(a,b)
x = x >>= return -- not true in general, even if x :: Monad m => m a
x = \y-> x y -- not true in general, even if x :: a -> b

we still have the inequalities, though - the expanded form being more defined than the reduced form.

x :: t ]= (expand-at-t x) :: t -- eta conversion at type t

so compilers could use eta-expansion, but not eta-reduction (not without an approximate analysis of an undecidable property). do you happen to have an example in mind where eta-reduction would be beneficial as a compiler transformation, but analysis (to demonstrate that the expanded functional expression terminates successfully) impractical?

No, I can't say off hand if seq-hnf would keep eta valid, either. Neither do I know how to implement seq-hnf efficiently. :) As far as eta for other types, yes, I'll take it if I can get it's easily. But I'm also pretty happy with encoding all the other data types within the lambda calculus (which was how Haskell behaved before seq). -- Lennart On Feb 12, 2007, at 22:05 , Claus Reinke wrote:
Adding seq ruins eta reduction. For normal order lambda calculus we have '\x.f x = f' (x not free in f). If we add seq this is no longer true.
isn't that a problem of seq (and evaluation in Haskell generally) not being strict enough (ie, forcing evaluation only to weak head normal form rather than to head normal form)?
for instance:
seq (\x->_|_ x) () = () =/= _|_ = seq _|_ ()
but (assuming a variant, seq-hnf, forcing to hnf instead): seq-hnf (\x-> _|_ x ) () = _|_ = seq-hnf _|_ ()
I don't have my "phone book" here, but didn't Barendregt have a discussion of what kind of normal form would be an appropriate choice for the meaning of lambda terms, with hnf being "good" and whnf or nf being "bad"? reduction to whnf is a pragmatic choice, with a long history, and its own calculus, which is not quite the ordinary lambda calculus.
but it's been ages since I learned these things, so I might be misremembering.
Claus
I'm a fan of eta, it makes reasoning easier. It also means the compiler can do more transformations.
I also like eta conversion, as well as its variations for non-function types. what they have in common is that the expanded form provides syntactic/structural evidence for properties that are only semantically present in the reduced form. for instance, if we add non-functions to the calculus, eta has to be constrained with type information for f - as the expanded form promises that we have a function, holding more information than the reduced form.
variations of eta for non-function types allow us to make functions/contexts less strict (the kind of "borrowing information from the future" so often needed in cyclic programming, or in dealing with other potentially infinite structures):
lazy_id_pair x = (fst x,snd x) -- lazy_id_pair _|_ = (_|_,_|_)
vs
strict_id_pair (a,b) = (a,b) -- strict_id_pair _|_ = _|_
at the expense of not having laws like: x = (fst x,snd x) -- not true in general, even if we know that x::(a,b)
x = x >>= return -- not true in general, even if x :: Monad m => m a
x = \y-> x y -- not true in general, even if x :: a -> b
we still have the inequalities, though - the expanded form being more defined than the reduced form.
x :: t ]= (expand-at-t x) :: t -- eta conversion at type t
so compilers could use eta-expansion, but not eta-reduction (not without an approximate analysis of an undecidable property). do you happen to have an example in mind where eta-reduction would be beneficial as a compiler transformation, but analysis (to demonstrate that the expanded functional expression terminates successfully) impractical?

Aaron McDaid wrote:
Could seq be changed so that it will not give an error if it finds undefined? Am I right in thinking that seq is supposed to theoretically do nothing, but simply give a hint to the compiler so to speak? If that is true, it should merely attempt to evaluate it, but ignore it if it cannot evaluate it.
⊥ represents any unsuccessful computation, not just the exception that a Haskell implementation throws when it evaluates "undefined". In particular, a program that does not terminate at all can be described as returning ⊥.
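To make the two flavors concrete (an illustration of my own): both definitions below denote ⊥, even though only the first produces an observable error message.

<code>
bottomByError :: Int
bottomByError = error "boom"   -- _|_ via an exception

bottomByLoop :: Int
bottomByLoop = bottomByLoop    -- _|_ via non-termination
</code>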
participants (15)
- Aaron McDaid
- Brandon S. Allbery KF8NH
- Brian Hulley
- Claus Reinke
- Duncan Coutts
- Greg Buchholz
- Janis Voigtlaender
- Lennart Augustsson
- Matthew Brecknell
- Neil Mitchell
- Robert Dockins
- Scott Turner
- Seth Gordon
- Stefan Monnier
- Yitzchak Gale