
From: http://en.wikibooks.org/wiki/Haskell/Laziness

Given two functions of one parameter, f and g, we say f is stricter than g if f x evaluates x to a deeper level than g x.

Exercises
1. Which is the stricter function?

f x = length [head x]
g x = length (tail x)

Prelude> let f x = length [head x]
Prelude> let g x = length (tail x)
Prelude> f undefined
1
Prelude> g undefined
*** Exception: Prelude.undefined
Prelude>

So, g is stricter than f?

Wouldn't both functions need to evaluate x to the same level, *thunk* : *thunk*, to ensure listhood?

f x = length [head *thunk* : *thunk*]
g x = length (tail *thunk* : *thunk*)

Michael

On Sat, Jul 31, 2010 at 4:56 PM, michael rice
Notice the two different kinds of brackets being used in f versus g :)

OK, in f, *length* already knows its argument is a list. In g, *length* doesn't know what's inside the parens, so there's extra evaluation there. So g is already ahead before we get to what's inside the [] and ().

But since both still have to eval x to *thunk* : *thunk*, g evaluates "to a deeper level"?

Michael

On 7/31/10 12:59, michael rice wrote:
OK, in f, *length* already knows its argument is a list.
In g, *length* doesn't know what's inside the parens, so there's extra evaluation there. So g is already ahead before we get to what's inside the [] and ().
But since both still have to eval x to *thunk* : *thunk*, g evaluates "to a deeper level"?
The whole point of laziness is that f *doesn't* have to eval x.

On 10-07-31 01:30 PM, Brandon S Allbery KF8NH wrote:
On 7/31/10 12:59 , michael rice wrote:
But since both still have to eval x to *thunk* : *thunk*, g evaluates "to a deeper level"?
The whole point of laziness is that f *doesn't* have to eval x.
To elaborate, in computer-friendly syntax:

f x = length (red_herring : [])

length cares about cons cells (:) and nil [] only. You have already hardcoded exactly those. Enough said... err, enough evaluated.
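
For reference, a spine-only length written out in full (one simple possible definition; your Prelude's may differ in detail, but it inspects exactly the same constructors):

-- counts cons cells; the element in each cell stays an unevaluated thunk
lengthSketch :: [a] -> Int
lengthSketch []     = 0
lengthSketch (_:xs) = 1 + lengthSketch xs

-- lengthSketch [undefined] is 1: only (:) and [] are ever inspected.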

On Sat, Jul 31, 2010 at 5:59 PM, michael rice
OK, in f, *length* already knows its argument is a list.
In g, *length* doesn't know what's inside the parens, so there's extra evaluation there. So g is already ahead before we get to what's inside the [] and ().
According to the types, we already know both are lists. The question is, of course, what kind of list.
But since both still have to eval x to *thunk* : *thunk*, g evaluates "to a deeper level"?
Michael
I think this question is being quite sneaky. The use of head and tail is pretty much irrelevant. Try the pointfree versions:

f = length . (:[]) . head
g = length . tail

and see if that helps you see why f is lazier than g.

From Hoogle:
Query: (:[])
Error:
unexpected ":"
expecting "#", ",", "forall", "(", "[", "!" or ")"
Bad symbol
Prelude> let h = length . (:[]) . head
Prelude> h undefined
1
Prelude> :t (:[])
(:[]) :: a -> [a]
Prelude> h []
1 <======== this comes as a surprise
Prelude>
Are you saying:
[ head x ] -> [ *thunk* ] and length [ *thunk* ] -> 1, independent of what *thunk* is, even head [], i.e., *thunk* never needs to be evaluated?
Michael

On 7/31/10 14:24, michael rice wrote:
Are you saying:
[ head x ] -> [ *thunk* ] and length [ *thunk* ] -> 1, independent of what *thunk* is, even head [], i.e., *thunk* never needs to be evaluated?
Exactly. (I was being cagey because the first response was cagey, possibly suspecting a homework question although it seems like an odd time for it.)

length not only does not look inside of the thunk, it *can't* look inside it; all it knows is that it has a list, it specifically does *not* know what that list can hold. So the only thing it can do is count the number of "unknown somethings" in the list.
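
A quick way to see this in GHCi (the session below is my own illustration, not taken from elsewhere in the thread; the result follows directly from what is described above):

Prelude> length [undefined, undefined, undefined]
3

length walks the three cons cells and never touches the element thunks.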

Subtle stuff.
Thanks, everyone, for your patience. You've been VERY helpful. Great list!
Michael

Brandon S Allbery KF8NH wrote:
michael rice wrote:
Are you saying:
[ head x ] -> [ *thunk* ] and length [ *thunk* ] -> 1, independent of what *thunk* is, even head [], i.e., *thunk* never needs to be evaluated?
Exactly. (I was being cagey because the first response was cagey, possibly suspecting a homework question although it seems like an odd time for it.)
length not only does not look inside of the thunk, it *can't* look inside it; all it knows is that it has a list, it specifically does *not* know what that list can hold. So the only thing it can do is count the number of "unknown somethings" in the list.
Not entirely true:

stupidlyStrictLength :: [a] -> Integer
stupidlyStrictLength []     = 0
stupidlyStrictLength (x:xs) = x `seq` 1 + stupidlyStrictLength xs

Though, of course, if we actually wanted this function we should use an accumulator in order to avoid stack overflow when evaluating the (1+(1+...0)) thunk at the end.

-- Live well, ~wren
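
For concreteness, the accumulator variant alluded to above might look something like this (the primed name and the use of BangPatterns are just illustrative choices, not anything from the thread):

{-# LANGUAGE BangPatterns #-}

stupidlyStrictLength' :: [a] -> Integer
stupidlyStrictLength' = go 0
  where
    -- the bang keeps the running count evaluated, so no (1+(1+...0)) thunk
    -- builds up; the `seq` preserves the element-forcing behaviour above
    go !n []     = n
    go !n (x:xs) = x `seq` go (n + 1) xs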

On 7/31/10 16:58, wren ng thornton wrote:
Not entirely true:
stupidlyStrictLength :: [a] -> Integer
stupidlyStrictLength []     = 0
stupidlyStrictLength (x:xs) = x `seq` 1 + stupidlyStrictLength xs
Given all the messes seq makes ("hey, go behind the compiler's back and touch this arbitrary value of arbitrary type"), I generally consider it to be unsafeSeq :)

On Sat, 31 Jul 2010 17:30:54 -0400, Brandon S Allbery KF8NH
Given all the messes seq makes ("hey, go behind the compiler's back and touch this arbitrary value of arbitrary type"), I generally consider it to be unsafeSeq :)
I would be deeply in favor of renaming seq to unsafeSeq, and introducing a type class to reintroduce seq in a disciplined way.

-- Nicolas Pouillard
http://nicolaspouillard.fr

Nicolas,
I would be deeply in favor of renaming seq to unsafeSeq, and introducing a type class to reintroduce seq in a disciplined way.
There is a well-documented [1] trade-off here: Often, calls to seq are introduced late in the development cycle, typically after you have discovered a space leak and figured out how to resolve it. Then you introduce a call to seq somewhere deep in a call chain. If seq were a class method, you would now have to work your way upward in the call chain to update all the type signatures you had written there. Clearly, this is quite tedious. And then, almost just as often, you find out that you did not actually figure out how to resolve the space leak, because it's still there. So you remove your freshly introduced call to seq and, indeed, work your way through the call chain again to remove all the now superfluous class constraints from the type signatures. (By the way, this is typically the sort of task that IDEs with refactoring support excel at, but these are unfortunately not ubiquitous for Haskell yet.)

More importantly, the type-class approach is flawed [2]. It assumes that all seq-related constraints can be "read off from type variables", which is in fact not the case.

Cheers,

Stefan

[1] Paul Hudak, John Hughes, Simon Peyton Jones, and Philip Wadler. A history of Haskell: Being lazy with class. In Barbara G. Ryder and Brent Hailpern, editors, Proceedings of the Third ACM SIGPLAN History of Programming Languages Conference (HOPL-III), San Diego, California, USA, 9–10 June 2007, pages 1–55. ACM Press, 2007. http://doi.acm.org/10.1145/1238844.1238856

[2] Daniel Seidel and Janis Voigtländer. Taming selective strictness. In Stefan Fischer, Erik Maehle, and Rüdiger Reischuk, editors, INFORMATIK 2009 – Im Focus das Leben, Beiträge der 39. Jahrestagung der Gesellschaft für Informatik e.V. (GI), 28. September – 2. Oktober, in Lübeck, volume 154 of Lecture Notes in Informatics, pages 2916–2930. GI, 2009.
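
To make the first point concrete, here is a sketch of how such a constraint would ripple upward (the class and function names are invented for illustration; this is not proposed library code):

class Eval a where
  evalSeq :: a -> b -> b

-- adding a call deep inside 'inner' (say, to plug a space leak) forces an
-- Eval constraint into its signature...
inner :: Eval a => [a] -> Int
inner []     = 0
inner (x:xs) = x `evalSeq` (1 + inner xs)

-- ...and every caller that is still polymorphic in 'a' has to repeat that
-- constraint, all the way up the call chain (and drop it again later if the
-- seq turns out not to help).
outer :: Eval a => [a] -> [a] -> Int
outer xs ys = inner xs + inner ys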

On Sun, Aug 1, 2010 at 2:53 AM, Stefan Holdermans
Nicolas,
I would be deeply in favor of renaming seq to unsafeSeq, and introducing a type class to reintroduce seq in a disciplined way.
There is a well-documented [1] trade-off here: Often, calls to seq are introduced late in the development cycle, typically after you have discovered a space leak and figured out how to resolve it. Then you introduce a call to seq somewhere deep in a call chain. If seq were a class method, you would now have to work your way upward in the call chain to update all the type signatures you had written there. Clearly, this is quite tedious. And then, almost just as often, you find out that you did not actually figure out how to resolve the space leak, because it's still there. So you remove your freshly introduced call to seq and, indeed, work your way through the call chain again to remove all the now superfluous class constraints from the type signatures. (By the way, this is typically the sort of task that IDEs with refactoring support excel at, but these are unfortunately not ubiquitous for Haskell yet.)
That's a reasonable concern. I suppose that would be the reason for keeping unsafeSeq around if we do typeclassify seq.
More importantly, the type-class approach is flawed [2]. It assumes that all seq-related constraints can be "read off from type variables", which is in fact not the case.
Could you provide an example (or a specific example or explanation in the paper) of what you mean by this?

Luke

On Sun, 1 Aug 2010 10:53:24 +0200, Stefan Holdermans
Nicolas,
I would be deeply in favor of renaming seq to unsafeSeq, and introducing a type class to reintroduce seq in a disciplined way.
There is a well-documented [1] trade-off here: Often, calls to seq are introduced late in the development cycle, typically after you have discovered a space leak and figured out how to resolve it. Then you introduce a call to seq somewhere deep in a call chain. If seq were a class method, you would now have to work your way upward in the call chain to update all the type signatures you had written there. Clearly, this is quite tedious. And then, almost just as often, you find out that you did not actually figure out how to resolve the space leak, because it's still there. So you remove your freshly introduced call to seq and, indeed, work your way through the call chain again to remove all the now superfluous class constraints from the type signatures. (By the way, this is typically the sort of task that IDEs with refactoring support excel at, but these are unfortunately not ubiquitous for Haskell yet.)
Only in the case where the type of the value being forced is not known; otherwise, the class constraint gets silently discarded.
More importantly, the type-class approach is flawed [2]. It assumes that all seq-related constraints can be "read off from type variables", which is in fact not the case.
Indeed, they give some account of the type-class approach, but not enough IMHO. Sure, forcing functions is a tricky business, and having a generic instance for functions is clearly not the solution (as they explain with foldl vs foldl'''). However, I absolutely do not buy their argument using as an example a function f :: Eval (a -> Int) => (a -> Int) -> (a -> Int) -> Int. They consider it an issue that the type does not tell us on which argument seq is used. I think this is fine: we may want a more precise type in order to get more properties out of it, but it is not flawed. Just as we don't know where (==) is used given a function of type ∀ a. Eq a => [a] -> [a].

Actually, my point is that if we do not use any polymorphic primitive to implement a family of seq functions, then it cannot be flawed. Indeed, if you read type classes as a way to implicitly pass and pick functions, then it cannot add trouble.

Finally, maybe we can simply forbid the forcing of functions (as we do with Eq). The few cases where it does matter would resort to unsafeSeqFunction.

Cheers,

-- Nicolas Pouillard
http://nicolaspouillard.fr
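
Concretely, that last idea might look something like this (a sketch with invented names, following the analogy with Eq):

class Eval a where
  force :: a -> b -> b

instance Eval [a] where
  force []    b = b   -- the pattern match evaluates the spine to WHNF
  force (_:_) b = b

instance Eval Bool where
  force True  b = b
  force False b = b

-- There is deliberately no 'instance Eval (a -> b)': as with (==), trying to
-- force a function is rejected by the type checker, and the few cases that
-- really need it would fall back to unsafeSeqFunction.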

On Sun, Aug 1, 2010 at 11:29 AM, Nicolas Pouillard
Finally, maybe we can simply forbid the forcing of functions (as we do with Eq). The few cases where it does matter would resort to unsafeSeqFunction.
What's the problem with

class Eval a where
  seq :: a -> t -> t

instance Eval b => Eval (a -> b) where
  seq f = seq (f undefined)

It would reduce at least to WHNF, as unsafeSeq would. Does it compute more than WHNF? Hmmm, I see, if we had

f :: Int -> Int
f _ = undefined

then my seq above would diverge while unsafeSeq wouldn't. Perhaps that instance would be a good compromise if we insist on not using 'unsafeSeq' to define it.

Cheers, =)

-- Felipe.

On Sunday 01 August 2010 10:52:48 am Felipe Lessa wrote:
What's the problem with
class Eval a where
  seq :: a -> t -> t

instance Eval b => Eval (a -> b) where
  seq f = seq (f undefined)
It would reduce at least to WHNF as unsafeSeq would. Does it compute more than WHNF?
Hmmm, I see, if we had
f :: Int -> Int
f _ = undefined
Then my seq above would diverge while unsafeSeq wouldn't. Perhaps that instance would be a good compromise if we insist on not using 'unsafeSeq' to define it.
It also diverges for every strict function f.

  seq id x = seq (id undefined) x
           = seq undefined x
           = undefined

-- Dan

Nicolas, Luke,

SH> More importantly, the type-class approach is flawed [...]. It assumes
SH> that all seq-related constraints can be "read off from type variables",
SH> which is in fact not the case.

LP> Could you provide an example (or a specific example or explanation in
LP> the paper) of what you mean by this?

I don't remember the exact details, but if I remember correctly the main concern of the authors is that by requiring that all function types are in Eval as well, as an older version of the Report did, applications of seq to values of function type are not reflected in types, and hence types do not convey enough information to state some necessary seq-induced extra conditions for parametricity results. Anyway, I guess Janis and Daniel will do a far better job commenting on this.

As an aside, or a shameless plug if you will, in a paper presented at this year's PEPM [*], Jurriaan and I mention that having information about uses of seq explicitly available, particularly in types, could be favourable for strictness analysis as well.

NP> Actually, my point is that if we do not use any polymorphic primitive to
NP> implement a family of seq functions, then it cannot be flawed. Indeed,
NP> if you read type classes as a way to implicitly pass and pick functions,
NP> then it cannot add trouble.

NP> Finally, maybe we can simply forbid the forcing of functions (as we do
NP> with Eq). The few cases where it does matter would resort to
NP> unsafeSeqFunction.

I guess not having function types in Eval would indeed make parametricity less of an issue, but I do not quite agree that the cases in which one may need seq on values of function type are insignificant.

Cheers,

Stefan

[*] Stefan Holdermans and Jurriaan Hage. Making “stricterness” more relevant. In John P. Gallagher and Janis Voigtländer, editors, Proceedings of the 2010 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation, PEPM 2010, Madrid, Spain, January 18–19, 2010, pages 121–130. ACM Press, 2010. http://people.cs.uu.nl/stefan/pubs/holdermans10making.html

Brandon S Allbery KF8NH
Exactly. (I was being cagey because the first response was cagey, possibly suspecting a homework question although it seems like an odd time for it.)
Why is it an odd time for it? Here in Australia (and presumably other countries in the southern hemisphere) this week will be the second or third week of semester...

-- Ivan Lazar Miljenovic
Ivan.Miljenovic@gmail.com
IvanMiljenovic.wordpress.com
participants (13)
- Albert Y. C. Lai
- Ben Millwood
- Brandon S Allbery KF8NH
- Dan Doel
- Felipe Lessa
- Henning Thielemann
- Ivan Lazar Miljenovic
- Luke Palmer
- michael rice
- Nicolas Pouillard
- Stefan Holdermans
- Tillmann Rendel
- wren ng thornton