Re: [Haskell-cafe] Re: OCaml list sees abysmal Language Shootout results

I thought Clean was always strict, and that was the major difference between Clean and Haskell (that and the fact that Clean is a proprietary language). Keean.

At 03:08 PM 07/10/2004 +0100, MR K P SCHUPKE wrote:
I thought Clean was always strict, and that was the major difference between Clean and Haskell (that and the fact that Clean is a proprietary language)
No - Clean is pure and lazy like Haskell. The key difference is that it uses uniqueness types rather than monads to ensure that I/O is referentially transparent and safe.
Keean.
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
------------------------------------------------------------------------
Dr. Andrew Butterfield (http://www.cs.tcd.ie/Andrew.Butterfield/)
Course Director, B.A. (Mod.) Information & Communications Technology
Dept. of Computer Science, Trinity College, Dublin University
Tel: +353-1-608-2517, Fax: +353-1-677-2204
------------------------------------------------------------------------

On Thursday 07 Oct 2004 3:29 pm, Andrew Butterfield wrote:
At 03:08 PM 07/10/2004 +0100, MR K P SCHUPKE wrote:
I thought Clean was always strict, and that was the major difference between Clean and Haskell (that and the fact that Clean is a proprietary language)
No - Clean is pure and lazy like Haskell. The key difference is that it uses uniqueness types rather than monads to ensure that I/O is referentially transparent and safe.
(At the risk of getting way out of my depth :-) I would say another important difference is that the language's semantics are defined directly in terms of graph rewriting, rather than the lambda calculus. Not that I really understand anything about formal semantics, but IMO from a programmer's perspective this is a good thing, because it gives programmers better control over sharing and lets them distinguish between constants (CAFs) and "functions which take no arguments" (both concepts being semantically meaningless in the lambda calculus, AFAIK). Regards -- Adrian Hey
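The CAF distinction Adrian mentions can be sketched in Haskell itself (a toy example invented for illustration, not from the thread; the names are made up):

```haskell
-- A CAF: a top-level constant applicative form. It is evaluated at
-- most once and the result is shared by every use site.
squares :: [Integer]
squares = [n * n | n <- [1..]]

-- A "function which takes no (real) argument": the dummy () argument
-- means each call rebuilds the list from scratch, so nothing is
-- shared between calls. (With -O, GHC's full-laziness optimization
-- may float the list out again; compile with -O0 to observe the
-- operational difference.)
squaresFn :: () -> [Integer]
squaresFn () = [n * n | n <- [1..]]

main :: IO ()
main = do
  print (squares !! 1000)      -- forces (and caches) a prefix
  print (squares !! 1000)      -- reuses the cached prefix
  print (squaresFn () !! 1000) -- recomputed on each call
```

In the lambda calculus both definitions denote the same value; only a graph-rewriting semantics distinguishes their sharing behaviour, which is Adrian's point.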

Andrew Butterfield writes:
I thought Clean was always strict, and that was the major difference between Clean and Haskell (that and the fact that Clean is a proprietary language)
No - Clean is pure and lazy like Haskell.
But it uses explicit strictness annotations a lot, and provides strict and/or unboxed versions of various fundamental types (e.g. tuples), with some implicit coercions. -- __("< Marcin Kowalczyk \__/ qrczak@knm.org.pl ^^ http://qrnik.knm.org.pl/~qrczak/

At 03:32 PM 10/8/2004, Marcin Kowalczyk wrote:
Andrew Butterfield writes:
I thought Clean was always strict, and that was the major difference between Clean and Haskell (that and the fact that Clean is a proprietary language)
No - Clean is pure and lazy like Haskell.
But it uses explicit strictness annotations a lot, and provides strict and/or unboxed versions of various fundamental types (e.g. tuples), with some implicit coercions.
It is of course not the language that uses strictness annotations. Clean programs without strictness annotations are perfectly lazy. Clean has a powerful strictness analysis that adds only safe strictness to function arguments. Programmers *can* add strictness annotations, for exactly the reasons mentioned in this thread, namely to influence the evaluation strategy in such a way that heap consumption and/or run time decrease. In addition, this can be done in a lightweight way, because annotations are added only to function argument types and data types, instead of modifying the code by inserting strict evaluator functions. Regards, Peter Achten N.B. My new email address is P.Achten@cs.ru.nl. The University of Nijmegen has changed its name to Radboud University Nijmegen.
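For comparison, a minimal sketch of how the same intent reads in Haskell with GHC's BangPatterns extension (the function and type here are invented examples, not from the thread):

```haskell
{-# LANGUAGE BangPatterns #-}

-- A bang pattern on the accumulator forces it at every step,
-- playing roughly the role of a Clean !annotation on an
-- argument type: no thunk chain builds up in acc.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)

-- Strictness declared in a data type: both fields are forced
-- when a P is constructed, like Clean's strict tuples.
data P = P !Int !Int

addP :: P -> Int
addP (P a b) = a + b

main :: IO ()
main = print (sumTo 100, addP (P 1 2))  -- prints (5050,3)
```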

Peter Achten writes:
But it uses explicit strictness annotations a lot, and provides strict and/or unboxed versions of various fundamental types (e.g. tuples), with some implicit coercions.
It is of course not the language that uses strictness annotations.
But the language encourages using them much more often than Haskell does. They can be declared in types, there is a short syntax for "strict let", and various builtin types have strict variants. These are properties of the language. -- __("< Marcin Kowalczyk \__/ qrczak@knm.org.pl ^^ http://qrnik.knm.org.pl/~qrczak/

Marcin 'Qrczak' Kowalczyk writes:
Peter Achten writes:
It is of course not the language that uses strictness annotations.
But the language encourages using them much more often than Haskell does. They can be declared in types, there is a short syntax for "strict let", and various builtin types have strict variants. These are properties of the language.
Marcin, are you playing 'kogut', or do you want to prove something, and if yes, then what? Clean is not Haskell, but I've been using it as a *lazy* language for years. BTW, both Haskell and Clean coexist in my head without aggressing each other. Jerzy Karczmarczuk PS. Non-Polish readers should check in the dictionary what 'kogut' means.

On 08/10/2004, at 2:57 PM, karczma wrote:
But the language encourages using them much more often than Haskell does. They can be declared in types, there is a short syntax for "strict let", and various builtin types have strict variants. These are properties of the language.
Marcin, are you playing 'kogut', or do you want to prove something, and if yes, then what?
I believe that Marcin wishes to prove the same point that I want to: namely, Clean encourages use of strictness by making it easier to use (via language annotations). At the risk of sounding ignorant and arrogant, I think the Haskell community in general does not understand the importance of syntactic sugar to make such tasks easier. -- % Andre Pang : trust.in.love.to.save

On 08-Oct-2004, Andre Pang wrote:
I believe that Marcin wishes to prove the same point that I want to: namely, Clean encourages use of strictness by making it easier to use (via language annotations). At the risk of sounding ignorant and arrogant, I think the Haskell community in general does not understand the importance of syntactic sugar to make such tasks easier.
I think the Haskell community understands well the importance of syntactic sugar; witness the use of syntactic sugar for monads, the `infix` operator syntax, the layout rules, etc. I think the Haskell community has just been a bit slower in understanding the importance of strictness :) But that seems to be gradually changing. -- Fergus J. Henderson | "I have always known that the pursuit Galois Connections, Inc. | of excellence is a lethal habit" Phone: +1 503 626 6616 | -- the last words of T. S. Garp.
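As a concrete reminder of the sugar Fergus mentions, here is a small example (invented for illustration) of do-notation and its hand-desugaring into (>>=):

```haskell
-- A small Maybe computation written with do-notation...
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

calc :: Maybe Int
calc = do
  a <- safeDiv 10 2   -- Just 5
  b <- safeDiv a 0    -- Nothing: the whole chain short-circuits here
  pure (a + b)

-- ...and the same computation desugared into (>>=), which is all
-- the compiler does behind the scenes.
calc' :: Maybe Int
calc' = safeDiv 10 2 >>= \a ->
        safeDiv a 0  >>= \b ->
        pure (a + b)

main :: IO ()
main = print (calc, calc')  -- prints (Nothing,Nothing)
```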

Fergus Henderson comments on the comment of Andre Pang concerning my nasty comment addressed to Marcin Kowalczyk.
On 08-Oct-2004, Andre Pang wrote:
I believe that Marcin wishes to prove the same point that I want to: namely, Clean encourages use of strictness by making it easier to use (via language annotations). At the risk of sounding ignorant and arrogant, I think the Haskell community in general does not understand the importance of syntactic sugar to make such tasks easier.
I think the Haskell community understands well the importance of syntactic sugar; witness the use of syntactic sugar for monads, the `infix` operator syntax, the layout rules, etc.
I think the Haskell community has just been a bit slower in understanding the importance of strictness :) But that seems to be gradually changing.
What are you, both, or all three of you suggesting? That the world of programming is evolving, people invent new, wonderful programming tools, nice and easy, and Haskell remains stagnant, old, conservative, user-unfriendly, whatever? This is nonsense.

NOW, I will tell you what is *really* changing! Haskell, which began as an academic language, implementing a certain number of paradigms, laziness in particular, and a specific vision of controlled polymorphism, whose ambition was to show the power of computing with abstractions, finally reaches the layers of computing irremediably polluted by the 'mainstream' atmosphere. You want to convert Haskell into a "super-C" language??

OK, I admit that I will never understand these complaints about the inefficiency of non-strict computations, since what I *require* in most of my work is laziness. Had I needed strictness for the sake of efficiency, I would use a different language instead of throwing dirt at Haskell. Why don't you do the same?

This mythical Haskell community which "slowly begins to understand the role of strictness" is simply an *enlarged* community, with people who want to adapt Haskell to their own, old, conservative, "standard" needs. Those people don't care about the progress in the implementation of types, or about thingies which go beyond the theory of types, such as mathematical attributes with their hierarchies and subsumptions. Actually, they don't care at all about the cost of modifying an existing, well-defined language. I think it would be good to invent some new practical functional languages which would answer their demands. Any takers? ... Yes, I thought so.

On the positive side of such discussions I see the following:

* Despite the dirt thrown at Clean because of its distribution policy, you see the importance of another, sibling language, which may be used as a comparison gauge. Concerning strictness: Clean has not only those ubiquitous !annotations, but also a powerful strictness analyzer, which alleviates their use. I have the impression that the Haskell strictness analyzer is less aggressive, and I would like to know why, or perhaps to hear that this is not true. The Clean compiler produces *automatically and globally* human-readable type headers for all the definitions (at least in the main module). I would like to see such a compilation option in Haskell as well; it should be fairly easy and cheap, since all this information is available anyway. Hm, can .hi dumping and --show-iface etc. be useful here? I never tested that...

* The user-friendliness seems, because of this enlarged community, to be gaining in importance. We have witnessed the birth of the 'do' syntax, continued by the 'proc' stuff of Ross Paterson, but there is mileage to go. GHC rewrite rules are still quite weak.

* I began to respect more and more all the really new and fascinating ideas in computing paradigms, as far from the superficial quarrels here as possible. Genericity/polytyping, arrows, co-monads (which don't seem dead yet, although not too many people speak about them). THIS IS HASKELL. If you want Cobol disguised as Haskell, simply use Cobol.

Jerzy Karczmarczuk

"karczma"
I think the Haskell community has just been a bit slower in understanding the importance of strictness :)
OK, I admit that I will never understand these complaints about the inefficiency of non-strict computations, since what I *require* in most of my work is laziness.
Indeed, I have to agree. I have spent some time fiddling with programs for the Language Shootout which started this conversation, and by far most of the effort was spent in deliberately making code /slower/ by introducing strictness.

For instance, the shootout often requires that a task be carried out N times, to make the timings large enough to measure. In all the naive Haskell implementations of these tasks, Haskell wins by a mile. Why? Because the language quite reasonably says that if you multiply a matrix by itself N times, but only use the result of the last multiplication, well it is jolly well not going to bother computing the first (N-1) identical multiplications - what a waste of time!

So is it fair to compare the default lazy Haskell solution with all the eager solutions out there that laboriously do all this unnecessary work? Apparently not, so we have gone to all kinds of trouble to slow the Haskell solution down, make it over-strict, do the work N times, and thereby have a "fair" performance test. Huh.

Regards, Malcolm
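Malcolm's point can be seen in miniature with a toy sketch (the `work` function is a made-up stand-in for one matrix multiplication; `Debug.Trace` marks each evaluation that really happens):

```haskell
import Debug.Trace (trace)

-- Stand-in for one expensive "matrix multiplication"; the trace
-- message fires only when the result is actually computed.
work :: Int -> Int
work x = trace "multiplying" (x * x)

-- Build N thunks but demand only the last one: laziness means
-- "multiplying" is traced just once here.
lazyLast :: Int
lazyLast = last (map work [3, 4, 5])

-- The over-strict "fair" version: seq every copy, so the work
-- really happens N times (three traces).
forcedLast :: Int
forcedLast = let xs = map work [3, 4, 5]
             in foldr seq (last xs) xs

main :: IO ()
main = do
  print lazyLast    -- one "multiplying" on stderr
  print forcedLast  -- three more
```

(Compile without optimizations to see the trace counts clearly; GHC's optimizer is free to change evaluation order of traced expressions.)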

On Mon, Oct 11, 2004 at 12:22:13PM +0100, Malcolm Wallace wrote:
"karczma"
writes: I think the Haskell community has just been a bit slower in understanding the importance of strictness :)
OK, I admit that I will never understand these complaints about the inefficiency of non-strict computations, since what I *require* in most of my work is laziness.
Indeed, I have to agree. I have spent some time fiddling with programs for the Language Shootout which started this conversation, and by far most of the effort was spent in deliberately making code /slower/ by introducing strictness.
For instance, the shootout often requires that a task be carried out N times, to make the timings large enough to measure. In all the naive Haskell implementations of these tasks, Haskell wins by a mile. Why? Because the language quite reasonably says that if you multiply a matrix by itself N times, but only use the result of the last multiplication, well it is jolly well not going to bother computing the first (N-1) identical multiplications - what a waste of time!
So is it fair to compare the default lazy Haskell solution with all the eager solutions out there that laboriously do all this unnecessary work? Apparently not, so we have gone to all kinds of trouble to slow the Haskell solution down, make it over-strict, do the work N times, and thereby have a "fair" performance test. Huh.
I think the naive way is perfectly fair. If Haskell has to live with the disadvantages of lazy evaluation, it only makes sense that we should be able to take advantage of the advantages. The fact that Haskell doesn't have to compute those intermediate values is a real advantage, which should be reflected in the results IMHO. John -- John Meacham - ⑆repetae.net⑆john⑈

On Mon, 11 Oct 2004 14:16:36 -0700, John Meacham wrote:
On Mon, Oct 11, 2004 at 12:22:13PM +0100, Malcolm Wallace wrote:
So is it fair to compare the default lazy Haskell solution with all the eager solutions out there that laboriously do all this unnecessary work? Apparently not, so we have gone to all kinds of trouble to slow the Haskell solution down, make it over-strict, do the work N times, and thereby have a "fair" performance test. Huh.
I think the naive way is perfectly fair. If Haskell has to live with the disadvantages of lazy evaluation, it only makes sense that we should be able to take advantage of the advantages. The fact that Haskell doesn't have to compute those intermediate values is a real advantage, which should be reflected in the results IMHO. John
This seems especially true if you have to add extra lines of code to make the tests "fair," because this extra code counts against Haskell in the lines-of-code metric. Personally, I am more impressed by the lines-of-code metrics than I am by the performance metrics. - Brian

Malcolm Wallace wrote:
For instance, the shootout often requires that a task be carried out N times, to make the timings large enough to measure. In all the naive Haskell implementations of these tasks, Haskell wins by a mile. Why? Because the language quite reasonably says that if you multiply a matrix by itself N times, but only use the result of the last multiplication, well it is jolly well not going to bother computing the first (N-1) identical multiplications - what a waste of time!
So is it fair to compare the default lazy Haskell solution with all the eager solutions out there that laboriously do all this unnecessary work? Apparently not, so we have gone to all kinds of trouble to slow the Haskell solution down, make it over-strict, do the work N times, and thereby have a "fair" performance test. Huh.
Well, the shootout appears to have two types of tests. Each individual test is labeled as either being implemented in the "Same Way" or doing the "Same Thing". I agree that the "Same Way" tests are usually too synthetic and too geared toward measuring artifacts of imperative programming which aren't appropriate in Haskell. But there are also the "Same Thing" tests, which are free to be coded in any style at all.

And of the "Same Thing" tests, the largest slowdown in the GHC programs is caused by lazy I/O (I think using better I/O routines would fix "Reverse a File" and "Statistical Moments"). To get GHC to come out on top of the CRAPS Scorecard, we need to emphasize the "Same Thing" tests and downplay the "Same Way" tests, as well as properly weighting lines of code vs. memory consumption and speed. Here's one way to do it... http://makeashorterlink.com/?O5BD42089

What we probably need to do is create some new tests which aren't as phony, and convince the powers-that-be to drop some of the synthetic tests or convert them to "Same Thing" tests (I think "Sum a Column of Integers" and "Spell Checker" would be good candidates to convert).

Just for reference, here's a list of the "Same Thing" tests:
- Simple TCP/IP Server
- Matrix Multiplication
- Statistical Moments
- Process Instantiation
- Reverse A File
- Ring of Messages
- Count Lines/Words/Chars
- Word Frequency

Greg Buchholz

Jerzy Karczmarczuk writes (in the Haskell cafe):
OK, I admit that I will never understand these complaints about the inefficiency of non-strict computations, since what I *require* in most of my work is laziness. Had I needed strictness for the sake of efficiency, I would use a different language instead of throwing dirt at Haskell.
Sometimes lazy is better, sometimes eager. That's why I want to use both strategies in one language and one program. I think complaints about the inefficiency of non-strict computations can be valid. It's nice to write

sum = foldl (+) 0
main = print (sum [1..100000])

but if your program runs out of stack, you don't want to have to switch to another language.
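The usual fix for exactly this program, sketched with GHC's Data.List.foldl' (the names lazySum/strictSum are invented for illustration):

```haskell
import Data.List (foldl')

-- foldl builds a chain of 100000 pending (+) thunks before anything
-- is added, which is what can overflow the stack; foldl' forces the
-- accumulator at each step and runs in constant stack space.
lazySum, strictSum :: [Integer] -> Integer
lazySum   = foldl  (+) 0   -- may blow the stack on large inputs
strictSum = foldl' (+) 0   -- safe

main :: IO ()
main = print (strictSum [1..100000])  -- prints 5000050000
```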
Clean has not only those ubiquitous !annotations, but also a powerful strictness analyzer, which alleviates their use.
Strictness analysis is undecidable in general and there are situations (in real programs) where Clean's strictness analysis falls short. Cheers, Ronny Wichers Schreur

MR K P SCHUPKE wrote:
I thought Clean was always strict, and that was the major difference between Clean and Haskell (that and the fact that Clean is a proprietary language)
Wrong on both counts. See http://www.cs.ru.nl/~clean/About_Clean/about_clean.html and http://www.cs.ru.nl/~clean/Download/License_Conditions/license_conditions.ht.... Cheers, Ronny Wichers Schreur
participants (13)
- Adrian Hey
- Andre Pang
- Andrew Butterfield
- Brian Smith
- Fergus Henderson
- Greg Buchholz
- John Meacham
- karczma
- Malcolm Wallace
- Marcin 'Qrczak' Kowalczyk
- MR K P SCHUPKE
- Peter Achten
- Ronny Wichers Schreur