What's the deal with Clean?

Recently there has been a lot of discussion on this list about the programming language Clean and converting Clean programs to Haskell. Reading the Wikipedia article on the language, I can't really see any major difference between that and Haskell, except for the monads vs. uniqueness types. So what's the deal with Clean? Why is it preferable to Haskell? Why is it not? -- Deniz Dogan

Deniz Dogan wrote:
Recently there has been a lot of discussion on this list about the programming language Clean and converting Clean programs to Haskell. Reading the Wikipedia article on the language, I can't really see any major difference between that and Haskell, except for the monads vs. uniqueness types.
So what's the deal with Clean? Why is it preferable to Haskell? Why is it not?
As far as I can tell, Clean is to Haskell as C is to Pascal. I.e., Clean is notionally very similar to Haskell, but with lots of added clutter, complexity and general ugliness - but it's probably somehow more machine-efficient as a result. (All of which makes the name "Clean" rather ironic, IMHO.) Of course, this is merely the opinion I formed after performing a cursory scan of some of the introductory documentation. I haven't actually seen any code written with it or anything, so my opinion probably doesn't mean a lot...

On Tue, Nov 3, 2009 at 1:37 PM, Andrew Coppin
Deniz Dogan wrote:
Recently there has been a lot of discussion on this list about the programming language Clean and converting Clean programs to Haskell. Reading the Wikipedia article on the language, I can't really see any major difference between that and Haskell, except for the monads vs. uniqueness types.
So what's the deal with Clean? Why is it preferable to Haskell? Why is it not?
As far as I can tell, Clean is to Haskell as C is to Pascal. I.e., Clean is notionally very similar to Haskell, but with lots of added clutter, complexity and general ugliness - but it's probably somehow more machine-efficient as a result.
(All of which makes the name "Clean" rather ironic, IMHO.)
Of course, this is merely the opinion I formed after performing a cursory scan of some of the introductory documentation. I haven't actually seen any code written with it or anything, so my opinion probably doesn't mean a lot...
It's preferable to Haskell in situations where Haskell isn't the best choice. The criteria for that decision differ from problem to problem. Example: I had to implement a ring buffer, and I wanted the code using it to be written in Haskell. I ended up implementing the buffer in C and wrapping it via the FFI from Haskell, because implementing a destructive array in Haskell is kind of unwieldy for someone at my experience level. In Clean, it looks like uniqueness typing allows for destructive updates in a very controlled manner. Disciplined Disciple might be interesting to look at here too, but I'm not sure I'd deploy anything with DDC just yet :-) Dave
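For what it's worth, a destructive buffer can also be written directly in Haskell with mutable arrays; here is a minimal sketch (the names and representation are illustrative, not Dave's actual code) of a fixed-capacity ring buffer over IOUArray:

    import Data.Array.IO
    import Data.IORef

    -- A fixed-capacity ring buffer of Ints with in-place (destructive) writes.
    data Ring = Ring { ringBuf :: IOUArray Int Int
                     , ringCap :: Int
                     , ringPos :: IORef Int
                     }

    newRing :: Int -> IO Ring
    newRing n = do
      b <- newArray (0, n - 1) 0
      p <- newIORef 0
      return (Ring b n p)

    push :: Ring -> Int -> IO ()
    push (Ring b n p) x = do
      i <- readIORef p
      writeArray b i x                 -- destructive update, no copying
      writeIORef p ((i + 1) `mod` n)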

David Leimbach wrote:
Disciplined Disciple might be interesting to look at here too, but I'm not sure I'd deploy anything with DDC just yet :-)
Indeed. What DDC needs most at the moment is more people working on it. I've fixed a couple of bugs and I'm working on some others, but a large chunk of those in the bug tracker are simply too deep for me at my current level of knowledge. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/

David Leimbach wrote:
Disciplined Disciple might be interesting to look at here too, but I'm not sure I'd deploy anything with DDC just yet :-)
:) Nor would I (and I wrote most of it). I think the approach is right, but the compiler itself is still in the "research prototype" stage.
Ben.

On Tue, Nov 3, 2009 at 3:14 PM, Ben Lippmeier
David Leimbach wrote:
Disciplined Disciple might be interesting to look at here too, but I'm not sure I'd deploy anything with DDC just yet :-)
:) Nor would I (and I wrote most of it). I think the approach is right, but the compiler itself is still in the "research prototype" stage.
Ben.
I have to admit, the first time I hit the wiki page for DDC I said to myself, "Self, this sounds crazy complicated." Then I read part of the PDF (your thesis, I believe) about Region Types on the bus ride to work and thought, "Gee, I think I scared myself off too quickly." Uniqueness typing is quite interesting in Clean, but to control aliasing, like really *control* aliasing, that's just far out, man. So I still have to wrap my head around "why this isn't going to get completely out of control" and see why it's all safer than just writing C code, but I must say the attention I will be paying to DDC has just gone up quite a bit. Dave

David Leimbach wrote:
I have to admit, the first time I hit the wiki page for DDC I said to myself, "Self, this sounds crazy complicated." Then I read part of the PDF (your thesis, I believe) about Region Types on the bus ride to work and thought, "Gee, I think I scared myself off too quickly."
Uniqueness typing is quite interesting in Clean, but to control aliasing, like really *control* aliasing, that's just far out, man.
So I still have to wrap my head around "why this isn't going to get completely out of control" and see why it's all safer than just writing C code, but I must say the attention I will be paying to DDC has just gone up quite a bit.
:) A correct C program is just as safe as a correct Haskell/Disciple program. If you're using destructive update then aliasing, side effects and mutability all start to matter. It might look complicated when you reflect all these things in the type system, but you're really just getting a handle on the inherent complications of the underlying program. I suppose the trick is to be able to ignore said complications when you just don't care, or they're not relevant for your particular problem... Ben.

On Tue, Nov 3, 2009 at 10:44 PM, Ben Lippmeier
David Leimbach wrote:
I have to admit, the first time I hit the wiki page for DDC I said to myself, "Self, this sounds crazy complicated." Then I read part of the PDF (your thesis, I believe) about Region Types on the bus ride to work and thought, "Gee, I think I scared myself off too quickly."
Uniqueness typing is quite interesting in Clean, but to control aliasing, like really *control* aliasing, that's just far out, man.
So I still have to wrap my head around "why this isn't going to get completely out of control" and see why it's all safer than just writing C code, but I must say the attention I will be paying to DDC has just gone up quite a bit.
:) A correct C program is just as safe as a correct Haskell/Disciple program.
Well, of course, the question is what sort of guarantees a language or compiler provides, I guess.
If you're using destructive update then aliasing, side effects and mutability all start to matter. It might look complicated when you reflect all these things in the type system, but you're really just getting a handle on the inherent complications of the underlying program.
So it's really just more notation to let you know which tools are being used when you use them? Does Disciple completely avoid the need for such things as unsafePerformIO? (A perhaps overly paranoid question, but...) I realize we're probably not supposed to worry about the existence of unsafePerformIO, and that library authors "know what they're doing". But doesn't it automatically mean that there's a bit of implicit trust, whenever I see a function of type (a -> a), that there *isn't* IO going on in there? :-) If Disciple can guarantee that no one is allowed to cheat, is that not a better approach?
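To make the trust problem concrete (example mine, not from the thread): unsafePerformIO lets a function perform IO while advertising a pure type, and nothing in the type betrays it:

    import System.IO.Unsafe (unsafePerformIO)

    -- Looks pure from the outside, but performs IO when evaluated.
    -- Worse, how often the effect runs is unspecified: the compiler is
    -- free to share or duplicate the call, which is part of the danger.
    sneaky :: a -> a
    sneaky x = unsafePerformIO (putStrLn "hidden side effect!" >> return x)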
I suppose the trick is to be able to ignore said complications when you just don't care, or they're not relevant for your particular problem...
Yes, the Disciple documentation says that this stuff can be inferred, but I don't even let Haskell infer my types for *any* functions I write in any code. I like to restrict what can go in and out of the function, even if it could be more general. Perhaps this is the knee-jerk reaction of an angry Erlang programmer who really wanted some types to rein in the overly dynamic evaluations that are allowed in that environment, but that's how I roll, baby! I will admit that on occasion I will write an expression that I think does what I want and look at it in GHCi, having it tell me the type, because sometimes I've not had enough coffee to do it in my head; but then I either look at the inferred type and realize that it's what I originally wanted, or add further restrictions. Dave
Ben.

2009/11/3 Andrew Coppin
As far as I can tell, Clean is to Haskell as C is to Pascal. I.e., Clean is notionally very similar to Haskell, but with lots of added clutter, complexity and general ugliness - but it's probably somehow more machine-efficient as a result.
(All of which makes the name "Clean" rather ironic, IMHO.)
Ouch - you really could put it the other way round. Clean has very little clutter, though I suppose some people might take offence at it having simple macros (:==) - but wait, so does GHC via -XCPP... I think Clean had generics before Haskell had Data.Generics; otherwise Haskell generally has more innovation, more people work on Haskell, and Haskell's motivation was language research... Clean has far fewer libraries; more people use Haskell... Clean used to be considered faster than Haskell, though I don't know what the situation is now: http://www.haskell.org/pipermail/haskell-cafe/2007-October/033854.html Clean is a very fine language; if I weren't using Haskell, I know what my alternative choice would be. Best wishes Stephen

stephen.tetley:
2009/11/3 Andrew Coppin
: As far as I can tell, Clean is to Haskell as C is to Pascal. I.e., Clean is notionally very similar to Haskell, but with lots of added clutter, complexity and general ugliness - but it's probably somehow more machine-efficient as a result.
(All of which makes the name "Clean" rather ironic, IMHO.)
Clean used to be considered faster than Haskell, though I don't know what the situation is now: http://www.haskell.org/pipermail/haskell-cafe/2007-October/033854.html
We've come a long way in 5 years. Haskell is almost always faster on the shootout now. And parallelism goes a long way to helping there: http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=ghc&lang2=clean&box=1 Though this is also true on a single core: http://shootout.alioth.debian.org/u64/benchmark.php?test=all&lang=ghc&lang2=clean&box=1 It's just a lot closer. Clean continues to have a very small memory footprint. -- Don

Stephen Tetley wrote:
2009/11/3 Andrew Coppin
: As far as I can tell, Clean is to Haskell as C is to Pascal. I.e., Clean is notionally very similar to Haskell, but with lots of added clutter, complexity and general ugliness - but it's probably somehow more machine-efficient as a result.
(All of which makes the name "Clean" rather ironic, IMHO.)
Ouch - you really could put it the other way round.
Part of this really comes down to how one feels about the monads vs uniqueness types argument. It's a silly argument to have since the ideas are orthogonal and only really intersect at IO, but there's history there which led to the current state of things. Sometimes in Haskell I've thought about how uniqueness typing would make something faster, but in general all the plumbing associated with it in Clean makes me feel like I'm writing systems-level code (i.e. C, asm) instead of using a high-level language. The extra plumbing really makes it feel dirtier to work with. That doesn't mean Clean is bad, but I think it does contribute to the "cluttered" feeling Haskellers get. But as I said, it's a silly argument and folks should use whichever gives them warm fuzzies. I also have a vague unnameable distaste whenever working with Python, and rather enjoy working with Perl. Nobody's perfect :) -- Live well, ~wren

G'day all.
Quoting wren ng thornton
Sometimes in Haskell I've thought about how uniqueness typing would make something faster, but in general all the plumbing associated with it in Clean makes me feel like I'm writing systems-level code (i.e. C, asm) instead of using a high-level language. The extra plumbing really makes it feel dirtier to work with. That doesn't mean Clean is bad, but I think it does contribute to the "cluttered" feeling Haskellers get.
I think you're right here. Haskell has developed something of an aversion to naming things that aren't important enough to have a name, such as variables whose only reason to exist is "plumbing". We'd far rather spend effort on more higher-order functions, monads, combinators and points-freeness than name something that's unimportant. And the funny thing about this is that it usually works, because in Haskell, abstraction is cheap. I believe that this is the main reason why Haskell programmers haven't embraced arrows, despite their theoretical advantages: Every notation that has been implemented so far requires names for things that shouldn't need names.
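As a concrete (invented) illustration of that style, here is the same function with an explicitly named plumbing variable and points-free:

    -- With an explicit name for the intermediate value:
    countLongWords :: String -> Int
    countLongWords s = length (filter (\w -> length w > 4) (words s))

    -- Points-free: the plumbing variable disappears entirely.
    countLongWords' :: String -> Int
    countLongWords' = length . filter ((> 4) . length) . words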
But as I said, it's a silly argument and folks should use whichever gives them warm fuzzies.
I'd like to think that professional developers are a bit more scientific than this. Cheers, Andrew Bromage

So what's the deal with Clean? Why is it preferable to Haskell? Why is it not?
Purely from glancing through the language reference, two things that it looks like Clean has that I would love to have in Haskell are better records and better arrays. The records don't implement any of the fancy subtyping stuff that the various Haskell proposals do, but they have the benefits of a nicer syntax and of actually being implemented. Haskell arrays (by which I mean the IArray and MArray interfaces) are, to me at least, really hard to use: from little things like using closed ranges where the rest of the world uses half-open ones, opaque documentation, and no consistency between IArray and MArray, to bigger things like how you insert or delete something. My conclusion, after wrestling with ixmap for 15 minutes, was to convert to a list, concatMap across [(i, a)], and convert back to an array - which has the bonus of runtime crashes if you forget an 'i'. Sorry if this turned into a rant about arrays, but it's bothered me for a while :) I think the Clean guys got it right when they decided to make good array support an explicit goal. I suppose Haskell has since gone a different way with the various array fusion libraries with list-like interfaces, and maybe that's better in the long term. Maybe type families can make a nice interface someday. Meanwhile it's a mess, though.
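For reference, the round-trip being described looks roughly like this (a sketch of mine, with an invented deleteAt as the example operation):

    import Data.Array

    -- Delete the element at index k by going through the assoc list and
    -- re-indexing - exactly the convert/concatMap/convert dance above.
    deleteAt :: Int -> Array Int a -> Array Int a
    deleteAt k a = listArray (0, length kept - 1) (map snd kept)
      where kept = [ (i, x) | (i, x) <- assocs a, i /= k ]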

On Nov 4, 2009, at 9:30 AM, Deniz Dogan wrote:
So what's the deal with Clean? Why is it preferable to Haskell? Why is it not?
(1) Speed.
(2) If you are a Windows developer, the fact that Windows is the primary platform and others (even Mac OS, which is historically ironic) are second- (or, in the case of Solaris, third-) class citizens.
(3) Did I mention speed?
(4) It comes with its own IDE. I don't think it can do anything much that Haskell tools can't do, but if you don't like looking for things, it's a help.
(5) Plus of course there's speed.
(6) They're working on a Haskell front end, so you won't actually have to choose. (Anyone doing a Clean front end for Haskell?)
(7) Haskell now has bang-patterns, so you can specify (a bound on) intended strictness when you declare a function. But that's not in Haskell 98.
(8) As a result of this, speed is a bit more "declarative" than adding $! in strange places.
(9) There's a theorem prover for Clean, called Sparkle. Sadly, it's Windows-only, but we all know what most computers on the planet run, don't we? (It's probably Symbian, actually.)
(10) And finally, of course, there's speed. Did I mention that?
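To illustrate (7), a small bang-patterns example (mine, not Richard's): the accumulators are forced at each step instead of piling up thunks:

    {-# LANGUAGE BangPatterns #-}

    mean :: [Double] -> Double
    mean = go 0 0
      where
        go !s !n []       = s / fromIntegral n
        go !s !n (x : xs) = go (s + x) (n + 1) xs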

So I take it you are saying that it really *cleans* Haskell's clock when it comes to speed? ;-) - Greg On Nov 3, 2009, at 9:04 PM, Richard O'Keefe wrote:
On Nov 4, 2009, at 9:30 AM, Deniz Dogan wrote:
So what's the deal with Clean? Why is it preferable to Haskell? Why is it not?
(1) Speed.
(2) If you are a Windows developer, the fact that Windows is the primary platform and others (even Mac OS, which is historically ironic) are second- (or, in the case of Solaris, third-) class citizens.
(3) Did I mention speed?
(4) It comes with its own IDE. I don't think it can do anything much that Haskell tools can't do, but if you don't like looking for things, it's a help.
(5) Plus of course there's speed.
(6) They're working on a Haskell front end, so you won't actually have to choose. (Anyone doing a Clean front end for Haskell?)
(7) Haskell now has bang-patterns, so you can specify (a bound on) intended strictness when you declare a function. But that's not in Haskell 98.
(8) As a result of this, speed is a bit more "declarative" than adding $! in strange places.
(9) There's a theorem prover for Clean, called Sparkle. Sadly, it's Windows-only, but we all know what most computers on the planet run, don't we? (It's probably Symbian, actually.)
(10) And finally, of course, there's speed. Did I mention that?

Given the Shootout results, the difference is a matter of a few seconds. If Clean programmers need those few extra seconds, they're welcome to them. We're Lazy around here. :) /Joe On Nov 4, 2009, at 12:16 AM, Gregory Crosswhite wrote:
So I take it you are saying that it really *cleans* Haskell's clock when it comes to speed? ;-)
- Greg
On Nov 3, 2009, at 9:04 PM, Richard O'Keefe wrote:
On Nov 4, 2009, at 9:30 AM, Deniz Dogan wrote:
So what's the deal with Clean? Why is it preferable to Haskell? Why is it not?
(1) Speed.
(2) If you are a Windows developer, the fact that Windows is the primary platform and others (even Mac OS, which is historically ironic) are second- (or, in the case of Solaris, third-) class citizens.
(3) Did I mention speed?
(4) It comes with its own IDE. I don't think it can do anything much that Haskell tools can't do, but if you don't like looking for things, it's a help.
(5) Plus of course there's speed.
(6) They're working on a Haskell front end, so you won't actually have to choose. (Anyone doing a Clean front end for Haskell?)
(7) Haskell now has bang-patterns, so you can specify (a bound on) intended strictness when you declare a function. But that's not in Haskell 98.
(8) As a result of this, speed is a bit more "declarative" than adding $! in strange places.
(9) There's a theorem prover for Clean, called Sparkle. Sadly, it's Windows-only, but we all know what most computers on the planet run, don't we? (It's probably Symbian, actually.)
(10) And finally, of course, there's speed. Did I mention that?

ok:
On Nov 4, 2009, at 9:30 AM, Deniz Dogan wrote:
So what's the deal with Clean? Why is it preferable to Haskell? Why is it not?
(1) Speed.
I'd strongly argue that speed is not a reason to use Clean -- esp. if you have more than one core: http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=ghc&lang2=clean&box=1 Nor a reason to use OCaml, for that matter: http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=ghc&lang2=ocaml&box=1 The Haskell compiler isn't the bottleneck. Use it when performance matters. I do. Reading the OP's statements, though, it seems picking the right library is more of a problem - as is widespread expertise in writing fast Haskell. -- Don

Hello Don,
http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=ghc&lang2=clean&box=1 http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=ghc&lang2=ocaml&box=1
The Haskell compiler isn't the bottleneck. Use it when performance matters. I do.
Don, shootout times may be used to measure how many people have contributed solutions for each language, but nothing more. These tests depend mainly on the libraries bundled with each compiler and, secondarily, on enthusiasts writing low-level code -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Bulat Ziganshin
http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=ghc&lang2=clean&box=1 http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=ghc&lang2=ocaml&box=1
The Haskell compiler isn't the bottleneck. Use it when performance matters. I do.
Don, shootout times may be used to measure how many people have contributed solutions for each language, but nothing more.
Well, it clearly demonstrates that it is possible to write fast code in Haskell. Last time I looked, much of the shootout code was overly complicated (i.e. "enthusiasts writing low-level code"). And I can't help but notice that Clean beats Haskell on code compactness. It'd be interesting to see how well more naïve/idiomatic code fares. While it's nice to be able to write fast programs, the main reason to use Haskell is to write succinct and correct programs. (Is it possible to have an alternative Haskell "track" in the shootouts?) Since this was done, there have been great strides in available libraries and GHC optimizations, and it'd also be interesting to see whether we now are able to optimize ourselves away from much of the overhead. -k -- If I haven't seen further, it is by standing in the footprints of giants

Hello Ketil, Wednesday, November 4, 2009, 4:31:20 PM, you wrote:
Well, it clearly demonstrates that it is possible to write fast code in Haskell.
My measurements say that by spending 3x more time than for C, you can optimize Haskell code to be only 3x slower than the C one
succinct and correct programs. (Is it possible to have an alternative Haskell "track" in the shootouts?)
Even without enthusiasts, the Shootout mainly measures the speed of libraries
Since this was done, there have been great strides in available libraries and GHC optimizations, and it'd also be interesting to see whether we now are able to optimize ourselves away from much of the overhead.
Eh, if it were possible, we would have seen it - both on the shootout and here, when people cry that their code isn't as fast as those ads say. Haskell compilation can't yet automatically avoid laziness and convert pure high-level code into the equivalent of C. Libraries don't change anything - they provide low-level optimized solutions for particular tasks, but they can't optimize your own code once you've started to write it -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

I personally don't care about raw performance. Haskell is near the top of the language performance list. It has all the ingredients for improving performance in the coming years: a core language, a clear execution strategy, analysis and parsing, transformations based on mathematical rules. So my code will improve with each new compiler version at the same or a better pace than any other language's. Moreover, I could not care less about how fast C is, when I simply cannot program many of the things I need in C or C++ or Java, and in general in any of the languages on the performance list that are above... or below, because they lack the necessary type safety, expressiveness, abstraction, etc. Not to mention time. Not to mention the growing community, etc.
Regards.
2009/11/4 Bulat Ziganshin
Hello Ketil,
Wednesday, November 4, 2009, 4:31:20 PM, you wrote:
Well, it clearly demonstrates that it is possible to write fast code in Haskell.
My measurements say that by spending 3x more time than for C, you can optimize Haskell code to be only 3x slower than the C one
succinct and correct programs. (Is it possible to have an alternative Haskell "track" in the shootouts?)
Even without enthusiasts, the Shootout mainly measures the speed of libraries
Since this was done, there have been great strides in available libraries and GHC optimizations, and it'd also be interesting to see whether we now are able to optimize ourselves away from much of the overhead.
Eh, if it were possible, we would have seen it - both on the shootout and here, when people cry that their code isn't as fast as those ads say. Haskell compilation can't yet automatically avoid laziness and convert pure high-level code into the equivalent of C. Libraries don't change anything - they provide low-level optimized solutions for particular tasks, but they can't optimize your own code once you've started to write it
-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Hello Alberto, Wednesday, November 4, 2009, 5:58:31 PM, you wrote:
I personally don't care about raw performance.
Me too. Actually, I write the time-critical parts of my app in C++
Haskell is near the top of the language performance list.
This list is meaningless, as I said before
It has all the ingredients for improving performance in the coming years:
That's a different question. I think that in the end lazy languages will become as efficient as assembler; it just may not happen in my lifespan :)
I could not care less about how fast C is, when I simply cannot program many of the things I need in C or C++ or Java, and in general in any of the languages on the performance list that are above...
If you don't know how to implement things in C, you cannot do it efficiently in Haskell either. When I write efficient code in Haskell, I actually use Haskell as an obscure assembler (the same holds for C)
or below, because they lack the necessary type safety, expressiveness, abstraction, etc.
They also can't make you coffee, but that's a different story. I use Haskell too. I just know that it is slow, and I use other languages when I really need speed. That's why my archiver is the world's fastest one :) -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Bulat Ziganshin
Well, it clearly demonstrates that it is possible to write fast code in Haskell.
My measurements say that by spending 3x more time than for C, you can optimize Haskell code to be only 3x slower than the C one
Right¹, the interesting thing is not how fast I can get with N times the effort, but whether I can get fast enough with 1/N.
when people cry that their code isn't as fast as those ads say. Haskell compilation can't yet automatically avoid laziness and convert pure high-level code into the equivalent of C.
Many of those people are making fairly simple mistakes. I think a somewhat seasoned programmer using good libraries can write declarative, concise, and readable code that still is reasonably fast. -k ¹) At least for some approximation of the word. Only one benchmark on the shootout has C at a 3x advantage. -- If I haven't seen further, it is by standing in the footprints of giants

Hello Ketil, Wednesday, November 4, 2009, 7:43:38 PM, you wrote:
Right¹, the interesting thing is not how fast I can get with N times the effort, but whether I can get fast enough with 1/N.
It depends entirely on how fast you need to be. So it's again changing the topic - while I say that Haskell is slow compared to other languages, I don't say that it is slow for you or that you need speed at all. Why is this repeated again and again? Why didn't you write to Don that you don't need speed when he wrote that Haskell is fast, but wrote this to me? :(
when people cry that their code isn't as fast as those ads say. Haskell compilation can't yet automatically avoid laziness and convert pure high-level code into the equivalent of C.
Many of those people are making fairly simple mistakes. I think a somewhat seasoned programmer using good libraries can write declarative, concise, and readable code that still is reasonably fast.
I don't think that omitting strictness declarations is a mistake :)
¹) At least for some approximation of the word. Only one benchmark on the shootout has C at a 3x advantage.
Oh, can we stop talking about the shootout? If you want to see the speed of pure Haskell code, look at the papers about fast arrays/strings - their authors have measured that lazy lists are hundreds of times slower than idiomatic C code. Is the use of lazy lists counted as a mistake too, and did the paper authors have too little Haskell experience? -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

bulat.ziganshin:
Oh, can we stop talking about the shootout? If you want to see the speed of pure Haskell code, look at the papers about fast arrays/strings - their authors have measured that lazy lists are hundreds of times slower than idiomatic C code. Is the use of lazy lists counted as a mistake too, and did the paper authors have too little Haskell experience?
Comparing apples against oranges is a mistake, yes. -- Don

A comparison of the evolution in speed of Haskell from version to version, rather than with other languages, could have been very informative about the progress in Haskell's speed. I think the progression has been astonishing. I thought about that while looking at this language shootout on Windows: http://dada.perl.it/shootout/ where GHC's performance is rather mean - until I realized that the version used was GHC 5.04.2. http://www.haskell.org/
2009/11/4 Don Stewart
bulat.ziganshin:
Oh, can we stop talking about the shootout? If you want to see the speed of pure Haskell code, look at the papers about fast arrays/strings - their authors have measured that lazy lists are hundreds of times slower than idiomatic C code. Is the use of lazy lists counted as a mistake too, and did the paper authors have too little Haskell experience?
Comparing apples against oranges is a mistake, yes.
-- Don

On 05/11/2009, at 04:01, Bulat Ziganshin wrote:
Oh, can we stop talking about the shootout? If you want to see the speed of pure Haskell code, look at the papers about fast arrays/strings - their authors have measured that lazy lists are hundreds of times slower than idiomatic C code. Is the use of lazy lists counted as a mistake too, and did the paper authors have too little Haskell experience?
In the papers I coauthored, I don't think we measured any such thing. What we measured was that in algorithms that are best implemented with (unboxed) arrays, using boxed lists is going to cost you. That's not a very surprising conclusion, and it's by no means specific to Haskell. The problem was/is the lack of nice purely declarative array libraries, but that is changing, albeit slowly. It's a question of using the right data structure for the algorithm, not a C vs. Haskell thing. Roman

Bulat Ziganshin
Right, the interesting thing is not how fast I can get with N times the effort, but whether I can get fast enough with 1/N.
- while I say that Haskell is slow compared to other languages,
The "other languages" being C and C++. I believe Haskell is fast compared to Ruby, Python or Lisp.
I don't say that it is slow for you or that you need speed at all. Why is this repeated again and again? Why didn't you write to Don that you don't need speed when he wrote that Haskell is fast
I'm not saying I don't need speed. I need programs that are fast *enough*, and I would like to build them with a minimum of effort on my part. Which is why I would like to see how fast GHC can make more idiomatic Haskell code.
but wrote this to me? :(
Because I am trying to agree with you. :-)
Many of those people are making fairly simple mistakes. I think a somewhat seasoned programmer using good libraries can write declarative, concise, and readable code that still is reasonably fast.
I don't think that omitting strictness declarations is a mistake :)
If your program is slow because you misunderstand how laziness affects execution, and if it's easily fixed by adding a bit of strictness, then, yes, that is the kind of mistake I'm talking about.
At least for some approximation of the word. Only one benchmark on the shootout has C at a 3x advantage.
Oh, can we stop talking about the shootout?
I'm using the numbers that are available; you are free to suggest better benchmarks.
If you want to see the speed of pure Haskell code, look at the papers about fast arrays/strings - their authors have measured that lazy lists are hundreds of times slower than idiomatic C code. Is the use of lazy lists counted as a mistake too
Yes!¹ You don't have to program a lot of Haskell to realize that for performance-sensitive data, you need to use ByteStrings or some other packed data structure, not lazy lists of Chars. Many of those complaining about poor performance are advised to use ByteStrings instead, and more often than not, it helps tremendously. Again, I'm not contradicting your claim that C will usually be faster, or that writing very fast programs in Haskell takes a lot of effort. I'm saying that programs have to be fast *enough* and correct *enough*, and when those goals are achieved, the question is how we can achieve them with a minimum of effort, and with a maximum of clarity. -k ¹) Probably not a mistake in those papers; that depends on what the authors wanted to illustrate. But clearly a mistake in a performance-sensitive program, and clearly a mistake if you want to argue that Haskell is too slow to use for real programs. -- If I haven't seen further, it is by standing in the footprints of giants
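To make the advice concrete (example mine, not Ketil's): the same line count over a lazy list of Chars and over a lazy ByteString - the packed version avoids a heap cell per character:

    import qualified Data.ByteString.Lazy.Char8 as L

    -- [Char]: each character is a boxed cons cell on the heap.
    countLines :: FilePath -> IO Int
    countLines f = fmap (length . lines) (readFile f)

    -- Lazy ByteString: bytes stored in packed chunks, streamed lazily.
    countLines' :: FilePath -> IO Int
    countLines' f = fmap (fromIntegral . L.count '\n') (L.readFile f)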

Hello Ketil, Thursday, November 5, 2009, 12:00:44 PM, you wrote:
- while I say that Haskell is slow compared to other languages,
The "other languages" being C and C++. I believe Haskell is fast compared to Ruby, Python or Lisp.
Of course
I don't say that it is slow for you or that you need speed at all. Why is this repeated again and again? Why didn't you write to Don that you don't need speed when he wrote that Haskell is fast
I'm not saying I don't need speed. I need programs that are fast *enough*, and I would like to build them with a minimum of effort on my part.
Well, why don't you write this to Don? What I said is just that GHC generates much slower code than GCC. There is nothing in my letter about your Haskell usage, so why all this?
Which is why I would like to see how fast GHC can make more idiomatic Haskell code.
I think it would be more natural to write this to someone who thinks that Haskell is fast based on measurements of non-idiomatic code, rather than to me
but wrote this to me? :(
Because I am trying to agree with you. :-)
Really? :/
Many of those people are making fairly simple mistakes. I think a somewhat seasoned programmer using good libraries can write declarative, concise, and readable code that still is reasonably fast.
I don't think that omitting strictness declarations is a mistake :)
If your program is slow because you misunderstand how laziness affects execution, and if it's easily fixed by adding a bit of strictness, then, yes, that is the kind of mistake I'm talking about.
For me, analyzing a slow program and finding where to add a "bit of strictness" isn't easy. It's a whole optimization job, and in the end you wind up needing strict lists or strict arrays
At least for some approximation of the word. Only one benchmark on the shootout has C at a 3x advantage.
Oh, can we stop talking about the shootout?
I'm using the numbers that are available; you are free to suggest better benchmarks.
It's here:
If you want to see the speed of pure Haskell code, look at the papers about fast arrays/strings - their authors have measured that lazy lists are hundreds of times slower than idiomatic C code. Is the use of lazy lists counted as a mistake too
Yes!¹ You don't have to program a lot of Haskell to realize that for performance-sensitive data, you need to use ByteStrings or some other packed data structure, not lazy lists of Chars. Many of those complaining about poor performance are advised to use ByteStrings instead, and more often than not, it helps tremendously.
The problem is that it's string-only, and it doesn't even speed up all programs that use strings (for example, we have a hand-made BS.readInt operation, since it cannot be made efficient without low-level coding). Another example: afaik, there is no ByteString version of Parsec that demonstrates a substantial speed increase
Again, I'm not contradicting your claim that C will usually be faster, or that writing very fast programs in Haskell takes a lot of effort. I'm saying that programs have to be fast *enough* and correct *enough*, and when those goals are achieved, the question is how we can achieve them with a minimum of effort, and with a maximum of clarity.
It's obvious. Why don't I write to everyone complaining about the lack of some Haskell feature to say that I don't need it?
¹) Probably not a mistake in those papers; that depends on what the authors wanted to illustrate. But clearly a mistake in a performance-sensitive program, and clearly a mistake if you want to argue that Haskell is too slow to use for real programs.
I don't say that Haskell is slow for real programs. I say that idiomatic Haskell is much slower than idiomatic C, and these papers demonstrate how much, for simple computation-intensive loops -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

I used to think - was I clueless! - that if the compiler was good enough, it would perform magic for me, like transforming that list-based quicksort into an unboxed, mutable-array-based one, automatically, if that made sense. Like I said, I was clueless. Since then, I have understood that while modern GHC compilers are awesome, they don't compete with Gandalf yet. I think that's in a way what Bulat is saying: for Haskell to really compete with C in *his view*, if I understand it, the compiler has to be able to take idiomatic Haskell code and translate it into idiomatic C code or better. Or, said another way, we have to be able to write things like SDL, JPEG or MPEG processing in Haskell, instead of writing bindings to C libraries, without losing on performance. In short, maybe Bulat wishes to be able to write the time-critical parts of his archiver in Haskell, without resorting to low-level hacking. Then he'd be happy with Haskell's speed?

Hello David, Thursday, November 5, 2009, 12:49:38 PM, you wrote:
I think that's in a way what Bulat is saying: for Haskell to really compete with C in *his view*, if I understand it, the compiler has to be able to take idiomatic Haskell code and translate it into idiomatic C code or better.
In short, maybe Bulat wishes to be able to write the time-critical parts of his archiver in Haskell, without resorting to low-level hacking. Then he'd be happy with Haskell's speed?
I'm happy with Haskell's speed, since I don't expect it to be used for time-critical parts (btw, I mostly use C libraries written by other people). What I mean is that other people shouldn't expect Haskell to compete with C based on meaningless shootout numbers or the rare successful benchmarks published here (unsuccessful benchmarks are just not published - I hope that's obvious?). The same people who prove that Haskell is as fast as C when they want to will prove that Haskell is much slower when they need the opposite conclusion. Don't believe ads - check it yourself with the code you actually write -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Nov 5, 2009, at 1:49 AM, David Virebayre wrote:
I think that's in a way what Bulat is saying: for Haskell to really compete with C in *his view*, if I understand it, the compiler has to be able to take idiomatic Haskell code and translate it into idiomatic C code or better.
Or, said another way, we have to be able to write things like SDL, JPEG or MPEG processing in Haskell, instead of writing bindings to C libraries, without losing on performance.
And this is confusing to those of us who are not compiler experts. Haskell knows when I have a list of Doubles, you know, because it's strongly typed. Then it proceeds to box them. Huh? The laziness thing has many examples of _reducing_ efficiency, but there seems to be a real lack of examples where it helps. In fact it seems to _always_ hurt. People sure seem excited about it. Me, not so excited. I've asked this question before and the answer, apparently, is polymorphism. Maybe I'm oversimplifying :-) Maybe the folks at MLton will add type classes to ML ;-) Brian

brian
Haskell knows when I have a list of Doubles, you know, because it's strongly typed.
Right - so you can do

    foo, bar :: [Double]
    foo = 1:2:3:foo
    bar = [1,2,3,undefined,4,5,6]
Then it proceeds to box them. Huh?
Otherwise, the above wouldn't work. For lists of known constants, they could be unboxed and packed into an array. But that means you'd have to have two different types of lists, and if you really want an unboxed array, why not use one explicitly?
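For instance (example mine, not from the thread), asking for the unboxed representation explicitly:

    import Data.Array.Unboxed

    xs :: UArray Int Double
    xs = listArray (0, 2) [1, 2, 3]  -- elements stored flat; no thunks, no boxes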
The laziness thing has many examples of _reducing_ efficiency, but there seems to be a real lack of examples where it helps. In fact it seems to _always_ hurt. People sure seem excited about it. Me, not so excited.
Well - it allows me to easily process files larger than memory. Many of my programs tend to follow a pattern like

    -- build 'operation' based on command line switches etc., then:
    main = writeFile outFile . map operation =<< readFile inFile

Lazy bytestrings are my current favorite, since they read bytes in an efficient, packed format, present a list-like interface, and are chunkwise lazy, so streaming can be done in constant space. The "build operation" part often ends up a bit gross, but I have a plan for that which I hope to come back to later on.
I've asked this question before and the answer, apparently, is polymorphism.
Really? -k -- If I haven't seen further, it is by standing in the footprints of giants

On Thu, 2009-11-05 at 15:57 +0100, Ketil Malde wrote:
Lazy bytestrings are my current favorite, since they read bytes in an efficient, packed format, present a list-like interface, and are chunkwise lazy, so streaming can be done in constant space.
The "build operation" part often ends up a bit gross, but I have a plan for that which I hope to come back to later on.
Yes, they're not good for construction atm. The Builder monoid from the binary package is pretty good though. I've considered pulling it out and putting it in the bytestring package. Duncan

Duncan Coutts
The "build operation" part often ends up a bit gross, but I have a plan for that which I hope to come back to later on.
Yes, they're not good for construction atm. The Builder monoid from the
I've used that a bit, but that's not what I'm talking about. Serializing and deserializing to/from bytestrings is sometimes complicated, but usually straightforward enough. The operation in between is what is giving me headaches. Basically, I have a bunch of command line options - filter on this, modify on that, produce some kind of output to some file - that must be interpreted in order to produce a combined filter/modifier processing. The pipeline looks something like this:

    readFoo  :: IO [Foo]
    process  :: [Foo] -> IO [Foo]
    writeFoo :: [Foo] -> IO ()

The hard part is how to elegantly construct the "process" part. If it were just filtering or modifications, it could be a pure function. The complexity comes from sometimes needing to split off some output to some file. Currently, I'm opening handles in advance, processing one Foo at a time, writing it to the correct handles, and finally closing the handles when done. This is a rather pedestrian approach. I'm now considering defining

    -- needs forkIO from Control.Concurrent
    branch :: ([Foo] -> IO ()) -> [Foo] -> IO [Foo]
    branch iop fs = do
      forkIO (iop fs)
      return fs

which, if I understand correctly, would allow me to write

    -- (>=>) is Kleisli composition, from Control.Monad
    process = filterM this >=> mapM that >=> branch (writeBar . map foo2bar) >=> filterM theother

So - is this a good way to approach it? I feel slightly queasy about using concurrency for this, although I think it'll work well in practice. It is very important that this is lazy - the list of Foos can be larger than available memory, so one risk is that one thread might run off with the list of Foos with the other trailing far behind, leading to increased memory use. Previous experience seems to indicate that the 'head' thread will be slowed by disk/memory and allow the trailing threads to keep up. I do have a nagging feeling that this could be solved more elegantly with arrows or a lazy writer monad, or something else that I don't know enough about. I'd be happy to hear any suggestions. -k PS: I probably need to catch the threadIDs, and wait for all threads to finish as well. This is left as an exercise for the writer. :-) -- If I haven't seen further, it is by standing in the footprints of giants

Hello brian, Thursday, November 5, 2009, 5:46:51 PM, you wrote:
The laziness thing has many examples of _reducing_ efficiency, but there seems to be a real lack of examples where it helps.
    main = interact id

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Thu, Nov 5, 2009 at 6:46 AM, brian
On Nov 5, 2009, at 1:49 AM, David Virebayre wrote:
I think that's in a way what Bulat is saying: for Haskell to really compete with C in *his view*, if I understand it, the compiler has to be able to take idiomatic Haskell code and translate it into idiomatic C code or better.
Or, said another way, we have to be able to write things like SDL, JPEG or MPEG processing in Haskell, instead of writing bindings to C libraries, without losing on performance.
And this is confusing to those of us who are not compiler experts.
Haskell knows when I have a list of Doubles, you know, because it's strongly typed.
Then it proceeds to box them. Huh?
Imagine a computation which will yield a Double if evaluated, but has not yet been evaluated. How do you store that in the list?
The laziness thing has many examples of _reducing_ efficiency, but there seems to be a real lack of examples where it helps. In fact it seems to _always_ hurt. People sure seem excited about it. Me, not so excited. I've asked this question before and the answer, apparently, is polymorphism.
I've asked this question before and the answer, apparently, is polymorphism.
I can't really think of how laziness and polymorphism are related. For me the big win with laziness is composability. Laziness allows us to express things in ways that are more natural. The prelude function 'take' is a perfect example. It allows you to use finite portions of infinite lists. You could then express an infinite series very naturally and then decouple from that the logic to process finite parts. The implication here is that laziness allows you to use data structures for control flow. This all works together to enable separation of concerns. Which is generally a very good thing if you want to reason about your source code. Laziness can also be thought of as a transformation on the time complexity of algorithms. Sure, the worst-case complexity still remains but often you can get a better average case by only computing as much as you need. I hope that helps, Jason
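A tiny illustration of that 'take' pattern (example mine): the producer is conceptually infinite, and the consumer decides how much of it ever gets computed:

    powersOfTwo :: [Integer]
    powersOfTwo = iterate (*2) 1          -- an infinite series, defined naturally

    firstTen :: [Integer]
    firstTen = take 10 powersOfTwo        -- only ten elements are ever built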

On Nov 5, 2009, at 8:26 AM, Jason Dagit wrote:
I can't really think of how laziness and polymorphism are related. For me the big win with laziness is composability. Laziness allows us to express things in ways that are more natural. The prelude function 'take' is a perfect example. It allows you to use finite portions of infinite lists. You could then express an infinite series very naturally and then decouple from that the logic to process finite parts. The implication here is that laziness allows you to use data structures for control flow. This all works together to enable separation of concerns. Which is generally a very good thing if you want to reason about your source code.
My bad, I meant polymorphism as the answer as to why things are boxed.
Laziness can also be thought of as a transformation on the time complexity of algorithms. Sure, the worst-case complexity still remains but often you can get a better average case by only computing as much as you need.
I hope that helps,
It does.

On Nov 5, 2009, at 8:26 AM, Jason Dagit wrote:
Haskell knows when I have a list of Doubles, you know, because it's strongly typed.
Then it proceeds to box them. Huh?
Imagine a computation which will yield a Double if evaluated, but has not yet been evaluated. How do you store that in the list?
So laziness is causing the boxing to be necessary?

On Thu, Nov 5, 2009 at 6:15 PM, brian
On Nov 5, 2009, at 8:26 AM, Jason Dagit wrote:
Haskell knows when I have a list of Doubles, you know, because it's strongly typed.
Then it proceeds to box them. Huh?
Imagine a computation which will yield a Double if evaluated, but has not yet been evaluated. How do you store that in the list?
So laziness is causing the boxing to be necessary?
"Necessary" is a strong word within formal/mathematical communities. If you mean it in that sense, then I'm not sure it's necessary. My (incomplete) understanding is that no one has a better way than boxing that has as wide applicability as boxing. Perhaps there are techniques that work better. My guess is that they are either 1) special cases; or 2) have yet to be discovered. I wonder if perhaps supercompilation or perhaps whole program optimizations will eventually be able to eliminate much of the boxing we have today. Strictness analysis has done a lot to remove boxing but it is not perfect due to the halting problem. Jason

And this is confusing to those of us who are not compiler experts.
Haskell knows when I have a list of Doubles, you know, because it's strongly typed.
Then it proceeds to box them. Huh?
The laziness thing has many examples of _reducing_ efficiency, but there seems to be a real lack of examples where it helps. In fact it seems to _always_ hurt. People sure seem excited about it. Me, not so excited.
I have a program that involves a somewhat involved "compilation" pipeline. As a result of laziness, it's all done incrementally, and I can start getting results right away, which is essential to my application. Without laziness I'd have to do something awkward and complicated, like breaking it into a lot of message-passing threads or processing in chunks (and it's impossible to know how much context each chunk will need without further hackery). I can abort the computation cleanly by simply stopping the consumer, and everything gets GCed. And I get all sorts of convenient idioms like 'zip [0..]' and calculating stuff in 'where' that I may not need. And it's just fun. So I'm still excited about it :)
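For instance (example mine), the 'zip [0..]' idiom for numbering lines - the infinite index list costs nothing beyond what is consumed:

    numberLines :: String -> String
    numberLines s = unlines [ show i ++ ": " ++ l
                            | (i, l) <- zip [0 :: Int ..] (lines s) ]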

On Wed, Nov 4, 2009 at 10:34 AM, Richard O'Keefe
(4) It comes with its own IDE. I don't think it can do anything much that Haskell tools can't do, but if you don't like looking for things, it's a help.
And a well-integrated GUI toolkit. If it weren't for the Windows bias I'd have definitely taken the time to learn the language. martin

On Nov 4, 2009, at 9:56 PM, Martin DeMello wrote [about Clean]:
And a well-integrated GUI toolkit. If it weren't for the Windows bias I'd have definitely taken the time to learn the language.
The GUI toolkit was originally available on the Mac. But now, ah, now! "The Object I/O Library 1.2 is currently only available for the Wintel platform." The last time I tried Clean on a SPARC, using even the old GUI toolkit required you to locate and install a graphics library (XView) that Sun abandoned a _long_ time ago. They have some _really_ interesting ideas about building web sites using generics. I paid for the Intel C compiler. I'd pay for Clean, if only I _could_ use it.

Martin DeMello wrote:
On Wed, Nov 4, 2009 at 10:34 AM, Richard O'Keefe
wrote: (4) It comes with its own IDE. I don't think it can do anything much that Haskell tools can't do, but if you don't like looking for things, it's a help.
And a well-integrated GUI toolkit. If it weren't for the Windows bias I'd have definitely taken the time to learn the language.
I'm disappointed that Haskell doesn't have *more* of a Windows bias. It _is_ the platform used by 90% of desktop computers, after all. (As unfortunate as that undeniably is...) In particular, I really wish we could make it so that stuff from Hackage actually compiles on Windows. (Disclaimer: stuff written in Haskell compiles just fine. It's FFI bindings that unanimously refuse to compile.) It's also somewhat irritating that the I/O libraries have a few quirks apparently related to mingw32. But hey, that one at least should be fixable...

Andrew Coppin wrote:
I'm disappointed that Haskell doesn't have *more* of a Windows bias. It _is_ the platform used by 90% of desktop computers, after all. (As unfortunate as that undeniably is...)
That is not true in my home, and it's not true where I work. In addition, saying "90% of all desktop computers" is misleading; instead we should be talking about the computers of software developers, and there the figure is almost certainly well below 90%.
In particular, I really wish we could make it so that stuff from Hackage actually compiles on Windows. (Disclaimer: stuff written in Haskell compiles just fine. It's FFI bindings that unanimously refuse to compile.) It's also somewhat irritating that the I/O libraries have a few quirks apparently related to mingw32. But hey, that one at least should be fixable...
The problem here is that Windows is the odd one out. Stuff written for userspace Linux will usually compile, with little more than minor alterations, on OS X and all the other Unix-like systems. Making that same code build on Windows can be a significant amount of work, and that work should not be the responsibility of the people who write code on Linux and Mac. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/

Erik de Castro Lopo wrote:
Andrew Coppin wrote:
I'm disappointed that Haskell doesn't have *more* of a Windows bias. It _is_ the platform used by 90% of desktop computers, after all. (As unfortunate as that undeniably is...)
That is not true in my home, and it's not true where I work.
In addition, saying "90% of all desktop computers" is misleading; instead we should be talking about the computers of software developers, and there the figure is almost certainly well below 90%.
Depends what you develop. I know of plenty of developers who use MS Visual Studio for everything, for example. You can pretend that Windows isn't popular and thus there's no need to support it, but to me that seems like a fairly unrealistic point of view.
The problem here is that Windows is the odd one out.
This, it seems, is why there are programs in this world that are designed for Windows but (sometimes) also run on Unix, and other programs which are designed for Unix but (sometimes) also run on Windows. I would like to add that GHC itself is really rather well-behaved under Windows. Many Unix programs simply get recompiled under Cygwin or something, resulting in a program that *runs* on Windows, but doesn't follow any Windows-related conventions and so forth. GHC actually behaves very well under Windows. And we have some quite nice library support for writing Haskell programs which compile unmodified under Windows and Linux. (E.g., filepath, ansi-terminal, etc.) I'm just saying, we could still do better...

Andrew Coppin wrote:
Depends what you develop. I know of plenty of developers who use MS Visual Studio for everything, for example.
And those developers do not care whether Haskell libraries compile on Windows or not.
You can pretend that Windows isn't popular and thus there's no need to support it, but to me that seems like a fairly unrealistic point of view.
Windows is popular amongst general computer users, but less popular amongst developers and less popular still amongst Haskell developers.
The problem here is that Windows is the odd one out.
This, it seems, is why there are programs in this world that are designed for Windows but (sometimes) also run on Unix,
This is a very small proportion of all Windows applications, and many/most of them run via Wine, a code base of 1.8 million lines with a worth, according to Ohloh, of $29 million (which I consider very conservative): http://www.ohloh.net/p/wine
and other programs which are designed for Unix but (sometimes) also run on Windows.
A far larger proportion of Unix programs run on Windows, due to two factors:
- Unix people write command line apps and libraries, which are always easier to port to Windows.
- Unix people do things like the Windows backends for GTK+ and Qt, to make GUI applications portable to Windows.
I'm just saying, we could still do better...
My point is that it's up to the people who care about Windows to fix things for Windows. I am a user of Debian and Ubuntu. About a year ago I became sick and tired of the poor state of Haskell in Debian. I didn't complain; I joined the debian-haskell-maintainers group and started packaging Haskell stuff for Debian. Now the situation has improved vastly. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/

2009/11/5 Erik de Castro Lopo
Andrew Coppin wrote:
I'm disappointed that Haskell doesn't have *more* of a Windows bias. It _is_ the platform used by 90% of desktop computers, after all. (As unfortunate as that undeniably is...)
That is not true in my home and it's not true where I work.
In addition, saying "90% of all desktop computers" is misleading; instead we should be talking about the computers of software developers and there, the figure is almost certainly well below 90%.
Why? After all, software is always (in one way or another) written for users, not other software developers. -- Deniz Dogan

Deniz Dogan wrote:
2009/11/5 Erik de Castro Lopo
In addition, saying "90% of all desktop computers" is misleading; instead we should be talking about the computers of software developers and there, the figure is almost certainly well below 90%.
Why? After all, software is always (in one way or another) written for users, not other software developers.
We're talking about Haskell libraries. The *only* people who are interested in Haskell libraries are Haskell developers (i.e., not even developers in general). Desktop users who use nothing more than Word, Excel and IE are not interested in Haskell libraries. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/

On Thursday 05 November 2009 23:02:30, Erik de Castro Lopo wrote:
Andrew Coppin wrote:
I'm disappointed that Haskell doesn't have *more* of a Windows bias. It _is_ the platform used by 90% of desktop computers, after all. (As unfortunate as that undeniably is...)
That is not true in my home and it's not true where I work.
Neither is it true in the group of people I know. However, the number of computers in that group which had Windows installed when they were bought may be close to 90% - it's close to impossible to buy a completely assembled, ready-to-go computer without it here - which is easily remedied by inserting an openSUSE or Ubuntu disk as soon as it's connected to power :)
In addition, saying "90% of all desktop computers" is misleading; instead we should be talking about the computers of software developers and there, the figure is almost certainly well below 90%.
In particular, I really wish we could make it so that stuff from Hackage actually compiles on Windows. (Disclaimer: stuff written in Haskell compiles just fine. It's FFI bindings that unanimously refuse to compile.) It's also somewhat irritating that the I/O libraries have a few quirks apparently related to mingw32. But hey, that one at least should be fixable...
The problem here is that Windows is the odd one out.
Still, it would be nice if things were easily installable on Windows, so maybe some Windows user should write a tool which makes installing C libraries on Windows feasible.
Stuff written for userspace Linux will usually compile with little more than minor alterations on OS X and all the other Unix-like systems. Making that same code build on Windows can be a significant amount of work, and that work should not be the responsibility of the people who write code on Linux and Mac.
Erik

One of the differences between Haskell and Clean I did not see mentioned in this discussion is that Clean does not allow so-called partial parametrisation, i.e. all function calls have to be fully saturated. Although GHC can sometimes (often) find out that a call is saturated, this becomes more complicated when higher-order functions are involved. As a consequence, extra tests have to be performed at each call, which partially explains the speed difference (see the sketch after the quoted message below). I have no idea how much difference it would make if such tests could be avoided completely in Haskell implementations. We hope to be able to say something more about this difference in the future, based on the global (GRIN-based) analysis done in the Utrecht Haskell Compiler. Doaitse On 3 nov 2009, at 21:30, Deniz Dogan wrote:
Recently there has been a lot of discussion on this list about the programming language Clean and converting Clean programs to Haskell. Reading the Wikipedia article on the language, I can't really see any major difference between that and Haskell, except for the monads vs. uniqueness types.
So what's the deal with Clean? Why is it preferable to Haskell? Why is it not?
-- Deniz Dogan
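To illustrate the saturation point Doaitse raises, here is a minimal Haskell sketch (the names are purely illustrative): when a function arrives as an argument, its arity is not visible at the call site, so the compiler cannot always prove that an application is saturated, and a runtime check is needed.

add :: Int -> Int -> Int
add x y = x + y

-- Inside 'twice', nothing is known about f's arity, so the call 'f x'
-- may or may not be saturated; this is where the extra tests come in.
twice :: (Int -> Int) -> Int -> Int
twice f x = f (f x)

main :: IO ()
main = print (twice (add 1) 0)  -- 'add 1' is a partial application; prints 2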

Doaitse Swierstra wrote:
One of the differences between Haskell and Clean I did not see mentioned in this discussion is that Clean does not allow so-called partial parametrisation, i.e. all function calls have to be fully saturated.
I don't understand what you mean. Can you give an example? Kind regards, John van Groningen

Doaitse Swierstra wrote:
One of the differences between Haskell and Clean I did not see mentioned in this discussion is that Clean does not allow so-called partial parametrisation, i.e. all function calls have to be fully saturated.
John van Groningen wrote:
I don't understand what you mean. Can you give an example?
Kind regards,
John van Groningen
I think the idea was that Clean doesn't support a syntax like "map (**2)" for a function that will take a list and square its elements. The call to map there is not fully saturated, since it's waiting for another argument. (As a disclaimer, I've not used Clean, so I could be speaking nonsense; it's just how I read the original statement.)
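For what it's worth, the Haskell side of that example looks like this (a minimal sketch; note that (**) is floating-point exponentiation in Haskell, hence the Double type):

-- 'map (**2)' applies map to one argument; the result is a function
-- still waiting for the list, i.e. a partial application of map.
squareAll :: [Double] -> [Double]
squareAll = map (**2)

main :: IO ()
main = print (squareAll [1,2,3])  -- [1.0,4.0,9.0]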

L Spice writes:
Doaitse Swierstra wrote:
One of the differences between Haskell and Clean I did not see mentioned in this discussion is that Clean does not allow so-called partial parametrisation, i.e. all function calls have to be fully saturated.
John van Groningen wrote:
I don't understand what you mean. Can you give an example?
I think the idea was that Clean doesn't support a syntax like "map (**2)"
This terminology is new to me; I would normally call that "partial application". Googling "partial parametrization" gives me some papers¹ that appear to use the term as a synonym. I'm surprised that (if) Clean doesn't support it. -k ¹ http://cat.inist.fr/?aModele=afficheN&cpsidt=5420616 and http://www.springerlink.com/content/wg64116566522061/ -- If I haven't seen further, it is by standing in the footprints of giants

One of the differences between Haskell and Clean I did not see mentioned in this discussion is that Clean does not allow so-called partial parametrisation, i.e. all function calls have to be fully saturated.
I think there may be a misunderstanding here. Beware: I haven't used Clean in a while.

(1) Clean didn't have sections. This is no big deal; Clean does have "flip" in StdFunc:
    (x +) => (+) x
    (+ y) => (flip (+)) y

(2) Clean requires saturated *DEFINITIONS*. If you declare f :: A B C -> D, then each rule you give for f must have exactly three arguments. If you declare f :: A -> B -> C -> D, then each rule you give for f must have exactly one argument. See section 3.7 of the Clean 2.1 language report. This has no consequences for how you can *apply* such a function. Section 3.7.1 of the report is explicit:

    In CLEAN all symbols (functions and constructors) are defined with fixed arity. However, in an application it is of course allowed to apply them to an arbitrary number of arguments. A curried application of a function is an application of a function with a number of arguments which is less than its arity (note that in CLEAN the arity of a function can be derived from its type). With the aid of the predefined internal function _AP a curried function applied on the required number of arguments is transformed into an equivalent uncurried function application. The type axiom's (sic.) of the CLEAN type system include for all s defined with arity n the equivalence of s :: t1 -> (t2 -> (... (tn -> tr) ...)) with s :: t1 t2 ... tn -> tr.
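To make the section/flip correspondence concrete, here is a minimal Haskell sketch of the same equivalences (Haskell rather than Clean, and the names are illustrative):

-- (10 +) and (+) 10 denote the same function; (+ 10) and flip (+) 10 likewise.
leftSection, leftPlain :: Int -> Int
leftSection = (10 +)
leftPlain   = (+) 10

rightSection, rightPlain :: Int -> Int
rightSection = (+ 10)
rightPlain   = flip (+) 10

main :: IO ()
main = print (map leftPlain [1, 2], map rightPlain [1, 2])  -- ([11,12],[11,12])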

Why speak nonsense when you can test it?
// ----------------------------------------------------------------------------
module nonsense
import StdEnv
nonsense = map ((^) 2)
Start = nonsense [1,2,3]
// ----------------------------------------------------------------------------
Running gives:
[2,4,8]
Best wishes
Stephen
2009/11/8 L Spice
John van Groningen writes:
Doaitse Swierstra wrote:
One of the differences between Haskell and Clean I did not see mentioned in this discussion is that Clean does not allow so-called partial parametrisation, i.e. all function calls have to be fully saturated.
I don't understand what you mean. Can you give an example?
Kind regards,
John van Groningen
I think the idea was that Clean doesn't support a syntax like "map (**2)" for a function that will take a list and square its elements. The call to map there is not fully saturated, since it's waiting for another argument.
(As a disclaimer, I've not used Clean, so I could be speaking nonsense; it's just how I read the original statement.)

My impression is that the saturated-ness Doaitse speaks of is covered in Urban Boquist's PhD thesis on the GRIN intermediate language, circa page 31: http://www.cs.chalmers.se/~boquist/phd/ As the code snippet above shows, Clean handles partial application entirely adequately. Best wishes Stephen
Doaitse Swierstra wrote:
One of the differences between Haskell and Clean I did not see mentioned in this discussion is that Clean does not allow so-called partial parametrisation, i.e. all function calls have to be fully saturated.

Stephen Tetley wrote:
Why speak nonsense when you can test it?
// ----------------------------------------------------------------------------
module nonsense
import StdEnv
nonsense = map ((^) 2)
Start = nonsense [1,2,3]
// ----------------------------------------------------------------------------
Running gives:
[2,4,8]
I think they wanted square numbers, not powers of two.
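In Haskell terms, the two readings differ as follows (a minimal sketch; the corresponding Clean fix would use flip in the same way):

-- ((^) 2) n computes 2^n, whereas (flip (^) 2) n computes n^2.
powersOfTwo, squares :: [Int]
powersOfTwo = map ((^) 2)      [1, 2, 3]  -- [2,4,8]
squares     = map (flip (^) 2) [1, 2, 3]  -- [1,4,9]

main :: IO ()
main = print (powersOfTwo, squares)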
participants (27):
- ajb@spamcop.net
- Alberto G. Corona
- Andrew Coppin
- Ben Lippmeier
- brian
- Bulat Ziganshin
- Daniel Fischer
- David Leimbach
- David Virebayre
- Deniz Dogan
- Don Stewart
- Duncan Coutts
- Erik de Castro Lopo
- Evan Laforge
- Gregory Crosswhite
- Henning Thielemann
- Jason Dagit
- Joe Fredette
- John van Groningen
- Ketil Malde
- L Spice
- Martin DeMello
- Richard O'Keefe
- Roman Leshchinskiy
- S. Doaitse Swierstra
- Stephen Tetley
- wren ng thornton