
Hello all, I, along with some friends, have been looking into Haskell lately. I'm very happy with Haskell as a language; however, a friend sent me this link: http://shootout.alioth.debian.org/gp4/ which lets you compare several language implementations. Haskell seems to lag behind Clean.
From what I've seen of Clean, it seems almost like Haskell. The Clean distribution even includes a Haskell->Clean translator, so the obvious question is: why is Haskell slower? Given that they are similar languages and GHC is a very good compiler, can't it get at least as fast as Clean?
What am I missing here? (I wrote this mail assuming the results from the URL are trustworthy). Cheers, -- Paulo Jorge Matos - pocm at soton.ac.uk http://www.personal.soton.ac.uk/pocm PhD Student @ ECS University of Southampton, UK

Paulo J. Matos wrote:
Hello all,
I, along with some friends, have been looking into Haskell lately. I'm very happy with Haskell as a language; however, a friend sent me this link: http://shootout.alioth.debian.org/gp4/
which lets you compare several language implementations. Haskell seems to lag behind Clean.
From what I've seen of Clean, it seems almost like Haskell. The Clean distribution even includes a Haskell->Clean translator, so the obvious question is: why is Haskell slower? Given that they are similar languages and GHC is a very good compiler, can't it get at least as fast as Clean?
What am I missing here? (I wrote this mail assuming the results from the URL are trustworthy).
I don't know for certain that this is still the case (or, if so, why). But I do remember that when I was a Clean user a few years ago, both the Clean compiler and the resulting executables were amazingly fast (certainly by FPL standards). I've often thought it's a real shame that two different but very similar languages exist. I think the Clean compiler would be one of the best, if not *the* best, Haskell implementations available, apart from the minor snag that it isn't Haskell at all :-) As things stand at the moment, GHC has no serious competition, so we don't really know how fast it "should" be. Maybe this will change in future.
BTW, the reason I still jumped ship in the end and became a Haskell user had nothing to do with performance. The reason was that if I was going to invest a lot of time in programs and libraries, I wanted some confidence that I'd made the right choice long term, and I had issues with the Clean approach to concurrency (what the Clean folk call "deterministic concurrency"). I didn't (and still don't) see this as viable, but during a long and heated flame war on the Clean mailing list it became clear that the Clean team did not agree with my point of view, so things were not likely to change any time soon :-(
Regards -- Adrian Hey

I'm curious what the experts think too. So far I just guess it is because Clean's type system gives the compiler better hints for optimization:
* it is easy to mark things strict (even in function signatures etc.), so it is possible to avoid unnecessary CAF creation
* uniqueness types allow in-place modification (instead of creating a copy of an object on the heap and modifying the copy), so you save GC time and also improve cache hit rates
Peter.

Add to that better unbox/box annotations; this may make an even bigger difference than the strictness stuff, because it lets you avoid a lot of indirect references to data. Anyway, if Haskell did some kind of whole-program analysis and transformation, it could probably mitigate all these problems to a certain degree. So the slowness of Haskell (compared to Clean) is a consequence of its type system. OK, I'll stop; I did not write the Clean or Haskell optimizers or anything like that :-D Peter. Peter Hercek wrote:
I'm curious what the experts think too.
So far I just guess it is because Clean's type system gives the compiler better hints for optimization:
* it is easy to mark things strict (even in function signatures etc.), so it is possible to avoid unnecessary CAF creation
* uniqueness types allow in-place modification (instead of creating a copy of an object on the heap and modifying the copy), so you save GC time and also improve cache hit rates
Peter.
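For readers unfamiliar with how strictness is expressed on the GHC side, here is a minimal sketch (the function and type names are invented for illustration): bang patterns and strict constructor fields are the usual GHC way to tell the compiler not to build thunks.

```haskell
{-# LANGUAGE BangPatterns #-}

-- The bang pattern forces the accumulator at every step, so GHC keeps
-- it evaluated instead of building a chain of thunks on the heap.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs

-- Strict constructor fields (the '!') play the same role in data types.
data Point = Point !Double !Double
```

GHC's strictness analyser can often infer this on its own, but the annotations make the intent explicit, much like Clean's strictness annotations in type signatures.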

On 31/10/2007, Peter Hercek
Anyway, if Haskell did some kind of whole-program analysis and transformation, it could probably mitigate all these problems to a certain degree.
I think JHC is supposed to do whole-program optimisation. Rumour has it that its Hello World examples are the fastest around - I have heard it has problems with larger code bases, though. ;-) What's the current state of play on this? D.

On 31/10/2007, Peter Hercek
Add to that better unbox/box annotations; this may make an even bigger difference than the strictness stuff, because it lets you avoid a lot of indirect references to data.
Anyway, if Haskell did some kind of whole-program analysis and transformation, it could probably mitigate all these problems to a certain degree.
So, I might assert that it is not a problem with the Haskell language itself but with the compiler. Which means that, with enough effort, it should be possible for the compiler to generate code with performance as good as Clean's.
So the slowness of Haskell (compared to Clean) is a consequence of its type system. OK, I'll stop; I did not write the Clean or Haskell optimizers or anything like that :-D
The type system? Why is that? Shouldn't the type system in fact speed up the generated code, since the compiler will know all types at compile time?
Peter.
Peter Hercek wrote:
I'm curious what the experts think too.
So far I just guess it is because Clean's type system gives the compiler better hints for optimization:
* it is easy to mark things strict (even in function signatures etc.), so it is possible to avoid unnecessary CAF creation
* uniqueness types allow in-place modification (instead of creating a copy of an object on the heap and modifying the copy), so you save GC time and also improve cache hit rates
Peter.
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
-- Paulo Jorge Matos - pocm at soton.ac.uk http://www.personal.soton.ac.uk/pocm PhD Student @ ECS University of Southampton, UK

Paulo J. Matos wrote:
So the slowness of Haskell (compared to Clean) is a consequence of its type system. OK, I'll stop; I did not write the Clean or Haskell optimizers or anything like that :-D
The type system? Why is that? Shouldn't the type system in fact speed up the generated code, since the compiler will know all types at compile time?
Yes, but apparently the Clean type system gives more information to the compiler than the Haskell type system does. The Haskell type system doesn't say that a certain value can be updated in place or that a certain value should not be boxed (not counting the GHC extension for unboxed types). Reinier
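As a concrete illustration of what GHC does offer here (a minimal sketch; the type and function names are invented for illustration): the UNPACK pragma on a strict field asks GHC to store the value directly inside the constructor rather than behind a pointer.

```haskell
-- UNPACK removes the indirection a boxed Double field would cost:
-- the two coordinates are stored inline in the Vec2 constructor.
data Vec2 = Vec2 {-# UNPACK #-} !Double {-# UNPACK #-} !Double

dot :: Vec2 -> Vec2 -> Double
dot (Vec2 x1 y1) (Vec2 x2 y2) = x1 * x2 + y1 * y2
```

This is opt-in and per-field, whereas the point above is that Clean's type system carries this kind of representation information more pervasively.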

Paulo J. Matos wrote:
type system? Why is that? Shouldn't type system in fact speed up the generated code, since it will know all types at compile time?
The *existence* of a type system is helpful to the compiler. Peter was referring to the differences between Haskell and Clean. Specifically, Clean's uniqueness types allow for a certain kind of zero-copy mutation optimisation which is much harder for a Haskell compiler to infer automatically. It's not clear to me that it's actually worth it, but I think that's the point at issue. I can *imagine* algorithms in which copying is actually faster than mutation, if copying gives you better locality. Jules

On Wed, 31 Oct 2007 14:17:13 +0000
Jules Bean
Specifically, Clean's uniqueness types allow for a certain kind of zero-copy mutation optimisation which is much harder for a Haskell compiler to infer automatically. It's not clear to me that it's actually worth it, but I think that's the point at issue. I can *imagine* algorithms in which copying is actually faster than mutation, if copying gives you better locality.
If you want in-place update in Haskell, you can use the ST monad or IORefs. Yes, you have to refactor code, but anecdotally uniqueness types aren't without problems either - you can make one small change and your code no longer satisfies the uniqueness condition. -- Robin

Robin Green wrote:
On Wed, 31 Oct 2007 14:17:13 +0000 Jules Bean
wrote: Specifically, Clean's uniqueness types allow for a certain kind of zero-copy mutation optimisation which is much harder for a Haskell compiler to infer automatically. It's not clear to me that it's actually worth it, but I think that's the point at issue. I can *imagine* algorithms in which copying is actually faster than mutation, if copying gives you better locality.
If you want in-place update in Haskell, you can use the ST monad or IORefs. Yes, you have to refactor code, but anecdotally uniqueness types aren't without problems either - you can make one small change and your code no longer satisfies the uniqueness condition.
IORefs don't give you in-place update. They give you mutation, but new values are still allocated in new heap.

foo <- newIORef "hi"
writeIORef foo "bye"
-- "bye" is a new string, allocated in new heap. The only thing that
-- got mutated was a pointer.

STArrays and certain IOArrays give you in-place update, though. Jules
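A minimal sketch of the in-place alternative Jules mentions, using a mutable unboxed array in the ST monad (the function name is invented for illustration):

```haskell
import Control.Monad (forM_)
import Data.Array.ST (newArray, runSTUArray, writeArray)
import Data.Array.Unboxed (UArray, elems)

-- The array is updated destructively inside runSTUArray: no per-write
-- copies are allocated, and only the final frozen result escapes as
-- an immutable UArray.
squares :: Int -> UArray Int Int
squares n = runSTUArray $ do
  arr <- newArray (0, n) 0
  forM_ [0 .. n] $ \i -> writeArray arr i (i * i)
  return arr
```

The type of runSTUArray guarantees the mutable array cannot leak, so the function is still observably pure - roughly the guarantee that Clean's uniqueness types provide by other means.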

Are these benchmarks still up to date? When I started learning FP, I had to choose between Haskell and Clean, so I made a couple of little programs in both. GHC 6.6.1 with -O was faster in most cases, sometimes a lot faster... I don't have the source code anymore, but it was based on the book "The Haskell road to math & logic". However, the Clean compiler itself is really fast, which is nice; it reminds me of the feeling I had with Turbo Pascal under DOS :-) I find GHC rather slow at compilation. But that is another topic, of course. Peter Paulo J. Matos wrote:
Hello all,
I, along with some friends, have been looking into Haskell lately. I'm very happy with Haskell as a language; however, a friend sent me this link: http://shootout.alioth.debian.org/gp4/
which lets you compare several language implementations. Haskell seems to lag behind Clean.
From what I've seen of Clean, it seems almost like Haskell. The Clean distribution even includes a Haskell->Clean translator, so the obvious question is: why is Haskell slower? Given that they are similar languages and GHC is a very good compiler, can't it get at least as fast as Clean?
What am I missing here? (I wrote this mail assuming the results from the URL are trustworthy).
Cheers,

bf3:
Are these benchmarks still up to date? When I started learning FP, I had to choose between Haskell and Clean, so I made a couple of little programs in both. GHC 6.6.1 with -O was faster in most cases, sometimes a lot faster... I don't have the source code anymore, but it was based on the book "The Haskell road to math & logic".
Could the difference be in the better Haskell libraries? We only really have the shootout programs, which are very small.
However, the Clean compiler itself is really fast, which is nice; it reminds me of the feeling I had with Turbo Pascal under DOS :-) I find GHC rather slow at compilation. But that is another topic, of course.
I find it comforting that GHC thinks so hard about my code. :) -- Don

The site claims it is quite up to date. About Haskell it says: "GHC, The Glorious Glasgow Haskell Compilation System, version 6.6". Most examples were compiled in the middle of this year, and at least -O was used. Each test has a log available. They are good at documenting what they do. Peter. Peter Verswyvelen wrote:
Are these benchmarks still up to date? When I started learning FP, I had to choose between Haskell and Clean, so I made a couple of little programs in both. GHC 6.6.1 with -O was faster in most cases, sometimes a lot faster... I don't have the source code anymore, but it was based on the book "The Haskell road to math & logic".
However, the Clean compiler itself is really fast, which is nice; it reminds me of the feeling I had with Turbo Pascal under DOS :-) I find GHC rather slow at compilation. But that is another topic, of course.
Peter

On 10/31/07, Paulo J. Matos
Hello all,
I, along with some friends, have been looking to Haskell lately. I'm very happy with Haskell as a language, however, a friend sent me the link: http://shootout.alioth.debian.org/gp4/
Careful: it's worse than you think. Many of the solutions to the shootout tests use "imperative" Haskell. "Real" "functional" Haskell performs significantly worse (by orders of magnitude).
participants (10)
-
Adrian Hey
-
Don Stewart
-
Dougal Stanton
-
Hugh Perkins
-
Jules Bean
-
Paulo J. Matos
-
Peter Hercek
-
Peter Verswyvelen
-
Reinier Lamers
-
Robin Green