What *not* to use Haskell for

Hi everyone. So I should clarify: I'm not a troll and do "see the Haskell light". But one thing I can never answer when preaching to others is "what does Haskell not do well?" Usually I'll avoid the question and explain that it is a 'complete' language and we have more than enough libraries to make it useful and productive. But I'd be keen to know if people have any anecdotes, ideally ones which can subsequently be twisted into an argument for Haskell ;) Cheers, Dave

Dave Tapley wrote:
But one thing I can never answer when preaching to others is "what does Haskell not do well?"
GHC's scheduler lacks any hard timeliness guarantees, so it's quite hard to use Haskell in realtime or even soft-realtime environments. This is probably not a fundamental problem with Haskell; it's a problem with the compiler/RTS which we happen to be using. It may be true that it's harder to write an RTS with realtime guarantees, but I doubt it's impossible. Jules

Jules Bean
GHC's scheduler lacks any hard timeliness guarantees.
This is probably not a fundamental problem with haskell. It's a problem with the compiler/RTS which we happen to be using.
Actually, I would say it is much worse than that. It is not merely a question of implementation. We do not have _any_ predictable theory of resource usage (time, memory) for a lazy language. There is no analysis (yet) which can look at an arbitrary piece of Haskell code and tell you how long it will take to execute, or how much heap/stack it will eat. What is more, it is very hard to do that in a modular way. The execution time of lazy code is entirely dependent on its usage/demand context. So you can't just apply WCET to single functions, then combine the results. It's a question I'd love to be able to solve, but I note that those who are currently working on predictable execution of functional languages (e.g. the Hume project) have already ditched laziness in favour of eager execution. Regards, Malcolm
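Malcolm's point about demand-dependent cost is easy to illustrate with a small sketch (the definitions below are illustrative, not from the thread): the cost of a lazy definition is a property of its demand context, not of the definition itself.

```haskell
-- The cost of `squares` is not a property of `squares` itself:
-- it depends entirely on how much of it the caller demands.
squares :: [Integer]
squares = map (^ 2) [1 ..]    -- conceptually infinite; costs nothing until demanded

firstFew :: Int -> [Integer]
firstFew n = take n squares   -- only O(n) work, despite the infinite definition

total :: Int -> Integer
total n = sum (firstFew n)    -- a different demand, a different cost
```

A WCET analysis of `squares` in isolation is meaningless; only `firstFew` or `total` have bounded costs, and only for particular arguments.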

Malcolm Wallace wrote:
We do not have _any_ predictable theory of resource usage (time, memory) for a lazy language. [...] So you can't just apply WCET to single functions, then combine the results.
That's true, but I'm not sure you need to solve that (hard, interesting) problem just to get *some* kind of timeliness guarantees. For example, the guarantee that a thread is woken up within 10µs of the MVar it was sleeping on being filled doesn't require you to solve the whole problem. It requires you to be able to bound GC time, or preempt the GC, but that's feasible, isn't it? Then there is the possibility of a strict DSL (probably but not necessarily a Monad) within Haskell which has strong timeliness guarantees. Jules
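The scenario Jules describes, one thread sleeping on an MVar and another waking it with a put, can be sketched as below. The wakeup-latency bound is the open question; this sketch only shows the shape of the interaction, not its timing.

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)

-- One thread sleeps on an MVar; another fills it. The timeliness
-- question is how tightly the RTS bounds the delay between the
-- putMVar below and the blocked thread resuming.
pingPong :: IO Int
pingPong = do
  box  <- newEmptyMVar
  done <- newEmptyMVar
  _ <- forkIO $ do
         x <- takeMVar box     -- sleeps until `box` is filled
         putMVar done (x + 1)  -- signals the main thread back
  putMVar box 41               -- wake the sleeping thread
  takeMVar done
```

Bounding the gap between the `putMVar` and the forked thread running again is exactly where GC pauses get in the way.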

On Tue, Nov 11, 2008 at 5:18 AM, Jules Bean wrote:
GHC's scheduler lacks any hard timeliness guarantees.
Thus it's quite hard to use haskell in realtime or even soft-realtime environments.
Actually, Haskell is an excellent language for hard real-time applications. At Eaton we're using it for automotive powertrain control. Of course, the RTS is not running in the loop. Instead, we write in a DSL, which generates C code for our vehicle ECU. Thanks to this great language, we traded 100K lines of Simulink for 3K lines of Haskell. Our current program is planned to hit the production lines in a few months. With similar methods, Haskell is also a great language for ASIC and FPGA design. -Tom
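Tom doesn't show code, but the approach he describes, an embedded DSL whose interpretation emits C, can be sketched roughly as follows. The `Expr` type and function names here are mine, not Eaton's actual DSL.

```haskell
-- A tiny expression language, deeply embedded in Haskell.
data Expr
  = Lit Int
  | Var String
  | Add Expr Expr
  | Mul Expr Expr

-- Render an expression as C source text.
cExpr :: Expr -> String
cExpr (Lit n)   = show n
cExpr (Var v)   = v
cExpr (Add a b) = "(" ++ cExpr a ++ " + " ++ cExpr b ++ ")"
cExpr (Mul a b) = "(" ++ cExpr a ++ " * " ++ cExpr b ++ ")"

-- Emit a C assignment statement.
cAssign :: String -> Expr -> String
cAssign lhs e = lhs ++ " = " ++ cExpr e ++ ";"
```

The RTS never runs on the target: Haskell runs at build time, doing the type checking and program transformation, and only the generated C ends up on the ECU.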

G'day all.
Quoting Tom Hawkins
Actually, Haskell is an excellent language for hard real-time applications. At Eaton we're using it for automotive powertrain control. Of course, the RTS is not running in the loop. Instead, we write in a DSL, which generates C code for our vehicle ECU.
Bingo! And thanks to someone for admitting that they do this. :-) "Hard real-time applications" is a huge area, and not all of the code that you write is code that ends up running on the target. Generally, in DSL/MDD-style development, not very much of the code that you write ends up running on the target. In some cases, _none_ of the code you write ends up running on the target. Haskell is almost ideal for tasks like this. Cheers, Andrew Bromage

People have been admitting to using Haskell like that for quite a while now.
I think it's an excellent use of Haskell as a DSEL host.
-- Lennart
On Thu, Nov 13, 2008 at 12:40 AM, Andrew Bromage wrote:
Haskell is almost ideal for tasks like this.
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

I think Haskell is not nice for writing general purpose libraries that could be easily and completely wrapped by other languages. You can wrap gtk, sqlite3, gsl, opengl etc., but you can't write Python bindings for Data.Graph. But, then, if you claim there's nothing else Haskell can't do, what do you need those bindings for? :) Best, Maurício

Hello Mauricio, Tuesday, November 11, 2008, 2:26:21 PM, you wrote: IMHO, Haskell isn't worse here than any other compiled language (C++, ML, Eiffel) and is better than Java or C#. Every language has its own object model and GC; the only way is to provide C-typed interfaces between languages (or use COM, IDL and other API-describing languages).
-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com
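Bulat's "C-typed interfaces" suggestion could look roughly like this on the Haskell side; the function names are mine, and the `foreign export` step (shown in a comment) applies to compiled modules built as shared libraries.

```haskell
import Foreign.C.Types (CInt (..))

-- The Haskell function we would like other languages to call.
triple :: Int -> Int
triple x = 3 * x

-- A C-typed wrapper. In a compiled module you would add
--
--   foreign export ccall "hs_triple" tripleC :: CInt -> CInt
--
-- build it as a shared library, and load it from Python/C/etc.
-- Only flat C types cross the boundary; a structure like
-- Data.Graph has no such flat representation, which is the problem.
tripleC :: CInt -> CInt
tripleC = fromIntegral . triple . fromIntegral
```

This is why wrapping works well in one direction (Haskell calling C libraries) but exporting rich Haskell data structures to other languages does not.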

Actually, one language you mention there *is* worse than the others
for writing wrappable library code: C++. Admittedly, it has a
Python interface now via Boost, but the main problem with writing
wrappable C++ code is the template system and the inheritance system
getting in the way. Symbol names aren't predictable or
standardized, so it becomes impossible to write a portable system for
finding and binding to functions in a library. I've not yet found a
good way to do it in FFI code, and I would love to, as one library in
particular I hold near and dear -- OpenSceneGraph -- is entirely
written in C++.
-- Jeff
-- I try to take things like a crow; war and chaos don't always ruin a picnic, they just mean you have to be careful what you swallow. -- Jessica Edwards

On Tue, 11 Nov 2008, Jefferson Heard wrote:
[...] the main problem with writing wrappable C++ code is the template system and the inheritance system getting in the way. Symbol names aren't predictable or standardized [...]
SWIG helps wrapping C++ libraries by providing C wrappers to C++ functions. However, as far as I know, templates cannot be wrapped as they are, but only instances of templates. Thus there is no wrapper to STL.

On Tue, Nov 11, 2008 at 14:46, Henning Thielemann
SWIG helps wrapping C++ libraries by providing C wrappers to C++ functions. However, as far as I know, templates cannot be wrapped as they are, but only instances of templates. Thus there is no wrapper to STL.
Maybe my understanding is a bit off, but isn't this to be expected? There's no way to compile a generic template to machine code, as template instantiation happens at source level in C++. cheers, Arnar

Hello Jefferson, Tuesday, November 11, 2008, 4:12:40 PM, you wrote: maybe I don't understand something, but why are C#, Java, Delphi, Visual Basic, Perl, Python, Ruby or even ML better than C++? Symbol names in C++ are easily predictable with a wrapper using extern "C". I think that you just haven't tried to write wrappers to code in other languages; the same problems are everywhere.
-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Hi all,
On Tue, Nov 11, 2008 at 11:38, Dave Tapley
Usually I'll avoid the question and explain that it is a 'complete' language and we do have more than enough libraries to make it useful and productive. But I'd be keen to know if people have any anecdotes, ideally ones which can subsequently be twisted into an argument for Haskell ;)
I would not use Haskell if I were faced with the prospect of producing a huge system in short time (i.e. meaning I couldn't do it by myself) and all I had was a pool of "regular" programmers that have been trained in .NET, Java, C++, Python, <your favorite imperative language>. Yes, sad - but true. I'd very much like to see this twisted to an argument for Haskell :) cheers, Arnar

You can hire one Haskell programmer instead of 1,2,3... programmers in
your favorite imperative language.

On Tue, Nov 11, 2008 at 14:37, Krasimir Angelov
You can hire one Haskell programmer instead of 1,2,3... programmers in your favorite imperative language.
That's assuming I can find a Haskell programmer in the first place. Also, he/she has to be good enough to replace 10 other programmers. cheers, Arnar

--- On Tue, 11/11/08, Dave Tapley
So I should clarify I'm not a troll and do "see the Haskell light". But one thing I can never answer when preaching to others is "what does Haskell not do well?"
'Hard' real-time applications? I don't know that there couldn't be a 'real-time friendly' Haskell, but for some applications, the unpredictability (however slight) of when, for example, garbage collection happens, or how long it takes, is unacceptable. (Even the unpredictability of heap allocation/deallocation a la malloc/free is unacceptable for some real time apps). Haskell is in the same boat here with lots of other languages, of course.

On Tue, Nov 11, 2008 at 2:38 AM, Dave Tapley wrote:
But one thing I can never answer when preaching to others is "what does Haskell not do well?"
I think some would disagree with me, but I would advise against using Haskell for a task that necessarily requires a lot of mutable state and IO and for which serious performance is a big factor. I'm not talking about stuff that can be approximated by zippers and whatnot, but rather situations where IORefs abound and data has identity. Haskell can quite capably do mutable state and IO, but if your task is all mutable state and IO, I'd lean toward a language that makes it easier (OCaml, perhaps). Also, I think there are some tasks which are more easily coded using an OO approach, and while this can be done in Haskell, I tend not to think it is worth the effort. I've worked on multiple projects in which big hierarchies of business objects were used, and it had to be easy to add new subclasses with minor variation and to treat any as their parent. That is considered by many in the FP community to be bad style, but I've never seen the equivalent implemented effectively in any other way. Haskell's record system gets in the way, as does the (as perceived by me) esotericism of existential types. Oh, also, any task that requires a good hash table. :-P Mind you, Haskell is my first choice for a leisure project and I think it is an excellent choice for quite a few tasks, and still capable of what I list above; just not, in my opinion, the best choice.
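For a flavour of the IORef-heavy style Kyle means: Haskell expresses it fine, it is just more ceremony than in a language where mutation is the default. The example is mine, not from the thread.

```haskell
import Data.IORef

-- Imperative-style accumulation: a cell with identity, updated in place.
sumSquares :: Int -> IO Int
sumSquares n = do
  acc <- newIORef 0
  mapM_ (\i -> modifyIORef' acc (+ i * i)) [1 .. n]  -- strict in-place update
  readIORef acc
```

Every read and write is an explicit IO action; in OCaml the same loop would be a couple of lines with `ref` and `:=`.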

consalus:
Haskell can quite capably do mutable state and IO, but if your task is all mutable state and IO, I'd lean toward a language that makes it easier (OCaml, perhaps).
Do you have an example of a mutable state/ IO bound application, like, hmm, a window manager or a revision control system or a file system...? -- Don

Hello Don, Wednesday, November 12, 2008, 12:51:10 AM, you wrote:
Do you have an example of a mutable state/ IO bound application, like, hmm, a window manager or a revision control system or a file system...?
not I/O, but IO :) btw, I use C++ for speed-critical code (compression & encryption) and Haskell/GHC for everything else in my FreeArc project (it has 19k downloads ATM). I also plan to switch to C# for the GUI part, since gtk2hs provides fewer features, is less documented, less debugged, doesn't have an IDE, and my partner doesn't know Haskell :) -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Tue, Nov 11, 2008 at 1:51 PM, Don Stewart wrote:
Do you have an example of a mutable state/ IO bound application, like, hmm, a window manager or a revision control system or a file system...?
Of course, with a lot of skill, good design, and a pile of language extensions, state/IO-heavy Haskell code can be clean and flexible. Performance can be pretty good, and for complex algorithmic code it is arguably a better choice than most other languages. Still, neither of the projects you reference (to my knowledge) has a mutation-heavy inner computation loop. XMonad does all of its mutation in a custom monad that is ReaderT over StateT over IO, or something similar, and it apparently works beautifully. However, my understanding is that stacks of monad transformers tend not to be particularly efficient, and while that usually isn't an issue, the case I'm talking about is one where mutation performance is a major concern. Other languages offer similar expressive power, minus the joys of laziness and referential transparency. Persistent data structures are great, but if you're not using the persistence they are less convenient and less efficient. So again, Haskell _can_ do mutation and IO just fine, but if laziness, purity, and immutability will be the rare exception rather than the rule, it might be easier to use a language that makes strictness and impurity easier. (Unless you're a Haskell guru, in which case I imagine Haskell is always the most convenient language to use.)
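The XMonad-style stack Kyle mentions looks roughly like this (a sketch with hypothetical names; XMonad's real `X` monad differs in detail):

```haskell
import Control.Monad (replicateM_)
import Control.Monad.Reader (ReaderT, ask, runReaderT)
import Control.Monad.State (StateT, execStateT, modify)

-- Read-only configuration over mutable state over IO.
type App = ReaderT Int (StateT Int IO)

step :: App ()
step = do
  inc <- ask       -- read the (immutable) configuration
  modify (+ inc)   -- update the (threaded) state

-- Run three steps with configuration `cfg`, starting from state `s0`.
runApp :: Int -> Int -> IO Int
runApp cfg s0 = execStateT (runReaderT (replicateM_ 3 step) cfg) s0
```

Each `modify` threads a new state value through the StateT layer rather than mutating in place, which is exactly the overhead Kyle is worried about in a hot inner loop.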

consalus:
Still, neither of the projects you reference (to my knowledge) have a mutation-heavy inner computation loop.
Data.ByteString is full of mutation-heavy inner loops. There's nothing magic about it. -- Don

Don Stewart
Data.ByteString is full of mutation-heavy inner loops.
I suspect you are missing Kyle's point, which I interpret to be more like what Paul Graham talks about in "ANSI Common Lisp": [OO] provides a structured way to write spaghetti code. [...] For programs that would have ended up as spaghetti anyway, the object oriented model is good: they will at least be structured spaghetti. In my opinion, Haskell is pretty bad at spaghetti. And although it is possible that some programs simply need to be spaghetti-structured, I still think not supporting it is a good thing - Haskell should instead provide the tools for writing an equivalent non-spaghettized program. Bytestrings have mutation-heavy inner loops, but localized, well-structured, and exporting a neat, pure interface, so they don't count here.
There's nothing magic about it.
Now you're just being modest. -k -- If I haven't seen further, it is by standing in the footprints of giants
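Ketil's "localized, well-structured, exporting a neat, pure interface" is exactly what the ST monad packages up; a small sketch (example is mine):

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Mutation inside, purity outside: runST guarantees the STRef cannot
-- escape, so `sumTo` is an ordinary pure function to its callers.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0
  mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
  readSTRef acc
```

This is the non-spaghettized equivalent: the imperative loop exists, but its effects are provably invisible from the outside.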

Kyle, I would say that most apps don't actually require that you write
a mutation-heavy inner loop. They can be written either way, and
Haskell gives you the facility to do both. Even my field, which is
visualization, can be written either way. I write with a mutation-heavy
inner loop myself, because it's how I started out, and I haven't
had any trouble with performance. OpenGL is always my upper bound.
Even 2D code, which I've written on occasion, seems not to suffer.
-- I try to take things like a crow; war and chaos don't always ruin a picnic, they just mean you have to be careful what you swallow. -- Jessica Edwards

Has there been any progress in getting GHC set up for porting to non-x86/unix/windows platforms? Can it generate ROPI code? It would also be nice to be able to compile to C that RVCT/ARM tools can compile in Thumb mode. It's what's stopping me from trying to use it for mobile development.

"Anatoly Yakovenko"
Has there been any progress in getting ghc set up for porting to non x86/unix/windows platforms? Can it generate ropi code? It would also be nice to be able to compile to C that rvct/arm tools can compile in thumb mode.
AFAIK, you can bootstrap ghc onto lots of non-PC-centric platforms, using the unregisterized porting guide. However, it is not a trivial exercise, and it does not address the question of setting ghc up as a cross-compiler, should your device be too small to host ghc at all. Other Haskell compilers might be a better starting point. For instance, nhc98 uses portable C as a target language, and has a configure-time option to set it up as a cross-compiler. See http://www.cs.york.ac.uk/fp/nhc98/install.html (cross-compiler info towards the bottom of the page). The example given is for ARM/linux. Regards, Malcolm

Do you have an example of a mutable state/ IO bound application, like, hmm, a window manager or a revision control system or a file system...?
If you're looking for a challenge, how about this one (there used to be lots of Haskellers into this game; any of you still around?):

http://computer-go.org/pipermail/computer-go/2008-October/016680.html

The specification is the numbered list at the end of the message, or in the zip file with the Java example implementation (README). Actually, the interesting part from a performance perspective doesn't even require GTP or AMAF: just implement the random legal playouts for a 9x9 board, do a loop over them, keeping some basic win/lose/game-length statistics. Can you complete that at competitive speed, preferably from the specs, without just transliterating the low-level Java code? I won't tell you yet what typical playouts-per-second-per-core numbers are floating about for "lightly" optimised code; try to improve your code and say "stop, this is good enough" before comparing with the Java bot runtimes.

You'll probably have no room left for any high-level operations, ending up with mostly imperatively threaded array manipulations in a loop that is expected to be run several thousand times per real-game move (the idea being to run lots of dumb, but legal, simulations to the end, where evaluation is easy, gather statistics, then choose the move that seems to have the most potential). Bonus points if you can preserve any of the high-level advantages of working in Haskell (such as easy debuggability, because there will be bugs, especially if you can't afford to maintain redundant information representations) without sacrificing speed. (*)

As specified, the bot won't win any tournaments. It is the starting point (or remote ancestor) for the current champions, with lots of room for algorithmic improvement, not to mention actual game knowledge, tree search and parallelization (the configurations being used for tournaments and exhibition games are highly parallel).
The reference spec is a recent effort, meant to provide a first milestone for aspiring Go programmers to compare their code against, while also inviting programmers to show how their favourite language is eminently suitable for Go programming (without sacrificing performance!). There are versions in Java, D, C++, C#, and C around somewhere, possibly others.

But: please, no language evangelism on that mailing list! Those people seem decent, peaceful and open-minded (there's even a Haskell spec of a simple rule set here http://homepages.cwi.nl/~tromp/go.html ). And some of them have years (a few even decades) of experience, much of which is made available in the mailing list archives and bibliography (pointers available on request - if you wanted to get serious about playing with Go programming, you'd be expected to mine the archives and bibliography, because just about anything you'll come up with in your first few months will have been discussed already;-), in spite of commercial interests and ongoing competitions. They are already tired and wary of useless language wars (who isn't?).

Claus (who has been wanting to write a Go player ever since he had to decide between research and learning more Go, either one trying to eat all available time and then some; perhaps I should write a researcher instead, and return to playing Go?-)

(*) it would be nice if the sequential baseline performance of functional Haskell/GHC arrays were better, btw; that would make the ongoing work on parallel arrays much more interesting.
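The playout-driver part of the challenge (loop over playouts, keep win/lose/game-length statistics) can be sketched as follows. This is a hedged outline, not the reference bot: `playout` here is a hypothetical stand-in that just flips a coin and picks a plausible game length, whereas a real bot would play random legal moves to the end and score the final position - that is where all the actual performance work lives.

```haskell
-- Hypothetical result of one random playout.
data Result = Result { blackWon :: Bool, gameLength :: Int }

-- Tiny linear congruential generator, to keep the sketch base-only.
type Seed = Int

nextSeed :: Seed -> Seed
nextSeed s = (s * 1103515245 + 12345) `mod` 2147483648

-- Stand-in playout: a real bot would play random legal moves to the
-- end of the game and score the position; here we just draw a coin
-- and a game length in the 80..120 range from the generator.
playout :: Seed -> (Result, Seed)
playout s =
  let s'  = nextSeed s
      s'' = nextSeed s'
  in (Result (even s') (80 + s'' `mod` 41), s'')

-- Loop over n playouts, accumulating black/white wins and total length.
stats :: Int -> Seed -> (Int, Int, Int)
stats n seed0 = go n seed0 (0, 0, 0)
  where
    go 0 _ acc = acc
    go k s (b, w, l) =
      let (r, s') = playout s
          acc' | blackWon r = (b + 1, w, l + gameLength r)
               | otherwise  = (b, w + 1, l + gameLength r)
      in go (k - 1) s' acc'

main :: IO ()
main = do
  let n = 100000
      (b, w, l) = stats n 42
  putStrLn ("b: " ++ show b ++ ", w: " ++ show w)
  putStrLn ("avg length: " ++ show (l `div` n))
```

The strict accumulator pattern in `go` is the part that matters for the challenge: in a loop run hundreds of thousands of times, any laziness in the statistics tuple shows up immediately as heap growth.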

Claus Reinke wrote:
Do you have an example of a mutable state/ IO bound application, like, hmm, a window manager or a revision control system or a file system...?
If you're looking for a challenge, how about this one (there used to be lots of Haskellers into this game, any of you still around?-):
http://computer-go.org/pipermail/computer-go/2008-October/016680.html
[ catching up with old haskell-cafe email ]

Interestingly, I did this a while ago. Here's my results:

$ ./Bench 1 100000
b: 14840, w: 17143
mercy: 67982
elapsed time: 3.42s
playouts/sec: 29208

so, nearly 30k/sec random playouts on 9x9. That's using a hack that stops the game when the score is heavily in favour of one player; it drops to around 20k/sec with that turned off. Not bad, but probably an order of magnitude worse than you can do in tightly-coded C, I'd guess.

The Haskell implementation isn't nice, as you predicted. Also the code is derived from some non-free internal MS code, so unfortunately I can't share it (but I could perhaps extract the free bits if anyone is really interested). W wins slightly more often, I think because komi 5.5 on a 9x9 board is a tad high.

It does parallelise too, of course:

$ ./Bench 8 100000 +RTS -N8
b: 14872, w: 17488
mercy: 67584
elapsed time: 1.00s
playouts/sec: 99908

though still room for improvement there.

Cheers, Simon

Do you have an example of a mutable state/ IO bound application, like, hmm, a window manager or a revision control system or a file system...? If you're looking for a challenge, how about this one (there used to be lots of Haskellers into this game, any of you still around?-): http://computer-go.org/pipermail/computer-go/2008-October/016680.html Interestingly, I did this a while ago. Here's my results:
$ ./Bench 1 100000
b: 14840, w: 17143
mercy: 67982
elapsed time: 3.42s
playouts/sec: 29208
so, nearly 30k/sec random playouts on 9x9. That's using a hack that stops the game when the score is heavily in favour of one player; it drops to around 20k/sec with that turned off.
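The early-cutoff hack Simon describes is often called the "mercy rule"; a minimal sketch, assuming a capture-count threshold proportional to the board area (the threshold, names and signature are illustrative, not Simon's actual code):

```haskell
-- Cut a playout short when one side's advantage is already decisive.
-- A common form of the mercy rule compares the captured-stone counts
-- against a threshold proportional to the board area.
mercyCutoff :: Int -> Int -> Int -> Bool
mercyCutoff boardSize blackCaptures whiteCaptures =
  abs (blackCaptures - whiteCaptures) > (boardSize * boardSize) `div` 3

-- On 9x9 this ends a playout early once the capture difference
-- exceeds 27 stones; the playout loop checks it after every move.
```

The trade-off is the one visible in Simon's numbers: the rule buys roughly a 50% throughput increase (20k to 30k playouts/sec) at the cost of slightly noisier win/loss statistics near the cutoff boundary.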
Nice!-) 20k playouts/sec (without the early cutoffs) is the rough number usually mentioned for these light playouts, reachable even in Java. My own Haskell code for that was a factor of 5 slower:-(
Not bad, but probably I'd guess an order of magnitude worse than you can do in tightly-coded C.
Yes, a few people have reported higher rates, but most hobby coders seem happy with 20k/sec - after that, it seems more interesting to move towards heavy playouts and combinations with tree-based search instead of light playouts with simple statistics alone. But if you don't get at least those 20k/sec, it is difficult to run the number of experiments needed to test presumed improvements in playing strength.
The Haskell implementation isn't nice, as you predicted.
What is really annoying for me is that I'm no longer used to this low-level style of coding, so every time I add something, performance goes down, and I have to work to get it back (I modified my playout code to match that reference bot specification - my bot does get the expected 50% wins against jrefbot, but is now a factor of 8 slower (still using only half the memory, though)). Not to mention that I'm throwing away many of the advantages of Haskell. That is one reason why I mentioned this challenge.
Also the code is derived from some non-free internal MS code, so unfortunately I can't share it (but I could perhaps extract the free bits if anyone is really interested).
Interesting, can you tell what kind of code those internal bits are? Of course, the fun is implementing it oneself, but it is very useful to have reference points, such as the refbot spec, or the Java implementation to play against. Your Haskell reference point will spur me to have another look at my bot's performance!-) The Go programming folks have a lot of useful infrastructure, btw, including a server just for bot-vs-bot competition: http://cgos.boardspace.net/ Not to mention monthly tournaments, competitions, etc.
It does parallelise too, of course:
$ ./Bench 8 100000 +RTS -N8
b: 14872, w: 17488
mercy: 67584
elapsed time: 1.00s
playouts/sec: 99908
though still room for improvement there.
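Since each playout (or batch of playouts) is independent, parallelising is mostly a matter of forking workers and collecting results. A hedged sketch using only base's Control.Concurrent (not Simon's Bench code; `runBatch` is a trivial stand-in for a real batch of playouts):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM)

-- Stand-in for a batch of playouts run from one seed; a real version
-- would return win/loss statistics for that batch.
runBatch :: Int -> Int
runBatch seed = seed * 2 + 1

-- Fork one worker per batch and collect the results via MVars; with
-- +RTS -Nk the GHC runtime schedules the workers across k cores.
parBatches :: [Int] -> IO [Int]
parBatches seeds = do
  vars <- forM seeds $ \s -> do
    v <- newEmptyMVar
    _ <- forkIO (putMVar v $! runBatch s)   -- force work in the worker
    return v
  mapM takeMVar vars

main :: IO ()
main = print . sum =<< parBatches [1 .. 1000]
```

The `$!` matters: without it each thread would put an unevaluated thunk into its MVar and all the work would happen sequentially in the collecting thread, which is one way a "parallel" Haskell program ends up with a speedup of exactly 1.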
That is the other reason why I mentioned this challenge: the specs people use for their competition bots are interestingly parallel. One example, this year's Computer Go Olympiad results: http://www.grappa.univ-lille3.fr/icga/tournament.php?id=181

Many Faces of Go, winner, currently maxes out at 32 cores, a limitation its author would like to remove (he's working with the Microsoft HPC group, btw). For the exhibition game against a pro that made the news this year, MoGo used a cluster of 800 cores:

http://www.mail-archive.com/computer-go@computer-go.org/msg08692.html
http://www.mail-archive.com/computer-go@computer-go.org/msg08710.html

Of course, the simple reference bot implementations are a far cry from MoGo or ManyFaces, but I thought this would be an interesting non-trivial-but-doable challenge for Haskell performance and parallelism fans, especially since there are still many people interested in Go here;-)

Claus

http://computer-go.org/pipermail/computer-go/2008-October/016680.html Interestingly, I did this a while ago. Here's my results:
$ ./Bench 1 100000
b: 14840, w: 17143
mercy: 67982
elapsed time: 3.42s
playouts/sec: 29208
so, nearly 30k/sec random playouts on 9x9. That's using a hack that stops the game when the score is heavily in favour of one player, it drops to around 20k/sec with that turned off.
Nice!-) 20k playouts/sec (without the early cutoffs) is the rough number usually mentioned for these light playouts, reachable even in Java. My own Haskell code for that was a factor of 5 slower:-(
actually, that 5x is relative to jrefbot on my machine (Pentium M760, 2GHz), which doesn't quite reach 20k/sec, so if your code would run at 20k/sec on my laptop, it would be 10x as fast as my bot:-(( Since you can't release your code, could you perhaps time the jrefbot from the url above on your machine as a reference point, so that I know how far I've yet to go? Something like:

$ time ((echo "genmove b";echo "quit") | d:/Java/jre6/bin/java -jar refbots/javabot/jrefgo.jar 20000)
= E5

real 0m2.539s
user 0m0.030s
sys 0m0.031s

Btw, I just realised where my bot dropped from 5x to 8x: to work around

http://hackage.haskell.org/trac/ghc/ticket/2669

all my array accesses were wrapped in exception handlers, to get useful error messages while I adapted my code to the refbot spec.. That's not the only bug that got in the way:

http://hackage.haskell.org/trac/ghc/ticket/2727

forced me to move from functional to imperative arrays much sooner than I wanted, and due to

http://hackage.haskell.org/trac/ghc/ticket/1216

I did not even consider 2d arrays (the tuple allocations might have gotten in the way anyhow, but still..).

What do those folks working on parallel Haskell arrays think about the sequential Haskell array baseline performance?

Claus

-- my slow bot's current time (for 20k playouts on a 2GHz laptop):

$ time ((echo "genmove b";echo "quit") | ./SimpleGo.exe 20000)
TEXT e5 - amaf-score: 0.127
TEXT e6 - amaf-score: 0.126
TEXT d5 - amaf-score: 0.126
TEXT f5 - amaf-score: 0.118
TEXT d6 - amaf-score: 0.116
TEXT f4 - amaf-score: 0.115
TEXT e7 - amaf-score: 0.115
TEXT f6 - amaf-score: 0.114
TEXT d4 - amaf-score: 0.110
TEXT d3 - amaf-score: 0.108
TEXT e5 - amaf-score: 0.127
= e5

real 0m10.711s
user 0m0.030s
sys 0m0.031s

claus.reinke:
http://computer-go.org/pipermail/computer-go/2008-October/016680.html Interestingly, I did this a while ago. Here's my results:
$ ./Bench 1 100000
b: 14840, w: 17143
mercy: 67982
elapsed time: 3.42s
playouts/sec: 29208
so, nearly 30k/sec random playouts on 9x9. That's using a hack that stops the game when the score is heavily in favour of one player, it drops to around 20k/sec with that turned off.
Nice!-) 20k playouts/sec (without the early cutoffs) is the rough number usually mentioned for these light playouts, reachable even in Java. My own Haskell code for that was a factor of 5 slower:-(
actually, that 5x is relative to jrefbot on my machine (Pentium M760, 2Ghz), which doesn't quite reach 20k/sec, so if your code would run at 20k/sec on my laptop, it would be 10x as fast as my bot:-(( Since you can't release your code, could you perhaps time the jrefbot from the url above on your machine as a reference point, so that I know how far I've yet to go? Something like:
$ time ((echo "genmove b";echo "quit") | d:/Java/jre6/bin/java -jar refbots/javabot/jrefgo.jar 20000)
= E5

real 0m2.539s
user 0m0.030s
sys 0m0.031s
Btw, I just realised where my bot dropped from 5x to 8x: to work around
http://hackage.haskell.org/trac/ghc/ticket/2669
all my array accesses were wrapped in exception handlers, to get useful error messages while I adapted my code to the refbot spec..
That's not the only bug that got in the way:
http://hackage.haskell.org/trac/ghc/ticket/2727
forced me to move from functional to imperative arrays much sooner than I wanted, and due to
http://hackage.haskell.org/trac/ghc/ticket/1216
I did not even consider 2d arrays (the tuple allocations might have gotten in the way anyhow, but still..).
What do those folks working on parallel Haskell arrays think about the sequential Haskell array baseline performance?
Try using a fast array library like uvector? (With no serious overhead for tuples too, fwiw)... -- Don

What do those folks working on parallel Haskell arrays think about the sequential Haskell array baseline performance?
Try using a fast array library like uvector? (With no serious overhead for tuples too, fwiw)...
I downloaded uvector a while ago, but haven't got round to trying it (yet another array API?). Mostly, I'd like to know a little more than just "fast array lib":

- in which ways is it supposed to be faster? why?
- for which usage patterns is it optimised? how?
- if it is faster in general, why hasn't it replaced the default arrays?

In general, I think Haskell has too many array libraries, with too many APIs. And that doesn't even take into account the overuse of unsafe APIs, or the non-integrated type-level safety tricks - if array accesses were properly optimized, there should be a lot less need for the extreme all-or-nothing checks or home-grown alternative special-purpose APIs:

- type-level code for watermarking indices belonging to certain index sets is one step to eliminate index checks, but hasn't been integrated into any of the standard libs
- one could also associate index subsets with operations that do not leave the index superset belonging to an array (eg, if min
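The "watermarking indices" idea can be sketched with phantom types (a hypothetical outline, not any existing library's API): an index is range-checked once at the region boundary, and every later access through it is unchecked yet safe by construction. In a real library the `Checked` constructor would be hidden behind the module boundary; here it is visible only because the sketch is a single file.

```haskell
{-# LANGUAGE RankNTypes #-}
import qualified Data.Array as A

-- An index proven to lie within region s. In a real library the
-- constructor would not be exported, so the only way to obtain one
-- is via checkIndex.
newtype Checked s = Checked Int

-- An array tagged with its region s.
newtype Arr s e = Arr (A.Array Int e)

-- Check an index once, at the region boundary.
checkIndex :: Arr s e -> Int -> Maybe (Checked s)
checkIndex (Arr a) i
  | A.inRange (A.bounds a) i = Just (Checked i)
  | otherwise                = Nothing

-- Unchecked access: the phantom s ties the index to this array.
(!.) :: Arr s e -> Checked s -> e
Arr a !. Checked i = a A.! i

-- The rank-2 type mints a fresh region s for each array, so a
-- Checked index cannot escape to a differently-sized array.
withArr :: [e] -> (forall s. Arr s e -> r) -> r
withArr xs k = k (Arr (A.listArray (0, length xs - 1) xs))
```

Usage: `withArr [10,20,30] (\a -> fmap (a !.) (checkIndex a 2))` checks the bound once and then indexes without a check; trying to reuse the `Checked` index on an array from a different `withArr` call is a type error.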

Claus Reinke wrote:
In general, I think Haskell has too many array libraries, with too many APIs. And that doesn't even take into account the overuse of unsafe APIs, or the non-integrated type-level safety tricks - if array accesses were properly optimized, there should be a lot less need for the extreme all-or-nothing checks or home-grown alternative special-purpose APIs:
- type-level code for watermarking indices belonging to certain index sets is one step to eliminate index checks, but hasn't been integrated into any of the standard libs - one could also associate index subsets with operations that do not leave the index superset belonging to an array (eg, if min
At least, uvector seems to take multi-element ops more seriously. But with so many people working on sequential and parallel Haskell array libraries, I was hoping for someone to take a big ax and clear out all that sick and half-dead wood, to give a small number of healthy array libs room to grow. Would be a lot easier for us poor naive Haskell array users who otherwise tend to get lost in that forest!-)
+3

The current array situation is unnecessarily messy - and I prefer mathematical elegance to ad-hoc mess. (And you think I use Haskell, why exactly?) Of course, it's all very well complaining about it... somebody still has to *do* all this wonderful stuff. :-/

On Thu, Nov 27, 2008 at 1:20 PM, Claus Reinke
What do those folks working on parallel Haskell arrays think about the
sequential Haskell array baseline performance?
Try using a fast array library like uvector? (With no serious overhead for tuples too, fwiw)...
I downloaded uvector a while ago, but haven't got round to trying it (yet another array API?). Mostly, I'd like to know a little more than just "fast array lib":
- in which ways is it supposed to be faster? why?
- for which usage patterns is it optimised? how?
- if it is faster in general, why hasn't it replaced the default arrays?
In general, I think Haskell has too many array libraries, with too many APIs. And that doesn't even take into account the overuse of unsafe APIs, or the non-integrated type-level safety tricks - if array accesses were properly optimized, there should be a lot less need for the extreme all-or-nothing checks or home-grown alternative special-purpose APIs:
- type-level code for watermarking indices belonging to certain index sets is one step to eliminate index checks, but hasn't been integrated into any of the standard libs - one could also associate index subsets with operations that do not leave the index superset belonging to an array (eg, if min
This library would satisfy most of your requirements, I suspect: http://ofb.net/~frederik/vectro/draft-r2.pdf

My understanding is that the author's code could be turned into a real library fairly easily if it hasn't been already. I only read the paper; I didn't go looking for the library on hackage, but the author does provide the code for the library. The author also says their Haskell code is faster than the same algorithm in Matlab.

But, I have to say: whenever you're faking dependent types in Haskell, things get harder to understand for the programmer. Check out the section in the above paper about type checking. Dependent types, even simulated ones, come with lots of cool static guarantees, but understanding how to program with them comes with a high barrier to entry. I think this cognitive load is even higher in Haskell, where dependent types have to be simulated by doing seemingly bizarre things. I think it is this usability aspect that prevents the techniques from becoming more common in Haskell.
At least, uvector seems to take multi-element ops more seriously. But with so many people working on sequential and parallel Haskell array libraries, I was hoping for someone to take a big ax and clear out all that sick and half-dead wood, to give a small number of healthy array libs room to grow. Would be a lot easier for us poor naive Haskell array users who otherwise tend to get lost in that forest!-)
Sometimes a good library design is an evolutionary process. Maybe it's time to apply a fitness function. Jason

Claus Reinke:
What do those folks working on parallel Haskell arrays think about the sequential Haskell array baseline performance?
You won't like the answer. We are not happy with the existing array infrastructure and hence have our own. Roman recently extracted some of it as a standalone package:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/vector

In the longer run, we would like to factor our library into DPH-specific code and a general-purpose array library that you can use independent of DPH.

Manuel

Manuel M T Chakravarty wrote:
Claus Reinke:
What do those folks working on parallel Haskell arrays think about the sequential Haskell array baseline performance?
You won't like the answer. We are not happy with the existing array infrastructure and hence have our own. Roman recently extracted some of it as a standalone package:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/vector
In the longer run, we would like to factor our library into DPH-specific code and general-purpose array library that you can use independent of DPH.
So we have two vector libraries, vector and uvector, which have a lot in common - they are both single-dimension array types that support unboxed instances and have list-like operations with fusion. They ought to be unified, really. The main difference between these libraries and Haskell's arrays is the Ix class. So perhaps Haskell's arrays should be reimplemented on top of the low-level vector libraries? The Ix class is the root cause of the problems with optimising the standard array libraries. Cheers, Simon
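Simon's suggestion can be sketched roughly like this: an H98-style array is just Ix bounds paired with a flat buffer, and the Ix dictionary is consulted once per access to translate a structured index into a flat offset. This is a hedged toy outline using a list as the buffer and made-up names (`Flat`, `IxArr`, `!#`); a real implementation would sit on vector or uvector so the offset arithmetic stays on unboxed Ints.

```haskell
import Data.Ix (Ix, index, rangeSize)

-- Stand-in flat buffer; a real implementation would use an unboxed
-- vector here, so that all inner-loop indexing is on plain Ints.
newtype Flat e = Flat [e]

flatIndex :: Flat e -> Int -> e
flatIndex (Flat xs) i = xs !! i

-- An H98-style array is then bounds plus a flat buffer; the Ix
-- dictionary appears only at this layer, not in the buffer.
data IxArr i e = IxArr (i, i) (Flat e)

listArrayIx :: Ix i => (i, i) -> [e] -> IxArr i e
listArrayIx bnds xs = IxArr bnds (Flat (take (rangeSize bnds) xs))

-- Each access pays one 'index' call to map i to a flat offset.
(!#) :: Ix i => IxArr i e -> i -> e
IxArr bnds buf !# i = flatIndex buf (index bnds i)
```

Usage: `listArrayIx ((0,0),(1,2)) [1..6] !# (1,1)` translates the pair through `index` to row-major offset 4. The sketch also makes the optimisation problem visible: every `!#` goes through the overloaded `index`, which is exactly the call GHC must specialise and inline away for the Ix layer to be free.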

On Fri, Nov 28, 2008 at 10:04 AM, Simon Marlow
Manuel M T Chakravarty wrote:
In the longer run, we would like to factor our library into DPH-specific code and general-purpose array library that you can use independent of DPH.
So we have two vector libraries, vector and uvector, which have a lot in common - they are both single-dimension array types that support unboxed instances and have list-like operations with fusion. They ought to be unified, really.
Yes please! Could we please have a ByteString style interface with qualified imports instead of using ad-hoc name prefixes/suffixes as a namespacing mechanism if we're going to merge the two libraries? Cheers, Johan

On Fri, 28 Nov 2008, Johan Tibell wrote:
On Fri, Nov 28, 2008 at 10:04 AM, Simon Marlow
wrote: So we have two vector libraries, vector and uvector, which have a lot in common - they are both single-dimension array types that support unboxed instances and have list-like operations with fusion. They ought to be unified, really.
Yes please! Could we please have a ByteString style interface with qualified imports instead of using ad-hoc name prefixes/suffixes as a namespacing mechanism if we're going to merge the two libraries?
StorableVector is organized this way.

On Fri, 28 Nov 2008, Simon Marlow wrote:
Manuel M T Chakravarty wrote:
Claus Reinke:
What do those folks working on parallel Haskell arrays think about the sequential Haskell array baseline performance?
You won't like the answer. We are not happy with the existing array infrastructure and hence have our own. Roman recently extracted some of it as a standalone package:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/vector
In the longer run, we would like to factor our library into DPH-specific code and general-purpose array library that you can use independent of DPH.
So we have two vector libraries, vector and uvector, which have a lot in common - they are both single-dimension array types that support unboxed instances and have list-like operations with fusion. They ought to be unified, really.
It's worse: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/storablevector :-)

Henning Thielemann wrote:
On Fri, 28 Nov 2008, Simon Marlow wrote:
Manuel M T Chakravarty wrote:
Claus Reinke:
What do those folks working on parallel Haskell arrays think about the sequential Haskell array baseline performance?
You won't like the answer. We are not happy with the existing array infrastructure and hence have our own. Roman recently extracted some of it as a standalone package:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/vector
In the longer run, we would like to factor our library into DPH-specific code and general-purpose array library that you can use independent of DPH.
So we have two vector libraries, vector and uvector, which have a lot in common - they are both single-dimension array types that support unboxed instances and have list-like operations with fusion. They ought to be unified, really.
It's worse:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/storablevector :-)
What *I* propose is that somebody [you see what I did there?] should sit down, take stock of all the multitudes of array libraries, what features they have, what obvious features they're missing, and think up a good API from scratch. Once we figure out what the best way to arrange all this stuff is, *then* we attack the problem of implementing it for real. It seems lots of people have written really useful code, but we need to take a step back and look at the big picture here before writing any more of it. IMHO, anyway...

andrewcoppin:
What *I* propose is that somebody [you see what I did there?] should sit down, take stock of all the multitudes of array libraries, what features they have, what obvious features they're missing, and think up a good API from scratch. Once we figure out what the best way to arrange all this stuff is, *then* we attack the problem of implementing it for real.
It seems lots of people have written really useful code, but we need to take a step back and look at the big picture here before writing any more of it.
No. My view would be to let the free market of developers decide what is best. No bottlenecks -- there's too many Haskell libraries already (~1000 now). And this approach has yielded more code than ever before, more libraries than ever before, and library authors are competing. So let the market decide. We're a bazaar, not a cathedral. -- Don

But I don't want Perl, I want a well designed language and well
designed libraries.
I think it's fine to let libraries proliferate, but at some point you
also need to step back and abstract.
-- Lennart
On Fri, Nov 28, 2008 at 9:46 PM, Don Stewart
andrewcoppin:
What *I* propose is that somebody [you see what I did there?] should sit down, take stock of all the multitudes of array libraries, what features they have, what obvious features they're missing, and think up a good API from scratch. Once we figure out what the best way to arrange all this stuff is, *then* we attack the problem of implementing it for real.
It seems lots of people have written really useful code, but we need to take a step back and look at the big picture here before writing any more of it.
No.
My view would be to let the free market of developers decide what is best. No bottlenecks -- there's too many Haskell libraries already (~1000 now).
And this approach has yielded more code than ever before, more libraries than ever before, and library authors are competing.
So let the market decide. We're a bazaar, not a cathedral.
-- Don _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

But I don't want Perl, I want a well designed language and well designed libraries. I think it's fine to let libraries proliferate, but at some point you also need to step back and abstract.
-- Lennart
Especially so if the free marketeers claim there is something fundamentally wrong with the standard libraries and language, as in the case of arrays. When someone did that nice little survey of the last bunch of array libraries (Bulat, I think? now in the wiki book), I was hoping there would be a grand unification soon. Instead, it seems that those who have worked most with Haskell arrays recently have simply abandoned all of the standard array libraries.

Okay, why not, if there are good reasons. But can't you document those reasons, for each of your alternative proposals, so that people have some basis on which to choose (other than who has the loudest market voice;-)? And would it be difficult for you all to agree on a standard API, to make switching between the alternatives easy (if it is indeed impossible to unify their advantages in a single library, the reasons for which should also be documented somewhere)? And what is wrong about Simon's suggestion, to use the standard array lib APIs on top of your implementations?

Not that I see Haskell' coming soon, but I'd certainly not want it to continue standardising a kind of array that appears to have no backing among the Haskell array user/library author community. Nor would I like something as central as arrays to remain outside the standard, where it won't remain stable enough for Haskell programmers to rely on in the long run.

bazaar, yes; mayhem, no.

Claus
On Fri, Nov 28, 2008 at 9:46 PM, Don Stewart
wrote: andrewcoppin:
What *I* propose is that somebody [you see what I did there?] should sit down, take stock of all the multitudes of array libraries, what features they have, what obvious features they're missing, and think up a good API from scratch. Once we figure out what the best way to arrange all this stuff is, *then* we attack the problem of implementing it for real.
It seems lots of people have written really useful code, but we need to take a step back and look at the big picture here before writing any more of it.
No.
My view would be to let the free market of developers decide what is best. No bottlenecks -- there's too many Haskell libraries already (~1000 now).
And this approach has yielded more code than ever before, more libraries than ever before, and library authors are competing.
So let the market decide. We're a bazaar, not a cathedral.
-- Don

claus.reinke:
But I don't want Perl, I want a well designed language and well designed libraries. I think it's fine to let libraries proliferate, but at some point you also need to step back and abstract.
-- Lennart
Especially so if the free marketeers claim there is something fundamentally wrong with the standard libraries and language, as in the case of arrays. When someone did that nice little survey of the last bunch of array libraries (Bulat, I think? now in the wiki book), I was hoping there would be a grand unification soon. Instead, it seems that those who have worked most with Haskell arrays recently have simply abandoned all of the standard array libraries.
I don't think Bulat uploaded his libraries to hackage in the end. And if it's not on hackage, then no one will use it, so it may as well not exist.
Okay, why not, if there are good reasons. But can't you document those reasons, for each of your alternative proposals, so that people have some basis on which to choose (other than who has the loudest market voice;-)? And would it be difficult for you all to agree on a standard API, to make switching between the alternatives easy (if it is indeed impossible to unify their advantages in a single library, the reasons for which should also be documented somewhere)? And what is wrong about Simon's suggestion, to use the standard array lib APIs on top of your implementations?
Nothing. This would be great. Who's volunteering to write the code? The new 'list-like' fusible array libraries are still in alpha, anyway.
Not that I see Haskell' coming soon, but I'd certainly not want it to continue standardising a kind of array that appears to have no backing among the Haskell array user/library author community. Nor would I like something as central as arrays to remain outside the standard, where it won't remain stable enough for Haskell programmers to rely on in the long run.
Hence the Haskell Platform. To provide the stability that people rely on in the long run. Haskell' is not the process by which new libraries will be standardised -- there are simply too many libraries being produced. The platform lets us:

* Take libraries out of the standardisation process.
* Let developers develop in competition, and converge on agreed designs.
* Let users decide what to use, rather than waste time standardising things when the developer community has already moved on.
* And then publish a list of blessed code.

Since arrays are just another (non-obvious) data structure, there's a huge design space:

* flat and/or nested arrays?
* strict or lazy or eager?
* callable from C or Fortran?
* mutable/immutable
* polymorphic in what dimensions?
* mmap-able?
* read / write API, or list-like API?

We've not yet found the perfect implementation, but we're learning a lot. And since it is not yet clear what the optimal solution looks like, I say let the developers keep hacking for a while, until we get an idea of what works. And by all means, if someone thinks they know what the best API is, step up and show us the implementation!

-- Don

On 29/11/2008, at 10:47, Claus Reinke wrote:
But I don't want Perl, I want a well designed language and well designed libraries. I think it's fine to let libraries proliferate, but at some point you also need to step back and abstract. -- Lennart
Especially so if the free marketeers claim there is something fundamentally wrong with the standard libraries and language, as in the case of arrays. When someone did that nice little survey of the last bunch of array libraries (Bulat, I think? now in the wiki book), I was hoping there would be a grand unification soon. Instead, it seems that those who have worked most with Haskell arrays recently have simply abandoned all of the standard array libraries. Okay, why not, if there are good reasons. But can't you document those reasons, for each of your alternative proposals, so that people have some basis on which to choose (other than who has the loudest market voice;-)?
I think so far, it's always been the same two reasons: efficiency and ease of use. H98 arrays are inherently inefficient and IMO not very easy to use, at least not for the things that I'm doing.
And would it be difficult for you all to agree on a standard API, to make switching between the alternatives easy (if it is indeed impossible to unify their advantages in a single library, the reasons for which should also be documented somewhere)?
Yes, it is very difficult. A sensible API for a standard array library is something that needs more research. FWIW, I don't know of any other language that has what I'd like to see in Haskell. C++ probably comes closest but they have it easy - they don't do fusion.
And what is wrong about Simon's suggestion, to use the standard array lib APIs on top of your implementations?
Again, IMO H98 arrays are only suitable for a very restricted set of array algorithms. Roman

Yes, it is very difficult. A sensible API for a standard array library is something that needs more research. FWIW, I don't know of any other language that has what I'd like to see in Haskell. C++ probably comes closest but they have it easy - they don't do fusion.
I assume you've looked at SAC? http://www.sac-home.org/

Their main research and development focus was/has been on arrays (fusion/layout/padding/tiling/slicing/data-parallelism/shape-invariance (source algorithms parameterized over array dimensionality/shape)/whole-array ops/list-like ops/lots of surface operations reducible to a minimal set of core operations that need implementation/cache behaviour/performance/performance/performance/..). When they started out, I tried to make the point that I would have liked to have their wonderful array ideas in our otherwise wonderful language, but they preferred to follow their own way towards world domination (*). Does that sound familiar?-)

Claus

(*) ok, they did have a good motive: they came out of a research group that had done non-sequential functional programming in the 1980s, with all the things we see today: great speedups, shame about the sequential baseline; creating parallel threads is easy, load balancing slightly harder, but pumping (creating thread hierarchies recursively, only to see them fold into a small result, for the process to begin again) is a waste, etc.; so they decided to start from a fast sequential baseline instead of a full functional language, and designed their language around arrays instead of trying to add arrays to an existing language.

On 29/11/2008, at 11:49, Claus Reinke wrote:
Yes, it is very difficult. A sensible API for a standard array library is something that needs more research. FWIW, I don't know of any other language that has what I'd like to see in Haskell. C++ probably comes closest but they have it easy - they don't do fusion.
I assume you've looked at SAC? http://www.sac-home.org/
Yes. They have it even easier - they don't have polymorphism, they don't have higher-order functions, they don't have boxing and laziness and in a sense, they don't even do general-purpose programming, just scientific algorithms. And they have no existing language to integrate their stuff with. This is not to imply that their work isn't interesting and valuable; IMO, it just doesn't really help us when it comes to designing a Haskell array library. Roman

On Fri, 28 Nov 2008 19:00:38 -0500, Roman Leshchinskiy
On 29/11/2008, at 10:47, Claus Reinke wrote: [...]
And would it be difficult for you all to agree on a standard API, to make switching between the alternatives easy (if it is indeed impossible to unify their advantages in a single library, the reasons for which should also be documented somewhere)?
Yes, it is very difficult. A sensible API for a standard array library is something that needs more research. FWIW, I don't know of any other language that has what I'd like to see in Haskell. C++ probably comes closest but they have it easy - they don't do fusion. [...]
Would you elaborate on what you'd like to see in an array library? And perhaps which C++ array library you are thinking of? Your C++ comment caught my attention, and now I'm curious. Surely you don't mean C-style arrays. :-D Regards, Brad Larsen

On 30/11/2008, at 02:43, Brad Larsen wrote:
On Fri, 28 Nov 2008 19:00:38 -0500, Roman Leshchinskiy
wrote:
On 29/11/2008, at 10:47, Claus Reinke wrote: [...]
And would it be difficult for you all to agree on a standard API, to make switching between the alternatives easy (if it is indeed impossible to unify their advantages in a single library, the reasons for which should also be documented somewhere)?
Yes, it is very difficult. A sensible API for a standard array library is something that needs more research. FWIW, I don't know of any other language that has what I'd like to see in Haskell. C++ probably comes closest but they have it easy - they don't do fusion. [...]
Would you elaborate on what you'd like to see in an array library?
I'd like to have a library which is efficient (in particular, implements aggressive fusion), is roughly equivalent to the current standard list library in power and supports strict/unboxed/mutable arrays. It should also provide a generic framework for implementing new kinds of arrays. And eventually, it should also be usable in high- performance and parallel algorithms.
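As a rough sketch of the kind of "generic framework for implementing new kinds of arrays" mentioned here (purely illustrative; these class and function names are invented, not vector's actual API):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- A hypothetical minimal interface that boxed, unboxed and
-- storable vectors could all implement behind one API.
class Vector v where
  type Elem v
  vlength   :: v -> Int
  vindex    :: v -> Int -> Elem v
  vfromList :: [Elem v] -> v

-- A toy boxed instance, just to show the shape of the framework;
-- a real library would use contiguous storage, not a list.
newtype BoxedVec a = BoxedVec [a]

instance Vector (BoxedVec a) where
  type Elem (BoxedVec a) = a
  vlength (BoxedVec xs)  = length xs
  vindex  (BoxedVec xs) i = xs !! i
  vfromList = BoxedVec
```

The point of such a class is that fusible combinators can be written once against the interface and reused by every representation.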
And perhaps which C++ array library you are thinking of? Your C++ comment caught my attention, and now I'm curious. Surely you don't mean C-style arrays. :-D
No, I meant vector, basic_string and deque which are provided by the standard library. Roman

Lennart Augustsson wrote:
But I don't want Perl, I want a well designed language and well designed libraries. I think it's fine to let libraries proliferate, but at some point you also need to step back and abstract.
I agree.
On Fri, Nov 28, 2008 at 9:46 PM, Don Stewart
wrote: andrewcoppin:
What *I* propose is that somebody [you see what I did there?] should sit down, take stock of all the multitudes of array libraries, what features they have, what obvious features they're missing, and think up a good API from scratch. Once we figure out what the best way to arrange all this stuff is, *then* we attack the problem of implementing it for real.
It seems lots of people have written really useful code, but we need to take a step back and look at the big picture here before writing any more of it.
No.
My view would be to let the free market of developers decide what is best. No bottlenecks -- there's too many Haskell libraries already (~1000 now).
And this approach has yielded more code than ever before, more libraries than ever before, and library authors are competing.
So let the market decide. We're a bazaar, not a cathedral.
I find this kind of attitude disturbing. Are you seriously asserting that it's "bad" for people to stop and think about their designs before building? That it's "bad" for people to get together and coordinate their efforts? Would you really prefer each and every developer to reinvent the wheel until we have 50,000 equivalent but slightly different wheel implementations?

Certainly you seem obsessed with the notion that "more packages on Hackage == better". Well in my book, quantity /= quality. (The latter being vastly more important than the former - while admittedly far harder to measure objectively.)

I would far prefer to see one well-written library that solves the problem properly than see 25 incompatible libraries that all solve small fragments of the problem poorly. In the latter case, there will be no "competition" between libraries; everybody will just give up and not use *any* of them. You _can_ have too much choice!

I really hope I'm not the only person here who sees it this way...

Excerpts from Andrew Coppin's message of Sat Nov 29 03:37:58 -0600 2008:
Are you seriously asserting that it's "bad" for people to stop and think about their designs before building?
To be fair, I don't think you're in a position to say whether the authors of these libraries took careful consideration in their design or not; unless, of course, you wrote one of them and *didn't* think about the design? Austin

Austin Seipp wrote:
Excerpts from Andrew Coppin's message of Sat Nov 29 03:37:58 -0600 2008:
Are you seriously asserting that it's "bad" for people to stop and think about their designs before building?
To be fair, I don't think you're in a position to say whether the authors of these libraries took careful consideration in their design or not; unless, of course, you wrote one of them and *didn't* think about the design?
I said "I think we should take a step back and work out a plan" and Don said "no, I think we should just keep blindly hacking away instead". So I said that seems like a very bad idea to me...

andrewcoppin:
Austin Seipp wrote:
Excerpts from Andrew Coppin's message of Sat Nov 29 03:37:58 -0600 2008:
Are you seriously asserting that it's "bad" for people to stop and think about their designs before building?
To be fair, I don't think you're in a position to say whether the authors of these libraries took careful consideration in their design or not; unless, of course, you wrote one of them and *didn't* think about the design?
I said "I think we should take a step back and work out a plan" and Don said "no, I think we should just keep blindly hacking away instead". So I said that seems like a very bad idea to me...
I didn't say that - you just made it up! And you even added fake quotes! Andrew, seriously, it's about time you contributed some code rather than just empty noise on the list. -- Don

andrewcoppin:
My view would be to let the free market of developers decide what is best. No bottlenecks -- there's too many Haskell libraries already (~1000 now).
And this approach has yielded more code than ever before, more libraries than ever before, and library authors are competing.
So let the market decide. We're a bazaar, not a cathedral.
I find this kind of attitude disturbing.
Are you seriously asserting that it's "bad" for people to stop and think about their designs before building? That it's "bad" for people to get together and coordinate their efforts? Would you really prefer each and every developer to reinvent the wheel until we have 50,000 equivalent
Strawman, Andrew, and rather silly too. I'm aggressively in favour of reuse. That's why I advocate everyone use and contribute to Hackage, so that they can reuse others' work, and they can collaborate on existing code. The current approach of /people who do things/ is working very well. They're designing and implementing new ideas in the language, leading to interesting collaborations, and new designs, and more code, that does more things, than ever before. And I'm in favour of that. I never thought I'd see the day when people complained about there being too many libraries for Haskell. Mwhahaha! -- Don

On Fri, 2008-11-28 at 22:20 +0000, Lennart Augustsson wrote:
But I don't want Perl, I want a well designed language and well designed libraries. I think it's fine to let libraries proliferate, but at some point you also need to step back and abstract.
Yes, let the ideas simmer and when we can identify a consensus then we can standardise something by including it into the Haskell platform. There's obviously judgement involved to decide when it's right to standardise. We can see all around us the results of standardising too early or too late. Duncan

On 29/11/2008, at 08:43, Andrew Coppin wrote:
What *I* propose is that somebody [you see what I did there?] should sit down, take stock of all the multitudes of array libraries, what features they have, what obvious features they're missing, and think up a good API from scratch. Once we figure out what the best way to arrange all this stuff is, *then* we attack the problem of implementing it for real.
That is the idea behind vector. I don't know how good it is but it's the best I could come up with (or will be once it's finished). That said, I don't think there is such a thing as a perfect "array API". Different libraries serve different purposes. Roman

On 28/11/2008, at 20:04, Simon Marlow wrote:
So we have two vector libraries, vector and uvector, which have a lot in common - they are both single-dimension array types that support unboxed instances and have list-like operations with fusion. They ought to be unified, really.
Yes. This shouldn't be too hard to do since both libraries are based on the internal DPH arrays. Although I have to admit that I never really looked at Don's code and have no idea how much he has changed. But it's more than that. The basic idea behind vector is to provide a common framework for "normal" arrays, ByteString, StorableVector etc. It's not finished by a long shot but (unsurprisingly) I think it goes in the right direction. The proliferation of array-like libraries is counterproductive.
The main difference between these libraries and Haskell's arrays is the Ix class. So perhaps Haskell's arrays should be reimplemented on top of the low-level vector libraries? The Ix class is the root cause of the problems with optimising the standard array libraries.
Yes, Haskell arrays should be based on a lower-level array library. I would also argue that they should be redesigned and given a more streamlined and efficient interface. The Ix class is not the only problem wrt efficiency. In particular, the H98 array library relies quite heavily on lists which makes implementing any kind of array fusion rather hard. In contrast to Ix, this is completely unnecessary, IMO. Roman
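To make the list dependence concrete, here is a minimal example using only the standard H98 Data.Array API (illustrative; not from the thread): the array is both built from a list and typically observed as one, which is exactly what a fusion framework would have to see through.

```haskell
import Data.Array

-- An H98 array is constructed from a list (listArray) and
-- usually consumed back as one (elems, assocs), so lists are
-- the de facto interface on both sides of the array.
squares :: Array Int Int
squares = listArray (0, 9) [ i * i | i <- [0 .. 9] ]

sumSquares :: Int
sumSquares = sum (elems squares)
```

Here the intermediate lists are pure overhead; a fusion-friendly API would expose generators and folds directly instead.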

I finally got around to making my code for random Go playouts available. Here it is: http://www.haskell.org/~simonmar/goboard.tar.gz If someone were to make a nice library API on top of this and upload it to hackage, I'd be delighted. Hack away. Cheers, Simon

Simon Marlow wrote:
Claus Reinke wrote:
Do you have an example of a mutable state/ IO bound application, like, hmm, a window manager or a revision control system or a file system...?
If you're looking for a challenge, how about this one (there used to be lots of Haskellers into this game, any of you still around?-):
http://computer-go.org/pipermail/computer-go/2008-October/016680.html
[ catching up with old haskell-cafe email ]
Interestingly, I did this a while ago. Here's my results:
$ ./Bench 1 100000 b: 14840, w: 17143 mercy: 67982 elapsed time: 3.42s playouts/sec: 29208
so, nearly 30k/sec random playouts on 9x9. That's using a hack that stops the game when the score is heavily in favour of one player, it drops to around 20k/sec with that turned off.
Not bad, but probably I'd guess an order of magnitude worse than you can do in tightly-coded C. The Haskell implementation isn't nice, as you predicted. Also the code is derived from some non-free internal MS code, so unfortunately I can't share it (but I could perhaps extract the free bits if anyone is really interested).
W wins slightly more often I think because komi 5.5 on a 9x9 board is a tad high.
It does parallelise too, of course:
$ ./Bench 8 100000 +RTS -N8 b: 14872, w: 17488 mercy: 67584 elapsed time: 1.00s playouts/sec: 99908
though still room for improvement there.
Cheers, Simon

I finally got around to making my code for random Go playouts available. Here it is:
Cool!-) To reciprocate, I've temporarily put a snapshot of my code here: http://www.cs.kent.ac.uk/~cr3/tmp/SimpleGo.hs

I've not yet decided whether to be depressed or encouraged by the timings;-) without the mercy rule, your code simulates about 17k runs/s here, vs only about 3k/s for mine. There are some obvious aspects, such as hardcoding the board size, which isn't quite as straightforward when GTP allows it to change at runtime, but I don't think that explains the bulk of the difference.

I do hope there are lots of small things to learn (perhaps you could summarize your findings in a performance tips paper?-), but at first glance, I suspect the difference in approach: my early experiments suggested that maintaining chains/strings wasn't going to be more efficient than following the affected parts of strings when needed - but I didn't think of representing strings as cyclically referenced lists (which allows for string merge in constant instead of linear time). Nice idea!-) Thanks, Claus
If someone were to make a nice library API on top of this and upload it to hackage, I'd be delighted. Hack away.
A GTP interface would be useful, to allow playing against other bots.
Cheers, Simon

Claus Reinke wrote:
I finally got around to making my code for random Go playouts available. Here it is:
Cool!-) To reciprocate, I've temporarily put a snapshot of my code here: http://www.cs.kent.ac.uk/~cr3/tmp/SimpleGo.hs
I've not yet decided whether to be depressed or encouraged by the timings;-) without mercy rule, your code simulates at about 17k/s runs here, vs only about 3k/s for mine. There are some obvious aspects, such as hardcoding the boardsize isn't quite as straightforward when GTP allows to change it at runtime, but I don't think that explains the bulk of the difference.
Different board sizes would be best done by just compiling the code multiple times, for 9, 13 and 19.
I do hope there are lots of small things to learn (perhaps you could summarize your findings in a performance tips paper?-)
Partly it's making good representation choices: lots of unboxed mutable arrays, and the IntRef type. I happen to know that boxed mutable arrays expose poor behaviour in GHC's garbage collector :-) If I could unpack MutableByteArray# directly in an enclosing constructor, that would make this code a lot faster, by removing more indirections. Duncan Coutts has been thinking about similar things in the context of bytestring. Most of the other things I found were really just bugs in GHC. For example, I wanted to use newtypes in various places, but I didn't get as good code as just using a type synonym. We had problems with record selectors not optimising well, which is now fixed (I believe).
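As a hedged sketch of the "unboxed mutable arrays" point (illustrative only; Simon's actual code isn't public): a board held in an IOUArray of Word8 keeps the cells out of reach of the boxed-array GC behaviour mentioned above.

```haskell
import Data.Array.IO
import Data.Word (Word8)

-- Hypothetical board representation: one flat unboxed array of
-- cells (0 = empty, 1 = black, 2 = white), indexed by position.
type Board = IOUArray Int Word8

newBoard :: Int -> IO Board
newBoard n = newArray (0, n * n - 1) 0

placeStone :: Board -> Int -> Word8 -> IO ()
placeStone = writeArray

stoneAt :: Board -> Int -> IO Word8
stoneAt = readArray
```

Because the payload is unboxed, the collector never has to scan the board's elements, unlike an `IOArray Int Cell` of boxed values.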
but at first glance, I suspect the difference in approach: my early experiments suggested that maintaining chains/strings wasn't going to be more efficient than following the affected parts of strings when needed - but I didn't think of representing strings as cyclicly referenced lists (which allows for string merge in constant instead of linear time). Nice idea!-)
It wasn't my idea - I don't know where it originally came from, but it was in the F# code I translated. String merge isn't really O(1), since you have to traverse one of the strings to update its string Id, maybe that would be better done by having another level of indirection. Cheers, Simon
Thanks, Claus
If someone were to make a nice library API on top of this and upload it to hackage, I'd be delighted. Hack away.
A GTP interface would be useful, to allow playing against other bots.
Cheers, Simon
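The constant-time string merge discussed above can be sketched like this (illustrative, not the thread's code): each stone records the next stone of its string, the links forming a cycle, so splicing two strings together is just swapping two links.

```haskell
import Data.Array.IO

-- nextStone ! i is the next stone in the same string; a singleton
-- string points at itself. Swapping the successors of one stone
-- from each string splices the two cycles into one, in O(1).
type Next = IOUArray Int Int

mergeStrings :: Next -> Int -> Int -> IO ()
mergeStrings next i j = do
  ni <- readArray next i
  nj <- readArray next j
  writeArray next i nj
  writeArray next j ni
```

As Simon notes, if each stone also carries a per-string id, that id still has to be rewritten by traversing one string, so the merge is only truly O(1) when nothing else hangs off the string.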

Hi again,
On Tue, Nov 11, 2008 at 11:38, Dave Tapley
So I should clarify I'm not a troll and do "see the Haskell light". But one thing I can never answer when preaching to others is "what does Haskell not do well?"
C does extremely well when you want to write low level exploits, such as meddling with memory, the machine stack and others. This goes also for writing device drivers, media codecs, compression algorithms etc. and applications that are mostly about copying bits (with some transformations) between various memory locations. I imagine this kind of thing would be very hard to write in Haskell. Does that count? cheers, Arnar

Dave Tapley wrote:
Hi everyone
So I should clarify I'm not a troll and do "see the Haskell light". But one thing I can never answer when preaching to others is "what does Haskell not do well?"
Usually I'll avoid the question and explain that it is a 'complete' language and we do have more than enough libraries to make it useful and productive. But I'd be keen to know if people have any anecdotes, ideally ones which can subsequently be twisted into an argument for Haskell ;)
With the appropriate caveats about particular subdomains (see final paragraph), I wouldn't use Haskell for scripting. That is, (1) for Bash-style programming where 95% of the code is just invoking *nix jobs, or (2) for very simple yet regex-heavy scripts where Perl/Awk/Sed is often used.

Re #1: Honestly, I don't see anything other than a dedicated competitor being able to unseat Bourne/Bash at this task. Certainly a competitor would have much room for improvement-- what with being able to replace string-rewriting semantics with term-rewriting semantics, vastly improving type safety and catching innumerable bugs. However, with unsavory frequency, it is exactly those type-unsafe substitutions which can make shell scripting cleaner and more direct than a type-safe alternative. Having type safety as well as this sort of non-compositional structure would take a good deal of work to get right.

Re #2: People often complain about spooky Perl that uses things like implicit $_ or other hidden variables. While these constructs can make any sizable project unmaintainable, for the quick and dirty jobs they're just what's needed to get things done with clarity. While ByteString code using regexes is just as fast in Haskell, it's often more than twice as long as the Perl, Sed, or Awk equivalents because many of the basic control structures (like Perl's -n, -p, -l,... flags) aren't already provided.

That said, this isn't necessarily a bad thing for Haskell. "Real" programming languages often don't do so well in these areas (Perl being the exception), and they don't feel too bad about it. Both varieties of shell scripting are very much of the DSL nature; for programs with a significant amount of "actual logic" instead of mere plumbing or regexing, Haskell can certainly outshine these competitors. On the one hand, C and friends fight dirty and much work has been done so Haskell can join in on the bit-bashing glory. However, shell scripting is a whole different kind of imperative muck and very little work (that I've seen) has tried to get Haskell to jump down in the sewers with them.

-- Live well, ~wren
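To make the length comparison concrete: where Perl's -n flag gives you `perl -ne 'print if /ERROR/'` for free, the closest plain-Haskell sketch (illustrative; substring match instead of a regex, to stay within the standard library) already needs a whole module:

```haskell
-- Echo only the input lines containing "ERROR". The scaffolding
-- that Perl's -n/-p command-line flags provide as built-ins has
-- to be spelled out here by hand.
import Data.List (isInfixOf)

grepError :: String -> String
grepError = unlines . filter ("ERROR" `isInfixOf`) . lines

main :: IO ()
main = interact grepError
```

A proper regex version would additionally pull in a regex package, which is part of wren's point about missing built-in control structures.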

I haven't found multitrack audio recording applications written in
Haskell. These are usually written in C++ using Jack audio or ASIO.
This probably means that it is not a good idea to write real time
audio applications in Haskell at the moment.
So, probably avoid writing applications that use a high frequency
timer and IO that should be synchronous to the timer in Haskell.

At Tue, 11 Nov 2008 22:41:48 -0500, sam lee wrote:
I haven't found multitrack audio recording applications written in Haskell. These are usually written in C++ using Jack audio or ASIO. This probably means that it is not a good idea to write real time audio applications in Haskell at the moment.
There are at least two Haskell bindings to JACK. I wrote one of them. The big issue is (no surprise) garbage collection. Audio stuff can generate a lot of garbage fast if you aren't careful. And the stop-the-world collection can happen at unfortunate times. The uvector library might help things -- but ultimately Haskell would need a more realtime-friendly garbage collector. So, realtime audio can be done in Haskell today, but it is definitely fragile at best. - jeremy

Real time audio applications are top of my list of "crazy projects I
would work on if I had a month spare". I think it might work out
nicely. My approach wouldn't be to talk directly to audio hardware
from Haskell but instead use a framework like Lava to generate low
level code from an embedded DSL. I think that would be a really
elegant way to work at a high level and yet have the result execute
*faster* than traditionally written C++ code.
--
Dan

Dan Piponi wrote:
Real time audio applications are top of my list of "crazy projects I would work on if I had a month spare". I think it might work out nicely. My approach wouldn't be to talk directly to audio hardware from Haskell but instead use a framework like Lava to generate low level code from an embedded DSL. I think that would be a really elegant way to work at a high level and yet have the result execute *faster* than traditionally written C++ code.
In other words, Haskell is an excellent language for designing special-purpose compilers and interpreters for custom languages. ;-) If I knew a damned thing about IA32 assembly and dynamic linkage, I'd be tempted to try it myself...

andrewcoppin:
Dan Piponi wrote:
Real time audio applications are top of my list of "crazy projects I would work on if I had a month spare". I think it might work out nicely. My approach wouldn't be to talk directly to audio hardware from Haskell but instead use a framework like Lava to generate low level code from an embedded DSL. I think that would be a really elegant way to work at a high level and yet have the result execute *faster* than traditionally written C++ code.
In other words, Haskell is an excellent language for designing special-purpose compilers and interpreters for custom languages. ;-)
If I knew a damned thing about IA32 assembly and dynamic linkage, I'd be tempted to try it myself...
No need: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/harpy Well, you should still probably know something of what you're doing...

On Thu, Nov 13, 2008 at 11:08 AM, Andrew Coppin
In other words, Haskell is an excellent language for designing special-purpose compilers and interpreters for custom languages. ;-)
If I knew a damned thing about IA32 assembly and dynamic linkage, I'd be tempted to try it myself...
You could generate assembly language instructions directly. But if you use the Haskell LLVM bindings your generated code will be (1) platform independent and (2) optimised. I think there's a cool project lurking there. -- Dan

Dan Piponi wrote:
On Thu, Nov 13, 2008 at 11:08 AM, Andrew Coppin
wrote: In other words, Haskell is an excellent language for designing special-purpose compilers and interpreters for custom languages. ;-)
If I knew a damned thing about IA32 assembly and dynamic linkage, I'd be tempted to try it myself...
You could generate assembly language instructions directly.
Yeah. I figure if I knew enough about this stuff, I could poke code numbers directly into RAM representing the opcodes of the machine instructions. Then I "only" need to figure out how to call it from Haskell. It all sounds pretty non-trivial if you ask me though... ;-) [Don't some OS versions implement execution-prevention? Presumably you'd also have to bypass that in some platform-dependent way too...]
But if you use the Haskell LLVM bindings your generated code will be (1) platform independent and (2) optimised. I think there's a cool project lurking there.
Never heard of LLVM, but from the Wikipedia description it sounds like warm trippy goodness. Pity there's no Haddock. :-( [From the build log, it looks like it failed because the build machine doesn't have the LLVM library installed. Is that really necessary just for building the docs?]

Excerpts from Andrew Coppin's message of Fri Nov 14 14:13:01 -0600 2008:
Yeah. I figure if I knew enough about this stuff, I could poke code numbers directly into RAM representing the opcodes of the machine instructions. Then I "only" need to figure out how to call it from Haskell. It all sounds pretty non-trivial if you ask me though... ;-)
Save yourself some time: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/harpy Using harpy, you can generate x86 assembly /inside/ your code and execute it, using a DSL. This makes it excellent for code generators and playing around with code generation. Here's a calculator I wrote using it: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/calc For more information, http://uebb.cs.tu-berlin.de/harpy/
Never heard of LLVM, but from the Wikipedia description it sounds like warm trippy goodness. Pity there's no Haddock. :-(
It's a pretty excellent little system, to be honest. One of the cleanest APIs I've ever used, too (especially for C++.)
[From the build log, it looks like it failed because the build machine doesn't have the LLVM library installed. Is that really necessary just for building the docs?]
It's necessary even to get through the 'cabal configure' step, since the configure script bundled with the Haskell LLVM bindings runs at that point and checks for the llvm-c headers. Austin.

Dan Piponi schrieb:
Real time audio applications are top of my list of "crazy projects I would work on if I had a month spare". I think it might work out nicely. My approach wouldn't be to talk directly to audio hardware from Haskell but instead use a framework like Lava to generate low level code from an embedded DSL. I think that would be a really elegant way to work at a high level and yet have the result execute *faster* than traditionally written C++ code. -- Dan
On Tue, Nov 11, 2008 at 7:41 PM, sam lee
wrote: I haven't found multitrack audio recording applications written in Haskell. These are usually written in C++ using Jack audio or ASIO. This probably means that it is not a good idea to write real time audio applications in Haskell at the moment. So, probably avoid writing applications that use a high frequency timer and IO that should be synchronous to the timer in Haskell.
I do real time audio processing based on lazy storable vectors; however, I do not plan to implement another GUI-driven program but want to program audio algorithms and music in Haskell. http://www.haskell.org/haskellwiki/Synthesizer Although I can do some processing in real time, I don't approach the speed of, say, SuperCollider so far. I must rely on GHC's optimizer, and sometimes it does unexpected things. I know that one of Paul Hudak's students is working on something similar, called HasSound. The DSL approach is already implemented for CSound in Haskore (again, there is also a not-yet-published library which encapsulates this functionality), and you can also do real time sound processing by controlling SuperCollider: http://www.haskell.org/haskellwiki/SuperCollider See also: http://www.haskell.org/haskellwiki/Category:Music

I've been using HSH with good results for sysadmin tasks, and recently
uploaded HSHHelpers to hackage.
Of course with CPAN a lot of stuff has already been done for you, but
that's a library issue.
Nothing beats bash for a quicky of course, but there are a lot of ways to
shoot yourself in the foot (e.g. global variables, hard-to-remember
quoting rules) that Haskell protects me from. Like Perl protects you,
but better.
thomas.
2008/11/12 wren ng thornton
Dave Tapley wrote:
Hi everyone
So I should clarify I'm not a troll and do "see the Haskell light". But one thing I can never answer when preaching to others is "what does Haskell not do well?"
Usually I'll avoid the question and explain that it is a 'complete' language and we do have more than enough libraries to make it useful and productive. But I'd be keen to know if people have any anecdotes, ideally ones which can subsequently be twisted into an argument for Haskell ;)
With the appropriate caveats about particular subdomains (see final paragraph), I wouldn't use Haskell for scripting. That is, (1) for Bash-style programming where 95% of the code is just invoking *nix jobs, or (2) for very simple yet regex-heavy scripts where Perl/Awk/Sed is often used.
Re #1: Honestly, I don't see anything other than a dedicated competitor being able to unseat Bourne/Bash at this task. Certainly a competitor would have much room for improvement-- what with being able to replace string-rewriting semantics with term-rewriting semantics, vastly improving type safety and catching innumerable bugs. However, with unsavory frequency, it is exactly those type-unsafe substitutions which can make shell scripting cleaner and more direct than a type-safe alternative. Having type safety as well as this sort of non-compositional structure would take a good deal of work to get right.
Re #2: People often complain about spooky Perl that uses things like implicit $_ or other hidden variables. While these constructs can make any sizable project unmaintainable, for the quick and dirty jobs they're just what's needed to get things done with clarity. While ByteString code using regexes is just as fast in Haskell, it's often more than twice as long as the Perl, Sed, or Awk equivalents because many of the basic control structures (like Perl's -n, -p, -l,... flags) aren't already provided.
That said, this isn't necessarily a bad thing for Haskell. "Real" programming languages often don't do so well in these areas (Perl being the exception), and they don't feel too bad about it. Both varieties of shell scripting are very much of the DSL nature; for programs with a significant amount of "actual logic" instead of mere plumbing or regexing, Haskell can certainly outshine these competitors. On the one hand, C and friends fight dirty and much work has been done so Haskell can join in on the bit-bashing glory. However, shell scripting is a whole different kind of imperative muck and very little work (that I've seen) has tried to get Haskell to jump down in the sewers with them.
-- Live well, ~wren _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
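wren's point about Perl's `-n`/`-p` flags can be made concrete. A rough Haskell counterpart to `perl -ne 'print if /error/'` takes only a few lines using `interact`, though note this is a hedged sketch: it does a plain substring match rather than a real regex, precisely to stay dependency-free, and `grepLines` is a name of my own invention.

```haskell
-- A minimal Perl-style line filter in plain Haskell,
-- roughly: perl -ne 'print if /error/'
-- (substring match, not a regex, to avoid extra packages)
import Data.List (isInfixOf)

-- Keep only the lines containing the given pattern.
grepLines :: String -> String -> String
grepLines pat = unlines . filter (pat `isInfixOf`) . lines

main :: IO ()
main = interact (grepLines "error")
```

The pure `lines`/`filter`/`unlines` pipeline is short, but wren's complaint stands: Perl gives you the read-loop, chomping, and printing for free via command-line flags, while here every piece is spelled out.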

On 11 nov 2008, at 11:38, Dave Tapley wrote:
Usually I'll avoid the question and explain that it is a 'complete' language and we do have more than enough libraries to make it useful and productive. But I'd be keen to know if people have any anecdotes, ideally ones which can subsequently be twisted into an argument for Haskell ;)
Working with relational databases can be a bit cumbersome. There's some bitrot in the HaskellDB packages in general, so taking a more low-level approach seems sensible, but because of the lack of extensible records you don't get much help from the compiler, so you need to do most of the checking (and 'marshalling') yourself. In that sense, it's not worse than doing it with most other languages, except that for those languages there are often high-level libraries available to do, for example, cute things like object-relational mapping (ORM). -- Regards, Eelco Lempsink

2008/11/11 Dave Tapley
So I should clarify I'm not a troll and do "see the Haskell light". But one thing I can never answer when preaching to others is "what does Haskell not do well?"
Let's say something controversial: I think that Haskell's type system gets in your way when you're writing one-shot scripts that don't need to conform to the highest correctness standards. Imagine typing a command at the shell prompt and getting the sort of abstract error message that Haskell compilers give every now and then, like:
whyerror.lhs:36:25: Ambiguous type variable `a' in the constraint: `Arrow a' arising from use of `>>>' at whyerror.lhs:36:25-27 Possible cause: the monomorphism restriction applied to the following: liftA2' :: forall b a1 b1 c. (a1 -> b1 -> c) -> a b a1 -> a b b1 -> a b c (bound at whyerror.lhs:36:1) unsplit' :: forall a1 b c. (a1 -> b -> c) -> a (a1, b) c (bound at whyerror.lhs:34:1) split' :: forall b. a b (b, b) (bound at whyerror.lhs:33:1) Probable fix: give these definition(s) an explicit type signature or use -fno-monomorphism-restriction
You don't want to be bothered by such monstrosities (yes, they are, even though many of you may not see it because of years of conditioning) when you're just hacking up a simple script to make a catalog of your MP3 collection / check for patterns in log files / whatever. Also, in my experience Haskell is not so good at data structures where you can't do structural recursion easily, like graphs. In such cases you want a language with easy pointers and destructive updates. You can do those things in pure Haskell by using the ST monad, but the code will be more verbose than in Java or C++, and it will occasionally drive you insane with type error messages like the above (You thought you could use '$' freely instead of application? Wrong!). Regards, Reinier
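To make Reinier's graph example concrete, here is a hedged sketch (my own code, with invented names) of exactly the style he describes: a reachability traversal that uses a mutable visited-array in the ST monad. It works, but compare the ceremony here with the two or three lines the same loop would take in Java or C++.

```haskell
-- Graph reachability with destructive updates in the ST monad.
-- Nodes are 0 .. n-1; edges is a list of (from, to) pairs.
import Control.Monad.ST
import Data.Array
import Data.Array.ST

reachable :: Int -> [(Int, Int)] -> Int -> [Int]
reachable n edges start = runST $ do
  -- mutable "visited" flags, updated in place
  seen <- newArray (0, n - 1) False :: ST s (STUArray s Int Bool)
  -- immutable adjacency lists built up front
  let adj = accumArray (flip (:)) [] (0, n - 1) edges
      go v = do
        done <- readArray seen v
        if done
          then return []
          else do
            writeArray seen v True
            rest <- mapM go (adj ! v)
            return (v : concat rest)
  go start
```

The `:: ST s (STUArray s Int Bool)` annotation is the sort of thing Reinier is complaining about: leave it off and the ambiguity errors quoted above are roughly what GHC greets you with.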

Hi Reinier,
On Wed, Nov 12, 2008 at 14:22, Reinier Lamers
Also, in my experience Haskell is not so good at data structures where you can't do structural recursion easily, like graphs. In such cases you want a language with easy pointers and destructive updates. You can do those things in pure Haskell by using the ST monad, but the code will be more verbose than in Java or C++, and it will occasionally drive you insane with type messages like the above (You thought you could use '$' freely instead of application? Wrong!).
Can you give examples of what you mean and why algebraic data types are not sufficient? In my research most things are "structurally recursive" and it was because Haskell is so good at such things that I started using it. cheers, Arnar

tux_rocker:
2008/11/11 Dave Tapley
: So I should clarify I'm not a troll and do "see the Haskell light". But one thing I can never answer when preaching to others is "what does Haskell not do well?"
Let's say something controversial: I think that Haskell's type system gets in your way when you're writing one-shot scripts that don't need to conform to the highest correctness standards. Imagine typing a command at the shell prompt and getting the sort of abstract error message that Haskell compilers give every now and then, like:
whyerror.lhs:36:25: Ambiguous type variable `a' in the constraint: `Arrow a' arising from use of `>>>' at whyerror.lhs:36:25-27 Possible cause: the monomorphism restriction applied to the following: liftA2' :: forall b a1 b1 c. (a1 -> b1 -> c) -> a b a1 -> a b b1 -> a b c (bound at whyerror.lhs:36:1) unsplit' :: forall a1 b c. (a1 -> b -> c) -> a (a1, b) c (bound at whyerror.lhs:34:1) split' :: forall b. a b (b, b) (bound at whyerror.lhs:33:1) Probable fix: give these definition(s) an explicit type signature or use -fno-monomorphism-restriction
You don't want to be bothered by such monstrosities (yes, they are, even though many of you may not see it because of years of conditioning) when you're just hacking up a simple script to make a catalog of your MP3 collection / check for patterns in log files / whatever.
Why are you using Arrows in your one-shot scripts? Do you use Arrows in your shell scripts? Seems like a strawman argument. -- Don

On Wed, 2008-11-12 at 10:50 -0800, Don Stewart wrote:
tux_rocker:
2008/11/11 Dave Tapley
: So I should clarify I'm not a troll and do "see the Haskell light". But one thing I can never answer when preaching to others is "what does Haskell not do well?"
Let's say something controversial: I think that Haskell's type system gets in your way when you're writing one-shot scripts that don't need to conform to the highest correctness standards. Imagine typing a command at the shell prompt and getting the sort of abstract error message that Haskell compilers give every now and then, like:
whyerror.lhs:36:25: Ambiguous type variable `a' in the constraint: `Arrow a' arising from use of `>>>' at whyerror.lhs:36:25-27 Possible cause: the monomorphism restriction applied to the following: liftA2' :: forall b a1 b1 c. (a1 -> b1 -> c) -> a b a1 -> a b b1 -> a b c (bound at whyerror.lhs:36:1) unsplit' :: forall a1 b c. (a1 -> b -> c) -> a (a1, b) c (bound at whyerror.lhs:34:1) split' :: forall b. a b (b, b) (bound at whyerror.lhs:33:1) Probable fix: give these definition(s) an explicit type signature or use -fno-monomorphism-restriction
You don't want to be bothered by such monstrosities (yes, they are, even though many of you may not see it because of years of conditioning) when you're just hacking up a simple script to make a catalog of your MP3 collection / check for patterns in log files / whatever.
Why are you using Arrows in your one shot scripts?
Because a generic operator like (>>>) is a) More likely to exist, and b) easier to remember than a special case operator like --- actually, I don't think GHC does come with a standard version of (>>>) type-specialized to functions, only a version of (<<<). Or if it does, I can't remember it.
Do you use Arrows in your shell scripts?
I use pipelines. Frequently. I even type them up at the command line.
Beyond that, I was considering the pipeline
uses -l snippet_calculate\\b |
perl -lne 'chomp; print $_ unless qx{svn st | grep $_}'
(`uses' is a recursive grep-like shell function I have). The use of
perl immediately suggested that it could profitably be re-written in
Haskell (actually, in a similar FP language I've been designing --- not
relevant); my translation included a line something like
interactiveM $ filterM $ comp >>> \ x ->
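For what it's worth, the `(>>>)` exported by Control.Arrow does already work on plain functions, since `(->)` is an Arrow instance. A small sketch (the `countErrors` example is mine, not from the thread) in the left-to-right pipeline style Jonathan is describing:

```haskell
-- Shell-pipeline-flavoured function composition with Control.Arrow's
-- (>>>), which works on ordinary functions via the (->) Arrow instance.
import Control.Arrow ((>>>))
import Data.List (isInfixOf)

-- Count the input lines mentioning "error", reading left to right:
-- split into lines | keep matching lines | count them.
countErrors :: String -> Int
countErrors = lines >>> filter ("error" `isInfixOf`) >>> length
```

Reading left to right mirrors `cat log | grep error | wc -l`, which is presumably why `(>>>)` comes to hand before `(.)` when translating a shell pipeline.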

So I should clarify I'm not a troll and do "see the Haskell light". But one thing I can never answer when preaching to others is "what does Haskell not do well?"
The most obvious cases where Haskell does not do well, for me: - When you feed it Java code. Incidentally, the same holds when you feed it C code. - When you try to write a malloc library. Stefan

"what does Haskell not do well?"
- When you feed it Java code. Incidentally, the same holds when you feed it C code.
I've heard that Haskell's new (developed in this year's GSoC) Language.C libraries were able to parse millions of lines of C code from the Linux kernel, including many gcc extensions, without a single error. That doesn't sound too shabby to me. Regards, Malcolm

Dave Tapley wrote:
Hi everyone
So I should clarify I'm not a troll and do "see the Haskell light". But one thing I can never answer when preaching to others is "what does Haskell not do well?"
Usually I'll avoid the question and explain that it is a 'complete' language and we do have more than enough libraries to make it useful and productive. But I'd be keen to know if people have any anecdotes, ideally ones which can subsequently be twisted into an argument for Haskell ;)
Anything with hard performance requirements, and/or that needs to run on tiny computational resources (CPU speed, RAM size, etc.) I'd say "device drivers" too, except that the House guys apparently managed to do this... I'd really love to tell everybody that "Haskell is *the* language of algorithms" - except that it tends to not be very performant. Unfortunately, with the current state of the art, high-performance computer programs still require lots of low-level details to be explicitly managed. Hopefully one day this will cease to be the case.

andrewcoppin:
So I should clarify I'm not a troll and do "see the Haskell light". But one thing I can never answer when preaching to others is "what does Haskell not do well?"
Usually I'll avoid the question and explain that it is a 'complete' language and we do have more than enough libraries to make it useful and productive. But I'd be keen to know if people have any anecdotes, ideally ones which can subsequently be twisted into an argument for Haskell ;)
Anything with hard performance requirements, and/or that needs to run on tiny computational resources (CPU speed, RAM size, etc.)
I'd say "device drivers" too, except that the House guys apparently managed to do this...
I'd really love to tell everybody that "Haskell is *the* language of algorithms" - except that it tends to not be very performant.
Depends on who's writing the Haskell in my experience. GHC's a perfectly capable compiler if you feed it the proper diet. -- Don (Board member of the "Don't think linked lists are the same as UArr Double" movement)

(1) A function as a system of N concurrent inputs and one output is, in essence, easy. But what about a function with N concurrent inputs and M concurrent outputs? I think that's not native to the lambda calculus. So "systems programming" (if we ever had such a paradigm) would solve this issue, while criticizing all of FP.
(2) For me, I hope this category (what *not* to use Haskell for) won't include SOA. That's what I'm currently trying to decide.
(3) I think if CPUs were lambda-based rather than imperative, and the paradigm were stronger, most of the "why *not* to use Haskell"s would be solved, and the imperative style would be the one in opposition, with its enthusiasts. =) They would have good answers for your question.
participants (39)
- ajb@spamcop.net
- Anatoly Yakovenko
- Andrew Coppin
- Arnar Birgisson
- Austin Seipp
- Belka
- Brad Larsen
- Bulat Ziganshin
- Claus Reinke
- Dan Piponi
- Dave Tapley
- Don Stewart
- Duncan Coutts
- Eelco Lempsink
- Henning Thielemann
- Henning Thielemann
- Jason Dagit
- Jefferson Heard
- Jeremy Shaw
- Johan Tibell
- Jonathan Cast
- Jules Bean
- Ketil Malde
- Krasimir Angelov
- Kyle Consalus
- Lennart Augustsson
- Malcolm Wallace
- Malcolm Wallace
- Manuel M T Chakravarty
- Mauricio
- Reinier Lamers
- Robert Greayer
- Roman Leshchinskiy
- sam lee
- Simon Marlow
- Stefan Monnier
- Thomas Hartman
- Tom Hawkins
- wren ng thornton