
Wren, I see at least three different issues being discussed here, and I think it is important to delineate them:

1) Do Haskell and its libraries need performance improvements? Probably yes. Some of the performance issues seem to stem from how the language is implemented, and others from how it is defined. Developers really do run into performance problems with Haskell, and either learn to work around them or try to fix the offending implementation. The wiki's performance page gives insight into some of these issues and how to address them.

2) Are language performance comparisons useful? They can be, if the comparison focuses on what each language does best. They aren't useful if they focus on benchmarks that aren't relevant to each language's target domain. I think the problem with current comparisons is that they are designed to favor imperative languages.

3) Are the negative perceptions generated by popular performance comparisons important? Only if you care about broader adoption of the language; if you don't, the point is moot. If you do care, then yes, the comparisons are unfair and simple-minded, but they do have an effect on perceptions of the language. It is not uncommon for management to raise roadblocks to the introduction of new technologies because of flawed reports published in popular magazines.

If Haskell were a product and I were responsible for its success, I would use common marketing techniques to combat the negative perceptions. One such technique is called "changing the playing field": producing documents that reset the criteria for evaluations and comparisons. This is not deception, but merely making sure that potential users understand where and how to make the proper comparisons, and, more importantly, that my "champion" is well armed with documentation to counter popular but flawed comparisons.

On 5/16/2012 6:54 AM, wren ng thornton wrote:
Haskell is just too different, and its in-the-large cost model is too poorly understood. I'm not even aware of any helpful generalizations over comparing two specific programs. Even when restricting to things like parsing or compute-intensive numeric code, the comparison comes back with mixed answers.
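P.S. As an aside on point 1: to give a concrete flavor of the kind of workaround the wiki performance page describes, here is one classic (and well-known) pitfall, sketched from memory rather than taken from any particular benchmark. A lazy left fold accumulates a chain of unevaluated thunks over a long list; the strict foldl' from Data.List evaluates the accumulator at each step and runs in constant space.

```haskell
import Data.List (foldl')

-- Lazy foldl: builds a chain of thunks (0 + 1) + 2 + ... before
-- anything is evaluated, which can exhaust memory on large inputs.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- foldl' forces the accumulator at each step, so the fold runs in
-- constant space regardless of the input length.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumStrict [1 .. 1000000])
```

Both functions compute the same result; the difference only shows up in space behavior, which is exactly the sort of cost-model subtlety that makes naive cross-language benchmarks misleading.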