
Dear Haskell Cafe members

Here's an open-ended question about Haskell vs Scheme. Don't forget to cc Douglas in your replies; he may not be on this list (yet)!

Simon

-----Original Message-----
From: D. Gregor [mailto:kerrangster@gmail.com]
Sent: 30 March 2008 07:58
To: Simon Peyton-Jones
Subject: Haskell

Hello,

In your most humble opinion, what's the difference between Haskell and Scheme? What does Haskell achieve that Scheme does not? Is the choice less to do with the language, and more to do with the compiler? Haskell is a pure functional programming language, whereas Scheme is a functional language; does the word "pure" set Haskell that much apart from Scheme? I enjoy Haskell. I enjoy reading your papers on parallelism using Haskell. How can one answer the question--why choose Haskell over Scheme?

Regards,
Douglas

Hello Simon, Tuesday, April 1, 2008, 2:18:25 PM, you wrote:
How can one answer the question--why choose Haskell over Scheme?
1. Static typing with type inference - IMHO, a must for production code development. As many Haskellers have said, once the compiler accepts your program, you can be 95% sure that it contains no bugs. Just try it!

2. Lazy evaluation - reduces the complexity of the language. In particular, all control structures are ordinary functions, while in Scheme they are macros.

3. Great, terse syntax - actually, the best syntax among the several dozen languages I know.

4. The type-class machinery, together with type inference, means that code for dealing with complex data types (say, serialization) is generated on the fly and compiled right down to machine code.

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On 1 apr 2008, at 13.02, Bulat Ziganshin wrote:
Hello Simon,
Tuesday, April 1, 2008, 2:18:25 PM, you wrote:
How can one answer the question--why choose Haskell over Scheme?
1. Static typing with type inference - IMHO, a must for production code development. As many Haskellers have said, once the compiler accepts your program, you can be 95% sure that it contains no bugs. Just try it!
2. Lazy evaluation - reduces the complexity of the language. In particular, all control structures are ordinary functions, while in Scheme they are macros.
3. Great, terse syntax - actually, the best syntax among the several dozen languages I know.
4. The type-class machinery, together with type inference, means that code for dealing with complex data types (say, serialization) is generated on the fly and compiled right down to machine code.
3 and 4 are no convincing arguments for a Scheme programmer. Syntax is subjective, and there are Scheme implementations that can serialize entire continuations (closures), which is not possible in Haskell (at least not without the GHC API).

Static typing, though it might sound constraining at first, can be liberating! How so? Because it allows you to let the type checker work for you! By choosing the right types for your API, you can enforce invariants. For example, you can let the type checker ensure that inputs from a web application are always quoted properly before using them as output. A whole class of security problems is taken care of forever, because the compiler checks them for you.

If you're used to REPL-based programming, it can be a bit annoying that you can't run non-type-checking code, but you get used to it. After a while you will miss the safety when you program in Scheme again. There's more, but I count on others to step in here.
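To make the quoting example concrete, here is a minimal sketch of the trick (the names are illustrative, not from any particular library): hide the constructor of a newtype, so the only way to build an output value is through the escaping function.

    -- The constructor is not exported, so the only way to obtain an
    -- EscapedHtml value is via escapeHtml.
    module Escaping (EscapedHtml, escapeHtml, renderPage) where

    newtype EscapedHtml = EscapedHtml String

    escapeHtml :: String -> EscapedHtml
    escapeHtml = EscapedHtml . concatMap escape
      where
        escape '<' = "&lt;"
        escape '>' = "&gt;"
        escape '&' = "&amp;"
        escape '"' = "&quot;"
        escape c   = [c]

    -- Output functions accept only EscapedHtml, so raw user input can
    -- never reach the page: the type checker rejects it at compile time.
    renderPage :: EscapedHtml -> String
    renderPage (EscapedHtml s) = "<p>" ++ s ++ "</p>"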

2008/4/1, Bulat Ziganshin
Hello Simon,
Tuesday, April 1, 2008, 2:18:25 PM, you wrote:
How can one answer the question--why choose Haskell over Scheme?
1. Static typing with type inference - IMHO, a must for production code development. As many Haskellers have said, once the compiler accepts your program, you can be 95% sure that it contains no bugs. Just try it!
2. Lazy evaluation - reduces the complexity of the language. In particular, all control structures are ordinary functions, while in Scheme they are macros.
3. Great, terse syntax - actually, the best syntax among the several dozen languages I know.
4. The type-class machinery, together with type inference, means that code for dealing with complex data types (say, serialization) is generated on the fly and compiled right down to machine code.
In my opinion, (1) and (3), as they are stated, are a bit dangerous if you want to convince a Lisper: they represent two long-standing religious wars.

About (3), I see a trade-off: a rich syntax is great as long as we don't need macros. Thanks to lazy evaluation and monads, we rarely need macros in Haskell, even when writing DSLs. Sometimes, however, we do need macros (remember the arrow notation, whose need was at some point unforeseen). I think the only way we could compare the two is to make an s-expression syntax for Haskell, add macros to it (either hygienic, or with some kind of gensym), and (re)write some programs in both syntaxes. I bet it would be very difficult (if not impossible) to eliminate the trade-off.

About (1), in most (if not all) dynamic vs static debates, the dynamic camp argues that the safety brought by a static type system comes at the price of lost flexibility. If we compare duck typing and Hindley-Milner, they are right: heterogeneous collections are at best clumsy in Hindley-Milner, and overloading is near impossible. Thanks to some geniuses (could someone name them?), we have type classes and higher order types in Haskell (and even more). These two features eliminate most (if not all) need for a dynamic type system.

About (4), I think type classes shine even more on simple and mundane stuff. No more clumsy "+" for ints and "+." for floats. No more passing around the "compare" function. No more "should I return a Maybe type or throw an exception?" (monads can delay this question). No more <whatever I forgot>. For more impressive stuff, I think QuickCheck is a great example.

About (2), I'm clueless. The consequences of lazy evaluation are so far-reaching I wouldn't dare enter the "Lazy vs Strict" debate.

Loup
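As a sketch of the heterogeneous-collection point (illustrative names, using GHC's existential quantification extension): wrap values of any Show-able type in a single existential type, and a plain Haskell list becomes heterogeneous.

    {-# LANGUAGE ExistentialQuantification #-}

    -- Each element may have a different type, as long as it supports Show.
    data Showable = forall a. Show a => MkShowable a

    instance Show Showable where
      show (MkShowable x) = show x

    things :: [Showable]
    things = [MkShowable (42 :: Int), MkShowable "hello", MkShowable True]

    main :: IO ()
    main = mapM_ print things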

Loup Vaillant wrote:
Thanks to some geniuses (could someone name them?), we have type classes and higher order types in Haskell (and even more).
As far as names go:
... for type classes, of course Wadler, but also Blott and Kaes.
... for higher order types, well, where to start?

-- Dr. Janis Voigtlaender http://wwwtcs.inf.tu-dresden.de/~voigt/ mailto:voigt@tcs.inf.tu-dresden.de

Janis Voigtlaender wrote:
Loup Vaillant wrote:
Thanks to some geniuses (could someone name them?), we have type classes and higher order types in Haskell (and even more).
As far as names go:
... for type classes, of course Wadler, but also Blott and Kaes.
... for higher order types, well, where to start?
Girard and Reynolds? Regards, apfelmus

apfelmus wrote:
Janis Voigtlaender wrote:
Loup Vaillant wrote:
Thanks to some geniuses (could someone name them?), we have type classes and higher order types in Haskell (and even more).
As far as names go:
... for type classes, of course Wadler, but also Blott and Kaes.
... for higher order types, well, where to start?
Girard and Reynolds?
Yes, those are the obvious suspects, of course. But I'm not sure I would say they brought polymorphism (assuming that's what is meant by "higher order types") to Haskell. Not in the same way Wadler and co. brought type classes, quite specifically, to Haskell. -- Dr. Janis Voigtlaender http://wwwtcs.inf.tu-dresden.de/~voigt/ mailto:voigt@tcs.inf.tu-dresden.de

2008/4/2, Janis Voigtlaender
apfelmus wrote:
Janis Voigtlaender wrote:
Loup Vaillant wrote:
Thanks to some geniuses (could someone name them?), we have type classes and higher order types in Haskell (and even more).
As far as names go:
... for type classes, of course Wadler, but also Blott and Kaes.
... for higher order types, well, where to start?
Girard and Reynolds?
Yes, those are the obvious suspects, of course. But I'm not sure I would say they brought polymorphism (assuming that's what is meant by "higher order types") to Haskell. Not in the same way Wadler and co. brought type classes, quite specifically, to Haskell.
By "higher order types", I meant the type of runST (ST monad), or dpSwich (in yampa). I meant things like "(forall a, a-> b) -> a -> b", and maybe existential types as well. But now you mention it, I don't know who came up with mere parametric polymorphism. (Hindley, Damas, Milner?) Anyway, thanks for the names (now I have more papers to swallow :-). Loup

Loup Vaillant wrote:
By "higher order types", I meant the type of runST (ST monad), or dpSwich (in yampa). I meant things like "(forall a, a-> b) -> a -> b"
That's then usually called "higher-rank polymorphic types", just in case you need more keywords for literature search ;-) -- Dr. Janis Voigtlaender http://wwwtcs.inf.tu-dresden.de/~voigt/ mailto:voigt@tcs.inf.tu-dresden.de
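A tiny sketch of a rank-2 type in that spirit (illustrative only, requiring GHC's RankNTypes extension): the argument function must itself be polymorphic, which plain Hindley-Milner cannot express -- compare runST :: (forall s. ST s a) -> a.

    {-# LANGUAGE RankNTypes #-}

    -- The caller must supply a function that works at *every* type a;
    -- only then can it be applied to both an Int and a String.
    applyToBoth :: (forall a. a -> a) -> (Int, String) -> (Int, String)
    applyToBoth f (n, s) = (f n, f s)

    main :: IO ()
    main = print (applyToBoth id (3, "abc"))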

Mark Jones brought higher order polymorphism to Haskell.

On Wed, Apr 2, 2008 at 8:08 AM, Janis Voigtlaender <voigt@tcs.inf.tu-dresden.de> wrote:
apfelmus wrote:
Janis Voigtlaender wrote:
Loup Vaillant wrote:
Thanks to some geniuses (could someone name them?), we have type classes and higher order types in Haskell (and even more).
As far as names go:
... for type classes, of course Wadler, but also Blott and Kaes.
... for higher order types, well, where to start?
Girard and Reynolds?
Yes, those are the obvious suspects, of course. But I'm not sure I would say they brought polymorphism (assuming that's what is meant by "higher order types") to Haskell. Not in the same way Wadler and co. brought type classes, quite specifically, to Haskell.
-- Dr. Janis Voigtlaender http://wwwtcs.inf.tu-dresden.de/~voigt/ mailto:voigt@tcs.inf.tu-dresden.de

On Tue, Apr 1, 2008 at 1:02 PM, Bulat Ziganshin
Hello Simon,
Tuesday, April 1, 2008, 2:18:25 PM, you wrote:
How can one answer the question--why choose Haskell over Scheme?
Well as a longtime Scheme and OCaml programmer, and Haskell-cafe lurker, I'll take a stab at this...
1. Static typing with type inference - IMHO, a must for production code development. As many Haskellers have said, once the compiler accepts your program, you can be 95% sure that it contains no bugs. Just try it!
I think this is the biggest, and most obvious, difference to consider when choosing either Scheme or Haskell over the other -- for a particular problem. Dynamic and static typing each have their advantages, depending on the context. I think it's dangerous to try to answer the question "Scheme or Haskell?" without a problem context.
2. Lazy evaluation - reduces the complexity of the language. In particular, all control structures are ordinary functions, while in Scheme they are macros.
Well, if I don't have side effects (and don't mind extra, unneeded evaluations), I can write my conditionals as functions in Scheme too. Heck, now that I think of it, I can even avoid those extra evaluations and side-effect woes if I require promises for each branch of the conditional. No macros required...

I think some problems are just more naturally modeled with lazy thinking, and a language with implicit support for lazy evaluation is a _huge_ win then. I've written plenty of lazy Scheme (and OCaml, for that matter) code where I wished and wished that it just supported lazy evaluation semantics by default. Again, I think this is highly problem dependent, though I think you win more with lazy evaluation in the long run.

Do more experienced Haskellers than me have the opposite experience? I mean, do you ever find yourself forcing strict evaluation so frequently that you just wish you could switch on strict evaluation as a default for a while?
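For contrast, a minimal sketch of what Bulat's point 2 looks like in Haskell: a user-defined conditional is an ordinary function, with no macros or explicit promises, because arguments are evaluated only on demand.

    -- Only the branch that is actually returned ever gets evaluated.
    myIf :: Bool -> a -> a -> a
    myIf True  t _ = t
    myIf False _ e = e

    main :: IO ()
    main = print (myIf True 1 (error "never evaluated"))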
3. Great, terse syntax - actually, the best syntax among the several dozen languages I know.
I think this is possibly the weakest reason to choose Haskell over Scheme. Lispers like the regularity of the syntax of S-expressions, the fact that there is just one syntactic form to learn, understand, teach, and use. For myself, I find them to be exactly the right balance between terseness and expressiveness. For me, Haskell syntax can be a bit impenetrable at times unless I squint (and remember I'm also an OCaml programmer). Once you "get it," though, I agree that the brevity and expressiveness of Haskell is really beautiful.
4. The type-class machinery, together with type inference, means that code for dealing with complex data types (say, serialization) is generated on the fly and compiled right down to machine code.
This is obviously related to #1, and Haskell sure does provide a lot of fancy, useful machinery for manipulating types -- machinery whose functionality is tedious at best to mimic in Scheme, when it is possible at all.

In short, I think the original question must be asked in context. For some problems, types are just a natural way to start thinking about them. For others, dynamic typing, with _judicious_ use of macros to model key aspects, is the most natural approach. For the problems that straddle the fence, I usually pick the language I am most familiar with (Scheme) if there are any time constraints on solving it, and the language I'm least familiar with (Haskell, right now) if I have some breathing room and can afford to learn something in the process.

Cheers,
-Andy
-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

2008/4/1, Andrew Bagdanov
In short, I think the original question must be asked in context. For some problems, types are just a natural way to start thinking about them. For others, dynamic typing, with _judicious_ use of macros to model key aspects, is the most natural approach.
Do you have any example? I mean, you had to choose between Scheme and Ocaml, sometimes, right? Ocaml is not Haskell, but maybe the reasons which influenced your choices would have been similar if you knew Haskell instead of Ocaml. Cheers, Loup

On Tue, Apr 1, 2008 at 4:55 PM, Loup Vaillant
2008/4/1, Andrew Bagdanov
In short, I think the original question must be asked in context. For some problems, types are just a natural way to start thinking about them. For others, dynamic typing, with _judicious_ use of macros to model key aspects, is the most natural approach.
Do you have any example? I mean, you had to choose between Scheme and Ocaml, sometimes, right? Ocaml is not Haskell, but maybe the reasons which influenced your choices would have been similar if you knew Haskell instead of Ocaml.
Sure. This may not be the best example, but it's the most immediate one for me. I'll try to be brief and hopefully still clear...

Years ago I implemented an image processing system based on algorithmic patterns of IP defined over algebraic pixel types (algebraic in the ring, field, vector space sense). Here's a link to the chapter from my dissertation, for the chronically bored:

http://www.micc.unifi.it/~bagdanov/thesis/thesis_08.pdf

This was partially motivated by the observation that a lot of image processing is about _types_ and about getting them _right_. There's a complex interplay between the numerical, computational and perceptual semantics of the data you need to work with. A functional programming language with strict typing and type inference seemed ideal for modeling this. You get plenty of optimizations for "free" when lifting primitive operations to work on images (except OCaml functors really let me down here), and you don't have to worry about figuring out what someone means when convolving a greyscale image with a color image -- unless you've already defined an instantiation of the convolution on these types that has a meaningful interpretation. Where "meaningful" is of course up to the implementor.

In retrospect, if I had it all to do over again, I might choose Scheme over OCaml specifically because of dynamic typing. Or more flexible typing, rather. To completely define a new pixel datatype it is necessary to define a flotilla of primitive operations on it (e.g. add, mul, neg, div, dot, abs, mag, ...), but for many higher-level operations, only a handful were necessary. For example, for a standard convolution, mul and add are sufficient. In cases like this, I would rather explicitly dispatch at a high level -- in a way that admits partial implementations of datatypes to still play in the system. In retro-retrospect, the structural typing of OCaml objects could probably do this pretty well too... Oh well.

This is a case where the resulting system was difficult to use in the exploratory, experimental way it was intended to be used, in my opinion because typing got in the way. Strict typing and type inference were a huge boon for the design and implementation. I would consider Haskell this time around too (I think I did all those years ago as well), as I think lazy evaluation semantics, direct support of monadic style, and yes, even its terse syntax could address other aspects of the domain that are untouched. I don't have a clear enough understanding of or experience with Haskell type classes, but my intuition is that I'd have the same problems with typing as I did with OCaml.

Cheers,
-Andy
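For concreteness, a minimal Haskell sketch (illustrative names only, not Andrew's actual system) of the "mul and add are sufficient" idea via a type class: convolution is written once against a two-operation interface, and any pixel type implementing just those operations can play.

    -- A deliberately small pixel interface.
    class Pixel p where
      padd  :: p -> p -> p
      pmul  :: p -> p -> p
      pzero :: p

    instance Pixel Double where
      padd  = (+)
      pmul  = (*)
      pzero = 0

    -- 1-D convolution at a single window position, defined once for
    -- every Pixel type.
    convolve :: Pixel p => [p] -> [p] -> p
    convolve kernel window = foldr padd pzero (zipWith pmul kernel window)

    main :: IO ()
    main = print (convolve [0.25, 0.5, 0.25] [1.0, 2.0, 3.0 :: Double])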

Just random thoughts here. Andrew Bagdanov wrote:
Well, if I don't have side effects (and don't mind extra, unneeded evaluations), I can write my conditionals as functions in Scheme too. Heck, now that I think of it, I can even avoid those extra evaluations and side-effect woes if I require promises for each branch of the conditional. No macros required...
This is essentially doing lazy evaluation in Scheme. It's certainly possible; just clumsy. You must explicitly say where to force evaluation; but if you think about it, the run-time system already knows when it needs a value. This is very analogous to having type inference instead of explicitly declaring a bunch of types as in Java or C++.
Again, I think this is highly problem dependent, though I think you win more with lazy evaluation in the long run. Do more experienced Haskellers than me have the opposite experience? I mean, do you ever find yourself forcing strict evaluation so frequently that you just wish you could switch on strict evaluation as a default for a while?
The first thing I'd say is that Haskell, as a purely functional language that's close enough to the pure lambda calculus, has unique normal forms. Furthermore, normal order (and therefore lazy) evaluation is guaranteed to be an effective evaluation order for reaching those normal forms. Therefore, forcing strictness can never be needed to get a correct answer from a program. (Applicative order evaluation does not have this property.)

Therefore, strictness is merely an optimization. In some cases, it can improve execution time (by a constant factor) and memory usage (by a lot). In other cases, it can hurt performance by doing calculations that are not needed. In still more cases, it is an incorrect optimization and can actually break the code by causing certain expressions that should have an actual value to become undefined (evaluate to bottom). I've certainly seen all three cases.

There are certainly situations where Haskell uses a lot of strictness annotations. For example, see most of the shootout entries. In practice, though, code isn't written like that. I have rarely used any strictness annotations at all. Compiling with optimization in GHC is usually good enough. The occasional bang pattern (often when you intend to run something in the interpreter) works well enough.

(As an aside, this situation is quite consistent with the general worldview of the Haskell language and community. Given that strictness is merely an optimization of laziness, the language itself naturally opts for the elegant answer, which is lazy evaluation; and then Simon and friends work a hundred times as hard to make up for it in GHC!)
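The classic example of strictness as an optimization is summing a long list: lazy foldl builds a chain of unevaluated thunks, while foldl' (or a bang pattern) evaluates the accumulator as it goes. A small sketch:

    {-# LANGUAGE BangPatterns #-}
    import Data.List (foldl')

    sumLazy :: [Int] -> Int
    sumLazy = foldl (+) 0      -- builds (((0+1)+2)+3)+... as thunks

    sumStrict :: [Int] -> Int
    sumStrict = foldl' (+) 0   -- strict in the accumulator

    -- The same optimization written with an explicit bang pattern.
    sumBang :: [Int] -> Int
    sumBang = go 0
      where go !acc []     = acc
            go !acc (x:xs) = go (acc + x) xs

    main :: IO ()
    main = print (sumStrict [1 .. 1000000], sumBang [1 .. 1000000])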
I think this is possibly the weakest reason to choose Haskell over Scheme. Lispers like the regularity of the syntax of S-expressions, the fact that there is just one syntactic form to learn, understand, teach, and use.
I am strongly convinced, by the collective experience of a number of fields of human endeavor, that noisy syntax gets in the way of understanding. Many people would also say that mathematical notation is a bit impenetrable -- capital sigmas in particular seem to scare people -- but I honestly think we'd be a good ways back in the advancement of mathematical thought if we didn't have such a brief and non-obstructive syntax for these things.

Mathematicians are quite irregular. Sometimes they denote that y depends on x by writing y(x); sometimes by writing y_x (a subscript); and sometimes by writing y and suppressing x entirely in the notation. These are not arbitrary choices; they are part of how human beings communicate with each other, by emphasizing some things, and suppressing others. If one is to truly believe that computer programs are for human consumption, then striving for regularity in syntax doesn't seem consistent.

Initially, syntax appears to be on a completely different level from all the deep "semantic" differences; but they are in reality deeply interconnected. The earlier comment I made about it being clumsy to do lazy programming in Scheme was precisely that the syntax is too noisy. Other places where lazy evaluation helps, in particular compositionality, could all be simulated in Scheme, but you'd have to introduce excessive syntax. The result of type inference is also a quieter expression of code. So if concise syntax is not desirable, then one may as well throw out laziness and type inference as well. Also, sections and currying. Also, do notation. And so on.
In short, I think the original question must be asked in context. For some problems, types are just a natural way to start thinking about them. For others, dynamic typing, with _judicious_ use of macros to model key aspects, is the most natural approach.
I wouldn't completely rule out, though, the impact of the person solving the problem on whether type-based problem solving is a natural and useful way to solve problems. Indeed, I would guess this probably ends up being a greater factor, in the end, than the problem. Unfortunately, we (as a general programming community) have caged ourselves into a box with the mantra "use the right tool for the job" such that we find it far too easy to dismiss something as the wrong tool merely because it is unfamiliar. My not being able to use a tool well may be a good reason for me, but it's a lousy reason for me to advise someone else.

On Tue, Apr 1, 2008 at 5:37 PM, Chris Smith
Just random thoughts here.
Same here...
Andrew Bagdanov wrote:
Well, if I don't have side effects (and don't mind extra, unneeded evaluations), I can write my conditionals as functions in Scheme too. Heck, now that I think of it, I can even avoid those extra evaluations and side-effect woes if I require promises for each branch of the conditional. No macros required...
This is essentially doing lazy evaluation in Scheme. It's certainly possible; just clumsy. You must explicitly say where to force evaluation; but if you think about it, the run-time system already knows when it needs a value. This is very analogous to having type inference instead of explicitly declaring a bunch of types as in Java or C++.
Boy is it ever clumsy, and I like your analogy too. But lazy evaluation semantics typically come with purity, which is also a fairly heavy burden to foist onto the user... Certainly not without benefits, but at times a burden nonetheless...
Again, I think this is highly problem dependent, though I think you win more with lazy evaluation in the long run. Do more experienced Haskellers than me have the opposite experience? I mean, do you ever find yourself forcing strict evaluation so frequently that you just wish you could switch on strict evaluation as a default for a while?
The first thing I'd say is that Haskell, as a purely functional language that's close enough to the pure lambda calculus, has unique normal forms. Furthermore, normal order (and therefore lazy) evaluation is guaranteed to be an effective evaluation order for reaching those normal forms. Therefore, forcing strictness can never be needed to get a correct answer from a program. (Applicative order evaluation does not have this property.)
I thought that in a pure functional language any evaluation order was guaranteed to reduce to normal form. But then it's been a very, very long time since I studied the lambda calculus...
Therefore, strictness is merely an optimization. In some cases, it can improve execution time (by a constant factor) and memory usage (by a lot). In other cases, it can hurt performance by doing calculations that are not needed. In still more cases, it is an incorrect optimization and can actually break the code by causing certain expressions that should have an actual value to become undefined (evaluate to bottom). I've certainly seen all three cases.
There are certainly situations where Haskell uses a lot of strictness annotations. For example, see most of the shootout entries. In practice, though, code isn't written like that. I have rarely used any strictness annotations at all. Compiling with optimization in GHC is usually good enough. The occasional bang pattern (often when you intend to run something in the interpreter) works well enough.
(As an aside, this situation is quite consistent with the general worldview of the Haskell language and community. Given that strictness is merely an optimization of laziness, the language itself naturally opts for the elegant answer, which is lazy evaluation; and then Simon and friends work a hundred times as hard to make up for it in GHC!)
Yeah, I'm actually pretty convinced on the laziness issue. Lazy evaluation semantics are a big win in many ways.
I think this is possibly the weakest reason to choose Haskell over Scheme. Lispers like the regularity of the syntax of S-expressions, the fact that there is just one syntactic form to learn, understand, teach, and use.
I am strongly convinced, by the collective experience of a number of fields of human endeavor, that noisy syntax gets in the way of understanding. Many people would also say that mathematical notation is a bit impenetrable -- capital sigmas in particular seem to scare people -- but I honestly think we'd be a good ways back in the advancement of mathematical thought if we didn't have such a brief and non-obstructive syntax for these things. Mathematicians are quite irregular. Sometimes they denote that y depends on x by writing y(x); sometimes by writing y_x (a subscript); and sometimes by writing y and suppressing x entirely in the notation. These are not arbitrary choices; they are part of how human beings communicate with each other, by emphasizing some things, and suppressing others. If one is to truly believe that computer programs are for human consumption, then striving for regularity in syntax doesn't seem consistent.
All good points, but "noisy" is certainly in the eye of the beholder. I'd make a distinction between background and foreground noise. A simple, regular syntax offers less background noise. I don't have to commit lots of syntactic idioms and special cases to memory to read and write in that language. Low background noise in Scheme, and I'm responsible for whatever foreground noise I create with my syntactic extensions. Haskell, with more inherent syntax, has more background noise but perhaps limits the amount of foreground noise I can introduce because it constrains me from the beginning... OK, this analogy is starting to suck, so I'll move on... More on the mathematical notation analogy below.
Initially, syntax appears to be on a completely different level from all the deep "semantic" differences; but they are in reality deeply interconnected. The earlier comment I made about it being clumsy to do lazy programming in Scheme was precisely that the syntax is too noisy. Other places where lazy evaluation helps, in particular compositionality, could all be simulated in Scheme, but you'd have to introduce excessive syntax. The result of type inference is also a quieter expression of code. So if concise syntax is not desirable, then one may as well throw out laziness and type inference as well. Also, sections and currying. Also, do notation. And so on.
Well, yes. However, I think, and have found through personal experience, that notation is one of the most personal aspects of mathematics, and one of the most difficult things to get "right." Or even "pretty good." Good notation that conveys meaning without distraction and confusion (ah yes, the interconnection between syntax and semantics rears its head again) is extremely difficult to invent.

Like in mathematics, at the boundaries of creativity, I prefer the freedom to create my own notation and syntax in order to express my ideas about computation, and to adapt them to whatever community standards are relevant (physicists, signal processors, and image processing folks all have their own notations for convolution). Haskell has an elegant and expressive, but already quite defined and complex, syntax that is not particularly malleable. Returning to your line of reasoning, why should one have to commit to a single, already defined syntax instead of a minimal, regular one which admits the possibility of extending and modifying the syntax to fit my needs? It's a double-edged blade, obviously, because if I botch my syntax extensions I ruin the regularity and simplicity I originally had, and I don't communicate clearly to others or myself. It's the same risk as in mathematics, and just as hard to get "right."

Do notation and sections are both syntactic constructs that irk me at an almost visceral level about Haskell syntax, but I certainly see the convenience of both and recognize that my opinion is wholly subjective. That Scheme applications are non-currying by default is one of the things that irks me now about Scheme...
In short, I think the original question must be asked in context. For some problems, types are just a natural way to start thinking about them. For others, dynamic typing, with _judicious_ use of macros to model key aspects, is the most natural approach.
I wouldn't completely rule out, though, the impact of the person solving the problem on whether type-based problem solving is a natural and useful way to solve problems. Indeed, I would guess this probably ends up being a greater factor, in the end, than the problem. Unfortunately, we (as a general programming community) have caged ourselves into a box with the mantra "use the right tool for the job" such that we find it far too easy to dismiss something as the wrong tool merely because it is unfamiliar. My not being able to use a tool well may be a good reason for me, but it's a lousy reason for me to advise someone else.
Yes and no. I see your point about the person being central to the problem solving itself, and "the right tool for the job" implicitly assumes that there is one "right" solution for a problem. However, I can pretty much only base my advice to someone about what tools to use on my own experience with those tools (qualified, of course, with the upfront admission that it's based on personal experience), and I would likewise be wary myself of someone giving advice that wasn't tempered by experience. Purely theoretical discussions about the underpinnings of tools seem to be an equally lousy basis for such advice.

Cheers,
-Andy

I've just got a minute, so I'll answer the factual part. Andrew Bagdanov wrote:
I thought that in a pure functional language any evaluation order was guaranteed to reduce to normal form. But then it's been a very, very long time since I studied the lambda calculus...
If you don't have strong normalization, such as is the case with Haskell, then you can look at the language as being a restriction of the pure untyped lambda calculus. In that context, you know that: (a) a given expression has at most one normal form, so that *if* you reach a normal form, it will always be the right one; and (b) normal order evaluation (and therefore lazy evaluation) will get you to that normal form if it exists. Other evaluation strategies may or may not reach the normal form, even if the expression does have one.

You may be thinking of typed lambda calculi, which tend to be strongly normalizing. Unlike the case with the untyped lambda calculus, in sound typed lambda calculi every (well-typed) term has exactly one normal form, and every evaluation strategy reaches it. However, unrestricted recursive types break normalization. This is not entirely a bad thing, since a strongly normalizing language can't be Turing complete. So real-world programming languages tend to provide recursive types and other features that break strong normalization.

I'm sure there are others here who know this a lot better than I. I'm fairly confident everything here is accurate, but I trust someone will correct me if that confidence is misplaced.

-- Chris Smith
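Point (b) can be seen in two lines of Haskell (a sketch, with made-up names): the argument below has no normal form, but normal order evaluation never touches it, so the whole expression still reaches one.

    loop :: Int
    loop = loop                  -- has no normal form

    main :: IO ()
    main = print (const 42 loop) -- prints 42 under lazy evaluation; an
                                 -- applicative-order language would diverge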

On Tuesday 01 April 2008 12.18.25 Simon Peyton-Jones wrote:
How can one answer the question--why choose Haskell over Scheme?
Regards,
Douglas
For me, who is still a Haskell newbie, the following are the main reasons why I fell in love with Haskell and prefer it over Scheme/Common Lisp today.

1) Pattern matching. Being able to type, for example:

    fact 0 = 1
    fact n = n * fact (n - 1)

instead of having to write the conditionals and if/case statements every time I write a function is amazing. It makes simple functions _much_ shorter, easier to understand, and faster to write.

2) Static typing. Having static typing eliminates tons of bugs at compile time that wouldn't show up until runtime in a dynamic language, and does it while giving very clear error messages. And most of the time I don't even have to do anything to get it; the compiler figures it all out by itself.

3) Prettier syntax. Yes, S-expressions are conceptually amazing. Code is data is code, macros, backquotes and so on. But I've found that most of the code I write doesn't need any of that fancy stuff and is both shorter and clearer in Haskell.

4) List comprehensions. I fell in love with them in Python, and Haskell's version is even more beautiful/powerful. Being able to write in one line an expression that would normally require multiple 'map's and 'filter's is something I'll have a hard time living without now.

Later I've found even more reasons to prefer Haskell over Scheme, for example monads, classes, speed, parallelism, laziness, arrows and Parsec, but the list above are the first things that caught my eye and made me switch.

/Tomas Andersson
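To illustrate (4), here is the same function written once with map and filter and once as a comprehension (a small sketch):

    squaresOfEvens :: [Int] -> [Int]
    squaresOfEvens xs = map (^ 2) (filter even xs)

    -- The comprehension reads almost like set notation.
    squaresOfEvens' :: [Int] -> [Int]
    squaresOfEvens' xs = [x ^ 2 | x <- xs, even x]

    main :: IO ()
    main = print (squaresOfEvens' [1 .. 10])  -- [4,16,36,64,100]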

On Tue, Apr 1, 2008 at 3:18 AM, Simon Peyton-Jones
Dear Haskell Cafe members
Here's an open-ended question about Haskell vs Scheme. Don't forget to cc Douglas in your replies; he may not be on this list (yet)!
Simon
No one seems to have pointed out how friendly the Haskell community is. Not only can you email the language designers (and they respond) - they'll even help you answer your question! I don't want to encourage more unsolicited email to Simon, but I'm impressed. Justin

Simon Peyton-Jones
Dear Haskell Cafe members
Here's an open-ended question about Haskell vs Scheme. Don't forget to cc Douglas in your replies; he may not be on this list (yet)!
Simon
-----Original Message----- From: D. Gregor [mailto:kerrangster@gmail.com] Sent: 30 March 2008 07:58 To: Simon Peyton-Jones Subject: Haskell
Hello,
In your most humble opinion, what's the difference between Haskell and Scheme? What does Haskell achieve that Scheme does not? Is the choice less to do with the language, and more to do with the compiler? Haskell is a pure functional programming language, whereas Scheme is a functional language; does the word "pure" set Haskell that much apart from Scheme? I enjoy Haskell. I enjoy reading your papers on parallelism using Haskell. How can one answer the question--why choose Haskell over Scheme?
In my most humble of opinions, the comparison between Haskell and Scheme is just methodologically incorrect. What I mean is that these are actually different kinds of entities, even though both are called "programming languages".

In particular, Scheme is nothing but a minimal core of a programming language -- despite it being Turing complete, one can hardly write any serious, "real-world" program in pure Scheme, as defined by IEEE or whatever. So Scheme is, to my mind, what it is called -- a scheme, which different implementors supply with various handy additions. And we do not have any "leading" Scheme implementation that would count as a de facto definition of a "real" Scheme language. Thus we conclude that the term "Scheme" denotes not a programming language, but rather a family of programming languages.

On the other hand, Haskell, as defined by The Report (well, plus the FFI addendum), is a true, solid, "real-world" language which can actually be used for real-world programming as it is. And we do have a dominating implementation as well, etc. etc.

Thus: a methodologically correct comparison should be done either between two implementations (Bigloo vs GHC, or MIT Scheme vs Hugs, or Stalin vs Jhc, or whatever you like) or on the level of PL families, and then we'd have Scheme versus Haskell+Helium+Clean+maybe even Miranda+whatever else. In the latter case we'd have two choices again: comparing "upper bounds" or "lower bounds", that is, comparing sets of features provided by any representative of a class or by *all* representatives. Needless to say, the outcome would differ drastically depending on which way we take.

-- S. Y. A(R). A.

This one's easy to answer: When I studied Scheme, I did not have an uncontrollable urge to pore through arcane papers trying to find out what the heck a natural transformation was, or a Kleisli arrow, or wonder how you can download Theorems for Free instead of having to pay for them, or see if I really could write a program only in point-free fashion. Nor did I use to take perfectly working code and refactor it until it cried for mercy, and then stay awake wondering if there was some abstraction out there I was missing that would really make it sing.

You can debate the role of Haskell as a programming language per se, but when it comes to consciousness-raising, the jury is in... Haskell is my drug of choice!

Dan

Simon Peyton-Jones wrote:
Dear Haskell Cafe members
Here's an open-ended question about Haskell vs Scheme. Don't forget to cc Douglas in your replies; he may not be on this list (yet)!
Simon
-----Original Message----- From: D. Gregor [mailto:kerrangster@gmail.com] Sent: 30 March 2008 07:58 To: Simon Peyton-Jones Subject: Haskell
Hello,
In your most humble opinion, what's the difference between Haskell and Scheme? What does Haskell achieve that Scheme does not? Is the choice less to do with the language, and more to do with the compiler? Haskell is a pure functional programming language, whereas Scheme is a functional language; does the word "pure" set Haskell that much apart from Scheme? I enjoy Haskell. I enjoy reading your papers on parallelism using Haskell. How can one answer the question--why choose Haskell over Scheme?
Regards,
Douglas

On Tue, Apr 1, 2008 at 3:41 PM, Dan Weston
Nor did I use to take perfectly working code and refactor it until it cried for mercy, and then stay awake wondering if there was some abstraction out there I was missing that would really make it sing.
I find myself doing this in Scheme and Ruby too - it's actually one of my rules-of-thumb for picking languages that I'd like to invest in. martin
participants (14)

- Andrew Bagdanov
- apfelmus
- Artem V. Andreev
- Bulat Ziganshin
- Chris Smith
- Dan Weston
- Janis Voigtlaender
- Justin Bailey
- Lennart Augustsson
- Loup Vaillant
- Martin DeMello
- Simon Peyton-Jones
- Thomas Schilling
- Tomas Andersson