RE: newbie conceptual question [from haskell list]

[redirected from haskell@haskell.org to haskell-cafe@haskell.org]

David Hughes wrote:
It seems to me that I can still use the functional programming paradigm with an imperative language. How can I benefit more from a functional programming language (i.e. Haskell)?
Point One: functional languages are powerful. [..] Point Two: functional languages are beautiful. [..] Point Three: functional languages are robust; [..]
These things are nice, but the more I learn about functional languages, the more I feel that they are only icing on the cake. The most important thing about functional languages is that we know _what they are_, and that they can be characterized in a regular fashion which is amenable to analysis; in other words, we know how to reason about them, and what sorts of properties functional programs satisfy.

Saying that a language "satisfies nice properties" sounds abstract and useless to some people who just want to get things done, but I really believe that it helps not only researchers who know how to read and use formal semantics, but also everyday programmers, because they absorb some of these reasoning methods and then start using them intuitively and unconsciously. For example, every Haskell programmer has some grasp of referential transparency, and uses it implicitly every time he thinks about how to structure or rearrange his programs.

Of course, this argument holds for any "simple" language, not just functional ones, where by "simple" I mean "easy to reason about". For example, there are concurrent calculi of processes, object calculi and first-order algebraic languages which also share this property, and I prefer any of them to warty conventional languages. (And this is why I also like SML: even though it's not referentially transparent like Haskell, I know what sorts of properties it satisfies.)
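For instance, here is a minimal Haskell sketch (the names are invented for illustration) of the kind of rearrangement referential transparency licenses: since a pure expression denotes the same value wherever it appears, duplicated occurrences can be shared, or shared ones duplicated, without changing the program's meaning.

-- 'expensive' stands in for some costly pure computation.
expensive :: Int -> Int
expensive x = x * x

f1, f2 :: Int -> Int
f1 x = expensive x + expensive x         -- two syntactic occurrences
f2 x = let y = expensive x in y + y      -- shared; same meaning, possibly faster

main :: IO ()
main = print (f1 7 == f2 7)              -- always True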
Point Four: functional languages are difficult to learn. [..]
I would say "there is more in functional languages to learn about than there is in conventional languages". I get the feeling that programmers who come into FP from more conventional languages expect that FP will be just like C, only with a few new features and the syntax mixed up a bit, because if you look at the world of conventional languages---C, C++, Pascal, Java, Python, Perl, Ruby, FORTRAN---it is almost invariably true: there is hardly any difference between these languages, except surface syntax, and the fact that in some of them you have to macro-define what is a built-in in another. So they are all about the same level of difficulty, because there are really no genuinely new concepts, or at least they are not encapsulated in the language. Also, they are mostly implementation-defined: if you roughly know the von Neumann architecture, then you know what to expect.

Haskell (and ML) really is different. First, it's not implementation-defined. Second, they satisfy some genuinely new properties. Some of these languages have closures, for example, but they don't obey the same properties as Haskell's higher-order functions because you don't have referential transparency. So you have to start thinking, "OK, is this one of those situations where I _can't_ use this property?", and pretty soon that's all you're thinking about---the exceptions to the rule. (At least, this is what happens to me in, for example, Emacs LISP, where closures can only be passed downwards.) In Haskell, you never have to think about that, because though the context may affect the efficiency of a program, it cannot affect its meaning.
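A small sketch of the closure point, assuming nothing beyond standard Haskell ('adder' is an invented example name): a closure can be passed upwards, that is, returned from the function that created it, and remains meaningful afterwards, which is exactly what a downward-only closure discipline forbids.

adder :: Int -> (Int -> Int)
adder n = \x -> x + n     -- captures n; still valid after adder returns

main :: IO ()
main = print (map (adder 10) [1, 2, 3])   -- [11,12,13]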
Point Five: there's a dearth of libraries. [..]
Philip Wadler, "What The Hell Are Monads?"
The second one is by Noel Winstanley, not Wadler.

---
Frank Atanassow, Information & Computing Sciences, Utrecht University
Padualaan 14, PO Box 80.089, 3508TB Utrecht, The Netherlands
Tel +31 (0)30 253-3261 Fax +31 (0)30 251-3791

Frank Atanassow wrote:
[redirected from haskell@haskell.org to haskell-cafe@haskell.org]
David Hughes wrote:
It seems to me that I can still use the functional programming paradigm with an imperative language. How can I benefit more from a functional programming language (i.e. Haskell)?
Point One: functional languages are powerful. [..] Point Two: functional languages are beautiful. [..] Point Three: functional languages are robust; [..]
These things are nice, but the more I learn about functional languages, the more I feel that they are only icing on the cake. The most important thing about functional languages is that we know _what they are_, and that they can be characterized in a regular fashion which is amenable to analysis; in other words, we know how to reason about them, and what sorts of properties functional programs satisfy.
agreed, and this brings up another issue that has, up to this point, not been mentioned... the well-understood (and theoretically guaranteed) properties of functional languages allow compilers/interpreters to do some much smarter things with functional constructs... this allows very abstract language features to be implemented much more efficiently.

to return to the original question: "why not just write in a functional style in an imperative language?" the answer goes beyond the (not inconsiderable) convenience of having a syntax designed around the style. no matter how referentially transparent all of your c code is, for example, the c compiler cannot take advantage of that fact, because it cannot, in general, rely on it. in a haskell compiler, however, that feature of the language is a significant source of optimization opportunities.

the same goes for the type system. to implement higher-order functions and polymorphism in c or python, the burden is, in general, on the programmer to verify that the program remains well-typed. this commonly involves much wailing and gnashing of teeth, or, less commonly but more safely, potentially expensive runtime checks. in a functional language, the program is known to be typesafe at a relatively early phase of compilation, and the type system is well enough understood that any optimization the compiler performs will be type-preserving. so runtime checks are avoided, as is the gnashing of teeth. :) (see the sketch after this message.)

these facts, coupled with the fact that writing in a "functional style" in an imperative language in the most obvious way can be very bad for performance (e.g., too much recursion without tail-call optimization), motivate another very good reason to use a functional language when you want to write functional programs. to put it very simply, it's a case of choosing the best tool for the job.

aside: does anyone know whether the same arguments apply to object-oriented programming? i know that people have, in the past, taken object design principles and applied them in a purely procedural language, claiming that the conveniences offered by a truly oo language were unnecessary. does a truly oo language not offer substantial opportunities for optimization, or is it a matter of oop still not resting on a sound enough theoretical framework? can anyone recommend any good books/papers on this? is cardelli and abadi's "theory of objects" a decent place to start?

m

--
matt hellige  matt@immute.net  http://matt.immute.net
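A minimal sketch of the typing point above (the names are invented for illustration): one polymorphic, higher-order definition is checked once, at compile time, and reused at many types with no per-type copies and no runtime checks.

myMap :: (a -> b) -> [a] -> [b]
myMap _ []     = []
myMap f (x:xs) = f x : myMap f xs

main :: IO ()
main = do
  print (myMap (+ 1) [1, 2, 3 :: Int])  -- reused at Int
  print (myMap show [True, False])      -- and at Bool -> String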

I'd like to respectfully disagree with some of this :-)

On Wed, 25 Jul 2001, Frank Atanassow wrote:
These things are nice, but the more I learn about functional languages, the more I feel that they are only icing on the cake. The most important thing about functional languages is that we know _what they are_, and that they can be characterized in a regular fashion which is amenable to analysis; in other words, we know how to reason about them, and what sorts of properties functional programs satisfy.
Umm, I wouldn't agree with that exactly. If you ignore stupidities like specifying the mod operator % to be implementation defined, I would claim that C has/can be given semantics which are virtually as simple as Haskell. Equally I'd argue that we know as much about how to reason about imperative languages as we do functional ones. The point is that those semantics don't split the problem of reasoning about what's going on into nice independent subproblems like the semantics of Haskell do. Equally, I'd agree we don't know how to perform _effective_ reasoning on them, probably because it's pretty much impossible without restricting the set of allowable programs.
Saying that a language "satisfies nice properties" sounds abstract and useless to some people who just want to get things done, but I really believe that it helps not only researchers who know how to read and use formal semantics, but also everyday programmers, because they absorb some of these reasoning methods and then start using them intuitively and unconsciously. For example, every Haskell programmer has some grasp of referential transparency, and uses it implicitly every time he thinks about how to structure or rearrange his programs.
Again, as a C++ programmer I have some grasp of what program rearrangements are valid (E.g., I can't move an assignment involving an expression in another variable, say v, from before an assignment to v to after an assignment to v), and I use it implicitly every time I think about structuring or rearranging my programs. The problem is that that doesn't really help because determining whether there's an assignment to variable v between where an assignment is now and where I want to move it to is `decidedly non-trivial'.

What I'm basically trying to say is that I don't think it's `simplicity of semantics' that's the FP advantage but the _effectiveness in the world of people writing programs_ of those semantics. (Indeed I vaguely remember reading that John McCarthy didn't develop LISP as a computational realization of the existing theory of lambda calculus but rather because he thought programming with only functions would be an effective way of doing things. The `lambda' was an accident; a friend gave him a book on Church's theory late in the development and he admitted he didn't really understand it but thought the name lambda for the function definition construct was `cool'.)

And that advantage comes partly by restricting yourself to programs that allow this kind of reasoning; this is no different from the fact that I won't drive after having drunk any alcohol even though I might well in actuality still be safe, because it's easier to follow that rule than to figure out if I'm genuinely safe.
Of course, this argument holds for any "simple" language, not just functional ones, where by "simple" I mean "easy to reason about". For
As I say `easy to _effectively_ reason about' =/= simple semantics. I'm making this point because the natural rejoinder from a C programmer being told that C (as an imperative language) has more complex semantics than Haskell is to say `no it doesn't', which I agree with, the point being that Haskell has a semantics which makes it easier to reason about effectively. (I'm glossing over the distinction between the various kinds of semantics for a language such as denotational and operational here.)
example, there are concurrent calculi of processes, object calculi and first-order algebraic languages which also share this property, and I prefer any of them to warty conventional languages. (And this is why I also like SML: even though it's not referentially transparent like Haskell, I know what sorts of properties it satisfies.)
Now this to me is a really interesting statement. IIRC from many years ago there's not necessarily an indication in the type system when a function involves references. Do you actually reason (albeit subconsciously) using a full semantics which includes possible effects due to aliasing of references or changes to their contents somewhere deep in the functions, or do you use a rule `well this is 99% likely to be referentially transparent so I'll assume it is, but I'll keep in the back of my mind the possibility this assumption may be wrong'? (That's what I'd end up doing I think.)
Haskell (and ML) really is different. First, it's not implementation-defined.
Umm, again I'd say this is debatable: the volume of e-mail to/from Simon PJ about the haskell 98 report even though there are various Haskell 98 interpreters/compilers out there suggests this isn't true. Likewise, C and C++ have specifications which are met to a reasonable degree by many compilers out there, which prompts the question: when was the last time a bug in your code in an imperative language was due to an implementation-defined part of the language? In 5 years of intensive C++ programming I'd guess I've made maybe 25 bugs due to this, but the number of bugs I've fixed in my code must be in the tens of thousands.

FWIW, my take on the single biggest reason why I like and am more productive in situations where I can use a functional language over an imperative language is essentially being shielded from temptation: because in a typical imperative language you tend to have to implement a case of `map f xs', for example, by a separate routine that knows what the types of f and xs are (and maybe even what f is) and where you have to explicitly manipulate things via temporary variables and array indices, there's an almost impossible urge (often even subconscious) to specialise and `optimise' things in ways which are in reality very error prone and difficult to understand. In contrast, not being able to do these sorts of nasty things in Haskell leads me to do things the next-simplest way, which is generally using higher order functions and the like (where things by definition can't go wrong in many of the ways of C).

___cheers,_dave________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm |tweed's law: however many computers
email: tweed@cs.bris.ac.uk      | you have, half your time is spent
work tel: (0117) 954-5250       | waiting for compilations to finish.

David Tweed wrote:
I'd like to respectfully disagree with some of this :-)
I figured someone would. Though I thought it would be Fergus. :)
The most important thing about functional languages is that we know _what they are_, and that they can be characterized in a regular fashion which is amenable to analysis; in other words, we know how to reason about them, and what sorts of properties functional programs satisfy.
Umm, I wouldn't agree with that exactly. If you ignore stupidities like specifying the mod operator % to be implementation defined, I would claim that C has/can be given semantics which are virtually as simple as Haskell. Equally I'd argue that we know as much about how to reason about imperative languages as we do functional ones. The point is that those semantics don't split the problem of reasoning about what's going on into nice independent subproblems like the semantics of Haskell do. Equally, I'd agree we don't know how to perform _effective_ reasoning on them, probably because it's pretty much impossible without restricting the set of allowable programs.
Effectively reason, OK. This is what I meant when I wrote "amenable to analysis."
Again, as a C++ programmer I have some grasp of what program rearrangements are valid (E.g., I can't move an assignment involving an expression in another variable, say v, from before an assignment to v to after an assignment to v), and I use it implicitly every time I think about structuring or rearranging my programs. The problem is that that doesn't really help because determining whether there's an assignment to variable v between where an assignment is now and where I want to move it to is `decidedly non-trivial'.
I think doing source-level reasoning on C++ programs is virtually impossible. I always have to think about the implementation in terms of memory blocks and vtables and pointers. One reason must be that assignment is used all over the place when local let-binding would suffice. If C++ actually enforced encapsulation of state, that would not be so bad, but in practice you never know what sort of aliasing and sharing is going on.
Of course, this argument holds for any "simple" language, not just functional ones, where by "simple" I mean "easy to reason about". For
As I say `easy to _effectively_ reason about' =/= simple semantics. I'm making this point because the natural rejoinder from a C programmer being told that C (as an imperative language) has more complex semantics than Haskell is to say `no it doesn't', which I agree with, the point being that Haskell has a semantics which makes it easier to reason about effectively. (I'm glossing over the distinction between the various kinds of semantics for a language such as denotational and operational here.)
Again, this is what I meant by "simple". Actually, I have to admit that when I think about Haskell programs, I don't really reason using Haskell's semantics, but rather the equational theory of a polymorphic lambda-calculus, and I hardly ever think about the operational side. If you idealized C in the same way, you would probably ignore things like pointers and aliasing which cause problems. But the thing is that in Haskell I can pretty much get away with this idealization and still write interesting programs, but in C you really need these complications to do useful things. So I think FP languages have a better separation of concerns.
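For a concrete instance of the sort of equational step described here, consider the standard map-fusion law, map f . map g = map (f . g), which holds in a pure language; a minimal sketch:

lhs, rhs :: [Int]
lhs = map (+ 1) (map (* 2) [1 .. 5])   -- two passes, textually
rhs = map ((+ 1) . (* 2)) [1 .. 5]     -- one pass; equal by the law

main :: IO ()
main = print (lhs == rhs)              -- True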
example, there are concurrent calculi of processes, object calculi and first-order algebraic languages which also share this property, and I prefer any of them to warty conventional languages. (And this is why I also like SML: even though it's not referentially transparent like Haskell, I know what sorts of properties it satisfies.)
Now this to me is a really interesting statement. IIRC from many years ago there's not necessarily an indication in the type system when a function involves references. Do you actually reason (albeit subconsciously) using a full semantics which includes possible effects due to aliasing of references or changes to their contents somewhere deep in the functions, or do you use a rule `well this is 99% likely to be referentially transparent so I'll assume it is, but I'll keep in the back of my mind the possibility this assumption may be wrong'? (That's what I'd end up doing I think.)
Frankly, I do my best to avoid side-effecting code in ML. The fact that you can do this and still write useful programs is, again, an indication that the separation of concerns is better than in C, where it is pretty much impossible to go without side-effects and aliasing. But when I have to use effects, I guess I make conservative judgements about substitutability and try to localize them as much as possible. I also have started thinking about it "monadically" to some extent, so, yes, I guess I am doing a bit of informal extra type inference in my head. Finally, ML is not completely oblivious to effects; for example, to preserve soundness, it disallows generalizing type variables in certain places which fail a syntactic condition, and that is something you can use yourself to figure out which expressions are likely to be impure.
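In Haskell this "informal extra type inference" is done by the compiler: whether code can touch a reference is visible in its type. A minimal sketch of that discipline (this is the Haskell analogue, not ML's value restriction itself, and the names are invented):

import Data.IORef

pureTwice :: Int -> Int            -- no effects possible, by type
pureTwice x = x + x

bumpTwice :: IORef Int -> IO Int   -- effects are recorded in the type
bumpTwice r = do
  modifyIORef r (+ 1)
  modifyIORef r (+ 1)
  readIORef r

main :: IO ()
main = do
  r <- newIORef (0 :: Int)
  n <- bumpTwice r
  print (pureTwice n)              -- prints 4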
Haskell (and ML) really is different. First, it's not implementation-defined.
Umm, again I'd say this is debatable: the volume of e-mail to/from Simon PJ about the haskell 98 report even though there are various Haskell 98 interpreters/compilers out there suggests this isn't true. Likewise, C and C++ have specifications which are met to a reasonable degree by many compilers out there, which prompts the question: when was the last time a bug in your code in an imperative language was due to an implementation-defined part of the language? In 5 years of intensive C++ programming I'd guess I've made maybe 25 bugs due to this, but the number of bugs I've fixed in my code must be in the tens of thousands.
First, this is not what I mean by "implementation-defined" (my fault). What I mean is the model of computation: in C, you are always thinking about blocks on the heap, pointers between them, whether something is on the stack or not, what order things are happening in and what order this other code you wrote last year assumes them to be happening in, etc. So you are still thinking, essentially, at the level of the von Neumann architecture. Sure, some features which affect portability are hidden away, e.g., how long a word of memory is, but those are details, and you can name things and refer to them more nicely, but it's still just assembly programming in disguise. In an FP language, and more generally languages which support an abstract semantics, I don't have to think about those things. Instead, I can think about a problem at a whole different level of abstraction which is closer to the problem domain.

Second, I am aware that this is an oversimplification and that C/C++ are not completely implementation-defined, and you can do some reasoning about sequence points and local stores and so on. But does that happen in practice? They just aren't taught that way, and that's also part of the problem.

Third, I don't think it is true that there are many (any?) compilers which meet the C++ specification to a reasonable degree. C++ must be the single worst implemented language in computer history.

--- Frank Atanassow

matt hellige wrote:
this brings up another issue that has, up to this point, not been mentioned... the well-understood (and theoretically guaranteed) properties of functional languages allow compilers/interpreters to do some much smarter things with functional constructs... this allows very abstract language features to be implemented much more efficiently. to return to the original question: "why not just write in a functional style in an imperative language?" the answer goes beyond the (not inconsiderable) convenience of having a syntax designed around the style.
Good point, and this reminds me of another point I wanted to mention. You can replicate functional features in conventional languages, but you lose another benefit of typed functional programming: machine-checked documentation of your code.

When I write a program, I typically know much, much more about that program than what is expressed in the source code I write. For example, in C I may know that two procedures do not interfere, or that they are re-entrant, or that a variable is never aliased. Sometimes knowing these facts is crucial to maintaining the program. Then you have to make sure that it gets communicated to the maintainer by stating it somewhere in the documentation. And you have to make sure that what you're saying is actually correct, or you end up causing even more confusion. So keeping accurate and up-to-date documentation is a big part of the task of writing a C program. (I think for C++ it's probably even more important because you need to define for each method not only what it does, but what it should do in the future, so that when it's overridden in a derived class, that contract is followed.)

By writing a program in a typed functional language, you are proving facts about your program, facts which are encoded in an apparent manner in the source code itself, and not some external documentation. You are proving that disjoint programs do not interfere, that order of evaluation doesn't matter, whether they (reduce to programs which) have effects, etc. There's also safety, and "theorems for free". Then there are other properties which are obvious (to a programmer) in a Haskell program which get buried in the equivalent C(++) program, e.g., that every member of a data structure is traversed in a fold ("no early exits"). Many of these things hinge on the typing of a program, which is inferred and checked for you, so there is less of a burden on conscientious programmers.

These facts always get driven home to me when I program in C or even untyped functional languages like Scheme. I always seem to end up writing down in documentation stuff that I would leave implicit in a typed language, because it's apparent from the source code, and/or automatically checkable by merely compiling the source code.

--- Frank Atanassow
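A small sketch of a "theorem for free" (the definition of f is invented for illustration): any total f :: [a] -> [a] can only rearrange, drop, or duplicate elements of its input, so the type alone guarantees f . map g = map g . f for every g, with no documentation needed.

f :: [a] -> [a]
f = reverse . take 2    -- any total definition with this type works here

main :: IO ()
main = print (f (map (* 10) [1 .. 5]) == map (* 10) (f [1 .. 5]))  -- True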

On Thu, 26 Jul 2001, Frank Atanassow wrote:
Again, as a C++ programmer I have some grasp of what program rearrangements are valid (E.g., I can't move an assignment involving an expression in another variable, say v, from before an assignment to v to after an assignment to v), and I use it implicitly every time I think about structuring or rearranging my programs. The problem is that that doesn't really help because determining whether there's an assignment to variable v between where an assignment is now and where I want to move it to is `decidedly non-trivial'.
I think doing source-level reasoning on C++ programs is virtually impossible. I always have to think about the implementation in terms of memory blocks and vtables and pointers.
One reason must be that assignment is used all over the place when local let-binding would suffice. If C++ actually enforced encapsulation of state, that would not be so bad, but in practice you never know what sort of aliasing and sharing is going on.
Note: I'm trying to compare like-with-like, so when I talk about reasoning below it's in the context of the same sort of semi-formal, in-my-head reasoning that I do in my everyday development of C++ or Haskell.

Yes, I guess it's time for a confession: I'm making a rather sweeping assumption that the patterns in which I do and don't program are in some way `average' or `typical', even though they probably aren't. For instance, I don't even use patterns like `a[b++]=c;' just because it requires too many `mental cycles' for me to figure out what's going on when I reread the code; most other C programmers probably would use that construct, I expect. Having said that, within this sort of style I find that I don't really (have to) go beyond the idea of a machine with a store of named variables, some of which are actually arrays or have other structure, and tend to use the simple rule `if you assign/read outside array bounds or to a pointer location which does not map within a store defined variable/variable array, the program may (in a way which is so dependent on history as to be effectively non-deterministic) either place gibberish in any other stored variable cell, return gibberish, return sensible values and have no observable ill effects or seg-fault'.

Whilst you're certainly right to say that a proper semantic model (even an informal one) ought to incorporate the differences between stack allocated variables and heap allocated variables, I genuinely don't think I use that distinction whilst thinking about my programs because it's too complicated for me. Likewise I've had one bug in 5 years due to a vtable problem, and that was when I knew I was pushing my luck trying to do a really low-level optimisation for efficiency. The bugs that I have tend to fall into three big categories: incorrect algorithms (i.e., I'd misunderstood the problem even at a high level), writing outside array bounds, and misaccounting for the assignments and their ordering. Class 1 is a problem even when I write in Haskell; classes 2 and 3 tend to be analysed by sticking print outputs everywhere, and then reasoning using my very simple mental model to figure out why the observed behaviour is happening and fix it.

The reason that I'm belabouring this point is that I don't think that imperative programmers are going to be convinced by an argument that `functional programming is more effective because there are a huge number of complexities in the semantics, and even some downright ill-definedness/inconsistency, compared to functional programming', because even my natural response is `that's entirely true about the semantics but, _given that those problems all tend to lie in areas of the "program state space" where I don't write programs_, why should that motivate me to use a functional language?' I still think the compelling argument is not that there are all these nasty tar-pits of complexity in the semantics of an imperative language, say C++, that aren't there in functional languages, but that in the code which makes up 99.99% of imperative programs and where most of the bugs occur, there are simple semantics but you just can't use them in any effective way. In contrast you _can_ do effective reasoning about the same sorts of thing in functional languages because their roughly equally simple semantics are more effective.
Actually, I have to admit that when I think about Haskell programs, I don't really reason using Haskell's semantics, but rather the equational theory of a polymorphic lambda-calculus, and I hardly ever think about the operational side. If you idealized C in the same way, you would probably ignore things like pointers and aliasing which cause problems.
Umm, as I stated above, I think I use a very simple model that lets me think about aliasing and pointers. I guess the key point is that I don't try and reason about what goes wrong in detail when `undefined pointer access occurs', merely that all sorts of nasty things can happen unpredictably.
First, this is not what I mean by "implementation-defined" (my fault). What I mean is the model of computation: in C, you are always thinking about blocks on the heap, pointers between them, whether something is on the stack or not, what order things are happening in and what order this other code you wrote last year assumes them to be happening in, etc. So you are still thinking, essentially, at the level of the von Neumann architecture.
Again I agree in principle, but I don't see that reasoning using a denotational semantics for Haskell is less `implementation defined' than thinking about a C program in terms of local stores and sequence points; arguably when you're using your model of Haskell you're thinking at the level of a non-deterministic-rewriting-machine architecture :-) .
Sure, some features which affect portability are hidden away, e.g., how long a word of memory is, but those are details, and you can name things and refer to them more nicely, but it's still just assembly programming in disguise. In an FP language, and more generally languages which support an abstract semantics, I don't have to think about those things. Instead, I can think about a problem at a whole different level of abstraction which is closer to the problem domain.
Again: agree in principle; in practice...
Third, I don't think it is true that there are many (any?) compilers which meet the C++ specification to a reasonable degree. C++ must be the single worst implemented language in computer history.
If you assume that, in any given language, all programs in that language are equally likely to be written, then I'd agree that C++ would easily give you the highest ratio of rejected/implementation-defined programs of any language when you consider a random sample of programs over this distribution. What I think is a more interesting question is: I'm a (cautious) human being who doesn't produce programs from all over the program state space but clustered in certain areas that are both useful and more understandable. That being the case, is the above ratio now significantly higher than for other languages? In my (very limited) experience it doesn't seem that much higher.

I still think trying to `sell' FP on the basis of huge problems with imperative languages that largely don't affect programmers, because they don't write programs that involve these problems, is a mistake; it's the fact that even in the nice, well-defined area of imperative programs where most programmers live, things are much more complicated than in a functional language, that makes more sense.

___cheers,_dave

On Thu, 26 Jul 2001, Frank Atanassow wrote:

There's also safety, and "theorems for free". Then there are other properties which are obvious (to a programmer) in a Haskell program which get buried in the equivalent C(++) program, e.g., that every member of a data structure is traversed in a fold ("no early exits"). Many of these things hinge on the typing of a program, which is inferred and checked for you, so there is less of a burden on conscientious programmers.
I'm being terribly unfair here; this was probably just a simple slip when writing a hurried e-mail but if you mean what I think you mean about the fold:

undefd = undefd

f y x | y == 'a'  = "finished"
      | otherwise = y : x

g xs = foldr f "" xs

Main> g ('a':undefd)
"finished"

shows that folds can exit early; if it didn't it'd black hole forever.

___cheers,_dave

I threw this example out:
every member of a data structure is traversed in a fold ("no early exits")
D. Tweed wrote:
I'm being terribly unfair here; this was probably just a simple slip when writing a hurried e-mail but if you mean what I think you mean about the fold:
undefd = undefd
f y x | y == 'a'  = "finished"
      | otherwise = y : x
g xs = foldr f "" xs
Main> g ('a':undefd)
"finished"
shows that folds can exit early; if it didn't it'd black hole forever.
Hmm, is that a counterexample? That list has one member, 'a', which is indeed traversed... But that's a consequence of the linearity of lists. If you defined a fold over trees and then tried to evaluate n:

data Tree a = Fork (Tree a) (Tree a) | Leaf a

fold fork leaf (Fork t t') = fork (fold fork leaf t) (fold fork leaf t')
fold fork leaf (Leaf a)    = leaf a

n = fold max id (Fork undefined (Leaf 5))

you would indeed miss a member.

--- Frank Atanassow

D. Tweed wrote:
Yes, I guess it's time for a confession: I'm making a rather sweeping assumption that the patterns in which I do and don't program are in some way `average' or `typical', even though they probably aren't. For instance, I don't even use patterns like `a[b++]=c;' just because it requires too many `mental cycles' for me to figure out what's going on when I reread the code; most other C programmers probably would use that construct, I expect. Having said that, within this sort of style I find that I don't really (have to) go beyond the idea of a machine with a store of named variables, some of which are actually arrays or have other structure, and tend to use the simple rule `if you assign/read outside array bounds or to a pointer location which does not map within a store defined variable/variable array, the program may (in a way which is so dependent on history as to be effectively non-deterministic) either place gibberish in any other stored variable cell, return gibberish, return sensible values and have no observable ill effects or seg-fault'.
My reaction to that is: you are not programming in C. If you restrict yourself to nice subsets of a programming language, then obviously your programs will satisfy better properties. Viewed in this way, Haskell is a (completely different) subset of C. But of course you lose the opportunity for the compiler and standard tools (which includes the standard semantics) to take advantage of these nicer properties, and are justified then in complaining to the language designers, "Hey, why did you put all this cruft in there? I don't need it, and it makes it harder to use the language!"

The point of having a language, IMO, is that its laws govern, _universally_, all the programs you can write in it. Otherwise you would just say, "Yeah, C is really hard to reason about, but if you consider all my C programs _individually_ the sublanguages they are written in actually satisfy nice properties." So, in the limit you might specialize your `language' differently to every single program you write. By that token, then, all programming languages become equal, and we have reduced this discussion to triviality.

I am not denying that you can have a nice imperative language (although I think that `just' a global store is too unstructured to support good metaproperties for reasoning). I think I even said something to that effect, that it's not the paradigm which matters, but rather the existence of nice properties, or effectively simple semantics, as you said. But C is not such a language.

--- Frank Atanassow

On Thu, 26 Jul 2001, Frank Atanassow wrote:
My reaction to that is: you are not programming in C. If you restrict yourself to nice subsets of a programming language, then obviously your programs will satisfy better properties.
That's certainly a reasonable position to take. All I'm saying is that in a purely pragmatic sense, if we say that I'm not programming in C, what proportion of the people out there who compile their programs with C compilers aren't programming in C either? I still think that, from the purely pragmatic point of view of giving arguments as to why someone using an imperative language (in the way that people actually do use those languages, _rather than in a way that they conceivably could use them_) would be better off using a functional language, it's more convincing to argue about the problems that _actually do affect them_ and can be handled better in a functional language than to talk about the extreme problems that very rarely occur in practice.
Viewed in this way, Haskell is a (completely different) subset of C. But of course you lose the opportunity for the compiler and standard tools (which includes the standard semantics) to take advantage of these nicer properties, and are justified then in complaining to the language designers, "Hey, why did you put all this cruft in there? I don't need it, and it makes it harder to use the language!"
I've never written a Haskell program using functional dependencies, or existential classes, ...
The point of having a language, IMO, is that its laws govern, _universally_, all the programs you can write in it. Otherwise you would just say, "Yeah, C is really hard to reason about, but if you consider all my C programs _individually_ the sublanguages they are written in actually satisfy nice properties."
Clearly that would be true, but there's a really extreme clumpiness in the distribution over the state space: virtually all my programs are written in the same sublanguage.
So, in the limit you might specialize your `language' differently to every single program you write. By that token, then, all programming languages become equal, and we have reduced this discussion to triviality.
That sounds nice in principle. However, I don't think that there are in the actual world out there enough coders to write `imperative language with scalar variables', `imperative language with scalar variables and subroutines' (language A), `imperative language with scalar variables, subroutines, and structured values' (language B), ... Equally, I don't want to write a program in language A and then, on the day I decide it's getting so complicated I need structures, have to rewrite it in language B. In reality every language out there is a compromise between nice features, having a feasibly sized user base and tool-writing base, wrangling on standards committees, and just plain luck/human stupidity/etc.

Given that this is the case and that I've decided that, for whatever reason, I'm writing code that will be compiled using a C compiler, do I face lots of problems and bugs due to all the weird and excessive `cruft' in the C language? I still honestly believe that I don't: most of the problems that I face would be exactly the same in a much simpler imperative language. And I think they're fundamentally due to the fact that imperative assignment has a (simple to state and understand) semantics which simply cannot be used to reason effectively about programs. (BTW, your point about tools being potentially better in the simpler language is taken, although I still doubt that there's much that they aren't doing already that would suddenly become tractable in such a language.)
I am not denying that you can have a nice imperative language (although I think that `just' a global store is too unstructured to support good metaproperties for reasoning).
Ah, that's strange because I _am_ asserting that. What I'm asserting is that it's the very fundamental concepts like assignment to variables that make effective reasoning very difficult for most of the bugs that actually occur in the code that people write.

D. Tweed wrote:
On Thu, 26 Jul 2001, Frank Atanassow wrote:
My reaction to that is: you are not programming in C. If you restrict yourself to nice subsets of a programming language, then obviously your programs will satisfy better properties.
That's certainly a reasonable position to take. All I'm saying is that in a purely pragmatic sense, if we say that I'm not programming in C, what proportion of the people out there who compile their programs with C compilers aren't programming in C either?
Interesting point. That says something about C.
I still think that, from the purely pragmatic point of view of giving arguments as to why someone using an imperative language (in the way that people actually do use those languages, _rather than in a way that they conceivably could use them_) would be better off using a functional language, it's more convincing to argue about the problems that _actually do affect them_ and can be handled better in a functional language than to talk about the extreme problems that very rarely occur in practice.
I could agree that that follows, but I have no evidence that all or even most C programmers program in the same restricted subset, or any proper subset at all.
I've never written a Haskell program using functional dependencies, or existential classes, ...
I find them indispensable, and I know for a fact that I am not the only one around our office who feels that way. Though, the people around here (Utrecht's software technology group) are not exactly typical programmers... :)
Clearly that would be true, but there's a really extreme clumpiness in the distribution over the state space: virtually all my programs are written in the same sublanguage.
I can believe this holds for your programs, but, uh... your clumps are not necessarily in the same place as other people's clumps. Please don't misinterpret that. :)
So, in the limit you might specialize your `language' differently to every single program you write. By that token, then, all programming languages become equal, and we have reduced this discussion to triviality.
That sounds nice in principle.
<sigh> Everything I say seems to sound nice in principle. Maybe I am just a hopeless idealist... Anyway, the point of my quoted remark above was that I would like to have more concrete boundaries for evaluating languages. I was trying to demonstrate that equating the properties of a language with the properties of any of its subsets is nihilistic. I gather that your claim is that most C programmers use the same subset, or at least one among a family of subsets.
do I face lots of problems and bugs due to all the weird and excessive `cruft' in the C language? I still honestly believe that I don't: most of the problems that I face would be exactly the same in a much simpler imperative language. And I think they're fundamentally due to the fact that imperative assignment has a (simple to state and understand) semantics which simply cannot be used to reason effectively about programs.
Hallelujah! I understand what you're saying now! You're saying that C is bad, not because of the cruft, but because of assignment. Correct?
I am not denying that you can have a nice imperative language (although I think that `just' a global store is too unstructured to support good metaproperties for reasoning).
Ah, that's strange because I _am_ asserting that. What I'm asserting is that it's the very fundamental concepts like assignment to variables that make effective reasoning very difficult for most of the bugs that actually occur in the code that people write.
OK, I understand. I used to share this view, and I agree except that I don't think assignment is bad, only unrestricted assignment on a global store. What convinced me is this paper, which you should read:

@article{reddy96global,
  author  = "Uday S. Reddy",
  title   = "Global State Considered Unnecessary: An Introduction to Object-Based Semantics",
  journal = "Lisp and Symbolic Computation",
  volume  = "9",
  number  = "1",
  pages   = "7--76",
  year    = "1996"
}

You can get it here: ftp://cs.uiuc.edu/pub/faculty/reddy/papers/global.object.ps.Z

--- Frank Atanassow

Aha, we head towards convergence :-)

On Thu, 26 Jul 2001, Frank Atanassow wrote:
I've never written a Haskell program using functional dependencies, or existential classes, ...
I find them indispensable, and I know for a fact that I am not the only one around our office who feels that way. Though, the people around here (Utrecht's software technology group) are not exactly typical programmers... :)
I'm sure that lots of people do, and maybe once I find time to get my head around them I'll find them indispensable too. I was just trying to argue (and cheating a bit, because they're not actually part of the language as defined by the latest `written', as opposed to de-facto, standard) that the fact that I don't use them doesn't seem to me to be a reason for me to step back from the latest hugs and ghc and use, say, the Haskell 1.3 language and the compilers/interpreters from that date, which is much closer to the `Haskell sub-language' that I write programs in. Yet you seemed (as far as I could see) to be suggesting that if I wasn't going to use everything available in C I ought to change to a simpler language where I did use everything.
I can believe this holds for your programs, but, uh... your clumps are not necessarily in the same place as other people's clumps.
Please don't misinterpret that. :)
:-)
do I face lots of problems and bugs due to all the weird and excessive `cruft' in the C language? I still honestly believe that I don't: most of the problems that I face would be exactly the same in a much simpler imperative language. And I think they're fundamentally due to the fact that imperative assignment has a (simple to state and understand) semantics which simply cannot be used to reason effectively about programs.
Hallelujah! I understand what you're saying now!
You're saying that C is bad, not because of the cruft, but because of assignment. Correct?
Yes, I'm saying that, defining `bad' to mean `causes problems and bugs in the programs that I write' (clearly there could be other definitions appropriate in other contexts), I think that's 99.9% due to assignment.
OK, I understand. I used to share this view, and I agree except that I don't think assignment is bad, only unrestricted assignment on a global store. What convinced me is this paper, which you should read:
Thanks, I'll be very interested to read this.

___cheers,_dave

On 26-Jul-2001, Frank Atanassow wrote:
David Tweed wrote:
I'd like to respectfully disagree with some of this :-)
I figured someone would. Though I thought it would be Fergus. :)
Hey, give me time ;-) I've been on holiday for a while and I'm only just
catching up with haskell-cafe now!
--
Fergus Henderson

On 25-Jul-2001, D. Tweed wrote:
Umm, I wouldn't agree with that exactly. If you ignore stupidities like specifying the mod operator % to be implementation defined, I would claim that C has/can be given semantics which are virtually as simple as Haskell.
But there are so *many* such "stupidities". If you took out all of the things in C that had implementation-defined, unspecified, or undefined behaviour, then what you'd be left with wouldn't be C. If you were talking about some other C-like language, such as LP-C, Java, or C#, then I think your point might be valid. But the differences between C and higher-level imperative (or OOP) languages should not be ignored.
Umm, again I'd say this is debatable: the volume of e-mail to/from Simon PJ about the haskell 98 report even though there are various Haskell 98 interpreters/compilers out there suggests this isn't true.
I think almost all of the discussion about the Haskell Report relates to either (1) library design or (2) questions that affect program legality, not program behaviour. It's very very rare for two Haskell implementations to both accept the same program but for the program to behave differently on the different implementations.
Likewise, C and C++ have specifications which are met to a reasonable degree by many compilers out there, which prompts the question: when was the last time a bug in your code in an imperative language was due to an implementation-defined part of the language? In 5 years of intensive C++ programming I'd guess I've made maybe 25 bugs due to this, but the number of bugs I've fixed in my code must be in the tens of thousands.
If you also include areas which are unspecified or undefined, as well
as those that are implementation-defined, I think it would cover a
very substantial proportion of those tens of thousands of bugs.
How many times have you written code that accidentally referenced
an uninitialized variable, for example? Or that accessed memory
after it had already been deallocated? Or that tried to access
past array bounds? These are very common kinds of errors.
--
Fergus Henderson

On 25-Jul-2001, matt hellige wrote:
agreed, and this brings up another issue that has, up to this point, not been mentioned... the well-understood (and theoretically guaranteed) properties of functional languages allow compilers/interpreters to do some much smarter things with functional constructs... this allows very abstract language features to be implemented much more efficiently.
That's nice, but it's not essential. I think the effect here is sometimes exaggerated. Could you be more specific about exactly which kinds of optimizations you are referring to here? If/when multiple-CPU machines become common, so that automatic parallelization is a serious issue, then it will be much more important. But currently the optimizations you're talking about don't make a huge difference, IMHO. Let's see, there's

(a) better opportunities for optimizing away code whose results are not used (lazy evaluation, and related optimizations);
(b) better opportunities for reordering code;
(c) advantages from knowing that data is immutable; and
(d) better opportunities for avoiding heap allocation, because there are no problems with object identity or mutability (e.g. this allows optimizations such as static allocation of constant terms, common sub-expression elimination, and deforestation).

I think (b) and (c) rarely make substantial differences. (a) and (d) can be very important, but in those cases if you're programming in an imperative language you can always do the optimization manually.
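As a hedged sketch of point (d): because the list below is immutable and has no observable identity, a compiler may "deforest" it, i.e. compile the composed pipeline into a direct accumulating loop like the hand-written one. The names are invented, and the equivalence, not the exact transformation any particular compiler performs, is the point.

sumDoubled :: Int -> Int
sumDoubled n = go 1 0
  where
    go i acc
      | i > n     = acc                       -- loop finished
      | otherwise = go (i + 1) (acc + 2 * i)  -- no intermediate list built

main :: IO ()
main = print (sum (map (* 2) [1 .. 100]) == sumDoubled 100)  -- True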
to return to the original question: "why not just write in a functional style in an imperative language?" the answer goes beyond the (not inconsiderable) convenience of having a syntax designed around the style.
I think this is a lot more important than the optimization issues that you mentioned above. And while syntax is important, it's not just a matter of the syntax; semantic issues such as avoiding undefined/unspecified/ implementation-defined behaviour are important too, and there's also the standard library to consider. If you're going to be programming in a functional style it helps a lot to have a standard library that can support that style well.
aside: does anyone know whether the same arguments apply to object-oriented programming?
IMHO, yes, they do.
i know that people have, in the past, taken object design principles and applied them in a purely procedural language, claiming that the conveniences offered by a truly oo language were unnecessary.
It's true that these conveniences aren't necessary. But if you're planning to write a lot of code in an OOP style, they are certainly useful. I think this argument has mainly been heard in cases where there were other advantages to sticking with the procedural language.
does a truly oo language not offer substantial opportunities for optimization,
No, there are definitely some important optimizations that can be applied to OOP languages which are hard to apply to OOP programs written in procedural languages. In particular inlining of virtual method calls is one such important optimization, but this is more difficult to do in most procedural languages because there is no guarantee that the vtable remains constant.
can anyone recommend any good books/papers on this?
I suggest looking at papers on optimization of OOP programs,
e.g. the papers on the Cecil/Vortex project
http://www.cs.washington.edu/research/projects/cecil/.
Probably a browse through the proceedings of OOPSLA would
find lots of other relevant references.
--
Fergus Henderson

Important confession since Fergus is in the discussion: I've not actually read any of the C or C++ standards; I've got an impression of what they say from various textbooks and the gcc mailing lists.

On Fri, 27 Jul 2001, Fergus Henderson wrote:
But there are so *many* such "stupidities".
If you took out all of the things in C that had implementation-defined, unspecified, or undefined behaviour, then what you'd be left with wouldn't be C.
If you were talking about some other C-like language, such as LP-C, Java, or C#, then I think your point might be valid. But the differences between C and higher-level imperative (or OOP) languages should not be ignored.
I'm sure you're right. However, this seems to fit paradoxically with my personal experience, and that of the people that I discuss this stuff with at work. This may be something to do with the fact that we're developing programs of the scientific kind (e.g., image segmentation) and that virtually everyone tends to unconsciously follow the KISS (keep it simple, stupid) principle: our goal is very much to produce programs that `do cool things' rather than `are cool due to the way they are coded'. And I _still_ don't think that there are that many bugs that hit me due to implementation-definedness or weird corner-cases in the syntax.

If you had a time machine, took a piece of code that I'll write in the future with a bug in it, and asked me to figure out what the bug was, I bet that in most cases, given just enough time, patience, paper and coffee, I could find the bug without a computer, and it wouldn't involve the compiler defining something one way whereas I thought the standard defined it another. I generally don't produce programs with bugs because the compiler implements some particular construct differently to the way that I think it should, but because I can't keep the whole program and the interactions between all its parts straight in my very limited mind.

(I feel like a bee-keeper at the apocryphal dinner party where it's said that by the laws of physics bees can't fly: all the other parties in this discussion are much more technically competent and well-read than me and yet my actual experiences don't seem to accord with their conclusions :-) )
Likewise, C and C++ have specifications which are met to a reasonable degree by many compilers out there, which prompts the question: when was the last time a bug in your code in an imperative language was due to an implementation-defined part of the language? In 5 years of intensive C++ programming I'd guess I've made maybe 25 bugs due to this, but the number of bugs I've fixed in my code must be in the tens of thousands.
If you also include areas which are unspecified or undefined, as well as those that are implementation-defined, I think it would cover a very substantial proportion of those tens of thousands of bugs.
How many times have you written code that accidentally referenced an uninitialized variable, for example? Or that accessed memory after it had already been deallocated? Or that tried to access past array bounds? These are very common kinds of errors.
I haven't been very precise in my terminology (and, as I say, haven't actually read the standards). When I talk about something being implementation defined I mean that the precise details of how it is defined in an implementation need to be understood for a `bug-free' program to work. I've been treating a statement (which I'm not claiming to have read verbatim anywhere) like `accessing memory outside allocated array bounds can non-deterministically corrupt any part of the overall state of the program, return correct or gibberish results, or cause the program to abort' as a perfectly reasonable, defined statement of what to expect. It can certainly be viewed as either `under'-defined, or as making what actually happens implementation defined. But I don't consider that when I screw up by allocating memory outside array bounds to be a bug due to implementation-definedness; it's a bug because I didn't spot some interaction that caused the variable that I was accessing the array at to become a value that was too big.

But to some extent I've been arguing about C just because I think I'm right, rather than because I think it's important to the overall question. Suppose that I'm working in Java. I'm told that if an access is made to an index outside the range of an array, it throws an exception. How would you classify the following situation:

* within some routine f, a variable v is set up which holds the (valid) index at which I want to access the array at the start of the routine;
* somewhere deep in a lot of code, something ERRONEOUSLY assigns a value to the variable v which is outside the range of the array;
* a read of the array at index v is attempted, and an exception is thrown;
* a catch block governing f catches the exception and prints out `Attempt to access array outside its range.'

Nothing here is talking about unspecified semantics or implementation-definedness. Has the actual problem which is causing my program not to be usable for its intended purpose (`the bug') gone away? Is the bug fundamentally different to what it would be in a C implementation? If you were asked what the cause of the bug was, would you say that it was anything other than the erroneous assignment to v?

Basically my argument was that in an imperative language the semantics do not generally cause the problem of figuring out why v has the wrong value (or, equivalently, being sure when I wrote the code that v has the correct value) to fall apart into sufficiently small problems that effective reasoning becomes possible. On the other hand, I believe the semantics of functional languages _do_ have this `meta-property'.

___cheers,_dave_________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm | tweed's law: however many computers
email: tweed@cs.bris.ac.uk      | you have, half your time is spent
work tel: (0117) 954-5250       | waiting for compilations to finish.
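A minimal Haskell sketch of that `meta-property' (an invented example, not code from the thread): in a pure function the index is bound exactly once, so nothing deep in a lot of code can reassign it behind your back, and the question `why does v have the wrong value?' stays local to the expressions that compute it.

    -- 'v' is an argument, bound once; no other part of the program can
    -- mutate it, so checking this function means checking only what is
    -- written here and the call sites that supply 'v'
    lookupAt :: [a] -> Int -> Maybe a
    lookupAt xs v
      | v >= 0 && v < length xs = Just (xs !! v)
      | otherwise               = Nothing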

To correct the incomprehensible sentence... On Fri, 27 Jul 2001, D. Tweed wrote:

The sentence should have read: `But I don't consider that when I screw up by accessing memory outside array bounds to be a bug due to implementation-definedness; it's a bug because I didn't spot some interaction that caused the variable that held the index I was accessing the array at to become a value that was too big.'
___cheers,_dave_________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm | tweed's law: however many computers
email: tweed@cs.bris.ac.uk      | you have, half your time is spent
work tel: (0117) 954-5250       | waiting for compilations to finish.

Fergus Henderson wrote:
Could you be more specific about exactly which kinds of optimizations you are referring to here?
If/when multiple-CPU machines become common, so that automatic parallelization is a serious issue, then these optimizations will be much more important. But currently the optimizations you're talking about don't make a huge difference, IMHO. Let's see, there's

(a) better opportunities for optimizing away code whose results are not used (lazy evaluation, and related optimizations);
(b) better opportunities for reordering code;
(c) advantages from knowing that data is immutable; and
(d) better opportunities for avoiding heap allocation, because there are no problems with object identity or mutability (e.g. this allows optimizations such as static allocation of constant terms, common sub-expression elimination, and deforestation).
I think (b) and (c) rarely make substantial differences. (a) and (d) can be very important, but in those cases if you're programming in an imperative language you can always do the optimization manually.
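A small Haskell illustration of (a), (c) and (d) (a sketch with invented names; a compiler is permitted, not obliged, to perform these rewrites): because the function below is pure, the two calls can be shared, and if the result is never demanded it is never computed at all.

    -- pure: same argument, same result, no side effects
    expensive :: Int -> Int
    expensive x = sum [1 .. x * x]

    g :: Int -> Int
    g y = expensive y + expensive y
    -- safely rewritable by the compiler to:
    --   g y = let r = expensive y in r + r
    -- and if g's result is never used, 'expensive' need never run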
i think we're in agreement, but you said it better... :) i was only really talking about optimizations which are related to cases that are very common in a functional language.

for example, in a functional style one is encouraged to create and apply numerous small functions, as well as to make heavy use of recursion. so compiler writers have focused a great deal of effort on the efficient implementation of these features. in an imperative language, on the other hand, those features are generally considered to be less important, and are often recognized performance hits. so if you write in a functional style in c, for example, and make very heavy use of recursion, you can get in deep trouble!

of course, as you say, these optimizations can always be done by hand, but usually that involves "optimizing" from a functional style to an imperative style (e.g., converting recursion to iteration), so then what's the point? my point about the semantics of the language making these optimizations possible was, i think, oblique to my main point, and perhaps only muddied the waters...

m

--
matt hellige  matt@immute.net  http://matt.immute.net
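a small sketch of the recursion point (an invented example, assuming ghc-style compilation): idiomatic haskell recursion like this runs in constant stack space, whereas writing the same shape with deep non-tail recursion in c risks blowing the stack.

    -- tail-recursive with an accumulator forced by seq: the compiler
    -- turns the recursive call into a jump, so the stack stays flat
    -- no matter how large n gets
    sumTo :: Int -> Int
    sumTo n = go 0 1
      where
        go acc i
          | i > n     = acc
          | otherwise = let acc' = acc + i in acc' `seq` go acc' (i + 1)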

D. Tweed wrote:
(I feel like a bee-keeper at apocryphal dinner party where it's said that by the laws of physics bees can't fly: all the other parties in this discussion are much more technically competent and well-read than me and yet my actual experiences don't seem to accord with their conclusions :-) )
As a fellow bee-keeper, let me say I'm in complete agreement with you. The problem is not that the semantics of C is incomplete or that the semantics of C is complex, but that it is hard to use effectively. I've talked with many non-FPers about the relative merits of C and Haskell (or ML) and found that:

1) Arguments that C's semantics are complex are so implausible or abstruse ("honest, you don't want to go messing with those assignment statements - they'll just make your life miserable") that they destroy my credibility.

2) Arguments that C is incompletely specified lead to exactly the same arguments we've seen in this discussion, about there being a portable subset which suffices for most code we write.

So, while there's an element of truth in both arguments, it is completely unproductive to use them. Much better to concentrate on Haskell's strengths than on C's weaknesses. A great starting list is John Hughes' "Why Functional Programming Matters". Then, depending on the audience: show them list comprehensions, throw in Phil Wadler's "Theorems for Free", talk about equational reasoning, show them what Haskell's standard libraries provide (contrast with what C and C++ provide, or how useful their libraries are in practice), talk about embedded domain-specific languages (parser combinators, Fran, etc.), show them a binary tree implementation (see the sketch below), show them a complete compiler written in N lines of code, show them Haskell programs with GUIs (to dispel the belief that Haskell can't interact with the outside world or that Haskell can't use existing libraries written in other languages), show them FFT written in Squiggol, show them a derivation of FFT (from the Discrete Fourier Transform) in Squiggol, ...

At least half of these rely on Haskell's simple semantics. (This is indirect in the case of the standard libraries - the simple semantics mean that I don't need multiple versions to deal with different kinds of side-effect or exception-throwing behaviour.)

I also talk frankly about the relatively weak state of debugging support, and the fact that I have no effective way of reasoning about space usage or execution time in Haskell programs (just tools for examining them empirically). (These are significant obstacles, since I currently work in an OS group with a bias towards Real Time Embedded Systems :-). Fortunately, they let me write my compiler in Haskell.)

--
Alastair Reid  reid@cs.utah.edu  http://www.cs.utah.edu/~reid/
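For instance, the binary tree demo mentioned above can be as short as this (a sketch with invented names, not Alastair's actual demo):

    -- a complete binary search tree with insertion and membership test
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    insert :: Ord a => a -> Tree a -> Tree a
    insert x Leaf = Node Leaf x Leaf
    insert x t@(Node l y r)
      | x < y     = Node (insert x l) y r
      | x > y     = Node l y (insert x r)
      | otherwise = t            -- already present; tree unchanged

    member :: Ord a => a -> Tree a -> Bool
    member _ Leaf = False
    member x (Node l y r)
      | x < y     = member x l
      | x > y     = member x r
      | otherwise = True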

Hi,

Frank Atanassow wrote:
D. Tweed wrote:
I've never written a Haskell program using functional dependencies, or existential classes, ...
I find them indispensable, and I know for a fact that I am not the only one around our office who feels that way. Though the people around here (Utrecht's software technology group) are not exactly typical programmers... :)
I've been recently experimenting quite a bit with existential classes, and somewhat less with functional dependencies, primarily to help my understanding of the concepts. However, I've yet to be able to think of an appropriate place to use them in the "real world", that is, in something more than a toy thought-experiment. Could you give some examples of what you are using them for?

--
Hal Daume III                                   | hdaume@isi.edu
"Computer science is no more about computers    | www.isi.edu/~hdaume
 than astronomy is about telescopes." -Dijkstra |
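For readers following along, here is a minimal sketch of what existential classes look like in practice (invented names; an illustration only, not a claim about what the Utrecht group uses them for): the Widget wrapper hides the concrete type of its contents, keeping only the ability to render it, so one list can hold differently-typed values.

    {-# LANGUAGE ExistentialQuantification #-}

    class Render a where
      render :: a -> String

    instance Render Int  where render = show
    instance Render Bool where render b = if b then "yes" else "no"

    -- the 'forall' hides the element's concrete type inside Widget;
    -- all that survives is the Render constraint
    data Widget = forall a. Render a => Widget a

    renderAll :: [Widget] -> [String]
    renderAll ws = [ render w | Widget w <- ws ]

    -- e.g. renderAll [Widget (3 :: Int), Widget True] == ["3","yes"]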

As a followup to my own mail about productive ways to discuss Haskell with C programmers, here's some mail sent to me by a colleague who normally hacks on operating systems in C but was trying out the C-Kit (a very cool C parsing and typechecking library, unfortunately written in SML). It's worth knowing that we'd recently been reading a lot of papers about efficiently detecting errors in C code (think "lint on crack").
So I like to joke with you about functional-language people creating an infinite research program with respect to program optimization, but after spending some time writing SML I think maybe it's the imperative people creating an infinite research program with respect to making it easy to develop non-buggy code...
--
Alastair Reid  reid@cs.utah.edu  http://www.cs.utah.edu/~reid/
participants (6)

- Alastair David Reid
- D. Tweed
- Fergus Henderson
- Frank Atanassow
- Hal Daume
- matt hellige