
One of the things I liked a lot when working with C# was that as soon as my code compiled, it usually worked after an iteration or two. At least if we forget about the nasty imperative debugging that is needed after a while because of unanticipated and unchecked runtime side effects. After having read about Haskell and having written some small programs with it over the last year or so, I'm now finally writing a bigger program with it. It is not so easy yet, since learning a language and trying to reach a deadline at the same time is hard :) However, it is just amazing that whenever my Haskell program compiles (which, to be fair, can take a while for an average Haskeller like me ;-), it just... works! I had heard rumors that this was the case, but I can really confirm it. A big hurray for strong typing! (And if you don't like it, you can still use Dynamic and Typeable ;-)

On Sun, Feb 15, 2009 at 12:51:38AM +0100, Peter Verswyvelen wrote:
However, it is just amazing that whenever my Haskell program compiles (which to be fair can take a while for an average Haskeller like me ;-), it just... works! I have heard rumors that this was the case, but I can really confirm it.
Indeed. You have to be careful or you will start expecting your perl scripts to just work if your editor is capable of saving the file to disk. Which can lead to all sorts of trouble :) John -- John Meacham - ⑆repetae.net⑆john⑈

I have been learning Haskell for the last two weeks and was relaying that
exact benefit to my friend in an attempt to convert him. I spent 3 hours
getting a few functions to compile, but when they do, they just work. Every
time.
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
-- We can't solve problems by using the same kind of thinking we used when we created them. - A. Einstein

As this topic popped up, my secrets for programming in Haskell are three words: assert, HUnit, QuickCheck.

- Create internal functions that verify the results of the exported ones (or perhaps an easier-to-verify implementation that is slower), and put them in assert's. This has saved me a few times.
- Create HUnit tests before writing the function itself (but after writing its type), and automate their execution (maybe with test-framework-hunit). This way you can test your implementation after type-checking, optimizations and/or bug fixes.
- If you can write down properties and Arbitrary's easily, do so and wire everything up with the HUnit tests. I don't use QuickCheck as much as I use HUnit, because it is often impossible to write meaningful Arbitrary instances (I haven't tried SmallCheck yet, but most of the time the problem is some kind of data type with lots of invariants to be maintained).

Creating tests before the implementation + Haskell = very few bugs + almost no regressions.

-- Felipe.
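Felipe's first technique, cross-checking a fast exported function against a slower but obviously correct reference inside assert's, can be sketched with Control.Exception.assert from base. The function names here are invented for illustration:

```haskell
import Control.Exception (assert)

-- Fast, exported implementation: closed-form sum of 1..n.
sumTo :: Int -> Int
sumTo n = n * (n + 1) `div` 2

-- Slow but obviously correct reference, used only inside asserts.
sumToRef :: Int -> Int
sumToRef n = sum [1 .. n]

-- Checked wrapper: the assert throws if the two disagree, and
-- disappears entirely when compiled with -O or -fignore-asserts.
checkedSumTo :: Int -> Int
checkedSumTo n = let r = sumTo n in assert (r == sumToRef n) r

main :: IO ()
main = print (checkedSumTo 10)  -- prints 55
```

In ghc, asserts are on by default and report the source location of the failing assertion, so this costs nothing in release builds but catches divergence between the two implementations during development.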

I've found the same thing. An interesting observation is that (for me) the vast majority of the type errors are things that would've happened in *any* statically typed language (like C#), but somehow Haskell manages to be a lot better at catching errors at compile time. So my conclusion is that it's not just static typing, it's functional programming in conjunction with static strong type checking. When all you're writing are expressions, then *everything* goes through some level of "sanity checking". When you're writing imperative code, a lot of the meaning of your program comes from the ordering of statements - which usually isn't checked at all (aside from scope). So IMO static typing is good, but it's only with functional programming that it really shines. -- Sebastian Sylvan +44(0)7857-300802 UIN: 44640862

At Sun, 15 Feb 2009 05:37:12 +0000, Sebastian Sylvan wrote:
So my conclusion is that it's not just static typing, it's functional programming in conjunction with static strong type checking.
Indeed. For example, it's pretty hard to accidentally use an 'uninitialized variable' in Haskell, because variables can only be introduced using a let statement or a lambda expression, which both require that the name be bound to something. And, in languages like C, if you write:

----------------------
if (foo)
    statement1;
else
    statement2;
    statement3;
statement4;
----------------------

you might be misled into thinking that statement3 is part of the conditional. In Haskell, if you write:

----------------------
do if foo
     then do statement1
     else do statement2
             statement3
   statement4
----------------------

then the visual layout gives you the correct idea.

I think the lack of automatic type casting and C++-style name overloading also helps. If you explicitly state what you want done, you are more likely to get what you want than if you let the compiler do it according to some rules that you may or may not remember. I have had the unfortunate experience of adding 1 + 1 and getting 11 in some languages. But not in Haskell.

By using folds and maps, we avoid many off-by-one errors.

So, I would agree that it is not just static type checking, but a whole bunch of little things that all add up.

- jeremy

2009/2/15 Sebastian Sylvan
I've found the same thing. An interesting observation is that (for me) the vast majority of the type errors are things that would've happened in *any* statically typed language (like C#), but somehow Haskell manages to be a lot better at catching errors at compile time. So my conclusion is that it's not just static typing, it's functional programming in conjunction with static strong type checking. When all you're writing are expressions, then *everything* goes through some level of "sanity checking". When you're writing imperative code, a lot of the meaning of your program comes from the ordering of statements - which usually isn't checked at all (aside from scope). So IMO static typing is good, but it's only with functional programming that it really shines.
Don't forget algebraic data types. Those also seem to avoid many of the sorts of errors that you would see in OO or struct-based (i.e., C) programming. Has anyone seen any real studies of this phenomenon? There is plenty of anecdotal evidence that Haskell is doing something right to reduce bugs, but (1) some hard evidence would be nice and (2) it's not very clear which features of Haskell contribute most to this. (On that note, IIRC there was a study that correlated bug rates to lines of code *independent* of language: writing your program in half as many lines, or in a language that allowed you to express it in half as many lines, reduced the number of bugs by half. This is one area in which Haskell does well.) Michael D. Adams mdmkolbe@gmail.com

Hello,
2009/2/15 Michael D. Adams
Has anyone seen any real studies of this phenomenon? There is plenty of anecdotal evidence that Haskell is doing something right to reduce the bugs
Let's just call it a "miracle of FP", write many books and articles on the matter (i.e., generate a lot of hype), strike gold, and get filthy rich. ;-) Cheers, Artyom Shalkhakov.

"Michael D. Adams"
A big hurray for strong typing!
Don't forget Algebraic Data Types. Those seem to also avoid many of the sorts of errors that you would see in OO or struct-based (i.e. C) programming.
I think the combination of algebraic data types and strong typing is very potent. Good data modeling lets you build data types that encode/model the legal/valid domain for the data in your application. The narrower your data model, the less room for nonsensical programs. Strong typing enforces the limitations in the data model, and prevents programmers from cheating. -k -- If I haven't seen further, it is by standing in the footprints of giants
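Ketil's point about narrowing the data model can be made concrete with a small sketch (types and names invented for illustration): instead of using an ordinary list plus a comment saying "never empty", encode the invariant in the type so the compiler enforces it and partial functions become total:

```haskell
-- A list that is non-empty by construction: there is simply no way
-- to build an empty value of this type, so the invariant cannot be
-- broken and no runtime check is needed.
data NonEmpty a = a :| [a]

infixr 5 :|

-- Total: no pattern-match failure is possible, unlike head on [a].
safeHead :: NonEmpty a -> a
safeHead (x :| _) = x

-- Likewise total, whereas maximum [] crashes at runtime.
safeMaximum :: Ord a => NonEmpty a -> a
safeMaximum (x :| xs) = foldr max x xs

main :: IO ()
main = print (safeMaximum (3 :| [1, 4, 1, 5]))  -- prints 5
```

The narrower type turns a whole class of "empty input" bugs into compile-time errors: any caller must prove, by constructing a NonEmpty value, that the precondition holds.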

So IMO static typing is good, but it's only with functional programming that it really shines.
You can go one step further: if you start using dependent types, you'll see that it gets yet harder to get your program to type-check, and once it does, you don't even bother to run it since it's so blindingly obvious that it's correct. Stefan

This must be why there are no good compilers for dependently typed languages. :)
On Sun, Feb 15, 2009 at 9:40 PM, Stefan Monnier
So IMO static typing is good, but it's only with functional programming that it really shines.
You can go one step further: if you start using dependent types, you'll see that it gets yet harder to get your program to type-check, and once it does, you don't even bother to run it since it's so blindingly obvious that it's correct.
Stefan

How practical is this dependent types thing? I hear a lot about this from
really clever people who are usually 10 years ahead of their time :)
Actually, back in the eighties when I was an assembly language hacker, I
didn't want to switch to Pascal or C since I found the types in those
languages too weak. C++ changed that with templates, and then I switched
(only to find out that no C++ compiler existed that would not crash on my
fully templated programs ;-).
What I really wanted was a way to program the type checks myself: verify
constraints/assertions at compile time, and if the constraint or assertion
could not be checked at compile time, get a warning or an error (or bottom if
the custom type-checking program is stuck in an endless loop ;-)
Of course back then I was even more naive than I am now, so those things are
easier said than done I guess.
But if I understand it correctly, dependent types are a bit like that,
values and types can inter-operate somehow?
On Sun, Feb 15, 2009 at 9:40 PM, Stefan Monnier
So IMO static typing is good, but it's only with functional programming that it really shines.
You can go one step further: if you start using dependent types, you'll see that it gets yet harder to get your program to type-check, and once it does, you don't even bother to run it since it's so blindingly obvious that it's correct.
Stefan

It's true that you can "program" the type checker even more if you
have dependent types, but first you should look into what you can do
in Haskell. You can do a lot with type classes.
-- Lennart
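As a taste of what Lennart means, here is a minimal sketch (using GHC extensions available at the time) of computing with type classes at the type level: Peano numerals that exist only as types, whose value the class machinery recovers at runtime:

```haskell
{-# LANGUAGE EmptyDataDecls #-}
{-# LANGUAGE ScopedTypeVariables #-}

-- Type-level Peano numerals: these types have no values; they exist
-- only so the type checker can count.
data Zero
data Succ n

-- A value-level stand-in for a type (invented here; similar helpers
-- exist in libraries).
data Proxy n = Proxy

class Nat n where
  natVal :: Proxy n -> Int

instance Nat Zero where
  natVal _ = 0

-- The recursive instance does "computation" during instance
-- resolution; ScopedTypeVariables lets the body mention n.
instance Nat n => Nat (Succ n) where
  natVal _ = 1 + natVal (Proxy :: Proxy n)

main :: IO ()
main = print (natVal (Proxy :: Proxy (Succ (Succ (Succ Zero)))))  -- prints 3
```

The same trick scales to type-level arithmetic, sized containers, and so on, which is roughly what packages like type-level provide.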

On Sunday, 15 February 2009 at 23:00, Peter Verswyvelen wrote:
But if I understand it correctly, dependent types are a bit like that, values and types can inter-operate somehow?
With dependent types, parameters of types can be values. So you can define a data type List which is parameterized by the length of the list (and the element type):

----------------------
data List :: Nat -> * -> * where  -- The kind of List contains a type.
    Nil  :: List 0 el
    Cons :: el -> List len el -> List (succ len) el
----------------------

And you have functions where the result type can depend on the actual argument:

----------------------
replicate :: {len :: Nat} -> el -> List len el
-- We have to name the argument so that we can refer to it.
----------------------

So the type of replicate 0 'X' will be List 0 Char, and the type of replicate 5 'X' will be List 5 Char.

Dependent typing is very good for things like finding index-out-of-bounds errors and breaches of data structure invariants (e.g., search tree balancing) at compile time. But you can even encode complete behavioral specifications into the types. For example, there is the type of all sorting functions. Properly implemented Quicksort and Mergesort functions would be values of this type, but the reverse function wouldn't. Personally, I have also thought a bit about dependently typed FRP where types encode temporal specifications.

Dependent types are really interesting. But note that you can simulate them to a large degree in Haskell, although especially dependent functions like replicate above need nasty workarounds. You may want to have a look at Haskell packages like type-level.

Best wishes,
Wolfgang
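A sketch of the Haskell simulation Wolfgang mentions: GHC's GADTs can fake the length-indexed List above, with type-level naturals standing in for the Nat index (names invented for illustration):

```haskell
{-# LANGUAGE EmptyDataDecls #-}
{-# LANGUAGE GADTs #-}

-- Type-level naturals standing in for the Nat index.
data Z
data S n

-- A list whose length is tracked in its type, mirroring
-- Wolfgang's 'List :: Nat -> * -> *'.
data Vec n a where
  VNil  :: Vec Z a
  VCons :: a -> Vec n a -> Vec (S n) a

-- Total head: asking for the head of an empty Vec is a *type* error,
-- not a runtime crash.
vhead :: Vec (S n) a -> a
vhead (VCons x _) = x

-- Forget the length index and recover an ordinary list.
vtoList :: Vec n a -> [a]
vtoList VNil         = []
vtoList (VCons x xs) = x : vtoList xs

main :: IO ()
main = print (vtoList (VCons (1 :: Int) (VCons 2 VNil)))  -- prints [1,2]
```

The nasty workarounds Wolfgang alludes to show up as soon as the result type must depend on a runtime value, as in his replicate example; there the simulation needs extra machinery that real dependent types make unnecessary.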

Hi Peter,
I'm delighted to hear about your successes with Haskell programming!
I suspect that parametric polymorphism has a lot to do with the phenomenon of
works-when-it-compiles. The more polymorphic a signature is, the fewer the
possible type-correct definitions. Luckily, the definition that "works" is
one of the few type-correct ones. As John Reynolds and then Phil Wadler
showed, some useful properties necessarily hold purely as a consequence of
the polymorphic type, regardless of the implementation. (See "Theorems for
free".)
- Conal
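A concrete instance of the free-theorem idea Conal cites: any f :: [a] -> [a] can only rearrange, drop, or duplicate elements, never inspect them, so purely from the type it must commute with map. A quick check, using reverse as one example of such an f:

```haskell
-- The free theorem for f :: [a] -> [a] says f . map g == map g . f
-- for every g, as a consequence of the type alone ("Theorems for
-- free", Wadler 1989). We spot-check it here for f = reverse.
f :: [a] -> [a]
f = reverse

main :: IO ()
main = do
  let g  = (* 10) :: Int -> Int
      xs = [1, 2, 3]
  print (f (map g xs) == map g (f xs))  -- prints True
```

This is why the more polymorphic signature carries more information: the one-sided type [Int] -> [Int] admits functions that, say, add 1 to every element, while [a] -> [a] rules all of those out at compile time.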
participants (13)
- Artyom Shalkhakov
- Conal Elliott
- Felipe Lessa
- Jeremy Shaw
- John Meacham
- Ketil Malde
- Lennart Augustsson
- Michael D. Adams
- Peter Verswyvelen
- Rick R
- Sebastian Sylvan
- Stefan Monnier
- Wolfgang Jeltsch