
On Mon, May 3, 2010 at 11:07 AM, Kyle Murphy wrote:
The problem with dynamic typing is that it carries a much higher chance of a subtle error creeping into your code and going undetected for a long period of time. A strong type system forces the code to fail early, where it's easier to track down and fix the problem, rather than leaving you debugging on the fly in a production system. This has an added advantage for compiled languages: for many non-trivial applications, the time to build and deploy a new instance of the program, even in the development environment, is often substantial, and the more trivial errors are discovered at compile time, the less time is wasted on builds.
For small code bases the time spent tracking down an error at runtime might be less than the time spent making your code conform to strict type requirements, but for larger code bases the amount of time needed to track down such errors greatly outweighs the time needed to make your code well typed.
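(A minimal sketch, in Haskell, of the kind of error in question: the call commented out below is rejected at compile time, whereas a dynamically typed language would only surface the mistake when that call actually runs.)

meanAge :: [Int] -> Double
meanAge ages = fromIntegral (sum ages) / fromIntegral (length ages)

-- meanAge ["twenty", "thirty"]
-- rejected by the type checker with something like:
--   Couldn't match type '[Char]' with 'Int'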
To look at the flip side of your statistics:

Compiler says my code is right => My code is actually wrong -- 40%
Compiler says my code is wrong => My code is actually right -- 5%
I'd argue this is provably wrong, as correct code by definition would compile.
Here is a contrived example of what I am referring to:

prefac f 0 = 1
prefac f n = n * f (n-1)

fac = (\x -> x x) (\x -> prefac (x x))

If this code were allowed to compile (say by inserting unsafeCoerce anywhere you like), it would correctly implement a factorial function. It is precisely these cases that lie behind dynamically typed languages' advocacy: my code is right, but I can't (or it is too much work to) convince the compiler of that fact. It is a pretty bold statement to say that these cases do not occur.
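(As an aside, a minimal sketch of one standard way to get this past GHC: route each self-application through a newtype so the recursive type becomes legal.)

newtype Mu a = Mu { unroll :: Mu a -> a }

prefac :: (Integer -> Integer) -> Integer -> Integer
prefac f 0 = 1
prefac f n = n * f (n - 1)

-- The untyped (\x -> x x) (\x -> prefac (x x)), with each self-application
-- mediated by Mu; this version type checks, and fac 5 evaluates to 120.
fac :: Integer -> Integer
fac = (\x -> unroll x x) (Mu (\x -> prefac (unroll x x)))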
The fact that it doesn't is proof enough that there's a problem with it, even if that problem is simply that the types you're using aren't exactly correct. Further, I'd argue that with a less strict type system, the incidence of wrong code that compiles would be higher. The only argument in favor of non-strict typing would be showing that it takes less time to track down runtime bugs than it does to fix compile-time type errors, and I'd be highly skeptical of any such claim.
Clearly. But many people believe in this methodology, and use test suites and code coverage instead of types. Indeed, such practices are essentially "empirical type checking", and they afford the advantage that their verification is much more expressive (however less reliable) than our static type system, because they may use arbitrary code to express their predicates.

What I seem to be getting at is this plane of type systems:

Constrained ---------------------------------- Expressive
Unreliable
  |                           (C)
  |                                    (test suites)
  |        (C++)                            .
  |                                         .
  |   (Java/C#)            (Scala)          .
  |                                         .
  |                                         .
  |              (Haskell)                  .
  |
  |                       (Agda)
Reliable

Where by Constrained/Expressive I mean the ability for the system to express properties *about the code* (so C++'s type system being Turing complete is irrelevant). By Unreliable/Reliable I mean, given popular engineering practice in that language, the chance that if it passes the checks then it works as intended. For all the languages, I mean their compilers. Test suites extend down the right-hand side, depending on how rigorous you are about testing, but they never get as far down as Agda. :-)

Luke
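(A minimal sketch of such "empirical type checking" using QuickCheck: the property below, that reverse is its own inverse, is not something Haskell's type system states directly, but a test suite can check it with ordinary predicate code over random inputs.)

import Test.QuickCheck

-- A property written as ordinary code; the "check" is empirical,
-- over randomly generated inputs, rather than by proof.
prop_reverseInvolution :: [Int] -> Bool
prop_reverseInvolution xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseInvolution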