
"Sebastian Sylvan"
Obviously no simple measure is going to satisfy everyone, but I think the gzip measure is more even-handed across a range of languages. It probably more closely approximates the amount of mental effort [..]
I'm not sure I follow that reasoning. At any rate, I think the ICFP contest is a much better measure of productivity. But, just as with performance, LOC on the shootout can serve as a micro-benchmark.
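For concreteness, here is a minimal sketch of the gzip measure (assuming GHC with the zlib package; the command-line interface is just for illustration):

    import qualified Codec.Compression.GZip as GZip
    import qualified Data.ByteString.Lazy as BL
    import System.Environment (getArgs)

    -- Rough "information content" of a source file: its size in
    -- bytes after gzip compression, which discounts boilerplate
    -- and long-but-repetitive identifiers.
    main :: IO ()
    main = do
      [path] <- getArgs
      src <- BL.readFile path
      print (BL.length (GZip.compress src))

Run over each entry, this yields a single number per program that is less sensitive to naming conventions and whitespace than raw LOC.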
Personally I think syntactic noise is highly distracting, and semantic noise is even worse!
This is important - productivity doesn't depend so much on the actual typing as on the ease of refactoring and of identifying and fixing bugs, i.e. on *reading* code. Verbosity means noise, and also lower information content in a screenful of code. I think there were some (Erlang?) papers that showed a correlation between program size (in LOC), development time, and possibly the number of bugs - regardless of language.
Token count would be good, but then we'd need a parser (or at least a lexer) for each language, which is quite a bit of work.
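For a rough cut, though, a language-agnostic approximation might do. A sketch: treat each maximal run of identifier characters as one token and every other non-space character as its own token (the isIdent definition here is an assumption, not any real language's lexical syntax):

    import Data.Char (isAlphaNum)
    import Data.List (groupBy)
    import System.Environment (getArgs)

    -- Crude stand-in for a real lexer: within each
    -- whitespace-separated word, runs of identifier characters
    -- count as one token; everything else counts singly.
    isIdent :: Char -> Bool
    isIdent c = isAlphaNum c || c `elem` "_'"

    tokenCount :: String -> Int
    tokenCount = length . concatMap (groupBy bothIdent) . words
      where bothIdent a b = isIdent a && isIdent b

    main :: IO ()
    main = do
      [path] <- getArgs
      src <- readFile path
      print (tokenCount src)

It will miscount multi-character operators like ++ as two tokens, but as a cross-language approximation that error is at least applied uniformly.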
Whatever you do, it'll be an approximation, so why not 'wc -w'? With 'wc -c' for J etc., where programs can be written as spaceless sequences of symbols. Or just average chars, words, and lines?

-k

--
If I haven't seen further, it is by standing in the footprints of giants
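For reference, a sketch that reports all three wc-style counts at once; note that averaging the raw numbers would let the character count dominate, so each measure would first need normalising across the sample:

    import System.Environment (getArgs)

    -- The three wc-style counts side by side: lines, words, and
    -- characters (strictly wc -m; wc -c counts bytes).
    main :: IO ()
    main = do
      [path] <- getArgs
      src <- readFile path
      putStrLn $ unwords
        [ show (length (lines src)) ++ " lines"
        , show (length (words src)) ++ " words"
        , show (length src)         ++ " chars" ]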