
On Mon, Mar 23, 2009 at 9:51 PM, Brandon S. Allbery KF8NH <allbery@ece.cmu.edu> wrote:
On 2009 Mar 23, at 22:02, Zachary Turner wrote:
Everything I've read has said that it's generally considered good practice to specify the full type of a function before the definition. Why is this? It almost seems to go against the principles of type inference. Why let the compiler infer types if you're just going to tell it what types to use for everything? Ok well, not really for everything; you don't typically specify
1. Specifying the type of a top level binding avoids the monomorphism restriction.
2. Type inference is nice right up until you have to debug a type error; then the error gets reported at the point where the compiler realizes it can't match up the types, which could be somewhere not obviously related (depends on what the call chain looks like). The more concrete types you give the compiler, the better (both more complete and more correctly located) the type errors will be.
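To check my understanding of point 1, here's a contrived module showing how I understand the restriction to bite (the names are made up, and I'm assuming default GHC behaviour in a compiled module):

module MR where

-- 'plus' has no signature and is a simple pattern binding, so the
-- monomorphism restriction keeps it monomorphic: every use in this
-- module has to agree on one concrete type.
plus = (+)

a :: Int
a = plus 1 2            -- this use fixes plus at Int -> Int -> Int

-- b :: Double
-- b = plus 1.0 2.0     -- uncommenting this is a type error, because
--                      -- plus is already committed to Int above

-- An explicit signature keeps the binding polymorphic, and then both
-- uses would be accepted:
plusPoly :: Num n => n -> n -> n
plusPoly = (+)

c :: Double
c = plusPoly 1.0 2.0    -- fine

Is that roughly the failure mode the signature is protecting against?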
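And for point 2, here's a made-up example of the kind of mislocated error I think you mean:

module ErrLoc where

-- The actual bug is here: we meant foldr (+) 0 but typed (++) and [].
-- With no signature, GHC happily infers total :: [[a]] -> [a].
total xs = foldr (++) [] xs

-- ...so the error only shows up somewhere far away, at a call site:
-- report = print (total [1, 2, 3 :: Int])   -- error reported here

-- With the signature I actually intended, the mismatch is reported at
-- the definition, right where the mistake is:
-- total :: Num a => [a] -> a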
Regarding the second issue, this occurs in most other type-inferring languages as well, but usually you only specify types until your function is fully tested and you deem it good, and then remove the type specifications. Or, when you get strange type errors, you annotate a few relevant values with the types you expect, and the errors become clearer; once you've fixed them you can remove the annotations again.

Regarding the first issue, I haven't gotten deep enough into Haskell yet to fully appreciate the monomorphism restriction, although I believe I've run into it once, so I'll probably appreciate that aspect more later. That said, is it as common as I expect that the programmer unknowingly specifies a type that is not the most general type possible? I would think this would be "bad"; not the end of the world or anything, but it would still be desirable to be as generic as possible.
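To make that question concrete, here's the sort of thing I have in mind (again a made-up example):

module Specific where

-- The signature I might naively write:
sumList :: [Int] -> Int
sumList = foldr (+) 0

-- ...versus the more general type the compiler would have inferred:
sumGeneric :: Num a => [a] -> a
sumGeneric = foldr (+) 0

ints :: Int
ints = sumList [1, 2, 3]            -- fine either way

doubles :: Double
doubles = sumGeneric [1.5, 2.5]     -- only possible with the general type

Nothing is wrong with the narrower version for my own code, but any caller wanting Double or Integer is stuck, which is the "bad" I was getting at.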