
The hash is obviously calculated on a normalised version of the module.
As part of this normalisation step, all references to external definitions are fully qualified.
And it is impossible to import a variable with a changed type, because if the type had changed, so would its definition, and therefore the hash of the imported module.
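To make that concrete, here is a minimal Haskell sketch of the idea. The types and the serialisation format are made up for illustration (not any particular implementation), and it assumes the cryptonite and text packages:

```haskell
{-# LANGUAGE OverloadedStrings #-}

-- Hypothetical sketch of content-addressed module hashing.
module HashSketch where

import           Crypto.Hash        (Digest, SHA256 (..), hashWith)
import           Data.Text          (Text)
import qualified Data.Text          as T
import           Data.Text.Encoding (encodeUtf8)

-- A "normalised" definition: its name, its fully rendered type, and a body
-- in which every external reference has been replaced by the hash of the
-- module it comes from plus the referenced name.
data NormalisedDef = NormalisedDef
  { defName :: Text
  , defType :: Text          -- e.g. "Int -> Int"
  , defBody :: [Reference]   -- fully qualified references only
  }

data Reference = Reference
  { refModuleHash :: Text    -- hash of the defining module
  , refName       :: Text
  }

-- Canonical serialisation: any change to a type or a body changes this text.
renderDef :: NormalisedDef -> Text
renderDef d =
  T.intercalate "\n" $
    defName d <> " : " <> defType d
      : [ refModuleHash r <> "." <> refName r | r <- defBody d ]

-- The module hash covers all normalised definitions. Because each type is
-- part of the serialisation, changing a type changes the hash, so an import
-- pinned to the old hash can never silently pick up the new type.
moduleHash :: [NormalisedDef] -> Digest SHA256
moduleHash =
  hashWith SHA256 . encodeUtf8 . T.intercalate "\n\n" . map renderDef
```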
The normalization is actually the important point here. It's easy to come up with something that works reasonably, for some definition of "reasonably". Normalization can be too narrow (throw away relevant information) or too wide (leave in irrelevant information).

The problem is that it's the library user who decides which parts of a library matter to their use case. Or, more specifically: which parts of the library's *semantics*. Most users care only about specific properties of the functions they use. E.g. for a function that returns a list, some care about the order and some don't. Some properties may not even be properly expressible, like side-channel data leakage in a crypto hash function, or legal constraints you don't want to care about but have to.

More on the programming side, type bounds may become tighter or looser. Even a looser type bound can cause trouble: your code might have to deal with looser bounds on results, for example - or you might be in a situation where you want to allow your own callers to pass in looser-constrained data - or maybe you do not want that, because your own code depends on those bounds. And sometimes the type bounds need to be explicit because type inference isn't good enough, so you can't fully leave that to automation.

So... hashes are good for checking equality, but you also want to express inequalities - which changes are compatible, and in which direction - otherwise your clients will have to recheck the full API with every upgrade. (Sure, with Haskell this is much less of a problem than usual. But it won't go away.)
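To make the tighter/looser point concrete, here is a hypothetical pair of releases (made-up names, not from any real library):

```haskell
module Dedup where

import qualified Data.List as L

-- v1: demands Ord and, as a side effect of its implementation, returns
-- the surviving elements in sorted order.
dedupV1 :: Ord a => [a] -> [a]
dedupV1 = L.nub . L.sort

-- v2: loosens the constraint to Eq. Duplicates are still removed, but the
-- output now follows input order. Callers who only needed deduplication
-- see a strictly more permissive function; callers who (perhaps
-- accidentally) relied on the sorted output see a breaking change.
dedupV2 :: Eq a => [a] -> [a]
dedupV2 = L.nub
```

Both changes move the hash in exactly the same way - "something is different" - but whether a given caller is broken depends on which direction the bound moved and on which properties that caller actually relied on.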