
Thanks for your answer and your examples. It would be worthwhile collecting such examples in a small article.

I think some of the problems with type inference come from insufficient theory about metavariables. When I started studying higher-order unification I always wondered what the scope of a metavariable is. It is created at some point and then just sits there, waiting to be solved by a constraint. However, it is not clear where these constraints may come from, or at what point a metavariable should be declared unsolved or be generalized over. In the HM toy calculus (lambda + let), this is spelled out: generalization must happen after a let, so that the solution for a metavariable is determined solely by constraints in the *definition* of a symbol, not in its *use*. I'd be grateful for pointers to work that considers metavariable scope in bigger, more realistic calculi.

In Haskell, one thing that goes wrong is that all definitions in a module are considered mutually recursive by the type inference algorithm. But in fact, the definitions can be stratified by a strongly connected components analysis. Doing such an analysis, and limiting metavariable scope to an SCC, would overcome the problem I outlined in my original mail and would result in a more principled treatment of metavariables.

That concerned type inference for global definitions. It is not clear that we want the same for local definitions. There, it makes more sense to assume that the programmer can keep an overview of what they are doing, and thus one could allow information to flow from the uses of a symbol to its definition. But do we want information flow from dead code to live code? I'd say no. "Dead" is unfortunately undecidable; but at least one can identify unused local definitions, i.e., statically unreachable definitions. Your examples seem to suggest you want such information flow -- or maybe not?
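To make the generalization-at-let point concrete, here is a minimal Haskell sketch (the names letVersion and lambdaVersion are mine, purely for illustration): in the let form, the metavariable for the bound identity function is generalized at the let, before any use is seen; in the lambda form, the parameter's type is a single metavariable whose scope covers both uses, so the uses must agree.

letVersion :: (Bool, Char)
letVersion =
  let idL = \x -> x          -- idL is generalized at this let: forall a. a -> a
  in (idL True, idL 'a')     -- each use instantiates the generalized type afresh

-- lambdaVersion = (\idM -> (idM True, idM 'a')) (\x -> x)
--   rejected: idM's type is one metavariable shared by both uses,
--   so Bool and Char would have to be the same type

main :: IO ()
main = print letVersion      -- prints (True,'a')

On 21.07.2012 12:26, oleg@okmij.org wrote: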
> > However, if you are using ExtendedDefaultRules then you are likely to know you are leaving the clean, sound world of type inference.
> First of all, ExtendedDefaultRules is enabled by default in GHCi. Second, my example will work without ExtendedDefaultRules, in pure Haskell98. It is even shorter:
> instance Num Char
>
> main = do
>   x <- return []
>   let y = x
>   print . fst $ (x, abs $ head x)
>   -- let dead = if False then y == "" else True
>   return ()
>
> The printed result is either [] or "".
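For completeness, here is the same program with the dead binding uncommented and nothing else changed; this is the variant that prints "" instead of []:

instance Num Char                     -- now actually needed: abs (head x) has type Char

main = do
  x <- return []
  let y = x
  print . fst $ (x, abs $ head x)     -- x :: String here, so this prints ""
  let dead = if False then y == "" else True
  return ()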
> Mainly, if the point is to demonstrate the non-compositionality of type inference and the effect of dead code, one can give many, many examples, in Haskell98 or even in SML.
> Here is a short one, which does not rely on defaulting. (It uses ExistentialQuantification, which I think is in the new standard, or is about to be.)
> {-# LANGUAGE ExistentialQuantification #-}
>
> data Foo = forall a. Show a => Foo [a]
>
> main = do
>   x <- return []
>   let z = Foo x
>   let dead = if False then x == "" else True
>   return ()
> The code type checks. If you _remove_ the dead code, it won't. As you can see, the dead can have a profound, and beneficial, influence on the living, constraining them. (I guess this example is well-timed for Obon.)
Ah, I have never been to Japan at that time of year.
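To spell out why the dead line matters here (my reading of what GHC does): the comparison x == "" is the only thing that fixes the element type of x; remove it, and that type is an unsolved metavariable, yet packing x with Foo requires choosing a Show instance for it. A deliberately ill-typed sketch:

{-# LANGUAGE ExistentialQuantification #-}

data Foo = forall a. Show a => Foo [a]

main = do
  x <- return []
  let z = Foo x   -- rejected: the element type of x is ambiguous,
                  -- so no Show instance can be selected for it
  return ()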
> For another example, take type classes. Haskell98 prohibits overlapping instances. Checking for overlap requires a global analysis of the whole program and is clearly non-compositional. Whether you may define instance Num (Int,Int) depends on whether somebody else, in a library you use indirectly, has already introduced that instance. Perhaps that library is imported for a function that has nothing to do with treating a pair of Ints as a Num -- that is, the instance is dead code for your program. Nevertheless, instances are always imported, implicitly.
Yes, that's a known problem.
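Concretely, a sketch of that situation (SomeLib and unrelatedHelper are made-up names, and FlexibleInstances is needed just to be allowed to write this instance head at all):

-- SomeLib.hs, a library we import only for one helper:
{-# LANGUAGE FlexibleInstances #-}
module SomeLib (unrelatedHelper) where

instance Num (Int, Int)           -- the library author's instance (methods omitted)

unrelatedHelper :: Int -> Int
unrelatedHelper = (+ 1)

-- Main.hs:
{-# LANGUAGE FlexibleInstances #-}
module Main where

import SomeLib (unrelatedHelper)  -- the instance comes along implicitly

instance Num (Int, Int)           -- rejected as a duplicate of the imported instance,
                                  -- even though that instance is dead code for this program

main :: IO ()
main = print (unrelatedHelper 1)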
> The non-compositionality of type inference occurs even in SML (or any other language with the value restriction). For example,
> let x = ref [];;
> (* let z = if false then x := [1] else ();; *)
> x := [true];;
> This code type checks. If we uncomment the dead definition, it won't. So, the type of x cannot be fully determined from its definition; we need to know the context of its use -- which is precisely what seems to upset you about Haskell.
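The same shape can be reproduced in Haskell with an IORef (a sketch of mine; no value restriction is involved, it is simply that the element type of the reference is a metavariable whose scope is the whole do block, so a dead binding elsewhere in the block can fix it):

import Data.IORef

main :: IO ()
main = do
  r <- newIORef []
  -- let dead = writeIORef r [1 :: Int]   -- uncommenting this dead binding
  writeIORef r [True]                     -- would make this line ill-typed
  readIORef r >>= print                   -- prints [True]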
> > To stir action, mails on haskell-cafe seem useless.
> What made you think that? Your questions weren't well answered? What other venue would you propose?
Yes, they were. But for a discussion about metavariables, maybe something like a Haskell implementors' forum would be more appropriate. Yet I am only an Agda implementor.

Cheers,
Andreas

--
Andreas Abel  <><      You are the beloved human being.

Theoretical Computer Science, University of Munich
Oettingenstr. 67, D-80538 Munich, GERMANY

andreas.abel@ifi.lmu.de
http://www2.tcs.ifi.lmu.de/~abel/