
Manuel M T Chakravarty writes:
Martin Sulzmann:
Manuel M T Chakravarty writes:
Martin Sulzmann:
A problem with ATs at the moment is that some terminating FD programs result in non-terminating AT programs.
Somebody asked how to write the MonadReader class with ATs: http://www.haskell.org//pipermail/haskell-cafe/2006-February/014489.html
This requires an AT extension which may lead to undecidable type inference: http://www.haskell.org//pipermail/haskell-cafe/2006-February/014609.html
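For reference, the AT-style MonadReader discussed in those messages looks roughly like the sketch below (the names Env and Reader are mine, and the LANGUAGE pragma reflects today's GHC syntax rather than the 2006 prototype):

  {-# LANGUAGE TypeFamilies #-}

  -- Associated-type version of MonadReader: the environment type is a
  -- type function of the monad, rather than an extra class parameter
  -- related to m by an FD (class MonadReader r m | m -> r).
  class Monad m => MonadReader m where
    type Env m
    ask   :: m (Env m)
    local :: (Env m -> Env m) -> m a -> m a

  -- A minimal reader monad gives the class one unproblematic instance.
  newtype Reader r a = Reader { runReader :: r -> a }

  instance Functor (Reader r) where
    fmap f (Reader g) = Reader (f . g)

  instance Applicative (Reader r) where
    pure x                = Reader (const x)
    Reader f <*> Reader g = Reader (\r -> f r (g r))

  instance Monad (Reader r) where
    Reader g >>= k = Reader (\r -> runReader (k (g r)) r)

  instance MonadReader (Reader r) where
    type Env (Reader r) = r
    ask                 = Reader id
    local f (Reader g)  = Reader (g . f)

The environment type r disappears from the class head; it is recovered as Env m at use sites.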
The message that you are citing here has two problems:
1. You are using non-standard instances with contexts containing non-variable predicates. (I am not disputing the potential merit of these, but we don't know whether they apply to Haskell' at this point.)

2. You seem to use the superclass implication the wrong way around (i.e., as if it were an instance implication). See Rule (cls) of Fig 3 of the "Associated Type Synonyms" paper.
I'm not arguing that the conditions in the published AT papers result in programs for which inference is non-terminating.
We're discussing here a possible AT extension for which inference is clearly non-terminating (unless we cut off type inference after n steps). Without this extension you can't adequately encode the MonadReader class with ATs.
This addresses the first point, but you didn't address the second. Let me reformulate: I think you got the derivation wrong. You use the superclass implication the wrong way around. (Or do I misunderstand?)
The superclass implication is reversed when performing type inference, in the same way that instance reduction is reversed. Here's the example again:

  class C a

  class F a where
    type T a

  instance F [a] where
    type T [a] = [[[a]]]

  class C (T a) => D a
          ^^^^^ type function appears in the superclass context

  instance D [a] => C [[a]]   -- satisfies Ross Paterson's Termination Conditions

Consider

  D [a]
  -->_superclass     C (T [a]), D [a]
  -->_type function  C [[[a]]], D [a]
  -->_instance       D [[a]], D [a]

and so on. Of course, you'll argue that you apply some other inference methods. Though I'm a bit doubtful whether you'll achieve any reasonably strong completeness of inference results.

Martin
This, plus the points that I mentioned in my previous two posts in this thread, leaves me highly unconvinced by your claims comparing AT and FD termination.
As argued earlier by others, we can apply the same 'tricks' to make FD inference terminating (but then we still don't know whether the constraint store is consistent!).
To obtain termination of type class inference, we'll need to be *very* careful that there's no 'devious' interaction between instances and type functions/improvement.
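On the FD side, the same kind of devious interaction shows up in the well-known Mul example from the FD literature; the rendering below is mine, with LANGUAGE pragmas for a modern GHC:

  {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
               FlexibleInstances, UndecidableInstances #-}

  class Mul a b c | a b -> c where
    (.*.) :: a -> b -> c

  instance Mul a b c => Mul a [b] [c] where
    x .*. ys = map (x .*.) ys

  -- Inferring a type for f generates the constraint Mul a [b] b.
  -- Improvement against the instance forces b = [c1], giving
  -- Mul a [[c1]] [c1], which the instance reduces to Mul a [c1] c1,
  -- and so on forever; a compiler either loops or cuts inference off
  -- at some depth limit.
  f b x y = if b then x .*. [y] else y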
It would be great if you could come up with an improved analysis which rejects potentially non-terminating AT programs. Though, note that I should immediately be able to transfer your results to the FD setting based on the following observation:
I can turn any AT proof into an FD proof by (roughly)

- replacing the AT equation F a = b with the FD constraint F a b
- instead of firing type functions, firing type improvement and instances
- instead of applying transitivity, applying the FD rule F a b, F a c ==> b = c
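To make the first point concrete, here is how the type function T from the earlier example is rendered as an FD relation (my transcription, not taken from either paper):

  {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
               FlexibleInstances #-}

  -- The AT declaration   class F a where type T a
  -- becomes a two-parameter class whose second argument is determined
  -- by the first:
  class F a b | a -> b

  -- The AT equation   type T [a] = [[[a]]]   becomes an instance:
  instance F [a] [[[a]]]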
Similarly, we can turn any FD proof into an AT proof.
You may be able to encode AT proofs into FD proofs and vice versa, but that doesn't change the fact that some issues, such as formation rules, are more natural and, I argue, more intuitive to the programmer in the AT setting. For example, with FDs you have to explain to the programmer the concept of weak coverage. With ATs, you don't need to do this, because the functional notation naturally leads people to write programs that obey weak coverage. Again, we want to write functional programs on the type level - after all, FDs are *functional* dependencies - so why use a relational notation?
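As a concrete contrast between the two notations, here is the standard collection class from the AT literature written in both styles (the two class names are mine, purely to keep both versions in one sketch):

  {-# LANGUAGE TypeFamilies, MultiParamTypeClasses, FunctionalDependencies #-}

  -- Relational (FD) style: the element type is an extra class
  -- parameter, tied to the collection type by a functional dependency.
  class CollectsFD e c | c -> e where
    emptyFD  :: c
    insertFD :: e -> c -> c

  -- Functional (AT) style: the element type is a type function of the
  -- collection type.
  class CollectsAT c where
    type Elem c
    emptyAT  :: c
    insertAT :: Elem c -> c -> c

  instance CollectsAT [a] where
    type Elem [a] = a
    emptyAT  = []
    insertAT = (:)

In the AT version the programmer writes Elem c exactly where a value-level programmer would write a function application, which is the point about functional notation.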
If, as you say, FDs and ATs are equivalent on some scale of type-theoretic power, let us use a functional notation in a functional language. At least, that is a more orthogonal language design.
Manuel