Why doesn't GHC derive these types

Consider the following program:

    {-# LANGUAGE TypeFamilyDependencies #-}

    data D x

    type family F t = s | s -> t
    type instance F (D t) = D (F t)

    f :: F s -> ()
    f _ = ()

    g :: D (F t) -> ()
    g x = f x

    main = return ()

The problem seems to be the call from "g" to "f". We're calling "f" with an argument of type "D (F t)", so "f" then has to determine what "s" is in its signature. We know:

1. "F s ~ D (F t)" (from the function call)
2. "D (F t) ~ F (D t)" (from the right-hand side of the injective type family instance)

Therefore we should be able to derive:

3. "F s ~ F (D t)" (type equality is transitive)
4. "s ~ D t" (as F is injective)

I suspect the part we're missing in GHC is step 4. I recall reading this somewhere but I can't find where now.

However, the paper about injective type families says that this style of inference, namely "F a ~ F b => a ~ b", should occur. I quote (https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/injective-type-families-acm.pdf, section 5.1, p. 125):

So, faced with the constraint F α ∼ F β, the inference engine does not in general unify α := β; so the constraint F α ∼ F β is not solved, and hence f (g 3) will be rejected. But if we knew that F was injective, we can unify α := β without guessing.

Improvement (a term due to Mark Jones (Jones 1995, 2000)) is a process that adds extra "derived" equality constraints that may make some extra unifications apparent, thus allowing inference to proceed further without having to make guesses. In the case of an injective F, improvement adds α ∼ β, which the constraint solver can solve by unification. In general, improvement of wanted constraints is extremely simple:

Definition 11 (Wanted improvement). Given the wanted constraint F σ ∼ F τ, add the derived wanted constraint σ_n ∼ τ_n for each n-injective argument of F.

Why is this OK? Because if it is possible to prove the original constraint F σ ∼ F τ, then (by Definition 1) we will also have a proof of σ_n ∼ τ_n. So adding σ_n ∼ τ_n as a new wanted constraint does not constrain the solution space. Why is it beneficial? Because, as we have seen, it may expose additional guess-free unification opportunities that the solver can exploit.

Am I correct in my assessment of what is happening here with GHC? Is there any way to get it to compile this program, perhaps with an extension?
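As an aside, one workaround that seems to type-check is to pick "s" by hand with TypeApplications, so the call site never needs the injectivity improvement: the wanted constraint becomes F (D t) ~ D (F t), which the instance above reduces directly. A minimal sketch, assuming GHC accepts the family and instance exactly as written (the explicit foralls are only there to give the type application a variable to target):

    {-# LANGUAGE TypeFamilyDependencies #-}
    {-# LANGUAGE TypeApplications #-}
    {-# LANGUAGE ScopedTypeVariables #-}

    data D x

    type family F t = s | s -> t
    type instance F (D t) = D (F t)

    f :: forall s. F s -> ()
    f _ = ()

    -- Instantiating s := D t by hand: the argument must now have type
    -- F (D t), and x :: D (F t) matches after one reduction step, so no
    -- injectivity reasoning is needed.
    g :: forall t. D (F t) -> ()
    g x = f @(D t) x

    main = return ()

This only sidesteps the problem at this particular call site, though; it does not make the improvement rule quoted above fire.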

Also, is there a type-checker plugin that helps with this? If not, would it be possible to write one, or was there some intentional reason why this inference was not included?
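Writing one looks feasible in principle: a constraint-solver plugin gets to see the unsolved wanteds and may hand back new constraints, so it could spot equalities of the form F σ ~ F τ for an injective F and emit the improvement σ ~ τ itself. Below is a bare skeleton only, assuming the GHC 8.x plugin API (where TcPlugin and TcPluginResult live in TcRnTypes; later GHCs moved these modules under GHC.* and changed the solver type), with the actual improvement logic left as a comment:

    module InjImprovePlugin (plugin) where

    import Plugins   (Plugin (..), defaultPlugin)
    import TcRnTypes (Ct, TcPlugin (..), TcPluginResult (..))
    import TcPluginM (TcPluginM)

    plugin :: Plugin
    plugin = defaultPlugin { tcPlugin = const (Just injImprove) }

    injImprove :: TcPlugin
    injImprove = TcPlugin
      { tcPluginInit  = return ()      -- no plugin state needed
      , tcPluginSolve = \() -> solve
      , tcPluginStop  = \() -> return ()
      }

    -- The real work would go here: scan the wanteds for equalities
    -- "F sigma ~ F tau" with F injective and return the improvements
    -- "sigma ~ tau" as new work in the second list of TcPluginOk.
    -- As written, returning two empty lists makes this a no-op.
    solve :: [Ct] -> [Ct] -> [Ct] -> TcPluginM TcPluginResult
    solve _givens _deriveds _wanteds = return (TcPluginOk [] [])

The plugin would then be enabled per module with -fplugin=InjImprovePlugin; the module name and the no-op body are just placeholders.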

Sorry for talking to myself, but I think the answer to my question is here:
https://ghc.haskell.org/trac/ghc/ticket/10833

Your original example looks like a new bug to me. #10833 is about givens (that is, a context that would be specified in a function type signature), and I don't think that's at issue here. I agree with you that your program should be accepted. Might you post a ticket?

Thanks,
Richard
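To make the given/wanted distinction concrete, here is a sketch (using the D and F from the program above, not the ticket's own example) of a given-side version of the same constraint, arriving from a signature context rather than from a call site:

    -- F s ~ D (F t) is now a *given*. Accepting the body requires
    -- improving that given to s ~ D t; without improvement of givens
    -- (the subject of #10833) GHC cannot see that x :: s also has
    -- type D t, and the definition is rejected.
    h :: (F s ~ D (F t)) => s -> D t
    h x = x

The original program, by contrast, only ever produces the equality as a wanted, at the call to "f" inside "g".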
Participants (2):
- Clinton Mead
- Richard Eisenberg