Field accessor type inference woes

It strikes me that there is a fairly major issue with the record proposal as it stands.

Right now the class

    class Has (r :: *) (x :: Symbol) (t :: *)

can be viewed as morally equivalent to having several classes

    class Foo a b where
      foo :: a -> b

    class Bar a b where
      bar :: a -> b

for different fields foo, bar, etc. I'll proceed with those instead because it makes it easier to show the issue today.

When we go to compose those field accessors, you very quickly run into a problem due to a lack of functional dependencies. When you go to define

    fooBar = foo.bar

which is perfectly cromulent with existing record field accessors, you get a big problem.

    fooBar :: (Foo b c, Bar a b) => a -> c

The b that should occur in the signature isn't on the right hand side, and isn't being forced by any fundeps, so fooBar simply can't be written.

    Could not deduce (Foo b0 c)
      arising from a use of `foo'
    from the context (Bar a b, Foo b c)

If you leave off the signature you'll get an ambiguity check error:

    Could not deduce (Foo b0 c)
      arising from the ambiguity check for `fooBar'
    from the context (Bar a b, Foo b c)
      bound by the inferred type for `fooBar': (Bar a b, Foo b c) => a -> c

It strikes me that punting all functional dependencies in the record types to the use of equality constraints has proven insufficient to tackle this problem. You may be able to bandaid over it by including a functional dependency/type family

    class Has (r :: *) (x :: Symbol) where
      type Got r x :: *
      getFld :: r -> Got r x

(or, to avoid the need for type applications in the first place!)

    class Has (r :: *) (x :: Symbol) where
      type Got r x :: *
      getFld :: p x -> r -> Got r x

This has some annoying consequences though. Type inference can still only flow one way through it, unlike the existing record accessors, and it would cost the ability to 'cheat' and still define 'Has' for universally quantified fields that we might have been able to do with the proposal as it stands.

-Edward
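[For concreteness, a minimal stand-alone sketch of the setup Edward describes, using the stand-in Foo/Bar classes; the commented-out composition is the definition GHC rejects with the ambiguity error quoted above.]

    {-# LANGUAGE MultiParamTypeClasses #-}

    -- Stand-ins for two overloaded field accessors, with nothing
    -- relating the argument type to the result type.
    class Foo a b where
      foo :: a -> b

    class Bar a b where
      bar :: a -> b

    -- Rejected: the intermediate type 'b' occurs only in the context,
    -- so no use site can ever determine it.
    --
    -- fooBar :: (Bar a b, Foo b c) => a -> c
    -- fooBar = foo . bar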

Hi Edward,

I was envisaging that we might well need a functional dependency

    class Has (r :: *) (x :: Symbol) (t :: *) | r x -> t

and, as you point out, composition of polymorphic accessors certainly motivates this. Isn't that enough, though? I think it works out more neatly than the type family version, not least because evidence for a Has constraint is still merely a projection function, and we can still handle universally quantified fields.

Obviously it will still not allow determining the type of a record from the type of one of its fields, but that doesn't seem unreasonable to me. Have you any examples where this will be problematic?

Moreover, I suggest that Has constraints are only introduced when there are multiple fields with the same name in scope, so existing code (with no ambiguity) will work fine.

Thanks,

Adam
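[A rough sketch of how a functional dependency resolves the composition problem, reusing Edward's stand-in classes; this is only an illustration, not the proposal's actual Has machinery.]

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

    -- The same stand-in classes, now with the result type determined by
    -- the argument type, much as a fundep on Has would determine t from r and x.
    class Foo a b | a -> b where
      foo :: a -> b

    class Bar a b | a -> b where
      bar :: a -> b

    -- The intermediate type 'b' is now fixed by 'a', so the composition
    -- type-checks and the expected signature is accepted.
    fooBar :: (Bar a b, Foo b c) => a -> c
    fooBar = foo . bar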

On Mon, Jul 1, 2013 at 1:07 PM, Adam Gundry wrote:
Hi Edward,
I was envisaging that we might well need a functional dependency
class Has (r :: *) (x :: Symbol) (t :: *) | r x -> t
and, as you point out, composition of polymorphic accessors certainly motivates this. Isn't that enough, though? I think it works out more neatly than the type family version, not least because evidence for a Has constraint is still merely a projection function, and we can still handle universally quantified fields.
My understanding from a recent interaction with Iavor was that the old difference between functional dependencies and type families, where the fundep only chose the 'instance' rather than the actual meaning of the arguments, has changed recently, making the two approaches basically indistinguishable. This happened as part of the resolution to http://hackage.haskell.org/trac/ghc/ticket/2247 as I understand it. In particular this broke similar code that relied on this functionality in lens, and forced a rather large number of patches that had made (ab)use of the old fundep behavior to be reverted in lens. Consequently, I don't think you'll find much of a difference between the type family and the functional dependency, except that the latter is forced to infect more type signatures.

Obviously it will still not allow determining the type of a record from the type of one of its fields, but that doesn't seem unreasonable to me. Have you any examples where this will be problematic?

Well, it does have the unfortunate consequence that it dooms the lifted instance we talked about that could make all field accessors automatically lift into lenses, as that required inference to flow backwards from the 'field' to the 'record'.

Moreover, I suggest that Has constraints are only introduced when there are multiple fields with the same name in scope, so existing code (with no ambiguity) will work fine.

The awkward part about that is that it becomes impossible to write code that is polymorphic and have it get the more general signature without putting dummies in scope just to force conflict.

-Edward

Edward is quite right. Thank you for pointing this out; I hadn’t fully absorbed the consequences of the three-parameter Has class. This isn’t a fundep-vs-type-function thing; it’s a tradeoff between polymorphism and overloading.
There seem to be two alternatives. I’ll use fundep notation just because it’s better known; the same things happen with type functions. Here are two records
data R a = MkR { foo :: a -> a }
data S = MkS { bar :: forall b. b -> b }
Here is Plan A: use fundep (or type function)
class Has r f t | r f -> t where
  getFld :: r -> t

instance Has (R a) "foo" (a -> a) where ..
instance Has S "bar" (forall b. b -> b) where ...
Lacking (as we still do) impredicative polymorphism, the S instance declaration is rejected.
Here is Plan B: no fundep (or type functions)
class Has r f t where
  getFld :: r -> t

instance (t ~ a->a) => Has (R a) "foo" t where ..
instance (t ~ b->b) => Has S "bar" t where ...
Now the instance for S works fine. But the ambiguity problem that Edward describes shows up.
Can you combine A and B?
class Has r f t | r f -> t where
  getFld :: r -> t

instance (t ~ b->b) => Has S "bar" t where ...
No: the instance is rejected because the S does not “cover” the free type variable t. This is #2247 I think. There is a good reason for this (I could elaborate if reqd).
Notice too that with
data T = MkT { wib :: (forall b. b -> b) -> Int }
the polymorphic type is more deeply buried, and Plan B doesn’t work either. So Plan B is already a bit of a sticking plaster.
Bottom line: there is a real tension here.
Let’s review the goal:
to allow you to use the same field name in different records.
The current proposal allows you to write
f :: r { foo :: Int } => r -> Int -> Int
which will work on any record with an Int-valued foo field. BUT writing functions like this was never a goal! We could restrict the proposal: we could simply *not abstract* over Has constraints, preventing you from writing ‘f’, but also preventing you from falling over Edward’s problem. (But it would also be an odd restriction, given that Has is in most ways an ordinary class.)
So I think Plan B (the one we are currently proposing) works just fine for the declared goal. We just have to acknowledge that it doesn’t do everything you might possibly want.
Simon
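[For what it's worth, a small self-contained sketch of Plan B for the R record, showing how the equality constraint lets inference proceed; the selector fooFld is a made-up name standing in for whatever the proposal would generate for "foo".]

    {-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
                 FlexibleInstances, FlexibleContexts, TypeFamilies,
                 UndecidableInstances #-}

    import GHC.TypeLits (Symbol)

    data R a = MkR { foo :: a -> a }

    -- Plan B: no functional dependency; the instance matches any t
    -- and then improves it with an equality constraint.
    class Has r (x :: Symbol) t where
      getFld :: r -> t

    instance (t ~ (a -> a)) => Has (R a) "foo" t where
      getFld = foo

    -- Stand-in for the selector the proposal would generate for "foo".
    fooFld :: Has r "foo" t => r -> t
    fooFld = getFld

    -- The instance fires for any t, and the equality constraint then
    -- fixes t to Bool -> Bool, so this infers fine.
    test :: Bool
    test = fooFld (MkR not) True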

Simon Peyton-Jones writes:
...; it's a tradeoff between polymorphism and overloading.
Hah! my post crossed with Simon's. This time I'll be succinct. There's **three** alternatives. ...
data R a = MkR { foo :: a -> a }
data S = MkS { bar :: forall b. b -> b }
Try Plan C: use a cleverer (associated) type function

    class Has r f t where
      type GetResult r f t :: *        -- ?? default to t
      getFld :: r -> GetResult r f t

    instance (t ~ a->a) => Has (R a) "foo" t where
      type GetResult (R a) "foo" t = a -> a    -- ?? ignore t
      getFld ...

    instance (t ~ b->b) => Has S "bar" t where
      type GetResult S "bar" t = t             -- 'improved' t
      getFld ...

In the 'chained' accessors that Edward raises, I think the presence of the type function 'fools' type inference into thinking there's a dependency. So (foo . bar) has type (and abusing notation):

    ( Has r "bar" t_bar, Has (GetResult r "bar" t_bar) "foo" t_foo )
      => r -> (GetResult (GetResult r "bar" t_bar) "foo" t_foo)

Unfortunately, I don't think we get away with this...
GetResult isn't necessarily injective (and GHC couldn't tell if it was), so there is no way to determine `t` from `GetResult r f t`. I tried prototyping this, and it leads to errors like

    Couldn't match type `GetResult (GetResult a "bar" t1) "foo" t0'
                   with `GetResult (GetResult a "bar" t2) "foo" t'
    NB: `GetResult' is a type function, and may not be injective
    The type variables `t0', `t1' are ambiguous
    Possible fix: add a type signature that fixes these type variable(s)
    Expected type: a -> GetResult (GetResult a "bar" t2) "foo" t
      Actual type: a -> GetResult (GetResult a "bar" t1) "foo" t0

I agree with Simon that we should stick with the current plan, and accept the fact that composition of polymorphic record selectors isn't going to work, though this does make me sad.

I'm not sure about restricting quantification over the Has class, because some uses are perfectly fine (aliasing a selector, for example).

Adam

On 02/07/13 10:57, Simon Peyton-Jones wrote:
Here is Plan A: use fundep (or type function)
class Has r f t | r f -> t where
  getFld :: r -> t

instance Has (R a) "foo" (a -> a) where ..
instance Has S "bar" (forall b. b -> b) where ...
Lacking (as we still do) impredicative polymorphism, the S instance declaration is rejected.
How common are such polymorphic fields in practice? You sometimes see them in newtype wrappers and the like. But I think those are not cases where you want overlapping names anyway.

So: why not use a Plan A style class, except for polymorphic fields? Perhaps you could still have a (less polymorphic) class for bar above,

    class HasBar r where
      getBar :: r -> b -> b

Twan
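[A quick sketch of the kind of dedicated class Twan suggests; since HasBar is specific to the one higher-rank field, no impredicative instantiation is needed.]

    {-# LANGUAGE RankNTypes #-}

    data S = MkS { bar :: forall b. b -> b }

    -- A dedicated, less polymorphic class just for this field.
    class HasBar r where
      getBar :: r -> b -> b

    instance HasBar S where
      getBar (MkS f) = f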

Simon Peyton-Jones writes:
Edward is quite right. Thank you for pointing this out; I hadn’t fully absorbed the consequences of the three-parameter Has class. ...
Thank you Simon, Edward, Adam. There is quite a complex design space (and I apologise for trailing a red herring earlier).

Simon/Adam's Plan is (in effect) favouring higher-ranked polymorphism over record nesting/chaining. H-R fields are a limitation because we can't update them either. So I think it's a fair question whether supporting h-r polymorphism is worth the limitations?

Edward pointed out earlier circumstances where lenses:
required inference to flow backwards from the 'field' to the 'record'
So that 'feature' of H98 records we've deliberately abandoned, because:
Let’s review the goal: to allow you to use the same field name in different records.
Therefore we can't have the field determining the record type. We've also abandoned having the field (alone) determine the field's type.

I can see a genuine use case for (for example) personId always being Int, irrespective of the record type in which personId appears. (This is a 'silent' feature of H98, because the field can appear in only one record type, so it's moot whether it's the field or field+record determining the field's type.)

This would be a strong motivation for overloaded fields refraining from generating the fully polymorphic field selector functions. That is, set -XNoRecordSelectorFunctions, and then I could declare:

    personId :: r { personId :: Int } => r -> Int
    personId = getFld

Or perhaps there could be some way to declare that a given field is always at a specific type?

Does this help with Edward's chained/nested records example? Does the field name in the outer record determine the type of the inner? (I guess we're in trouble if the inner is (parametric) polymorphic?)
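[A sketch of what such a fixed-type selector could look like if written by hand against a Plan B style Has class; the record types and the encoding here are made up purely for illustration.]

    {-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
                 FlexibleInstances, FlexibleContexts, TypeFamilies,
                 UndecidableInstances #-}

    import GHC.TypeLits (Symbol)

    class Has r (x :: Symbol) t where
      getFld :: r -> t

    data Person  = Person  { pId :: Int, pName :: String }
    data Invoice = Invoice { iId :: Int, customer :: String }

    instance (t ~ Int) => Has Person  "personId" t where getFld = pId
    instance (t ~ Int) => Has Invoice "personId" t where getFld = iId

    -- personId is always an Int, whatever record it is read from.
    personId :: Has r "personId" Int => r -> Int
    personId = getFld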

The two points in AntC's message suggest a possible compromise solution. Unless I've missed something, this allows nested fields, fixed type projections, and HR fields. The only restriction is that HR fields must be fixed type, though they can still be used in multiple records.

1. Use an associated type class for Has, to solve the nesting problem:

    class Has (r :: *) (x :: Symbol) where
      type GetField r x :: *
      getField :: r -> GetField r x

(Maybe a fundep would also work, but I'm more used to thinking with associated types.)

2. Introduce a declaration for fixed type fields:

    field bar :: Bar

is translated as:

    class Has_bar r where
      bar :: r -> Bar

    instance Has_bar r => Has r "bar" where
      GetType r "bar" = Bar
      getField = bar

3. Undeclared fields and those declared typeless don't have their own class:

    field bar

is translated as

    bar :: Has r "bar" => r -> GetType r "bar"
    bar = getField

4. Now you can use HR fields, provided you declare them first:

    field bar :: forall b. b -> b

is translated as:

    class Has_bar r where
      bar :: r -> forall b. b -> b

    instance Has_bar r => Has r "bar" where
      GetType r "bar" = forall b. b -> b
      getField = bar

which doesn't look impredicative to me.

Does this work, or have I missed something?

Barney.

On 04/07/13 12:27, AntC wrote:
H-R fields are a limitation because we can't update them either. So I think it's a fair question whether supporting h-r polymorphism is worth the limitations?
Yes, higher rank polymorphism is bound to cause trouble with polymorphic projections, and perhaps it won't matter if we limit ourselves to one or the other.
Edward pointed out earlier, circumstances where lenses:
required inference to flow backwards from the 'field' to the 'record'
Crucially, Edward pointed out that this is needed to make polymorphic record fields into lenses automatically [1], which I'm quite keen on. So that's an additional reason for sticking with the current story.
This would be a strong motivation for overloaded fields refraining from generating the fully polymorphic field selector functions. That is, set -XNoRecordSelectorFunctions, then I could declare:
personId :: r { personId :: Int } => r -> Int
personId = getFld
Or perhaps there could be some way to declare that a given field is always at a specific type?
Does this help with Edward's chained/nested records example? Does the field name in the outer record determine the type of the inner? (I guess we're in trouble if the inner is (parametric) polymorphic?)
In some circumstances, it might well remove ambiguity if we knew that a field always had a consistent type. I wonder how to declare this. If we were using the type family version then the user could declare

    type instance GetResult r "personId" = Int

independently of any data declarations. Alas...

On 04/07/13 13:47, Barney Hilken wrote:
| The two points in AntC's message suggest a possible compromise solution. Unless I've missed something,
| this allows nested fields, fixed type projections, and HR fields. The only restriction is that HR fields must be
| fixed type, though they can still be used in multiple records.
| [...]
| class Has_bar r where
|   bar :: r -> forall b. b -> b
|
| instance Has_bar r => Has r "bar" where
|   GetType r "bar" = forall b. b -> b
|   getField = bar
|
| which doesn't look impredicative to me.
|
| Does this work, or have I missed something?

...this associated type declaration isn't legal, because type families are not allowed to return polymorphic type schemes. (It's far from clear how one would do type inference for such monsters.)

Adam

[1] http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Plan...

Adam Gundry writes:

On 04/07/13 12:27, AntC wrote:
H-R fields are a limitation because we can't update them either. So I think it's a fair question whether supporting h-r polymorphism is worth the limitations?
Yes, higher rank polymorphism is bound to cause trouble with polymorphic projections, and perhaps it won't matter if we limit ourselves to one or the other.
So with h-r fields, let's stratify the requirements:

* The current Plan tries to support holding h-r fields in a way backwards-compatible with H98 records. Why? We know that OverloadedFields are going to break some stuff. It's a question of which stuff is more important to not break.

* I think the real requirement is to hold an h-r value in a record, accessible by field name.

Consider how TPDORF does this (see the example, based on one in SPJ's SORF, at 'Higher-Ranked polymorphic fields':
http://hackage.haskell.org/trac/ghc/wiki/Records/TypePunningDeclaredOverloadedRecordFields#Implementation:theHasclasswithmethodsgetandsetandpunnedTypes ):

    -- must wrap h-r values in a newtype to put them in a record.
    newtype Rev = Rev (forall a. [a] -> [a])
    data HR = HR { rev :: Rev }

    -- Has class takes 2 args, with type family for GetResult
    type instance GetResult r "rev" = Rev

    instance Has HR "rev" where
      getFld HR{ rev = (Rev x) } = Rev x   -- can't unwrap here, 'cos can't spec Polymorphic

Then user code must unwrap the newtype at the point of applying. I think this approach also allows update for h-r values (providing they're wrapped) -- but I must admit I rather ran out of steam with the prototype.

TPDORF also supported type-changing update for parametric polymorphic fields -- but with limitations. To get round those you would have to revert to H98 record update -- just as the current Plan.

So I'm tending to the conclusion that cunning though it is to use "a functional-dependency-like mechanism (but using equalities) for the result type", that is actually giving too much of a headache.

**bikeshed: I do like the proposed sugar for constraints (r { f :: t }) => ... But how does that play if `Has` only needs 2 args?

AntC
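[A self-contained sketch of the newtype-wrapping idea, here using an associated type and a proxy argument for the label, since outside the proposal's magic the label cannot otherwise be inferred; the names and details are illustrative only.]

    {-# LANGUAGE RankNTypes, DataKinds, KindSignatures,
                 MultiParamTypeClasses, TypeFamilies, FlexibleInstances #-}

    import Data.Proxy (Proxy (..))
    import GHC.TypeLits (Symbol)

    -- Two-parameter Has, with the result type given by an associated type.
    class Has r (x :: Symbol) where
      type GetResult r x :: *
      getFld :: proxy x -> r -> GetResult r x

    -- The higher-rank value is wrapped so the field's type is a monotype.
    newtype Rev = Rev (forall a. [a] -> [a])
    data HR = HR { rev :: Rev }

    instance Has HR "rev" where
      type GetResult HR "rev" = Rev
      getFld _ (HR r) = r

    -- User code unwraps the newtype at the point of use.
    useRev :: HR -> [Int]
    useRev h = case getFld (Proxy :: Proxy "rev") h of
                 Rev f -> f [1, 2, 3]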

I was envisaging that we might well need a functional dependency
Hi Adam, Edward, (Simon),

I think we should be really careful before introducing FunDeps (or type functions). Can we get to the needed type inference with UndecidableInstances? Is that so bad?

In the original SORF proposal, Simon deliberately avoided type functions, and for closely argued reasons: "But this approach fails for fields with higher rank types." I guess the same would apply for FunDeps.

FWIW in the DORF prototype, I did use type functions. I was trying to cover a panoply of HR types, parametric polymorphic records, type-changing update [**], and all sorts; so probably too big a scope for this project. If you're interested, it's deep in the bowels of the Implementation notes, so I could forgive anybody for tl;dr. See:
http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/ImplementorsView#Type-changingupdate

In terms of the current Plan:

    class Has r fld t where
      getFld :: r -> GetResult r fld t

Of course where the record and field do determine the result, the GetResult instance can simply ignore its third argument. But for HR types, this allows the `Has` instance to 'improve' `t` through equality constraints, _and_then_ pass that to GetResult.

In the 'chained' accessors that Edward raises, I think the presence of the type function 'fools' type inference into thinking there's a dependency. So (foo . bar) has type (and abusing notation):

    ( Has r "bar" t_bar, Has (GetResult r "bar" t_bar) "foo" t_foo )
      => r -> (GetResult (GetResult r "bar" t_bar) "foo" t_foo)

[**] Beware that the DORF approach didn't support type-changing update in all cases, for reasons included in the notes for Adam's Plan page.

Also beware that DORF used type families and some sugar around them. That had the effect of generating overlapping family instances in some cases -- which was OK, because they were confluent. But if I understand correctly what Richard E is working on (http://hackage.haskell.org/trac/ghc/wiki/NewAxioms), overlapping stand-alone family instances are going to be banished -- even if confluent. So today I would approach it by making them associated types, and including the GetResult instance inside the `Has`, so having a separate (non-overlapping) instance for each combination of record and field (Symbol).

HTH. Is Adam regretting taking up the challenge yet? ;-)

AntC

On Tue, Jul 2, 2013 at 4:53 AM, AntC wrote:
I was envisaging that we might well need a functional dependency
Hi Adam, Edward, (Simon),
I think we should be really careful before introducing FunDeps (or type functions).
Can we get to the needed type inference with UndecidableInstances? Is that so bad?
That doesn't solve this problem. The issue isn't that it is undecidable or that it could choose between two overlapping options. The issue is that there is no 'correct' instance to choose.
In the original SORF proposal, Simon deliberately avoided type functions, and for closely argued reasons: "But this approach fails for fields with higher rank types." I guess the same would apply for FunDeps.
The approach still has issues with higher kinded types when extended to include setting.
FWIW in the DORF prototype, I did use type functions. I was trying to cover a panoply of HR types, parametric polymorphic records, type-changing update [**], and all sorts; so probably too big a scope for this project.
If you're interested, it's deep in the bowels of the Implementation notes, so I could forgive anybody for tl;dr. See: http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/ImplementorsView#Type-changingupdate
In terms of the current Plan:
class Has r fld t where
  getFld :: r -> GetResult r fld t
Of course where the record and field do determine the result, the GetResult instance can simply ignore its third argument. But for HR types, this allows the `Has` instance to 'improve' `t` through Eq constraints, _and_then_ pass that to GetResult.
In the 'chained' accessors that Edward raises, I think the presence of the type function 'fools' type inference into thinking there's a dependency.
There really is a dependency. If the input record type doesn't determine the output value type that has to be passed to the next field accessor (or vice versa) then there is *no* known type for the intermediate value type. You can't punt it to the use site.
So (foo . bar) has type (and abusing notation):
( Has r "bar" t_bar, Has (GetResult r "bar" t_bar) "foo" t_foo ) => r -> (GetResult (GetResult r "bar" t_bar) "foo" t_foo)
The extra parameter actually makes coverage even harder to determine, and it makes instance selection almost impossible, as it will in almost all cases be under-determined; and since we're playing games with type application, it can't even be applied manually!
[**] Beware that the DORF approach didn't support type-changing update in all cases, for reasons included in the notes for Adam's Plan page.
Also beware that DORF used type families and some sugar around them. That had the effect of generating overlapping family instances in some cases -- which was OK, because they were confluent. But if I understand correctly what Richard E is working on http://hackage.haskell.org/trac/ghc/wiki/NewAxioms overlapping stand-alone family instances are going to be banished -- even if confluent.
Even with overlapping type families nothing changes. Coverage is still violated.