GHC support for the new "record" package

For those who haven't seen this, Nikita Volkov proposed a new approach to anonymous records, which can be found in the "record" package on Hackage: http://hackage.haskell.org/package/record

It had a *lot* of attention on Reddit: http://nikita-volkov.github.io/record/

Now, the solution is very nice and lightweight, but because it is implemented outside GHC it relies on quasi-quotation (amazing that it can be done at all!). It has some limitations because it needs to parse Haskell syntax, and Haskell is big. So we could make this a lot smoother, both for the implementation and the user, by directly supporting anonymous record syntax in GHC. Obviously we'd have to move the library code into base too.

This message is by way of kicking off the discussion, since nobody else seems to have done so yet. Can we agree that this is the right thing and should be directly supported by GHC? At this point we'd be aiming for 7.12.

Who is interested in working on this? Nikita?

There are various design decisions to think about. For example, when the quasi-quote brackets are removed, the syntax will conflict with the existing record syntax. The syntax ends up being similar to Simon's 2003 proposal http://research.microsoft.com/en-us/um/people/simonpj/Haskell/records.html (there are major differences though, notably the use of lenses for selection and update).

I created a template wiki page: https://ghc.haskell.org/trac/ghc/wiki/Records/Volkov

Cheers,
Simon
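
For readers who haven't tried the package, here is a rough sketch of how its quasi-quotation interface is used. The [l|...|] lens quoter is the one discussed later in this thread; the r quoter and the module names are recalled from the package's documentation and may not match it exactly.

    {-# LANGUAGE QuasiQuotes #-}

    import Record       -- quasi-quoters from the "record" package
    import Record.Lens  -- view etc. over the generated lenses

    -- An anonymous record type and value, written inside quasi-quotes
    -- because GHC itself does not (yet) understand the syntax:
    person :: [r| {name :: String, age :: Int} |]
    person = [r| {name = "Nikita", age = 28} |]

    -- Field access goes through a lens quasi-quoter rather than a
    -- selector function:
    personName :: String
    personName = view [l|name|] person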

How would that be different from the ORF? The library as it stands is worse than the ORF: translation into (essentially) tuples hurts error messages; there is no good story for sum types; there is no way to make fields strict/unpacked; etc. Hopefully, if this is to become a GHC extension, these problems will be addressed; but then I don't see much difference from the ORF (and thus it wouldn't be any easier to implement).

I'm generally positive on the goal of figuring out better record support in
GHC.
That said, it isn't clear that Nikita's work here directly gives rise to
how the syntax of such a thing would work in GHC proper. Simon's original
proposal overloaded (.) in yet more ways that collide with the uses in lens
and really drastically contribute to confusion in the language we have.
This is why over the summer of 2013 Adam Gundry's proposal evolved away
from that design. Nikita on the other hand gets away with using foo.bar
syntax in a more lens-like fashion precisely because he has a quasi-quoter
isolating it from the rest of the language.
If you strip away that layer, it isn't clear what syntactic mechanism can
be used to convey the distinction that isn't already taken or just as
obtrusive as the quasi-quoter.
But, "it isn't clear" is just code for "hey this makes me nervous", so
let's spitball a couple ideas:
Nikita's proposal has two things that need addressing:
1.) The syntax for record types themselves
2.) The syntax for how to get a lens for a field
Re #1
The main term- and type-level bits of syntax that could be co-opted that
aren't already in use are @ and ~ (at the term level), and things like
banana brackets (| ... |); while those already have some other, unrelated
connotations for folks, something related like {| ... |} could work. We use
such bananas for our row types in Ermine to good effect.
The latter {| ... |} might serve as a solid syntax suggestion for the
anonymous row type syntax.
Re #2
That leaves open the means for how to talk about a lens for a given field.
Under Adam's proposal this had evolved into making a really complicated
instance that we could extract a lens from. This had the benefit over the
current state of the `record` package that we could support fully
type-changing lenses. Losing type-changing assignment would be a big step
back from the previous proposal / the current state of development for
folks who just use makeClassy or custom lens production rules with lens to
get something similar, though.
But the thing we never found was a nice short syntax for talking about the
lens you get from a given field (or possibly chain of fields); Gundry's
solution was 90% library and almost no syntax. On the other hand Adam was
shackled by having to let the accessor be used as a normal function as well
as a lens. Nikita's records don't have that problem.
Having no syntax at all for extracting the lens from a field accessor, but
rather having it just be the lens, could directly address that concern.
This raises some questions about scope: where do these names live? What
happens when you have a module A that defines a record with a field, a
module B that does the same for a different record, and a module C that
imports both? But, really, we had those questions before with Adam's
proposal, so there is nothing new there.
And for what it is worth, I've seen users in the wild using makeLenses on
records with several hundred fields (!!), so we'd need to figure out
something that doesn't cap a record at 24 fields. This feedback came in
because we made the lenses lazier and it caused some folks a great deal of
pain in terms of time spent in code gen!
It is a long trek from "this is plausible" to "hey, let's bet the future of
records and a bunch of syntax in the language on this".
It would also necessarily entail moving a middling-sized chunk of lens into
base so that you can actually do something with these accessors. I've been
trying to draw lines around a "lens-core" for multiple years now. Everyone
has a different belief of what it should be, and trust me, I've heard, and
had to shoot down basically all of the pitches.
We're going to be stuck with the warts of whatever solution we build for a
very long time.
So with those caveats in mind, I'd encourage us to take our time rather
than rush into trying to get this 7.12 ready.
Personally I would be happy if by the time we ship 7.12 we had a plan for
how we could proceed, that we could then judge on its merits.
-Edward

On 20/01/2015 23:07, Edward Kmett wrote:
It is a long trek from "this is plausible" to "hey, let's bet the future of records and a bunch of syntax in the language on this".
Absolutely. On the other hand, this is the first proposal I've seen that really hits (for me) a point in the design space with an acceptable power-to-weight ratio. Yes, there are some corners cut, and it remains to be seen whether, after we've decided which corners we want to uncut, the design retains the same ratio. A couple of answers to specific points:
Re #1
The main term- and type-level bits of syntax that could be co-opted that aren't already in use are @ and ~ (at the term level), and things like banana brackets (| ... |); while those already have some other, unrelated connotations for folks, something related like {| ... |} could work. We use such bananas for our row types in Ermine to good effect.
The latter {| ... |} might serve as a solid syntax suggestion for the anonymous row type syntax.
Why not just use { ... } ?
Re #2
That leaves open the means for how to talk about a lens for a given field. Under Adam's proposal this had evolved into making a really complicated instance that we could extract a lens from. This had the benefit over the current state of the `record` package that we could support fully type-changing lenses. Losing type-changing assignment would be a big step back from the previous proposal / the current state of development for folks who just use makeClassy or custom lens production rules with lens to get something similar, though.
But the thing we never found was a nice short syntax for talking about the lens you get from a given field (or possibly chain of fields); Gundry's solution was 90% library and almost no syntax. On the other hand Adam was shackled by having to let the accessor be used as a normal function as well as a lens. Nikita's records don't have that problem.
Having no syntax at all for extracting the lens from a field accessor, but rather having it just be the lens, could directly address that concern. This raises some questions about scope: where do these names live? What happens when you have a module A that defines a record with a field, a module B that does the same for a different record, and a module C that imports both? But, really, we had those questions before with Adam's proposal, so there is nothing new there.
Right. So either

(a) a field name is a bare identifier that is bound to the lens, or
(b) there is special syntax for the lens of a field name.

If (a), there needs to be a declaration of the name in order that we can talk about scoping. That makes (b) a lot more attractive; and if you really find the syntax awkward then you can always bind a local variable to the lens, or export the names from your library.
And for what it is worth, I've seen users in the wild using makeLenses on records with several hundred fields (!!), so we'd need to figure out something that doesn't cap a record at 24 fields. This feedback came in because we made the lenses lazier and it caused some folks a great deal of pain in terms of time spent in code gen!
Sure. We deal with large tuples by nesting, and I imagine something similar could be done for records (I haven't worked through the details though).

Cheers,
Simon
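
A sketch of the nesting idea, under the assumption that a wide record would desugar to a tree of fixed-arity tuples; the grouping and the composed lens are purely illustrative:

    {-# LANGUAGE RankNTypes #-}

    type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

    -- Suppose a 5-field record is stored as a pair of smaller groups,
    -- so that no single constructor exceeds the arity cap:
    type Wide = ((Int, Int, Int), (Int, Int))

    lfst :: Lens (a, b) a
    lfst f (a, b) = fmap (\a' -> (a', b)) (f a)

    lsnd :: Lens (a, b) b
    lsnd f (a, b) = fmap (\b' -> (a, b')) (f b)

    -- The lens for "field 4" is the group lens composed with the slot
    -- lens; van Laarhoven lenses compose with plain (.):
    field4 :: Lens Wide Int
    field4 = lsnd . lfst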

On Wed, Jan 21, 2015 at 4:36 AM, Simon Marlow wrote:
Why not just use { ... } ?
Mostly because it would conflict with the existing record syntax when used as a member of a data type. Using { ... } would break all existing code, while {| ... |} could peacefully co-exist.

data Foo = Foo { bar :: Bar }

vs.

data Foo = Foo {| bar :: Bar |}

You could, I suppose, manually distinguish them using ()'s:

data Foo = Foo ({ bar :: Bar })

might be something folks could grow to accept.

Another reason that comes to mind is that it causes a further divergence between the way terms and types behave/look, complicating stuff like Richard Eisenberg's work on giving us something closer to real dependent types.

Re #2
Right. So either (a) A field name is a bare identifier that is bound to the lens, or (b) There is special syntax for the lens of a field name
If (a) there needs to be a declaration of the name in order that we can talk about scoping. That makes (b) a lot more attractive; and if you really find the syntax awkward then you can always bind a local variable to the lens, or export the names from your library.
Alternately, (c) we could play games with ensuring the "name" is shared despite coming from different fields.

As a half-baked idea: if we pretended all field accessors were names from some magic internal GHC.Record.Fields module, then using

data Foo = Foo {| bar :: Bar, baz :: Baz |}

would add an `import GHC.Record.Fields (bar, baz)` to the module. These would all expand to the same Symbol-based representation behind the scenes, so that if two record types used the same names, they'd just work together, with no scoping issues.

This has the benefit that users could write such import statements by hand to use fields themselves, no sigils get used up, and the resulting code is the cleanest it can be.

-Edward
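
A rough sketch of what those shared names could desugar to, using machinery available in GHC at the time: one Symbol-indexed class, with every record that has a bar field providing an instance. All names here (FieldOwner, fieldLens, the magic module) are hypothetical:

    {-# LANGUAGE DataKinds, KindSignatures, RankNTypes,
                 MultiParamTypeClasses, FunctionalDependencies,
                 FlexibleInstances #-}

    import Data.Proxy (Proxy (..))
    import GHC.TypeLits (Symbol)

    type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

    -- The class a magic GHC.Record.Fields module might dispatch through:
    class FieldOwner (n :: Symbol) a r | n r -> a where
      fieldLens :: Proxy n -> Lens r a

    -- The shared name "bar" is a single binding:
    bar :: FieldOwner "bar" a r => Lens r a
    bar = fieldLens (Proxy :: Proxy "bar")

    -- Two unrelated records with a bar field coexist with no clash:
    data Foo  = Foo  { fooBar :: Int }
    data Quux = Quux { quuxBar :: Bool, quuxBaz :: Char }

    instance FieldOwner "bar" Int Foo where
      fieldLens _ f (Foo b) = fmap Foo (f b)
    instance FieldOwner "bar" Bool Quux where
      fieldLens _ f (Quux b z) = fmap (\b' -> Quux b' z) (f b)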

The latter {| ... |} might serve as a solid syntax suggestion for the anonymous row type syntax.
I like this well enough. My Hugs TRex suggestion comes from not particularly caring much what characters are used to delimit fields; using an existing implementation's design decisions lets one avoid going through bikesheddery. #x as in TRex WFM too. Personally I couldn't care less whether it matches the OO languages' x.y or not.

As someone with quite a lot of skin in this game, I thought it might be useful to give my perspective on how this relates to ORF. Apologies that this drags on a bit...

Both approaches use essentially the same mechanism for resolving overloaded field names (typeclasses indexed by type-level strings, called Has/Upd or FieldOwner). ORF allows fields to be both selectors and various types of lenses, whereas the record library always makes them van Laarhoven lenses, but this isn't really a fundamental difference.

The crucial difference is that ORF adds no new syntax, and solves Has/Upd constraints for existing datatypes, whereas the record library adds a new syntax for anonymous records and their fields that is completely separate from existing datatypes, and solves FieldOwner constraints only for these anonymous records (well, their desugaring).

On the one hand, anonymous records are a very desirable feature, and in some ways making them separate is a nice simplification. However, they are not as expressive as the existing Haskell record datatypes (sums, strict/unpacked fields, higher-rank fields), and having two records mechanisms is a little unsatisfying. Do we really want to distinguish

data Foo = MkFoo { bar :: Int, baz :: Bool }

data Foo = MkFoo {| bar :: Int, baz :: Bool |}

(where the first is the traditional approach, and the second is a single-argument constructor taking an anonymous record in Edward's proposed syntax)?

It might be nice to have a syntactic distinction between record fields and normal functions (the [l|...|] in the record library), because it makes name resolution much simpler. ORF was going down the route of adding no syntax, so name resolution becomes more complex, but we could revisit that decision and perhaps make ORF simpler. But I don't know what the syntax should be.

I would note that if we go ahead with ORF, the record library could potentially take advantage of it (working with ORF's Has/Upd classes instead of defining its own FieldOwner class). Then we could subsequently add anonymous records to GHC if there is enough interest and implementation effort. However, I don't want to limit the discussion: if there's consensus that ORF is not the right approach, then I'm happy to let it go the way of all the earth. ;-)

(Regarding the status of ORF, Simon PJ and I had a useful meeting last week where we identified a plan for getting it back on track, including some key simplifications to the sticking points in the implementation. So there might be some hope for getting it in after all.)

Adam
-- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/
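
To ground the mechanism Adam describes, here is a minimal skeleton of a typeclass indexed by a type-level string, in the selector-function flavour. ORF's real Has/Upd classes are considerably richer; all names below are illustrative:

    {-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
                 FunctionalDependencies, FlexibleInstances #-}

    import Data.Proxy (Proxy (..))
    import GHC.TypeLits (Symbol)

    -- Resolve a field name, carried as a type-level string, via a class:
    class Has (n :: Symbol) r a | n r -> a where
      getField :: Proxy n -> r -> a

    data Person = Person { personName :: String, personAge :: Int }

    instance Has "name" Person String where getField _ = personName
    instance Has "age"  Person Int    where getField _ = personAge

    -- Code can now be polymorphic over any record with a name field:
    greet :: Has "name" r String => r -> String
    greet r = "Hello, " ++ getField (Proxy :: Proxy "name") r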

Personally, I think the two proposals, ORF and Nikita's record approach
address largely differing needs.
The ORF proposal has the benefit that it doesn't require GHC itself to know
anything about lenses in order to work and is mostly compatible with the
existing field accessor combinators.
Nikita's proposal on the other hand builds a form of TRex-like records
where it has its own little universe to play in, and doesn't have to
contort itself to make the field accessors backwards compatible. As its own
little world, the fact that the ORF can't deal with certain types of fields
just becomes a limitation on this little universe, and all existing code
would continue to work.
I, too, have a lot of skin in the game with the existing ORF proposal, but
ultimately we're going to be stuck with whatever solution we build for a
long time, and it is, we both have to confess, admittedly quite
complicated, so it seems exploring the consequences of a related design
that operates under different constraints does little harm.
I'm mostly paying the work the courtesy it deserves by considering, to its
logical conclusion, what such a design would look like fleshed out in a way
that maximizes how nice the result could be to use. I'm curious, as mostly
a thought experiment, how nice a design we could get in the end under these
slightly different assumptions.
If, in the end, having an anonymous record syntax that is distinct from the
existing one is too ugly, it is okay for us to recoil from it and go back
to committing to the existing proposal, but I for one would prefer to
"steelman" (https://themerelyreal.wordpress.com/2012/12/07/steelmanning/)
Nikita's trick first.
Thus far, all of this is but words in a handful of emails. I happen to
think the existing ORF implementation is about as good as we can get
operating under the assumptions it does. That said, operating under
different assumptions may get us a nicer user experience. I'm not sure,
though, hence the thought experiment.
-Edward

My thoughts mostly mirror those of Adam and Edward.
1) I want something that is backwards compatible.
2) Anonymous records are nice to have, but I don't want to have all records
be anonymous (and have to jump through newtype hoops to get back
non-anonymous records.)
3) I don't think it's a good idea to have lots of functions be polymorphic
in the record types of their arguments. If that falls out for free (like it
does in both ORF and Nikita's proposals) that's nice, but I think anonymous
records should be used sparingly.
To me, anonymous records look a lot like Go's interfaces, whose structural
typing I don't think is a great idea. Go's interfaces give the appearance
of giving you more polymorphic functions (i.e. functions with arguments of
type { f :: T, ... }), but you have to express the required laws on these
record fields purely in terms of comments. With type class-based
polymorphism you're somewhat more specific and deliberate when you state
what kind of values your functions are polymorphic over. You don't just say
"this value must support a function f :: T" but instead "this value must
support a function f :: T, where the behavior of f is specified by the type
class it's defined in". I also have extensive experience of duck typing
from Python, and there I think duck typing has not played out well
(somewhat corroborated by the fact that Python is adding abstract base
classes, so it's possible to talk about the laws I mentioned above).
4) There are issues with strictness and unpacking.
5) I don't know if I want to commit the *language* to a particular lens
type.

On 21/01/2015 16:01, Johan Tibell wrote:
My thoughts mostly mirror those of Adam and Edward.
1) I want something that is backwards compatible.
Backwards compatible in what sense? Extension flags provide backwards compatibility, because you just don't turn on the extension until you want to use it. That's how all the other extensions work; most of them change syntax in some way or other that breaks existing code.
2) Anonymous records are nice to have, but I don't want to have all records be anonymous (and have to jump through newtype hoops to get back non-anonymous records.)
So right now you have to say

data T = R { a :: Int }

and with anonymous records you could say

data T = R {| a :: Int |}

(or something similar). That doesn't seem like jumping through hoops, it's exactly the same amount of syntax. If you're worried about the extra layer of boxing (quite reasonable) then either (a) use a newtype, if possible, or (b) we could consider automatic UNPACKing of records used in a constructor argument.
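
For reference, this is the per-field control that today's records already offer, and that an anonymous-records universe needs an answer to; option (b) above would instead amount to GHC unpacking the whole anonymous record payload automatically. Standard GHC, nothing hypothetical:

    -- Strictness and unpacking are per-field annotations today:
    data T2 = R2 { a2 :: {-# UNPACK #-} !Int  -- unboxed into the constructor
                 , b2 :: !Bool                -- strict but still boxed
                 }
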
3) I don't think it's a good idea to have lots of functions be polymorphic in the record types of their arguments. If that falls out for free (like it does in both ORF and Nikita's proposals) that's nice, but I think anonymous records should be used sparingly.
There are stylistic issues with the use of anonymous records, I agree. But I don't consider anonymous records to be the main feature here, it's just a nice way to factor the design. [..]
4) There are issues with strictness and unpacking.
Yes - probably the major drawbacks, along with the lack of type-changing updates.
5) I don't know if I want to commit the *language* to a particular lens type.
Fair point. Cheers, Simon
On Wed, Jan 21, 2015 at 2:11 PM, Edward Kmett
mailto:ekmett@gmail.com> wrote: Personally, I think the two proposals, ORF and Nikita's record approach address largely differing needs.
The ORF proposal has the benefit that it doesn't require GHC itself to know anything about lenses in order to work and is mostly compatible with the existing field accessor combinators.
Nikita's proposal on the other hand builds a form of Trex-like records where it has its own little universe to play in, and doesn't have to contort itself to make the field accessors backwards compatible. As its own little world, the fact that the ORF can't deal with certain types of fields just becomes a limitation on this little universe, and all existing code would continue to work.
I, too, have a lot of skin in the game with the existing ORF proposal, but ultimately we're going to be stuck with whatever solution we build for a long time, and it is, we both have to confess, admittedly quite complicated, so it seems exploring the consequences of a related design which has different constraints on its design does little harm.
I'm mostly paying the work the courtesy it deserves by considering to its logical conclusion what such a design would look like fleshed out in a way that maximized how nice the result could be to use. I'm curious, as mostly a thought experiment, how nice a design we could get in the end under these slightly different assumptions.
If, in the end, having an anonymous record syntax that is distinct from the existing one is too ugly, it is okay for us to recoil from it and go back to committing to the existing proposal, but I for one would prefer to "steelman https://themerelyreal.wordpress.com/2012/12/07/steelmanning/" Nikita's trick first.
Thus far, all of this is but words in a handful of emails. I happen to think the existing ORF implementation is about as good as we can get operating under the assumptions it does. That said, operating under different assumptions may get us a nicer user experience. I'm not sure, though, hence the thought experiment.
-Edward
On Wed, Jan 21, 2015 at 5:05 AM, Adam Gundry
mailto:adam@well-typed.com> wrote: As someone with quite a lot of skin in this game, I thought it might be useful to give my perspective on how this relates to ORF. Apologies that this drags on a bit...
Both approaches use essentially the same mechanism for resolving overloaded field names (typeclasses indexed by type-level strings, called Has/Upd or FieldOwner). ORF allows fields to be both selectors and various types of lenses, whereas the record library always makes them van Laarhoven lenses, but this isn't really a fundamental difference.
The crucial difference is that ORF adds no new syntax, and solves Has/Upd constraints for existing datatypes, whereas the record library adds a new syntax for anonymous records and their fields that is completely separate from existing datatypes, and solves FieldOwner constraints only for these anonymous records (well, their desugaring).
On the one hand, anonymous records are a very desirable feature, and in some ways making them separate is a nice simplification. However, they are not as expressive as the existing Haskell record datatypes (sums, strict/unpacked fields, higher-rank fields), and having two records mechanisms is a little unsatisfying. Do we really want to distinguish
data Foo = MkFoo { bar :: Int, baz :: Bool } data Foo = MkFoo {| bar :: Int, baz :: Bool |}
(where the first is the traditional approach, and the second is a single-argument constructor taking an anonymous record in Edward's proposed syntax)?
It might be nice to have a syntactic distinction between record fields and normal functions (the [l|...|] in the record library), because it makes name resolution much simpler. ORF was going down the route of adding no syntax, so name resolution becomes more complex, but we could revisit that decision and perhaps make ORF simpler. But I don't know what the syntax should be.
I would note that if we go ahead with ORF, the record library could potentially take advantage of it (working with ORF's Has/Upd classes instead of defining its own FieldOwner class). Then we could subsequently add anonymous records to GHC if there is enough interest and implementation effort. However, I don't want to limit the discussion: if there's consensus that ORF is not the right approach, then I'm happy to let it go the way of all the earth. ;-)
(Regarding the status of ORF, Simon PJ and I had a useful meeting last week where we identified a plan for getting it back on track, including some key simplifications to the sticking points in the implementation. So there might be some hope for getting it in after all.)
Adam
On 20/01/15 21:44, Simon Marlow wrote: > For those who haven't seen this, Nikita Volkov proposed a new approach > to anonymous records, which can be found in the "record" package on > Hackage: http://hackage.haskell.org/package/record > > It had a *lot* of attention on Reddit: > http://nikita-volkov.github.io/record/ > > Now, the solution is very nice and lightweight, but because it is > implemented outside GHC it relies on quasi-quotation (amazing that it > can be done at all!). It has some limitations because it needs to parse > Haskell syntax, and Haskell is big. So we could make this a lot > smoother, both for the implementation and the user, by directly > supporting anonymous record syntax in GHC. Obviously we'd have to move > the library code into base too. > > This message is by way of kicking off the discussion, since nobody else > seems to have done so yet. Can we agree that this is the right thing > and should be directly supported by GHC? At this point we'd be aiming > for 7.12. > > Who is interested in working on this? Nikita? > > There are various design decisions to think about. For example, when > the quasi-quote brackets are removed, the syntax will conflict with the > existing record syntax. The syntax ends up being similar to Simon's > 2003 proposal > http://research.microsoft.com/en-us/um/people/simonpj/Haskell/records.html > (there are major differences though, notably the use of lenses for > selection and update). > > I created a template wiki page: > https://ghc.haskell.org/trac/ghc/wiki/Records/Volkov > > Cheers, > Simon
-- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ _______________________________________________ ghc-devs mailing list ghc-devs@haskell.org mailto:ghc-devs@haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs
_______________________________________________ ghc-devs mailing list ghc-devs@haskell.org mailto:ghc-devs@haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs

On Wed, Jan 21, 2015 at 11:48 AM, Simon Marlow wrote:
2) Anonymous records are nice to have, but I don't want to have all
records be anonymous (and have to jump through newtype hoops to get back non-anonymous records.)
So right now you have to say
data T = R { a :: Int }
and with anonymous records you could say
data T = R {| a :: Int |}
(or something similar). That doesn't seem like jumping through hoops, it's exactly the same amount of syntax. If you're worried about the extra layer of boxing (quite reasonable) then either (a) use a newtype, if possible, or (b) we could consider automatic UNPACKing of records used in a constructor argument.
In the first of these declarations, we automatically have:

a :: T -> Int

Do we get that automatically in the second? Or do we have to write out all the field accessors, etc. we want to derive on T (and, can we do that)? I assume things like that are the concern for "non-anonymous records."

-- Dan

[I should say, in case it wasn't clear from my previous email, that I'm impressed by Nikita's work, excited to see this discussion revived, and very keen to find the best solution, whether that builds on ORF or not. Anyway, back down the rabbit hole...]

On 21/01/15 16:48, Simon Marlow wrote:
On 21/01/2015 16:01, Johan Tibell wrote:
My thoughts mostly mirror those of Adam and Edward.
1) I want something that is backwards compatible.
Backwards compatible in what sense? Extension flags provide backwards compatibility, because you just don't turn on the extension until you want to use it. That's how all the other extensions work; most of them change syntax in some way or other that breaks existing code.
Well, it's nice if turning on an extension flag doesn't break existing code (as far as possible, and stolen keywords etc. excepted). In ORF-as-is, this is mostly true, except for corner cases involving higher-rank fields or very polymorphic code. I think it's something to aim for in any records design, anonymous or not.
4) There are issues with strictness and unpacking.
Yes - probably the major drawbacks, along with the lack of type-changing updates.
Is there any reason why Nikita's proposal couldn't be extended to support type-changing update, just like ORF? Though the cases in which it can and cannot apply are inevitably subtle.

Also, I'd add fields with higher-rank types to the list of missing features. From a user's perspective, it might seem a bit odd if

data T = MkT { foo :: forall a . a }

was fine but

data T = MkT {| foo :: forall a . a |}

would not be a valid declaration. (Of course, ORF can't overload "foo" either, and maybe this is an inevitable wart.)
5) I don't know if I want to commit the *language* to a particular lens type.
Agreed, but I don't think this need be an issue for either proposal. We know from ORF that we can make fields sufficiently polymorphic to be treated as selector functions or arbitrary types of lenses (though treating them as van Laarhoven lenses requires either some clever typeclass trickery in the base library, or using a combinator to make a field into a lens at use sites).

Adam

-- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/
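
A sketch of the "combinator at use sites" option Adam mentions: given a getter and a setter for a field, one small function produces a van Laarhoven lens on demand. Nothing here is ORF's actual API; it only shows that the conversion itself is cheap:

    {-# LANGUAGE RankNTypes #-}

    type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

    -- Build a van Laarhoven lens from a getter/setter pair; a records
    -- design could apply this at each use site of an overloaded field.
    fieldToLens :: (s -> a) -> (s -> a -> s) -> Lens s a
    fieldToLens get set f s = fmap (set s) (f (get s))

    data Person = Person { _name :: String, _age :: Int }

    nameL :: Lens Person String
    nameL = fieldToLens _name (\p n -> p { _name = n })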

On Wed, Jan 21, 2015 at 1:06 PM, Adam Gundry wrote:
Also, I'd add fields with higher-rank types to the list of missing features. From a user's perspective, it might seem a bit odd if
data T = MkT { foo :: forall a . a }
was fine but
data T = MkT {| foo :: forall a . a |}
would not be a valid declaration. (Of course, ORF can't overload "foo" either, and maybe this is an inevitable wart.)
I'm slowly coming around to thinking that this is inevitable without a bunch of changes in the way we work with classes. You otherwise need to allow impredicative types in some contexts, which raises all sorts of questions. In the latter case we can at least be clear about why it doesn't work in the error message; in the ORF case it has to just not generate a lens. =(
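
To see the impredicativity problem concretely: a lens focused on a higher-rank field would need its type parameter instantiated to a polytype, which GHC rejects. A small sketch, with the rejected signature left in a comment:

    {-# LANGUAGE RankNTypes #-}

    type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

    -- The record declaration itself is fine:
    data T = MkT { unT :: forall a. a }

    -- But the corresponding lens is not expressible:
    --
    --   fooL :: Lens T (forall a. a)
    --
    -- since 'a' in Lens (and in Functor's argument) would have to be
    -- instantiated at a polytype, i.e. impredicatively. So ORF can only
    -- decline to generate a lens for such a field.
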
5) I don't know if I want to commit the *language* to a particular lens type.
Agreed, but I don't think this need be an issue for either proposal. We know from ORF that we can make fields sufficiently polymorphic to be treated as selector functions or arbitrary types of lenses (though treating them as van Laarhoven lenses requires either some clever typeclass trickery in the base library, or using a combinator to make a field into a lens at use sites).
Admittedly that has also been the source of 90% of the complexity of the ORF proposal. There we _had_ to do this in order to support the use as a regular function.

There is a large design space here, and the main thing Nikita's proposal brings to the table is slightly different assumptions about what such records should mean. This _could_ let us shed some of the rougher edges of ORF, at the price of having to lock in a notion of lenses.

I'm on the fence about whether it would be a good idea to burden Nikita's proposal in the same fashion, but I think it'd be wise to explore it in both fashions. My gut feeling though is that if we tie it up with the same restrictions as ORF you just mostly get a less useful ORF with anonymous record sugar thrown in.

-Edward

On 21/01/15 18:14, Edward Kmett wrote:
5) I don't know if I want to commit the *language* to a particular lens type.
Agreed, but I don't think this need be an issue for either proposal. We know from ORF that we can make fields sufficiently polymorphic to be treated as selector functions or arbitrary types of lenses (though treating them as van Laarhoven lenses requires either some clever typeclass trickery in the base library, or using a combinator to make a field into a lens at use sites).
Admittedly that has also been the source of 90% of the complexity of the ORF proposal. There we _had_ to do this in order to support the use as a regular function.
I'm surprised and interested that you view this as a major source of complexity. From my point of view, I quite liked how the ability to overload fields as both selector functions and arbitrary lenses turned out. Compared to some of the hairy GHC internal details relating to name resolution, it feels really quite straightforward. Also, I've recently worked out how to simplify and generalise it somewhat (see [1] and [2] if you're curious).
There is a large design space here, and the main thing Nikita's proposal brings to the table is slightly different assumptions about what such records should mean. This _could_ let us shed some of the rougher edges of ORF, at the price of having to lock in a notion of lenses.
Yes. It's good to explore the options. For what it's worth, I'm sceptical about blessing a particular notion of lenses unless it's really necessary, but I'm prepared to be convinced otherwise.
I'm on the fence about whether it would be a good idea to burden Nikita's proposal in the same fashion, but I think it'd be wise to explore it in both fashions. My gut feeling though is that if we tie it up with the same restrictions as ORF you just mostly get a less useful ORF with anonymous record sugar thrown in.
I think there's a sensible story to tell about an incremental plan that starts with something like ORF and ends up with something like Nikita's anonymous records. I'll try to tell this story when I can rub a few more braincells together...

Adam

[1] https://github.com/adamgundry/records-prototype/blob/master/NewPrototype.hs
[2] https://github.com/adamgundry/records-prototype/blob/master/CoherentPrototyp...

-- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/

Adam, do you have any measurements for how much code gets generated for a record declaration with ORF, compared with a record declaration in GHC right now? That's one thing that has been a nagging worry for me with ORF, but I just don't have any idea if it's a real problem or not.

Under Nikita's proposal, zero code is generated for a record "declaration" (since there isn't one), and the lenses are tiny expressions too. There's some boilerplate in the library, but it's generated once and for all, and isn't that huge anyway. The lightweightness of it from a code-size standpoint is very attractive.

Cheers,
Simon

On 21/01/15 21:51, Simon Marlow wrote:
Adam, do you have any measurements for how much code gets generated for a record declaration with ORF, compared with a record declaration in GHC right now? That's one thing that has been a nagging worry for me with ORF, but I just don't have any idea if it's a real problem or not.
Yes, that was something that was a bit unsatisfying about the original implementation, though unfortunately I don't have hard numbers comparing the relative code sizes. But Simon PJ and I have realised that we can be much more efficient: the only things that need to be generated for record declarations are selector functions (as at present) and updater functions (one per field, per type). Everything else (typeclass dfuns, type family axioms) can be made up on-the-fly in the typechecker. So I don't think it will make a practical difference.
Under Nikita's proposal, zero code is generated for a record "declaration" (since there isn't one), and the lenses are tiny expressions too. There's some boilerplate in the library, but it's generated once and for all, and isn't that huge anyway. The lightweightness of it from a code-size standpoint is very attractive.
Agreed. I'm coming to see how much of a virtue it is to be lightweight! I'm a bit worried, however, that if we want

    newtype T = MkT {| foo :: Int |}

    x = view [l|foo|] (MkT 42)   -- whatever our syntax for [l|...|] is

to be well-typed, we need an instance

    FieldOwner "foo" Int T

to be generated somewhere (perhaps via GND), so I suspect the code generation cost for non-anonymous overloaded records is about the same as with ORF. Nikita's proposal would let you choose whether to pay that cost at declaration time, which has advantages and disadvantages, of course.

Adam

-- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/
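[Editorial note: to make the code-size comparison Adam and Simon discuss above concrete, here is roughly what "selector plus one updater per field, per type" amounts to for an ordinary declaration. This is a hedged sketch; T, updateFoo and updateBar are illustrative names, not anything GHC actually emits.]

```
-- Illustrative only: the per-declaration pieces described above.
data T = MkT { foo :: Int, bar :: Bool }

-- The selectors foo :: T -> Int and bar :: T -> Bool are generated
-- today already; the updaters are the one extra piece per field:
updateFoo :: T -> Int -> T
updateFoo t v = t { foo = v }

updateBar :: T -> Bool -> T
updateBar t v = t { bar = v }
```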

On Wed, Jan 21, 2015 at 4:34 PM, Adam Gundry
I'm surprised and interested that you view this as a major source of complexity. From my point of view, I quite liked how the ability to overload fields as both selector functions and arbitrary lenses turned out. Compared to some of the hairy GHC internal details relating to name resolution, it feels really quite straightforward. Also, I've recently worked out how to simplify and generalise it somewhat (see [1] and [2] if you're curious).
I'm actually reasonably happy with the design we wound up with, but the need to mangle all the uses of the accessor with a combinator to extract from the data type is a perpetual tax; by giving in, picking a lens type, and not having to _also_ provide a normal field accessor, we could avoid it.
There is a large design space here, and the main thing Nikita's proposal
brings to the table is slightly different assumptions about what such records should mean. This _could_ let us shed some of the rougher edges of ORF, at the price of having to lock in a notion of lenses.
Yes. It's good to explore the options. For what it's worth, I'm sceptical about blessing a particular notion of lenses unless it's really necessary, but I'm prepared to be convinced otherwise.
For users this means the difference between set (foo.bar) 12 and set (le foo . le bar) 12 -- for some combinator le (hard to pick a name for!) that turns an accessor into a lens. It means they always have to be cognizant of that distinction. The inability to shed that tax in the other design is the major pain point it has always had for me. The user experience for it is/was going to be bad enough that I have remained concerned about how well adoption would go compared to existing approaches, which require more set-up but offer cleaner usage.
I'm on the fence about whether it would be a good idea to burden
Nikita's proposal in the same fashion, but I think it'd be wise to explore it in both fashions. My gut feeling though is that if we tie it up with the same restrictions as ORF you just mostly get a less useful ORF with anonymous record sugar thrown in.
I think there's a sensible story to tell about an incremental plan that starts with something like ORF and ends up with something like Nikita's anonymous records. I'll try to tell this story when I can rub a few more braincells together...
I definitely think there is a coherent story there, but I'm not sure I see a way that such a story could end that avoids the concerns above. I very much agree that it makes sense to keep both options on the table though so that we can work through the attendant issues and trade-offs. -Edward
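[Editorial note: a minimal sketch of the use-site tax Edward describes above -- if fields are plain selector functions, a combinator (his `le`) must wrap every field at every lens use site. Here `le` is given the setter explicitly to keep the sketch self-contained; in ORF it would come from a typeclass. All names are illustrative.]

```
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity(..))

data Pt = Pt { px :: Int, py :: Int } deriving Show

type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

-- Turn a getter/setter pair into a van Laarhoven lens.
le :: (s -> a) -> (s -> a -> s) -> Lens s a
le get put k s = put s <$> k (get s)

set :: Lens s a -> a -> s -> s
set l v = runIdentity . l (const (Identity v))

-- The tax: every use needs the wrapper, e.g.
--   set (le px (\p v -> p { px = v })) 12 (Pt 0 0)  ==  Pt 12 0
```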

On 22/01/15 05:38, Edward Kmett wrote:
On Wed, Jan 21, 2015 at 4:34 PM, Adam Gundry wrote:
I'm surprised and interested that you view this as a major source of complexity. From my point of view, I quite liked how the ability to overload fields as both selector functions and arbitrary lenses turned out. Compared to some of the hairy GHC internal details relating to name resolution, it feels really quite straightforward. Also, I've recently worked out how to simplify and generalise it somewhat (see [1] and [2] if you're curious).
I'm actually reasonably happy with the design we wound up with, but the need to mangle all the uses of the accessor with a combinator to extract from the data type is a perpetual tax; by giving in, picking a lens type, and not having to _also_ provide a normal field accessor, we could avoid it.
That's a fair point, at least provided one is happy to work with the canonical lens type we choose, because all others will require a combinator. ;-) Actually, the simplifications I recently came up with could allow us to make uses of the field work as van Laarhoven lenses, other lenses *and* selector functions. In practice, however, I suspect this might lead to somewhat confusing error messages, so it might not be desirable. Adam
[1] https://github.com/adamgundry/records-prototype/blob/master/NewPrototype.hs [2] https://github.com/adamgundry/records-prototype/blob/master/CoherentPrototyp...
-- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/

On Thu, Jan 22, 2015 at 4:31 AM, Adam Gundry
Actually, the simplifications I recently came up with could allow us to make uses of the field work as van Laarhoven lenses, other lenses *and* selector functions. In practice, however, I suspect this might lead to somewhat confusing error messages, so it might not be desirable.
Interesting. Have you actually tried this with a composition of your simplified form? I don't see how that can work. When we tried this before, we showed that there was a fundamental limitation in the way the functional dependencies had to flow information down the chain. Also, "foo.bar.baz" has very different interpretations between the lens and normal accessors, and both produce functions, so it's hard to see how this doesn't yield overlapping-instance hell. -Edward

On 22/01/15 21:07, Edward Kmett wrote:
On Thu, Jan 22, 2015 at 4:31 AM, Adam Gundry
mailto:adam@well-typed.com> wrote: Actually, the simplifications I recently came up with could allow us to make uses of the field work as van Laarhoven lenses, other lenses *and* selector functions. In practice, however, I suspect this might lead to somewhat confusing error messages, so it might not be desirable.
Interesting. Have you actually tried this with a composition of your simplified form? I don't see how that can work.
When we tried this before, we showed that there was a fundamental limitation in the way the functional dependencies had to flow information down the chain. Also, "foo.bar.baz" has very different interpretations between the lens and normal accessors, and both produce functions, so it's hard to see how this doesn't yield overlapping-instance hell.
Apparently, composition works fine with both interpretations. Have a look here:

https://github.com/adamgundry/records-prototype/blob/master/CoherentPrototyp...

The key idea is to define

    class IsRecordField (n :: Symbol) p where
      field :: proxy n -> p

as the interpretation of fields, and notice that the two instances we want,

    IsRecordField n (r -> t)
    IsRecordField n ((a -> f b) -> (s -> f t))

do not "morally" overlap, because in the first case r will always be a record datatype, and in the second case it is a function. Thus we can distinguish them using a closed type family.

However, now that I look back, I notice that I'm using a suspicious functional dependency that doesn't look as if it should satisfy the liberal coverage condition, even though it is accepted by 7.8.3. So perhaps I'm too optimistic; and in any case, the types involved may be too confusing for use in practice.

Adam

-- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/
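[Editorial note: a deliberately monomorphic sketch of the two interpretations for a single field. It sidesteps the closed type family and the suspicious fundep Adam mentions; the record T and the field name "foo" are invented for illustration, and the general version lives in Adam's CoherentPrototype linked above.]

```
{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses, FlexibleInstances #-}
import Data.Proxy (Proxy(..))
import GHC.TypeLits (Symbol)

class IsRecordField (n :: Symbol) p where
  field :: proxy n -> p

data T = MkT { tFoo :: Int }

-- Interpretation 1: the field as a plain selector function.
instance IsRecordField "foo" (T -> Int) where
  field _ = tFoo

-- Interpretation 2: the same field as a van Laarhoven lens.
instance Functor f => IsRecordField "foo" ((Int -> f Int) -> T -> f T) where
  field _ k (MkT v) = MkT <$> k v

-- The interpretation is picked by the type at the use site:
demo :: Int
demo = field (Proxy :: Proxy "foo") (MkT 42)
```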

On Wed, Jan 21, 2015 at 5:48 PM, Simon Marlow
On 21/01/2015 16:01, Johan Tibell wrote:
My thoughts mostly mirror those of Adam and Edward.
1) I want something that is backwards compatible.
Backwards compatible in what sense? Extension flags provide backwards compatibility, because you just don't turn on the extension until you want to use it. That's how all the other extensions work; most of them change syntax in some way or other that breaks existing code.
In this case in the sense of avoiding splitting code into a new-Haskell vs old-Haskell. This means that existing records should work well (and ideally also get the improved name resolution when used in call sites that have the pragma enabled) in the new record system.

Sorry to chime in since I am not an expert or ghc contributor, but I can't see how the new record system would break any existing valid Haskell code even if it was added wholesale without a language extension (and without special {|...|} syntax). I can see how expected behavior and error messages would change, but not any existing records or accessors. Would anyone mind explaining what would break? Thank you.

On 23/01/2015 04:12, Johan Tibell wrote:
In this case in the sense of avoiding splitting code into a new-Haskell vs old-Haskell. This means that existing records should work well (and ideally also get the improved name resolution when used in call sites that have the pragma enabled) in the new record system.
I understand that position, but it does impose some pretty big constraints, which may mean the design has to make some compromises. It's probably not worth discussing this tradeoff until there's actually a concrete proposal so that we can quantify how much old code would fail to compile and the cost of any compromises. Cheers, Simon

On 23/01/15 10:17, Simon Marlow wrote:
I understand that position, but it does impose some pretty big constraints, which may mean the design has to make some compromises. It's probably not worth discussing this tradeoff until there's actually a concrete proposal so that we can quantify how much old code would fail to compile and the cost of any compromises.
In this spirit, I've started to prepare a concrete proposal for a revised OverloadedRecordFields design, based on recent feedback:

https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesig...

This would not necessarily include anonymous records at first, but they do fit nicely as a potential later extension, and it would work well with a slightly amended version of the record library in the meantime. I'd be very interested to hear what you think of this.

Also, if someone would be prepared to flesh out a proposal based on the anonymous records idea, that might be a useful point of comparison.

Adam

-- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/

Hello,
I just read through Adam's proposal and here is my take:
1. I like the general idea---in particular:
- backwards compatibility is very important to me as I make extensive use
of records in all my code,
- for me, anonymous records are fairly low priority
2. I would propose that we simplify things further, and provide just one
class for overloading:

    class Field (name :: Symbol) rec rec' field field'
        | name rec -> field
        , name rec' -> field'
        , name rec field' -> rec'
        , name rec' field -> rec
      where
        field :: Functor f => Proxy name -> (field -> f field') -> (rec -> f rec')

I don't think we need to go into "lenses" at all: the `field` method simply
provides a functorial update function similar to `mapM`. Of course, one could
use the `lens` library to get more functionality, but this is entirely up to
the programmer.
When the ORF extension is enabled, GHC should simply generate an instance of the class, in a similar way to what the lens library does (see the sketch after this message).
3. I like the idea of `#x` desugaring into `field (Proxy :: Proxy "x")`,
but I don't like the concrete symbol choice:
- # is a valid operator and a bunch of libraries use it, so it won't be
compatible with existing code.
- @x might be a better choice; then you could write things like:
view @x rec
set @x 3 rec
over @x (+2) rec
- another nice idea (due to Eric Mertens, aka glguy), which allows us to
avoid additional special syntax is as follows:
- instead of using special syntax, reuse the module system
- designate a "magic" module name (e.g., GHC.Records)
- when the renamer sees a name imported from that module, it "resolves"
the name by desugaring it into whatever we want
- For example, if `GHC.Records.x` desugars into `field (Proxy :: Proxy
"x")`, we could write things like this:
import GHC.Records as R
view R.x rec
set R.x 3 rec
over R.x (+2) rec
-Iavor
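[Editorial note: to make Iavor's point 2 concrete, a hedged sketch of the instance ORF might emit for a hypothetical record R. The Field class is copied from his message; R, MkR and rX are invented, and only the monomorphic case is shown (type-changing update would need a parameterised record).]

```
{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances #-}
import Data.Proxy (Proxy(..))
import GHC.TypeLits (Symbol)

class Field (name :: Symbol) rec rec' field field'
    | name rec -> field, name rec' -> field'
    , name rec field' -> rec', name rec' field -> rec where
  field :: Functor f => Proxy name -> (field -> f field') -> rec -> f rec'

data R = MkR { rX :: Int }

-- What "GHC should simply generate an instance" would amount to for rX:
instance Field "x" R R Int Int where
  field _ k (MkR v) = MkR <$> k v

-- The method is already a van Laarhoven lens, so e.g. with the lens
-- library: view (field (Proxy :: Proxy "x")) (MkR 42)  ==  42
```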

Thanks for the feedback, Iavor! On 23/01/15 19:30, Iavor Diatchki wrote:
2. I would propose that we simplify things further, and provide just one class for overloading:
class Field (name :: Symbol) rec rec' field field'
    | name rec -> field
    , name rec' -> field'
    , name rec field' -> rec'
    , name rec' field -> rec
  where
    field :: Functor f => Proxy name -> (field -> f field') -> (rec -> f rec')
I don't think we need to go into "lenses" at all, the `field` method simply provides a functorial update function similar to `mapM`. Of course, one could use the `lens` library to get more functionality but this is entirely up to the programmer.
When the ORF extension is enabled, GHC should simply generate an instance of the class, in a similar way to what the lens library does.
There's something to be said for the simplicity of this approach, provided we're happy to commit to this representation of lenses. I'm attracted to the extra flexibility of the IsRecordField class -- I just noticed that it effectively gives us a syntax for identifier-like Symbol singletons, which could be useful in completely different contexts -- and I'd like to understand the real costs of the additional complexity it imposes.
3. I like the idea of `#x` desugaring into `field (Proxy :: Proxy "x")`, but I don't like the concrete symbol choice: - # is a valid operator and a bunch of libraries use it, so it won't be compatible with existing code.
Ah. I didn't realise that, but assumed it was safe behind -XMagicHash. Yes, that's no good.
- @x might be a better choice; then you could write things like: view @x rec set @x 3 rec over @x (+2) rec
This could work, though it has the downside that we've been informally using @ for explicit type application for a long time! Does anyone know what the status of the proposed ExplicitTypeApplication extension is?
- another nice idea (due to Eric Mertens, aka glguy), which allows us to avoid additional special syntax is as follows: - instead of using special syntax, reuse the module system - designate a "magic" module name (e.g., GHC.Records) - when the renamer sees a name imported from that module, it "resolves" the name by desugaring it into whatever we want - For example, if `GHC.Records.x` desugars into `field (Proxy :: Proxy "x")`, we could write things like this:
import GHC.Records as R
view R.x rec set R.x 3 rec over R.x (+2) rec
Interesting; I think Edward suggested something similar earlier in this thread. Avoiding a special syntax is a definite advantage, but the need for a qualified name makes composing the resulting lenses a bit tiresome (R.x.R.y.R.z or R.x . R.y . R.z). I suppose one could do

    import GHC.Records (x, y, z)
    import MyModule hiding (x, y, z)

but having to manually hide the selector functions and bring into scope the lenses is also annoying.

Adam

-- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/

On Fri, Jan 23, 2015 at 5:06 PM, Adam Gundry
Thanks for the feedback, Iavor!
On 23/01/15 19:30, Iavor Diatchki wrote:
2. I would propose that we simplify things further, and provide just one class for overloading:
class Field (name :: Symbol) rec rec' field field'
    | name rec -> field
    , name rec' -> field'
    , name rec field' -> rec'
    , name rec' field -> rec
  where
    field :: Functor f => Proxy name -> (field -> f field') -> (rec -> f rec')
I don't think we need to go into "lenses" at all, the `field` method simply provides a functorial update function similar to `mapM`. Of course, one could use the `lens` library to get more functionality but this is entirely up to the programmer.
When the ORF extension is enabled, GHC should simply generate an instance of the class, in a similar way to what the lens library does
3. I like the idea of `#x` desugaring into `field (Proxy :: Proxy "x")`,
but I don't like the concrete symbol choice: - # is a valid operator and a bunch of libraries use it, so it won't be compatible with existing code.
Ah. I didn't realise that, but assumed it was safe behind -XMagicHash. Yes, that's no good.
- @x might be a better choice; then you could write things like: view @x rec set @x 3 rec over @x (+2) rec
This could work, though it has the downside that we've been informally using @ for explicit type application for a long time! Does anyone know what the status of the proposed ExplicitTypeApplication extension is?
I'll confess I've been keen on stealing @foo for the purpose of (Proxy :: Proxy foo) or (Proxy :: Proxy "foo") from the type application stuff for a long time -- primarily because I remain rather dubious about how well the type application stuff can work, once you take a type and it goes through a usage/generalization cycle, the order of the types you can "apply" gets all jumbled up, making type application very difficult to actually use. Proxies on the other hand remain stable. I realize that I'm probably on the losing side of that debate, however. But I think it is fair to say that that little bit of dangling syntax will be a bone that is heavily fought over. ;)
- another nice idea (due to Eric Mertens, aka glguy), which allows us
to avoid additional special syntax is as follows: - instead of using special syntax, reuse the module system - designate a "magic" module name (e.g., GHC.Records) - when the renamer sees a name imported from that module, it "resolves" the name by desugaring it into whatever we want - For example, if `GHC.Records.x` desugars into `field (Proxy :: Proxy "x")`, we could write things like this:
import GHC.Records as R
view R.x rec set R.x 3 rec over R.x (+2) rec
Interesting; I think Edward suggested something similar earlier in this thread. Avoiding a special syntax is a definite advantage, but the need for a qualified name makes composing the resulting lenses a bit tiresome (R.x.R.y.R.z or R.x . R.y . R.z). I suppose one could do
import GHC.Records (x, y, z)
import MyModule hiding (x, y, z)
but having to manually hide the selector functions and bring into scope the lenses is also annoying.
In the suggestion I made as a (c) option for how to proceed around field names a few posts back in this thread, I was hinting towards having an explicit use of {| foo :: x |} somewhere in the module provide an implicit import of

    import Field (foo)

so users can always reference Field.foo explicitly if they don't have it in local scope, and names all share a common source. Of course this was in the context of a Nikita-style {| ... |} rather than the ORF { .. }. If the Nikita records didn't make an accessor, because there's no way for them to really do so, then there'd be nothing to conflict with. Being able to import and use them with ORF-style records would just be gravy then. Users would be able to get those out of the box.

-Edward
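[Editorial note: the implicit-import idea can be approximated today as an ordinary library module, which may help picture it. A hedged sketch: the module name Field, the HasLens class, and lensOf are all invented here, and the renamer magic discussed in the thread is exactly what would make the instances and the import automatic.]

```
{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleContexts #-}
module Field (foo) where

import Data.Proxy (Proxy(..))
import GHC.TypeLits (Symbol)

-- One lens-shaped method per field name, keyed by a type-level Symbol;
-- record libraries (or GHC) would supply the instances.
class HasLens (n :: Symbol) s a | n s -> a where
  lensOf :: Functor f => Proxy n -> (a -> f a) -> s -> f s

-- The exported name a client gets from "import Field (foo)":
foo :: (HasLens "foo" s a, Functor f) => (a -> f a) -> s -> f s
foo = lensOf (Proxy :: Proxy "foo")
```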

If we only add syntax when the language extension is used then we are not
clobbering everyone. # is not that common of an operator. I would much
rather upset a few people by taking that operator back when they opt-in to
turning the extension on than having a worse records implementation.

If the level of complaints I received when I stole (#) for use in lens is
any indication, er.. it is in very wide use. It was by far the most
contentious operator I grabbed. ;)
It seems to me that I'd not be in a hurry to both break existing code and
pay a long-term syntactic cost when we have options on the table that don't
require either: the "magic Field module" approach that both Eric and I
appear to have arrived at independently side-steps this issue nicely and
appears to result in a better user experience.
Keep in mind, one source of objections to operator-based sigils is that if
you put a sigil at the start of a lens the tax isn't one character but
two: there is a space you now need, to avoid forming (.#), when chaining
these things. "foo.bar" vs. "#foo . #bar" -- and the latter will always be uglier.
The `import Field (...)` approach results in users never having to pay more
syntactically than with options they have available to them now, and being
class based is even beneficial to folks who don't use Nikita's records.
-Edward
On Fri, Jan 23, 2015 at 5:47 PM, Greg Weber
If we only add syntax when the language extension is used then we are not clobbering everyone. # is not that common of an operator. I would much rather upset a few people by taking that operator back when they opt-in to turning the extension on than having a worse records implementation.

May I suggest something for the syntax (as an option; sorry if it's silly or
unrelated)? I don't really like either "@" or "#", because they seem too
hacky; meanwhile GHC already has an "accessor" syntax with braces { and },
so might it be an option to have something like:
```
data Foo = Foo { val :: Int }
data Bar = Bar { foo :: Foo }
main = do
let bar = Bar (Foo 10)
print bar{foo{val}}
let bar' = bar{foo{val}=10}
return ()
```
I think this syntax is 100% understandable for a "newbie". Not sure how it
relates to lenses though.

What do you think?

Hi Konstantine,
let bar' = bar{foo{val}=10}
If you're inside a record context you might just have something like:

    let bar' = bar { foo.val = 10 }

and

    let val = bar { foo.val }

or even

    let bar' = bar { foo.val %= someFunction }

This just seems to be some kind of syntactic sugar, so it's most likely less powerful than real lenses.

Greetings, Daniel
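[Editorial note: for comparison, here is what a nested update like bar { foo.val = 10 } desugars to with today's van Laarhoven lenses -- a self-contained sketch with hand-written lenses and a local set, nothing library-specific.]

```
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity(..))

data Foo = Foo { val :: Int } deriving Show
data Bar = Bar { foo :: Foo } deriving Show

type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

fooL :: Lens Bar Foo
fooL k (Bar f) = Bar <$> k f

valL :: Lens Foo Int
valL k (Foo v) = Foo <$> k v

set :: Lens s a -> a -> s -> s
set l v = runIdentity . l (const (Identity v))

-- bar { foo.val = 10 }  ===  set (fooL . valL) 10 bar
main :: IO ()
main = print (set (fooL . valL) 10 (Bar (Foo 0)))
```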

Hi Konstantine, On 24/01/15 11:55, Konstantine Rybnikov wrote:
May I suggest something for a syntax (as an option, sorry if it's silly or not related)? I really don't like neither "@" or "#" because they seem too hacky, meanwhile GHC already has an "accessor" syntax with braces { and }, so, might it be an option to have something like:
``` data Foo = Foo { val :: Int } data Bar = Bar { foo :: Foo }
main = do let bar = Bar (Foo 10) print bar{foo{val}} let bar' = bar{foo{val}=10} return ()
```
I think this syntax is 100% understandable for a "newbie". Not sure how is it related to lenses though.
What do you think?
Thanks for thinking about this problem (we certainly need fresh ideas!) but unfortunately this syntax is already taken by NamedFieldPuns, which interprets foo{val} ==> foo{val = val} so I don't think we can easily use it. I'm still keen to find a better solution than # or magic imports, though! Adam -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/

| I just noticed that it effectively gives us a syntax for identifier-like
| Symbol singletons, which could be useful in completely different contexts

Indeed so. I have written a major increment to https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesig... which people reading this thread may find interesting. Look for "Plan B".

For the first time I think I can see a nice, simple, elegant, orthogonal story.

Simon

I really like this proposal (except I would bike-shed about the record
selector syntax, which I'd want to be dot, as in the majority of languages).

Hi All,
I fixed a missing "x" in one of the instances. I like the proposal,
mostly because it has nothing to do with records, leaving people to
experiment with records in libraries.
I'm not keen on the use of Template Haskell to define lenses, or the
fact that all base libraries are going to need custom makeLens
definitions set apart from their definitions; plus IV is rather
"wired" into record selectors, which can't be undone later. I think we
can fix some of that by desugaring record definitions to the following
(a runnable sketch appears after this message):
data T = MkT {x :: Int}
instance FieldSelector "T" T Int where
fieldSelector (MkT x) = x
Then someone can, in a library, define:
instance FieldSelector x r a => IV x (r -> a) where
iv = fieldSelector
Now that records don't mention IV, we are free to provide lots of
different instances, each capturing some properties of each field,
without committing to any one style of lens at this point. Therefore,
we could have record desugaring also produce:
instance FieldSetter "T" T Int where
fieldSet v (T _) = T v
And also:
instance FieldSTAB "T" T Int where
fieldSTAB = ... the stab lens ...
(As we find new interesting types of operations over a field, with
different levels of polymorphism etc, we can keep adding new ones
without breaking compatibility, and most users won't care. Prototyping
new ones in Template Haskell is still probably a good idea.) Now
someone can define, in a record library:
instance (FieldSelector x r a, FieldSetter x r a) => IV x (Lens r a) where
iv = makeLens fieldSelector fieldSet
Or, for people who want #x to be a STAB lens directly (without a Lens
type wrapper), they can omit the IV x (r -> a) instance, and only have
#x have instances producing the STAB lens.
The one downside of this plan is orphan instances, which means if you
are writing a library and use one type of IV declaration (the selector
one), then anyone else building on your library won't be able to use a
different type of IV (the stab one). One potential way to fix that is
to parameterise IV, so you can say (warning, even more half-baked
thoughts ahead):
{-# LANGUAGE ImplicitValues=MyType #-}
Where MyType is a type I've defined in one of my imports, and then
desugar #x to:
iv @ "x" @ MyType @ alpha
And extend IV with an extra type parameter. Now all the Lens library
IV instances can include LensType, and people can mix and match
different record schemes in different modules.
Separately, the pattern: data T = ...; $(makeLens 'T) crops up a lot,
and is gently ugly. I wonder if there should be an extension that
lets you write: data T = ... deriving ('makeLens), or even just
deriving (Lens), which desugars to the same thing?
Thanks, Neil
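[Editorial note: a runnable approximation of Neil's FieldSelector/IV split, under assumed spellings -- the class shapes follow his message, but the Proxy plumbing and the demo are added here, and #x is written out as iv (Proxy :: Proxy "x") since the sugar doesn't exist yet.]

```
{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances #-}
import Data.Proxy (Proxy(..))
import GHC.TypeLits (Symbol)

data T = MkT { x :: Int }

-- What record desugaring would generate (cf. Neil's instances above):
class FieldSelector (n :: Symbol) r a | n r -> a where
  fieldSelector :: Proxy n -> r -> a

instance FieldSelector "x" T Int where
  fieldSelector _ (MkT v) = v

-- The library-side choice of interpretation: a field as its selector.
class IV (n :: Symbol) t where
  iv :: Proxy n -> t

instance FieldSelector n r a => IV n (r -> a) where
  iv = fieldSelector

-- #x (MkT 42) would desugar to:
demo :: Int
demo = iv (Proxy :: Proxy "x") (MkT 42)
```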

Thanks, Simon, I think we're starting to find a nice story indeed. Your implicit values idea is what I was starting to grope towards with IsRecordField, but I hadn't spotted the similarity to implicit parameters.

Like Neil, I still think it's worth separating the magically-solved typeclass that describes when a field belongs to a record (aka HasField) from the typeclass that explains how to interpret the #x syntax (aka IsRecordField/IV). In particular, with your plan the composition of two implicit values ends up having an ambiguous type:

    #foo . #bar :: (IV "foo" (b -> c), IV "bar" (a -> b)) => a -> c

We need to be able to express that the codomain is functionally dependent on the domain (or vice versa, for van Laarhoven lenses), which I think entails having some instance ... => IV n (r -> a). Moreover, seeing as we're so close, it would be nice if we come out of this with a way to get lenses "for free" from fields, rather than needing TH.

All this brings us back to the question of which instance(s) to have for (->), if any. I think Neil's suggestion for avoiding orphans is feasible... in fact, I believe the only real conflict is between

    instance IV n (r -> a)
    instance IV n ((a -> f b) -> s -> f t)

so we could probably just have two extensions to fix an additional parameter at one of two values. Although I'm not keen on the aesthetics. Perhaps we should just vote on whether preferential treatment is given to selectors or lenses, and pick one?

Adam
-- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/

| "wired" into record selectors, which can't be undone later. I think we | can fix some of that by desugaring record definitions to: | | data T = MkT {x :: Int} | | instance FieldSelector "T" T Int where | fieldSelector (MkT x) = x | | Then someone can, in a library, define: | | instance FieldSelector x r a => IV x (r -> a) where | iv = fieldSelector | | Now that records don't mention IV, we are free to provide lots of | different instances, each capturing some properties of each field, | without committing to any one style of lens at this point. Therefore, | we could have record desugaring also produce: | | instance FieldSetter "T" T Int where | fieldSet v (T _) = T v | | And also: | | instance FieldSTAB "T" T Int where | fieldSTAB = ... the stab lens ... OK, I buy this. We generate FieldSelector instances where possible, and FieldSetter instances where possible (fewer cases). Fine. Cutting to the chase, if we are beginning to converge, could someone (Adam, Neil?) modify the Redesign page https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesig... to focus on plan B only; and add this FieldGetter/Setter stuff? It's confusing when we have too many things in play. I'm sick at the moment, so I'm going home to bed -- hence handing off in a hopeful way to you two. I have added Edwards "import Field(x)" suggestion under syntax, although I don't really like it. One last thing: Edward, could you live with lenses coming from #x being of a newtype (Lens a b), or stab variant, rather than actually being a higher rank function etc? Of course lens composition would no longer be function composition, but that might not be so terrible; ".." perhaps. It would make error messages vastly more perspicuous. And, much as I love lenses, I think it's a mistake not to abstraction; it dramatically limits your future wiggle room. I really think we are finally converging. Simon

Personally, I don't like the sigil mangled version at all.
If it is then further encumbered by a combinator it is now several symbols
longer at every single use site than other alternatives put forth in this
thread. =(
xx #bar . xx #baz
or
xx @bar . xx @baz
compares badly enough against
bar.baz
for some as yet unnamed combinator xx and is a big enough tax for all users
to unavoidably pay that I fear it would greatly hinder adoption.
The former also has the disadvantage of stealing an operator that is
already in wide use.
Even assuming the fixity issues can be worked out for some other set of
operators to glue these tother we're still looking at
x^!? #bar!? #baz
vs.
x^.bar.baz
with another set of arcane rules to switch back and forth out of this to
deal with the lenses/traversals/prisms/etc that many folks have in their
code today.
It is something like 3 extra sets of symbols to memorize plus a tax of 3
characters per lens use site.
I know that I for one would hesitate to throw over my template haskell
generated lenses for something that was noisier at every use site. For all
that lenses are complex internally, they are a lot less arbitrary than that.
The import Field trick is magic, yes, but it has the benefit of being the
first approach I've seen where the resulting syntax can be as light as what
the user can generate by hand today.
-Edward

Edward, I think the point that Simon was making was that sugared
expressions like `#bar` could directly instantiate lenses if only Lens were
a distinct type, instead of an alias for a function type. I.e., it would be
`#bar . #baz`.
Introducing such a change would, of course, exclude the possibility of
making "lens"-compatible libraries without depending on "lens", like I did
with "record". However, the most basic functionality of "lens" could be
extracted into a separate library with a minimum of transitive
dependencies; then the reasons for not depending on it would simply
dissolve.
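[Editorial note: to give a feel for the newtype option Simon raises, a hedged sketch in which Lens is a distinct type and composition gets its own operator -- spelled (...) here purely for illustration, since "..", the name Simon floats, is reserved syntax in Haskell. The definitions below are local, not any library's API.]

```
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity(..))
import Data.Functor.Const (Const(..))

newtype Lens s a =
  Lens { runLens :: forall f. Functor f => (a -> f a) -> s -> f s }

-- Lens composition is no longer function composition:
(...) :: Lens s u -> Lens u a -> Lens s a
Lens f ... Lens g = Lens (f . g)

view :: Lens s a -> s -> a
view (Lens l) = getConst . l Const

set :: Lens s a -> a -> s -> s
set (Lens l) v = runIdentity . l (const (Identity v))
```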

Hello everyone!
I of course will be honoured to participate in the work on anonymous
records, if hopefully we get to it. However as I've seen there's been some
debate going on and confusion about whether they are at all worth
implementing. That is why I am writing an article titled “Why anonymous
records matter”, which I plan to publish this week.
Meanwhile I've updated "record" to support type-changing updates. Adam’s
proposal unfortunately wouldn’t work for anonymous records due to type
family instance conflicts. What I did was implement something in the spirit
of a Functional Dependencies based solution for tuples from Edward's "lens"
library. Thanks for inspiration!
Also I’ve decided to redo the “record” project as a full-blown preprocessor
with full Haskell syntax support, using hacks around “haskell-src-exts”, as was
once suggested by Chris Done on Reddit:
http://www.reddit.com/r/haskell/comments/2svayz/i_think_ive_nailed_it_ive_so...

I’m planning to stick to Simon’s insightful proposal concerning Implicit
Values as much as possible. Hopefully this all will turn the project into a
playground for a GHC extension, which people will be able to use with
existing GHC releases. However, I’m only beginning to work on this.
Also, as Adam’s already aware, I am exploring anonymous tagged unions
http://www.reddit.com/r/haskell/comments/2svayz/i_think_ive_nailed_it_ive_so....
Turns out they share quite a lot of properties with anonymous records and
in a similar style solve the constructor namespacing issue. Looks like
Implicit Values could come in handy for this as well.
Best regards,
Nikita

On Mon, Jan 26, 2015 at 4:18 PM, Nikita Volkov wrote:
I.e., it would be `#bar . #baz`.
Note: once you start using a data-type then (.) necessarily fails to allow you to ever do type changing assignment, due to the shape of Category, or you have to use yet another operator, so that snippet cannot work without giving up something we can do today.

OTOH: Using the lens-style story, no types are needed here that aren't already available in base and, done right, no existing operators need be stolen from the user, and type changing assignment is trivial.

I confess, I, like many in this thread, am less than comfortable with the notion of bringing chunks of lens into base. Frankly, I'd casually dismissed such concerns as a job for Haskell 2025. ;) However, I've been trying to consider it with an open mind here, because the alternatives proposed thus far lock in uglier code than the status quo with more limitations while simultaneously being harder to explain.

-Edward
making "lens"-compatible libraries without depending on "lens", like I did with "record". However the most basic functionality of "lens" could be extracted into a separate library with a minimum of transitive dependencies, then the reasons for not depending on it would simply dissolve.
2015-01-26 23:22 GMT+03:00 Edward Kmett
: Personally, I don't like the sigil mangled version at all.
If it is then further encumbered by a combinator it is now several symbols longer at every single use site than other alternatives put forth in this thread. =(
xx #bar . xx #baz
or
xx @bar . xx @baz
compares badly enough against
bar.baz
for some as yet unnamed combinator xx and is a big enough tax for all users to unavoidably pay that I fear it would greatly hinder adoption.
The former also has the disadvantage of stealing an operator that is already in wide use.
Even assuming the fixity issues can be worked out for some other set of operators to glue these tother we're still looking at
x^!? #bar!? #baz
vs.
x^.bar.baz
with another set of arcane rules to switch back and forth out of this to deal with the lenses/traversals/prisms/etc that many folks have in their code today.
It is something like 3 extra sets of symbols to memorize plus a tax of 3 characters per lens use site.
I know that I for one would hesitate to throw over my template haskell generated lenses for something that was noisier at every use site. For all that lenses are complex internally, they are a lot less arbitrary than that.
The import Field trick is magic, yes, but it has the benefit of being the first approach I've seen where the resulting syntax can be as light as what the user can generate by hand today.
-Edward
On Mon, Jan 26, 2015 at 8:50 AM, Simon Peyton Jones < simonpj@microsoft.com> wrote:
| "wired" into record selectors, which can't be undone later. I think we | can fix some of that by desugaring record definitions to: | | data T = MkT {x :: Int} | | instance FieldSelector "T" T Int where | fieldSelector (MkT x) = x | | Then someone can, in a library, define: | | instance FieldSelector x r a => IV x (r -> a) where | iv = fieldSelector | | Now that records don't mention IV, we are free to provide lots of | different instances, each capturing some properties of each field, | without committing to any one style of lens at this point. Therefore, | we could have record desugaring also produce: | | instance FieldSetter "T" T Int where | fieldSet v (T _) = T v | | And also: | | instance FieldSTAB "T" T Int where | fieldSTAB = ... the stab lens ...
OK, I buy this.
We generate FieldSelector instances where possible, and FieldSetter instances where possible (fewer cases).
Fine.
Cutting to the chase, if we are beginning to converge, could someone (Adam, Neil?) modify the Redesign page https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesig... to focus on plan B only; and add this FieldGetter/Setter stuff?
It's confusing when we have too many things in play. I'm sick at the moment, so I'm going home to bed -- hence handing off in a hopeful way to you two.
I have added Edwards "import Field(x)" suggestion under syntax, although I don't really like it.
One last thing: Edward, could you live with lenses coming from #x being of a newtype (Lens a b), or stab variant, rather than actually being a higher rank function etc? Of course lens composition would no longer be function composition, but that might not be so terrible; ".." perhaps. It would make error messages vastly more perspicuous. And, much as I love lenses, I think it's a mistake not to abstraction; it dramatically limits your future wiggle room.
I really think we are finally converging.
Simon _______________________________________________ ghc-devs mailing list ghc-devs@haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs
_______________________________________________ ghc-devs mailing list ghc-devs@haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs

Personally, I don't like the sigil mangled version at all.

You don’t comment on the relationship with implicit parameters. Are they ok with sigils?

If it is then further encumbered by a combinator it is now several symbols longer at every single use site than other alternatives put forth in this thread. =(

No, as Nikita says, under the “Redesign” proposal it would be #bar . #baz

The import Field trick is magic, yes, but it has the benefit of being the first approach I've seen where the resulting syntax can be as light as what the user can generate by hand today.

That’s why I added it to the “Redesign” page. It seems viable to me; a clever idea, thank you. Still, personally I prefer #x because of the link with implicit parameters. It would be interesting to know what others think.

I confess, I, like many in this thread, am less than comfortable with the notion of bringing chunks of lens into base. Frankly, I'd casually dismissed such concerns as a job for Haskell 2025. ;) However, I've been trying to consider it with an open mind here, because the alternatives proposed thus far lock in uglier code than the status quo with more limitations while simultaneously being harder to explain.

I don’t think anyone is suggesting adding any of lens, are they? Which bits did you think were being suggested for addition?

Note: once you start using a data-type then (.) necessarily fails to allow you to ever do type changing assignment, due to the shape of Category, or you have to use yet another operator, so that snippet cannot work without giving up something we can do today. OTOH: Using the lens-style story, no types are needed here that aren't already available in base and, done right, no existing operators need be stolen from the user, and type changing assignment is trivial.

I’m afraid I couldn’t understand this paragraph at all. Perhaps some examples would help, to illustrate what you mean?

Simon

On Mon, Jan 26, 2015 at 4:41 PM, Simon Peyton Jones wrote:
Personally, I don't like the sigil mangled version at all.
You don’t comment on the relationship with implicit parameters. Are they ok with sigils?
I don't have too many opinions about implicit parameters, but they don't really see a lot of use, which makes me somewhat leery of copying the pattern. ;)

If it is then further encumbered by a combinator it is now several symbols longer at every single use site than other alternatives put forth in this thread. =(
No, as Nikita says, under the “Redesign” proposal it would be #bar . #baz
The problem is that if you make #bar an instance of Category so that it can use (.) then it will fail to allow type changing re-assignment.
The import Field trick is magic, yes, but it has the benefit of being the first approach I've seen where the resulting syntax can be as light as what the user can generate by hand today.
That’s why I added it to the “Redesign” page. It seems viable to me; a clever idea, thank you. Still, personally I prefer #x because of the link with implicit parameters. It would be interesting to know what others think.
Admittedly @bar . @baz has the benefit that it introduces no namespacing conflicts at all. If we really had to go with some kind of sigil based solution, I _could_ rally behind that one, but I do like it a lot less than the import trick, if only because the import trick gets rid of that last '@' and space we have on every accessor, and you have to admit that the existing foo^.bar.baz.quux idiom reads a lot more cleanly than foo ^. @bar . @baz . @quux ever can. (I used @foo above because it avoids any potential conflict with existing user code, as @ isn't a legal operator.)

I confess, I, like many in this thread, am less than comfortable with the notion of bringing chunks of lens into base. Frankly, I'd casually dismissed such concerns as a job for Haskell 2025. ;) However, I've been trying to consider it with an open mind here, because the alternatives proposed thus far lock in uglier code than the status quo with more limitations while simultaneously being harder to explain.
I don’t think anyone is suggesting adding any of lens are they? Which bits did you think were being suggested for addition?
I was mostly referring to the use of the (a -> f b) -> s -> f t form.
Note: once you start using a data-type then (.) necessarily fails to allow you to ever do type changing assignment, due to the shape of Category, or you have to use yet another operator, so that snippet cannot work without giving up something we can do today. OTOH: Using the lens-style story, no types are needed here that isn't already available in base and, done right, no existing operators need be stolen from the user, and type changing assignment is trivial.
I’m afraid I couldn’t understand this paragraph at all. Perhaps some examples would help, to illustrate
what you mean?
I was writing that paragraph in response to your query if it'd make sense to have the @foo return some data type: it comes at a rather high cost.

Lens gets away with using (.) to compose because it's really using functions: with a funny mapM-like shape, (a -> f b) -> (s -> f t) is still a function on the outside, it just happens to have a (co)domain that also looks like a function (a -> f b).

If we make the outside type constructor a data type with its own Category instance, and just go `Accessor s a`, then it either loses its ability to change out types in s -- e.g. change out the type of the second member in a pair -- or it loses its ability to compose.

We gave up the latter to make Gundry's proposal work, as we were forced into that shape by trying to return a combinator that could be overloaded to act as an existing accessor function.

To keep categorical composition for the accessor, you might at first think we can use a product kind or something to get Accessor '(s,t) '(a,b) with both indices, but that gets stuck when you go to define `id`, so necessarily such a version of things winds up needing its own set of combinators.

-Edward
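The "gets stuck at id" point can be seen in a few lines; a sketch (the Accessor type here is illustrative, not taken from any library):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    import Control.Category (Category (..))
    import Prelude hiding (id, (.))

    -- A type-changing accessor indexed by pairs, so that it has the
    -- two-parameter shape Category expects.
    data Accessor :: (*, *) -> (*, *) -> * where
      Accessor :: (s -> a) -> (s -> b -> t) -> Accessor '(s, t) '(a, b)

    instance Category Accessor where
      -- Composition is fine: matching the constructors refines both
      -- indices to pair form, and the pieces line up.
      Accessor g1 s1 . Accessor g2 s2 =
        Accessor (g1 . g2) (\s b -> s2 s (s1 (g2 s) b))
      -- ...but 'id :: Accessor ij ij' must work at *every* index ij of
      -- kind (*, *), while the constructor only builds values at
      -- indices of the literal form '(s, t), so there is nothing
      -- well-typed to write here.
      id = error "unimplementable"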

I'm also rather worried, looking over the IV proposal, that it just doesn't
actually work.
We actually tried the code under "Haskell 98 records" back when Gundry
first started his proposal and it fell apart when you went to compose them.
A fundep/class associated type in the class is a stronger constraint than a
type equality defined on an individual instance.
I don't see how
@foo . @bar . @baz
(or #foo . #bar . #baz as would be written under the concrete proposal on
the wiki)
is ever supposed to figure out the intermediate types when working
polymorphically in the data type involved.
What happens when the type of that chain of accessors is left to inference?
You get stuck wallowing in AllowAmbiguousTypes territory:
(#foo . #bar . #baz) :: (IV "foo" (c -> d), IV "bar" (b -> c), IV "baz" (a -> b)) => a -> d

has variables 'b' and 'c' that don't occur on the right hand side, and
which are only determinable by knowing that the instances you expect to see
look something like:
instance (a ~ Bool) => IV "x" (S -> a) where
  iv (MkS x) = x
but that is too weak to figure out that "S" determines "a" unless S is
already known, even if we just limit ourselves to field accessors as
functions.
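For reference, a self-contained rendering of the failing case (the IV class spelling here is a reconstruction from the wiki discussion, not the wiki's actual code):

    {-# LANGUAGE DataKinds, FlexibleInstances, KindSignatures,
                 MultiParamTypeClasses, TypeFamilies #-}

    import GHC.TypeLits (Symbol)

    class IV (x :: Symbol) a where
      iv :: a

    data S = MkS Bool

    -- The per-instance equality constraint, as in the proposal:
    instance (a ~ Bool) => IV "x" (S -> a) where
      iv (MkS x) = x

    -- What '#foo . #bar . #baz' would elaborate to. GHC rejects this
    -- signature without AllowAmbiguousTypes: 'b' and 'c' occur only in
    -- the context, and no fundep connects them to 'a' or 'd'.
    --
    -- chain :: (IV "foo" (c -> d), IV "bar" (b -> c), IV "baz" (a -> b))
    --       => a -> d
    -- chain = iv . iv . iv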
-Edward
On Mon, Jan 26, 2015 at 7:43 PM, Edward Kmett
On Mon, Jan 26, 2015 at 4:41 PM, Simon Peyton Jones
wrote:
Personally, I don't like the sigil mangled version at all.
You don’t comment on the relationship with implicit parameters. Are they ok with sigils?
I don't have too many opinions about implicit parameters, but they don't really see a lot of use, which makes me somewhat leery of copying the pattern. ;)
If it is then further encumbered by a combinator it is now several
symbols longer at every single use site than other alternatives put forth in this thread. =(
No, as Nikita says, under the “Redesign” proposal it would be #bar . #baz
The problem is that if you make #bar an instance of Category so that it can use (.) then it will fail to allow type changing re-assignment.
The import Field trick is magic, yes, but it has the benefit of being the first approach I've seen where the resulting syntax can be as light as what the user can generate by hand today.
That’s why I added it to the “Redesign” page. It seems viable to me; a clever idea, thank you. Still, personally I prefer #x because of the link with implicit parameters. It would be interesting to know what others think.
Admittedly @bar . @baz has the benefit that it introduces no namespacing conflicts at all.
If we really had to go with some kind of sigil based solution, I _could_ rally behind that one, but I do like it a lot less than the import trick, if only because the import trick gets rid of that last '@' and space we have on every accessor and you have to admit that the existing
foo^.bar.baz.quux
idiom reads a lot more cleanly than
foo ^. @bar . @baz . @quux
ever can.
(I used @foo above because it avoids any potential conflict with existing user code as @ isn't a legal operator)
I'm confess, I, like many in this thread, am less than comfortable with
the notion of bringing chunks of lens into base. Frankly, I'd casually dismissed such concerns as a job for Haskell 2025. ;) However, I've been trying to consider it with an open mind here, because the alternatives proposed thus far lock in uglier code than the status quo with more limitations while simultaneously being harder to explain.
I don’t think anyone is suggesting adding any of lens are they? Which bits did you think were being suggested for addition?
I was mostly referring to the use of the (a -> f b) -> s -> f t form.
Note: once you start using a data-type then (.) necessarily fails to allow you to ever do type changing assignment, due to the shape of Category, or you have to use yet another operator, so that snippet cannot work without giving up something we can do today. OTOH: Using the lens-style story, no types are needed here that isn't already available in base and, done right, no existing operators need be stolen from the user, and type changing assignment is trivial.
I’m afraid I couldn’t understand this paragraph at all. Perhaps some examples would help, to illustrate
what you mean?
I was writing that paragraph in response to your query if it'd make sense to have the @foo return some data type: It comes at a rather high cost.
Lens gets away with using (.) to compose because its really using functions, with a funny mapM-like shape (a -> f b) -> (s -> f t) is still a function on the outside, it just happens to have a (co)domain that also looks like a function (a -> f b).
If we make the outside type constructor a data type with its own Category instance, and just go `Accessor s a` then it either loses its ability to change out types in s -- e.g. change out the type of the second member in a pair, or it loses its ability to compose.
We gave up the latter to make Gundry's proposal work as we were forced into that shape by trying to return a combinators that could be overloaded to act as an existing accessor function.
To keep categorical composition for the accessor, you might at first think we can use a product kind or something to get Accessor '(s,t) '(a,b) with both indices but that gets stuck when you go to define `id`, so necessarily such a version of things winds up needing its own set of combinators.
-Edward

Yes, we can't make IV the magic class for which instances are generated. As I pointed out earlier in the thread, we need to give an instance for the function space that enforces the functional dependency (either with an actual fundep or a type family), and keep a distinguished HasField class.

AFAICS it's still an open question as to whether that instance should provide

  (a) selector functions r -> a
  (b) lenses (a -> f b) -> s -> f t
  (c) both
  (d) neither

but I'm starting to think (b) is the sanest option.

Otherwise, I think we've more or less converged on the issues (apart from the syntax question) and I'll update the wiki page appropriately.

On the syntax question, Edward, could you say more about how you would expect the magic imports to work? If a module both declares (or imports) a record field `x` and magically imports `x`, what does a use of `x` mean? (In the original ORF, we didn't have the magic module, but just said that record fields were automatically polymorphic... that works but is a bit fiddly in the renamer, and isn't a conservative extension.)

Adam
-- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/

Adam, are you willing to update the wiki page to reflect the latest state of the conversation, identifying remaining choices? That would be v helpful.

Simon

On 27/01/15 09:16, Simon Peyton Jones wrote:
Adam, are you willing to update the wiki page to reflect the latest state of the conversation, identifying remaining choices? That would be v helpful.
I'm on it now. It'll take a little while because I'm merging plans A and B into a single coherent story.

Adam

--
Adam Gundry, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/

Edward: Note that x = #x is a perfectly legal definition, and now you
can have your lenses exactly as before. When discussing this with
Simon, I actually proposed that x = #x be automatically generated by
the data definitions, and then nub'd after. Not sure whether it's a good
idea or not, but it's certainly possible.
I was also of the opinion that data should produce FieldSelector
classes etc, but _not_ link them to IV, specifically to avoid problems
with the stab lenses. I expect that if you only wanted stab lenses,
and never selectors, you could (probably) tie them up in a way that
did the resolution nicely without ambiguity problems.
With those two pieces I think you can still have your: foo^.bar.baz.quux
Thanks, Neil
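Spelling out Neil's trick, under the assumption that #x elaborates to the proposal's overloaded implicit value (so this is pseudocode against the proposed extension, not something GHC accepts today):

    -- A one-line binding recovers an ordinary, importable name for the
    -- field, which a 'data' declaration could generate automatically:
    x :: IV "x" a => a
    x = #x

    -- Depending on which IV instances are in scope, 'x' can then be
    -- used as a plain selector (e.g. at type S -> Bool) or at a lens
    -- type, so existing foo^.bar.baz.quux pipelines keep working.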

On 27/01/15 09:19, Adam Gundry wrote:
On 27/01/15 09:16, Simon Peyton Jones wrote:
Adam, are you willing to update the wiki page to reflect the latest state of the conversation, identifying remaining choices? That would be v helpful.
I'm on it now. It'll take a little while because I'm merging plans A and B into a single coherent story.
Done. As I understand it, the key remaining choices (flagged up with the phrase "Design question") are:

1. What are the IV instances provided in base? These could give selector functions, lenses, both or neither.

2. How do we identify implicit values? Either we have a syntactic cue (like `#` or `@`) or we do some magic in the renamer.

   - If the former, are normal unambiguous record selectors available as well? Or do we allow/generate definitions like x = #x, as Neil suggests?

   - If the latter, what happens when a record field and an implicit value are both in scope?

Adam

--
Adam Gundry, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/
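To illustrate choice 1, the two serious candidates would look roughly like this, reusing the FieldSelector/FieldSetter vocabulary from Simon's sketch earlier in the thread (the method names and argument order are assumptions):

    -- (a) plain selector functions:
    instance FieldSelector x r a => IV x (r -> a) where
      iv = fieldSelector

    -- (b) van Laarhoven lenses, built from the generated getter and
    --     setter:
    instance (Functor f, FieldSelector x s a, FieldSetter x s a)
          => IV x ((a -> f a) -> s -> f s) where
      iv afa s = fmap (\a -> fieldSet a s) (afa (fieldSelector s))

Note that (a) and (b) overlap as instances, since the lens shape is itself a function type; that is one reason the page asks for a single choice.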

| 1. What are the IV instances provided in base? These could give
| selector functions, lenses, both or neither.

My instinct: just selector functions. Leave lenses for a lens package.

I still have not understood the argument for lenses being a function rather than a newtype wrapping that function; apart from the (valuable) ability to re-use ordinary (.), which is cute. Edward has explained this several times, but I have failed to understand.

Simon

On Tue, Jan 27, 2015 at 6:12 AM, Simon Peyton Jones wrote:
| 1. What are the IV instances provided in base? These could give | selector functions, lenses, both or neither.
My instinct: just selector functions. Leave lenses for a lens package.
How do these selectors actually typecheck when composed? Ignoring lenses altogether for the moment, I don't see how IV works.
I still have not understood the argument for lenses being a function rather that a newtype wrapping that function; apart from the (valuable) ability to re-use ordinary (.), which is cute. Edward has explained this several time, but I have failed to understand.
You can make a data type

    data Lens s a = Lens (s -> a) (a -> s -> s)

or

    newtype Lens s a = Lens (s -> (a, a -> s))

The latter is basically the approach I used to take in my old data-lens library. This works great for lenses that don't let you change types. You can write a Category instance for this notion of lens. You can make it compose the way functions normally compose (or you can flip the arguments and make it compose the way lenses in the lens library do; here you have an option).

Now, expand it to let you do type changing assignment.

    data Lens s t a b = Lens (s -> a) (s -> b -> t)

(This has to be data rather than newtype now that there are two fields.) Now we have 4 arguments, but Category wants 2. I've punted a way-too-messy aside about why 4 arguments are used to the end. [*]

You can come up with a horrible way in which you can encode a GADT

    data Lens :: (*,*) -> (*,*) -> * where
      Lens :: (s -> a) -> (s -> b -> t) -> Lens '(s,t) '(a,b)

but when you go to define

    instance Category Lens where
      id = ...

you'd get stuck, because we can't prove that all inhabitants of (*,*) look like '(a,b) for some types a and b.

On the other hand, you can make the data type too big

    data Lens :: * -> * -> * where
      Lens :: (s -> a) -> (s -> b -> t) -> Lens (s,t) (a,b)
      Id   :: Lens a a

but now you can distinguish too much information, GHC is doing case analysis everywhere, etc. Performance drops like a stone and it doesn't fit the abstraction.

In short, using a dedicated data type costs you access to (.) for composition or costs you the ability to let the types change.

-Edward

[*] Why 4 arguments? We can make up our own combinators for putting these things together, but we can't use (.) from the Prelude or even from Control.Category. There are lots of ways to motivate the 4 argument version: logically there are two type families involved, the 'inner' family and the 'outer' one, and the lens type says that outer i is isomorphic to the pair of some 'complement' that doesn't depend on the index i, and some inner i:

    outer i <-> (complement, inner i)

We can't talk about such families in Haskell though; we need them to compose by pullback/unification, so we fake it by using two instantiations of the schema

    outer i -> (inner i, inner j -> outer j)

which is enough for 99% of the things a user wants to say with a lens or field accessor.
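To see concretely what the van Laarhoven form buys here, a small self-contained sketch using only base (`_1`, `over`, and the example values are illustrative, not part of any proposal):

    {-# LANGUAGE RankNTypes #-}

    import Data.Functor.Identity (Identity (..))

    type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

    -- A lens onto the first component of a pair. The four distinct
    -- type parameters are what make type-changing update possible.
    _1 :: Lens (a, c) (b, c) a b
    _1 f (a, c) = fmap (\b -> (b, c)) (f a)

    over :: Lens s t a b -> (a -> b) -> s -> t
    over l f = runIdentity . l (Identity . f)

    example :: (String, Bool)
    example = over _1 show (42 :: Int, True)

Here the first component changes type from Int to String, and composition is still plain function composition: over (_1 . _1) show works on nested pairs with no extra operator.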

On Tue, Jan 27, 2015 at 6:47 PM, Edward Kmett wrote:
This works great for lenses that don't let you change types.
This is not the only restriction required for this to be an acceptable solution.

As soon as you have a distinct Lens type, and use something Category-like for composition, you are limiting yourself to composing two lenses to get back a lens (barring a terrible mptc 'solution'). And that is weak.

The only reason I (personally) think lens pulls its weight, and is worth using (unlike every prior lens library, which I never bothered with), is the ability for lenses, prisms, isomorphisms, traversals, folds, etc. to properly degrade to one another and compose automatically.

So if we're settling on a nominal Lens type in a proposal, then it is automatically only good for one thing to me: defining values of the better lens type.

-- Dan

As soon as you have a distinct Lens type, and use something Category-like for composition, you are limiting yourself to composing two lenses to get back a lens (barring a terrible mptc 'solution'). And that is weak. The only reason I (personally) think lens pulls its weight, and is worth using (unlike every prior lens library, which I never bothered with), is the ability for lenses, prisms, isomorphisms, traversals, folds, etc. to properly degrade to one another and compose automatically.
Aha. I keep asking whether it’s just the cute ability to re-use (.) that justifies the lack of abstraction in the Lens type. But Dan’s comment has made me remember something from my own talk on the subject. Here are the types of lenses and traversals (2-parameter versions):
type Lens' s a = forall f. Functor f
              => (a -> f a) -> (s -> f s)
type Traversal' s a = forall f. Applicative f
                   => (a -> f a) -> (s -> f s)
Suppose we have
ln1 :: Lens' s1 s2
tr1 :: Traversal' s1 s2
ln2 :: Lens' s2 a
tr2 :: Traversal' s2 a
Now these compositions are all well typed
ln1 . ln2 :: Lens' s1 a
tr1 . tr2 :: Traversal' s1 a
tr1 . ln2 :: Traversal' s1 a
ln1 . tr2 :: Traversal' s1 a
which is quite remarkable. If Lens' and Traversal' were newtypes, you’d need four different operators. (I think that what Dan means by “a terrible mptc solution” is trying to overload those four operators into one.)
I don’t know if this exhausts the reasons that lenses are not abstract. I would love to know more, explained in a similar style.
Incidentally has anyone explored this?
newtype PolyLens c s a = PL (forall f. c f => (a -> f a) -> s -> f s)
I’ve just abstracted over the Functor/Applicative part, so that Lens' and Traversal' are both PolyLenses. Now perhaps we can do (.), with a type like
(.) :: PolyLens c1 s1 s2 -> PolyLens c2 s2 a -> PolyLens (And c1 c2) s1 a
where And is a type function
type instance And Functor Applicative = Applicative
etc
I have no idea whether this could be made to work out, but it seems like an obvious avenue so I wonder if anyone has explored it.
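One speculative rendering of the PolyLens idea with ConstraintKinds (all of this is made up here to probe the design; the composition at the end is exactly where it gets stuck, as Dan explains below):

    {-# LANGUAGE ConstraintKinds, KindSignatures, RankNTypes, TypeFamilies #-}

    import GHC.Exts (Constraint)

    newtype PolyLens c s a =
      PL (forall f. c f => (a -> f a) -> s -> f s)

    -- Constraint conjunction as an open type family, defined by cases
    -- so that redundant conjunctions simplify:
    type family And (c1 :: (* -> *) -> Constraint)
                    (c2 :: (* -> *) -> Constraint)
                    :: (* -> *) -> Constraint
    type instance And Functor     Functor     = Functor
    type instance And Functor     Applicative = Applicative
    type instance And Applicative Functor     = Applicative
    type instance And Applicative Applicative = Applicative

    -- The composition we want does not typecheck: with c1 and c2
    -- opaque, 'And c1 c2 f' is stuck, so the separate 'c1 f' and
    -- 'c2 f' obligations of the two wrapped functions cannot be
    -- discharged.
    --
    -- composePL :: PolyLens c1 s u -> PolyLens c2 u a
    --           -> PolyLens (And c1 c2) s a
    -- composePL (PL f) (PL g) = PL (f . g)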
Simon

The MPTC solution I was thinking of was similar to the approach one would
take to making arithmetic work like it does in most other languages, i.e.
class Arithmetic a b c | a b -> c where
(+) :: a -> b -> c
...
One takes that class and then enumerates instances for all combinations of
auto-promotions that should take place.
The And type function is similar to this, although it's cleaner than having
separate types and an MPTC (I think). Maybe it could be made to work, but I
think it's still difficult, and worry that it might never work as well as
what lens currently does.
For instance, when we go to write (.), how do we prove that:
forall f. (c1 f, c2 f) => (a -> f b) -> s -> f t
(which is what we'll have) is the same type as:
forall f. And c1 c2 f => (a -> f b) -> s -> f t
(which is what we'll need)?
If we define And by cases, it will not reduce when c1 and c2 are opaque.
But we must define And by cases or it won't be able to reduce things like
'And Applicative Functor' to just 'Applicative' automatically, which is
also important (and we probably wouldn't be able to partially apply it).
We could also define (.) by cases, but then we are back to enumerating all
the combinations of lens-like things, Arithmetic style. And all with
identical implementations.
Perhaps this style of solution is acceptable due to lens having a closed set
of interacting types (or does it?). But it seems a lot messier at first
blush.
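The remaining obvious encoding, a constraint-tuple synonym, illustrates the other horn (a sketch, not from the thread):

    {-# LANGUAGE ConstraintKinds, KindSignatures #-}

    import GHC.Exts (Constraint)

    type And (c1 :: (* -> *) -> Constraint)
             (c2 :: (* -> *) -> Constraint)
             (f  :: * -> *)
      = (c1 f, c2 f)

    -- Now 'And c1 c2 f' and '(c1 f, c2 f)' really are the same
    -- constraint, so the proof obligation above is trivial; but a
    -- synonym must be fully applied, so the partially applied
    -- 'And c1 c2' needed in 'PolyLens (And c1 c2) s a' is rejected,
    -- and 'And Applicative Functor' never simplifies to Applicative.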
-- Dan

Alas, the 'f' isn't the only thing in the lens library signatures that gets
overloaded in practice.
Isomorphisms and prisms overload the shape to look more like p a (f b) -> p
s (f t), rather than (a -> f b) -> s -> f t
Indexing, which matters for folding and traversing over containers with
keys, overloads with a shape like p a (f b) -> s -> f t
By the time you reach that level of generality, and add type-changing, the
newtype is sort of just dangling there actively getting in the way and
providing no actual encapsulation.
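For orientation, the shapes in question as they appear in the lens library (the Profunctor and Choice classes come from the profunctors package, and Indexable from lens itself, so this sketch is not self-contained):

    {-# LANGUAGE RankNTypes #-}

    -- Isomorphisms and prisms generalize the function arrow to an
    -- arbitrary profunctor p:
    type Iso s t a b =
      forall p f. (Profunctor p, Functor f) => p a (f b) -> p s (f t)

    type Prism s t a b =
      forall p f. (Choice p, Applicative f) => p a (f b) -> p s (f t)

    -- Indexed traversals generalize only the *first* arrow, keeping a
    -- plain function on the outside:
    type IndexedTraversal i s t a b =
      forall p f. (Indexable i p, Applicative f) => p a (f b) -> s -> f t

A newtype wrapped around any one of these shapes would not embed into the others, which is the sense in which the wrapper ends up "dangling".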
Now, you could make up a bunch of individual ad hoc data types for all the
different lens types we happen to know about today.
However, it is deeply insightful that it is the form that lenses take that
let us _find_ all the different lens-likes that we use today.
Half of them we had no idea were out there until we spent time exploring
the impact of the design we have.
Switching to a representation where these things arise from O(n^2) ad-hoc
rules rather than the existing relationships between mostly "common sense"
classes seems like a poor trade.
In Scala, Julien Truffaut has a library called Monocle, which aspires to be
a port of the ideas of lens to Scala. Due to the vagaries of the language
the only option they have open to them is to implement things the way you
are looking at exploring here. It doesn't work out well. Vastly more effort
yields a library full of boilerplate that handles a much smaller scope and
yields no insight into why these things are related.
-Edward

In scala Julien Truffaut has a library called Monocle, which aspires to be a port of the ideas of lens to Scala. Due to the vagaries of the language the only option they have open to them is to implement things the way you are looking at exploring here. It doesn't work out well. Vastly more effort yields a library full of boilerplate that handles a much smaller scope and yields no insight into why these things are related.
I dispute nothing that you say. But I don’t think a Scala failure should discourage us. Scala doesn’t let you abstract over higher kinds (or at least it’s very tricky because of variance); and I’m pretty sure it doesn’t let you abstract over constraints. I’m not saying that these are enough; just that it shouldn’t necessarily discourage us too much.
On one point:
Isomorphisms and prisms overload the shape to look more like p a (f b) -> p s (f t), rather than (a -> f b) -> s -> f t. Indexing, which matters for folding and traversing over containers with keys overloads with a shape like p a (f b) -> s -> f t
So maybe we’d need
newtype SuperLens c p q s t a b = SL (forall f. c f => p a (f b) -> q s (f t))
You may say that the SuperLens newtype is “dangling there”, and you might be right, but it remains something that can drive a type-class story.
You may well be right; but I’d be very interested in an investigation that followed it through. (Dan’s observations about And are well taken.)
Simon

On Tue, Jan 27, 2015 at 3:47 PM, Edward Kmett wrote:
We can make up our own combinators for putting these things together, but we can't use (.) from the Prelude or even from Control.Category.
Is this the only reason *not* to have a data type? (Sorry, I wasn't totally following the GADT-nastics!)

That is, if, for a moment, we just assume a different operator for composing lenses, then will a data/newtype work?

Now, *if* (as I understand it), under IV (assuming it works), it works for lens libraries iff they use a data/newtype for the lens (so that their instance is *the* instance for ->, I'm guessing)...... *then*, I say using a different operator for compose is a small price to pay. (Well, as I said before, I'd actually prefer a different compose operator!)

Mind you, I might be totally mis-understanding the arguments and reasoning!

Ignoring lenses altogether for the moment, I don't see how IV works.
Could you take a look at the current version of https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesig...
and then give an example of something problematic. You may well be right, but it’s hard to know without something specific to bite on.
You can make a data type
data Lens s a = Lens (s -> a) (a -> s -> s)
You could, but that would be very different to your lovely lenses today, and it is certainly not what I was suggesting. All I was suggesting was
newtype Lens s t a b = L { unwrap :: forall f. Functor f => (a -> f b) -> s -> f t }
Just a wrapper around the precise type you use today. So presumably anything whatsoever that you can do today, you can also do by saying (unwrap l).
Yes, that means you can’t use ordinary function composition (.) for these wrapped lenses. I agree that’s a pity. Perhaps this single point is so important that it justifies breaking abstraction. But breaking abstractions comes with costs, to error messages, and to future evolution.
Are there other costs to having the abstraction, or is it just (.)? After all, the lens combinators themselves can wrap and unwrap to their heart’s content; it’s just the clients of the library that we care about here.
Simon

Hi Simon,
One problem with the newtype approach is that you can no longer write a
single function that can work with all of lenses, traversals or other
optics. For example, in the current lens library, set can be used for
prism, lens, iso and traversal. That would just not be possible when using
a newtype.
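To make this concrete: in lens, set demands only the Identity instantiation, which every lens, traversal, prism, and iso possesses. A reconstruction using only base (the names mirror lens, but this is a sketch, not the library source):

    {-# LANGUAGE RankNTypes #-}

    import Data.Functor.Identity (Identity (..))

    type ASetter s t a b = (a -> Identity b) -> s -> Identity t

    set :: ASetter s t a b -> b -> s -> t
    set l b = runIdentity . l (Identity . const b)

    type Lens s t a b      = forall f. Functor f     => (a -> f b) -> s -> f t
    type Traversal s t a b = forall f. Applicative f => (a -> f b) -> s -> f t

    _1 :: Lens (a, c) (b, c) a b
    _1 f (a, c) = fmap (\b -> (b, c)) (f a)

    both :: Traversal (a, a) (b, b) a b
    both f (x, y) = (,) <$> f x <*> f y

    -- One 'set' serves both optic kinds, because each type synonym
    -- instantiates f to Identity:
    demo1 :: (Char, Bool)
    demo1 = set _1 'x' (1 :: Int, True)   -- ('x', True)

    demo2 :: (Int, Int)
    demo2 = set both 0 (1, 2)             -- (0, 0)

With a distinct newtype per optic kind, 'set' would need a separate overload for each, which is the loss being described.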
Regards,
Benno
Simon Peyton Jones
Ignoring lenses all together for the moment, I don't see how IV works.
Could you take a look at the *current* version of https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesig...
and the give an example of something problematic. You may well be right, but it’s hard to know without something specific to bite on.
You can make a data type
data Lens s a = Lens (s -> a) (a -> s -> s)
You could, but that would be very different to your lovely lenses today, and it is certainly not what I was suggesting. All I was suggesting was
newtype Lens s t a b = L { unwrap :: forall f. Functor http://hackage.haskell.org/package/base-4.7.0.1/docs/Control-Monad.html#t:Fu... f => (a -> f b) -> s -> f t }
Just a wrapper around the precise type you use today. So presumably anything whatsoever that you can do today, you can also do by saying (unwrap l).
Yes, that means you can’t use ordinary function composition (.) for these wrapped lenses. I agree that’s a pity. Perhaps this single point is so important that it justifies breaking abstraction. But breaking abstractions comes with costs, to error messages, and to future evolution.
Are there other costs to having the abstraction, or is it just (.)? After all, the lens combinators themselves can wrap and unwrap to their heart’s content; it’s just the clients of the library that we care about here.
Simon
*From:* Edward Kmett [mailto:ekmett@gmail.com] *Sent:* 27 January 2015 23:48 *To:* Simon Peyton Jones *Cc:* Adam Gundry; ghc-devs@haskell.org
*Subject:* Re: GHC support for the new "record" package
On Tue, Jan 27, 2015 at 6:12 AM, Simon Peyton Jones
wrote: | 1. What are the IV instances provided in base? These could give | selector functions, lenses, both or neither.
My instinct: just selector functions. Leave lenses for a lens package.
How do these selectors actually typecheck when composed?
Ignoring lenses altogether for the moment, I don't see how IV works.
I still have not understood the argument for lenses being a function rather than a newtype wrapping that function; apart from the (valuable) ability to re-use ordinary (.), which is cute. Edward has explained this several times, but I have failed to understand.
You can make a data type
data Lens s a = Lens (s -> a) (a -> s -> s)
or
newtype Lens s a = Lens (s -> (a, a -> s))
The latter is basically the approach I used to take in my old data-lens library.
This works great for lenses that don't let you change types.
You can write a Category instance for this notion of lens.
You can make it compose the way functions normally compose (or you can flip the arguments and make it compose the way lenses in the lens library do, here you have an option.)
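A sketch of that Category instance, assuming the store-like representation just given (illustrative, not code from the message):

  import Prelude hiding (id, (.))
  import Control.Category (Category (..))

  newtype Lens s a = Lens (s -> (a, a -> s))

  -- Composition chains the getters forward and threads the setters back.
  instance Category Lens where
    id = Lens (\s -> (s, \s' -> s'))
    Lens g . Lens f = Lens $ \s ->
      let (b, putB) = f s
          (c, putC) = g b
      in (c, \c' -> putB (putC c'))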
Now, expand it to let you do type changing assignment.
data Lens s t a b = Lens (s -> a) (s -> b -> t)
Now we have 4 arguments, but Category wants 2.
I've punted a way-too-messy aside about why 4 arguments are used to the end. [*]
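As a concrete aside: a hand-rolled composition for the four-parameter type is perfectly definable (a sketch, with an invented combinator name); the difficulty is purely that it does not fit Category's two-parameter interface:

  -- The type-changing lens from above, repeated so this stands alone.
  data Lens s t a b = Lens (s -> a) (s -> b -> t)

  -- Outer lens first, inner lens second.
  compose :: Lens s t u v -> Lens u v a b -> Lens s t a b
  compose (Lens getO setO) (Lens getI setI) =
    Lens (getI . getO) (\s b -> setO s (setI (getO s) b))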
You can come up with a horrible way in which you can encode a GADT
data Lens :: (*,*) -> (*,*) -> * where
Lens :: (s -> a) -> (s -> b -> t) -> Lens '(s,t) '(a,b)
but when you go to define
instance Category Lens where
id = ...
you'd get stuck, because we can't prove that all inhabitants of (*,*) look like '(a,b) for some types a and b.
On the other hand, you can make the data type too big
data Lens :: * -> * -> * where
Lens :: (s -> a) -> (s -> b -> t) -> Lens (s,t) (a,b)
Id :: Lens a a
but now you can distinguish too much information, GHC is doing case analysis everywhere, etc.
Performance drops like a stone and it doesn't fit the abstraction.
In short, using a dedicated data type costs you access to (.) for composition or costs you the ability to let the types change.
-Edward
[*] Why 4 arguments?
We can make up our own combinators for putting these things together, but we can't use (.) from the Prelude or even from Control.Category.
There are lots of ways to motivate the 4 argument version:
Logically there are two type families involved, the 'inner' family and the 'outer' one, and the lens type reflects that outer i is isomorphic to the pair of some 'complement' that doesn't depend on the index i, and some inner i:
outer i <-> (complement, inner i)
We can't talk about such families in Haskell though, we need them to compose by pullback/unification, so we fake it by using two instantiations of the schema
outer i -> (inner i, inner j -> outer j)
which is enough for 99% of the things a user wants to say with a lens or field accessor.
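To ground the schema, here is the standard first-of-a-pair lens written directly in that shape (ordinary lens-library fare, included here for illustration), where outer i is (i, c) and inner i is i:

  -- A type-changing lens for the first component of a pair. It composes
  -- with ordinary (.):
  --   _1 . _1 :: Functor f => (a -> f b) -> ((a, x), y) -> f ((b, x), y)
  _1 :: Functor f => (a -> f b) -> (a, c) -> f (b, c)
  _1 k (a, c) = fmap (\b -> (b, c)) (k a)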

On Tue, Jan 27, 2015 at 4:07 AM, Adam Gundry
AFAICS it's still an open question as to whether that instance should provide
(a) selector functions r -> a
(b) lenses (a -> f b) -> s -> f t
(c) both
(d) neither
but I'm starting to think (b) is the sanest option.
Glad I'm not the only voice in the wilderness ;)

On the syntax question, Edward, could you say more about how you would expect the magic imports to work? If a module both declares (or imports) a record field `x` and magically imports `x`, what does a use of `x` mean? (In the original ORF, we didn't have the magic module, but just said that record fields were automatically polymorphic... that works but is a bit fiddly in the renamer, and isn't a conservative extension.)
The straw man I was offering when this was just about {| foo :: .., ... |}-style records would be to have those bring into scope the Field.foo lenses by default as a courtesy, since there is nothing involved in that that necessarily ever defines a normal field accessor. I'm very much not convinced one way or the other if such a courtesy import would be better than requiring the user to do it by hand.

It is when we start mixing this with ORF that things get confusing, which is of course why we're having this nice big discussion. Having definitions we bring from that module able to be used with normal records via something like the ORF makes sense. It invites some headaches though, as higher-rank fields seem to be a somewhat insurmountable obstacle to the latter, whereas they can be unceremoniously ignored in anonymous records, since they didn't exist before.

As Neil noted, you _can_ write `foo = @foo` to make such an accessor have the lighter-weight syntax. Of course, once folks start using template haskell to do so, we get right back to where we are today. It also invites the question of where such exports should be made.

I'm less sanguine about the proposed IV class, as it doesn't actually work in its current incarnation in the proposal, as mentioned above. Assuming it has been modified to actually compose and infer, the benefit of the `import Field (...)` or naked @foo approach is that if two modules bring in the same field they are both compatible when imported into a third module.

One half-way serious option might be to have that Field or Lens or whatever module just export `foo = @foo` definitions from a canonical place so they can be shared, and to decide if folks have to import it explicitly to use it. Then @foo could be the lens to get at the contents of the field, can do type-changing assignment, and users can import the fields to avoid the noise. I confess, the solution there feels quite heavy, though.

-Edward

Adam
On 27/01/15 00:59, Edward Kmett wrote:
I'm also rather worried, looking over the IV proposal, that it just doesn't actually work.
We actually tried the code under "Haskell 98 records" back when Gundry first started his proposal and it fell apart when you went to compose them.
A fundep/class-associated type in the class is a stronger constraint than a type equality defined on an individual instance.
I don't see how
@foo . @bar . @baz
(or #foo . #bar . #baz as would be written under the concrete proposal on the wiki)
is ever supposed to figure out the intermediate types when working polymorphically in the data type involved.
What happens when the type of that chain of accessors is left to inference? You get stuck wallowing in AllowAmbiguousTypes territory:
(#foo . #bar . #baz) :: (IV "foo" (c -> d), IV "bar" (b -> c), IV "baz" (a -> b)) => a -> d
has variables 'b' and 'c' that don't occur on the right-hand side, and which are only determinable by knowing that the instances you expect to see look something like:
instance (a ~ Bool) => IV "x" (S -> a) where iv (MkS x) = x
but that is too weak to figure out that "S" determines "a" unless S is already known, even if we just limit ourselves to field accessors as functions.
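To make the failure mode concrete, here is a minimal sketch of an IV-style class with the instance shape described above (the class is reconstructed from this thread for illustration, not the exact wiki definition); note that it needs AllowAmbiguousTypes even to be accepted:

  {-# LANGUAGE AllowAmbiguousTypes, DataKinds, FlexibleInstances,
               KindSignatures, MultiParamTypeClasses, TypeFamilies #-}

  import GHC.TypeLits (Symbol)

  class IV (x :: Symbol) a where
    iv :: a

  data S = MkS Bool

  -- The equality constraint lets 'a' be improved once S is known...
  instance (a ~ Bool) => IV "x" (S -> a) where
    iv (MkS b) = b

  -- ...but composing two such accessors yields a type like
  --   (IV "foo" (b -> c), IV "bar" (a -> b)) => a -> c
  -- where the intermediate 'b' occurs only in the constraints. Nothing
  -- determines it, so the type is ambiguous and inference gets stuck.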
-Edward
-- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/

On 24 January 2015, Johan Tibell wrote:
I really like this proposal (except I would bike shed about the syntax for record selector to be dot, like in the majority of languages.)
In other languages dot is a binary operator, so that r.x selects the x field from record r.
In this proposal #x is a unary operator (more of a lexical modifier really), which returns a first-class composable value for the field. Only later does it meet a record. This is a big difference.
(Moreover, debating dot has proved a graveyard for previous discussions.)
I’m glad you like the base proposal.
Simon

On 23/01/15 23:41, Simon Peyton Jones wrote:
| I just noticed that it effectively gives us a syntax for identifier-like Symbol
| singletons, which could be useful in completely different contexts
Indeed so. I have written a major increment to https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesig... which people reading this thread may find interesting. Look for "Plan B".
For the first time I think I can see a nice, simple, elegant, orthogonal story.
Cunning, and very general. I like it. Cheers, Simon

On 2015-01-21 17:01, Johan Tibell wrote:
My thoughts mostly mirror those of Adam and Edward. [--snip--]
3) I don't think it's a good idea to have lots of functions be polymorphic in the record types of their arguments. If that falls out for free (like it does both in ORF and Nikita's proposals) that's nice, but I think anonymous records should be used sparingly.
To me, anonymous records look a lot like Go's interfaces, whose structural typing I don't think is a great idea. Go's interfaces give the appearance of giving you more polymorphic functions (i.e. functions with arguments of type { f :: T, ... }), but you have to express the required laws on these record fields purely in terms of comments. With type-class-based polymorphism you're somewhat more specific and deliberate when you state what kind of values your functions are polymorphic over. You don't just say "this value must support a function f :: T" but instead "this value must support a function f :: T, where the behavior of f is specified by the type class it's defined in".

I also have extensive experience of duck typing from Python, and there I think duck typing has not played out well (somewhat corroborated by the fact that Python is adding base classes so it's possible to talk about the laws I mentioned above.)
I don't think anyone's saying that type classes are going anywhere...?!?

As a counterpoint to duck-typing-in-Python, IME *statically* checked duck typing works just fine. It's been ages since I programmed in O'Caml, but I cannot recall a single instance where a problem was caused by accidentally passing the wrong duck-ish parameter. (Other people's experience may differ, of course.)

Do you have concrete experience with Go? I'd of course be skeptical of taking any lessons from Go in this regard due to the pervasiveness of the "empty interface" idiom as a replacement for parametric polymorphism -- there are usually lots of things that can match the empty interface -- but it'd be interesting to hear, nonetheless :).

Regards,

There’s too much to absorb in this discussion at the moment and I’m
late to the party anyway, but I would like to make a small note on
syntax. Given that this is very similar to TRex both in behaviour and
syntactic means of construction, why not just take TRex’s actual
syntax? http://en.wikipedia.org/wiki/Hugs#Extensible_records
type Point2D = Rec (x::Coord, y::Coord)
point2D = (x=1, y=1) :: Point2D
(#x point)
It seems like it wouldn’t create any syntactical ambiguities (which is
probably why the Hugs developers chose it).
Ciao

Chris, this is great! Looks like we can even get rid of the Rec prefix!

- A phrase in round braces and with :: is itself unambiguous in the type context.
- A phrase in round braces with = symbols is unambiguous in the expression context.

Concerning the pattern context a solution needs to be found though. But the two points above are enough for me to fall in love with this direction! The {| braces had too icky a touch to them, and the plain { required the user to choose whether to use the standard record syntax or the anonymous one at the module scale, but not both.

There is a problem with the old TRex syntax.
In a world with kind signatures and rank-2 types, it would appear that
type Point2D = Rec (x :: Coord, y :: Coord)
is ambiguous.
Is Coord a kind signature being applied to x and y, which are type variables
brought into scope implicitly, as
type Point2D = forall (x :: Coord, y :: Coord) => Rec (x, y)
would make more explicit?
e.g.
type Lens s t a b = Functor f => (a -> f b) -> s -> f t
works today in ghc, even though f isn't explicitly scoped and elaborates to:
type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t
-Edward

On 29 January 2015 at 00:40, Edward Kmett <ekmett@gmail.com> wrote:
There is a problem with the old TRex syntax. In a world with kind signatures and rank-2 types, it would appear that type Point2D = Rec (x :: Coord, y :: Coord) is ambiguous.

The kind-signature resemblance had occurred to me, but I'd assumed Hugs treated it as syntactic sugar like [record|{ x :: Coord, y :: Coord }|]. Apparently not.
participants (18): Adam Gundry, Bardur Arantsson, Benno Fünfstück, Christopher Done, Dan Doel, Daniel Trstenjak, Edward Kmett, Greg Weber, Iavor Diatchki, Joe Hillenbrand, Johan Tibell, Konstantine Rybnikov, Mark Lentczner, Neil Mitchell, Nikita Volkov, Roman Cheplyaka, Simon Marlow, Simon Peyton Jones