Some thoughts on Type-Directed Name Resolution

There's a proposal at the moment to add support for TDNR to Haskell - to leverage "the power of the dot" (e.g. for intellisense). http://hackage.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolutio...

I approve of the goal, but I'd like to suggest a different approach. My basic idea is stolen from Bertrand Meyer (Object-Oriented Software Construction, second edition). Basically, a class *is* both a module and a type. Quote...

  Classes as modules

  Object orientation is primarily an architectural technique: its major effect is on the modular structure of software systems. The key role here is again played by classes. A class describes not just a type of objects but also a modular unit. In a pure object-oriented approach: Classes should be the only modules.

By the logic of equivalence relations, we can conclude that a type *is* a module. Only I'd adapt that a little. In C++, the following operators can all be used to access the "module" for some type or value...

* ::  Scope resolution
* .   Member dereference
* ->  Member dereference via a pointer
* .*  Member-pointer dereference
* ->* Member-pointer dereference via a pointer

In C++, a type and an instance each have their own modules. A (smart) pointer has its own module, separate from the module for the type it points to. And member-pointers exist because sometimes there's a need to reference a member without knowing or (yet) caring which instance.

We already have member pointers - the functions that map an instance to the field value. It would make some sense if these could be placed in a module associated with the type (not the instance). When an instance of a type is created, that can effectively (without run-time overhead) create a new module associated with the new instance. This will contain the same field-access functions, but with the instance parameter already curried in.

So there's no real need for any new meaning of the . operator - it's just access to names within a module. And there's no need for a new mechanism for accessing fields - only for a way to place them in that module scope, and a little sugar that gives us the same field-access function but with the instance parameter already curried in.

Once we have these modules containing compiler-generated field-access functions, though, it makes some sense to allow additional functions (and perhaps types) to be added within that type's module explicitly by the programmer. It may also make sense to allow functions to be explicitly defined which will be added to the instance-modules and support the prefix-instance-parameter sugar.

Finally, as with C++, when dealing with IORef and similar, it may make sense to have a separate -> operator (spelled differently, of course). Or it could use the standard dot. C++ and D disagree on this (in C++, the smart pointer has its own module separate from the pointed-at instance - in D, there is no -> or equivalent).

As an aside, Ada has already gone through a related transition. The original Ada 83 had variant records, but no "true classes". In Ada 95, "tagged types" were added which were like variant records, but which supported inheritance and run-time dispatch. The discriminant is replaced by a "tag", which is presumably implemented as a virtual table pointer. However, functions and procedures weren't members. The typical call of a "method" would be...

  packagename.procedure_name ( instance_arg, other_args );

Ada 2005 added some workarounds to allow conventional OOP call notation. See section 1.3 of the Ada 2005 rationale for details.
However, it all feels a bit kludgy. In particular, the procedures and functions still aren't members - there are just some special rules for when they can be used as if they were. I've not actually used Ada 2005, but I'd bet some confusion can result from that. Personally, I think Meyer was at least partly right - if types (and instances) are modules, the kludge-factor is much lower. C++ actually doesn't get this quite right IMO (you can access static class members through the instance objects, for example, not just through the classes), but C++ classes *do* act mostly like modules and that is a very useful trait - particularly within the declarative sublanguage (templates etc).

Steve Horne
There's a proposal at the moment to add support for TDNR to Haskell - to leverage "the power of the dot" (e.g. for
intellisense). http://hackage.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolution
I approve of the goal, ...
Steve, I think that proposal has been rather superseded by http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields, which draws on TDNR. But SORF is best seen as an evolving design space, with precise details yet to be clarified/agreed. I've put my own variation into the ring: http://www.haskell.org/pipermail/glasgow-haskell-users/2011-December/021298.html -- which seems to have fallen into a black hole :-(

One of the aspects of TDNR that wasn't so popular was that its type-directed resolution was very similar to instance resolution, but subtly and confusingly different.

I guess we have to be very careful about the dot. It seems to be in a very 'crowded' syntax space, so if we implement the wrong way, we could end up shutting the door with the keys left inside.

SPJ's observations about how the dot works in other languages are all good points. He's arguing that the dot should behave in a familiar way. I'm most used to it in SQL as table.column, but I guess for most programmers it's object.method. Haskell is already encumbered by Module.name, and g . f (function composition with spaces round the dot).

I like the part in OverloadedRecordFields (and TDNR) re user-defined 'virtual' fields (fullName being a concatenation of the datatype fields firstName and lastName, area being a calculation over a Shape datatype). But the point about those being virtual is that they're not first-class fields: you can't update through them. SPJ got 'stuck' at that point.

My proposal was that restricting the dot to field selection wasted too much of the design space. Instead the dot should be merely syntactic sugar for reverse function application. That is:

  whatever.funcmethod ==> (funcmethod whatever)

(Note no spaces around the dot. This is syntactically distinct from qualified names because the name to the left of the dot begins lower-case.) Then funcmethod can be a 'real' field selector, or a virtual field, or a class method, or some other function completely.

So to get to name resolution: since dot is (reverse) function application, we can use all the usual Haskell type inference/instance selection 'for free'. Either or both of `whatever' and `funcmethod' could be arguments passed through from a distant call, which turned out to be a record type and field selector (not recognisable as such from their names). So we'd get polymorphic record and field selection 'for free'.

I'd also like to be able to mix the dot with qualified names:

  A.b.(C.D.e.f).G.h ==> (G.h ((f C.D.e) A.b))

The syntax rule is: an upper-case name to the left of the dot means this is a qualified name, and binds most tightly; lower-case to the left means reverse function application. Of course you can use parentheses to group differently.

(Re a one-sided dot I have no intuitions. TDNR includes some options for partial application/sections, SORF some more. They seem to me what Wirth would call 'rococo'. If dot is to be merely function application, it hardly seems worth worrying about.)

How do we get field names to be suitable funcmethods for dot applying to records? And how do we support field update? ==> Subjects for a different post.

There's also an elephant in the room I haven't talked about: TDNR started with what happens inside an IDE when you type `x.' and all the possible methods (or fields) for x pop up. This follows the philosophy in OO of focus on the object -> look for the action. (Same thinking as right-click in GUIs. Contrast old-style 'green screen' applications where you went down a menu tree first (action), then looked for your object.)

If the dot is merely function application, then what follows the dot could be 'anything' (including very generic functions like show or return). I plain don't know if IDEs can be smart enough to spot that what's to the left of the dot is a datatype and offer its fields, or get from its type to its instances to their methods. (Actually, under my proposal, datatype to fields is exactly datatype to Has instance.) (How) could it tell which methods are more specific and which more generic?
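[Editorial note] A minimal sketch, in today's Haskell, of the reading AntC describes: the proposed dot is just reverse function application, with (&) from Data.Function standing in for it (the real proposal would bind tighter than function application, which no ordinary operator can do). The Person type and its fields are invented for illustration.

  import Data.Function ((&))

  data Person = Person { firstName :: String, lastName :: String }

  -- A 'virtual' field is just an ordinary function over the record:
  fullName :: Person -> String
  fullName p = firstName p ++ " " ++ lastName p

  main :: IO ()
  main = do
    let p = Person "Ada" "Lovelace"
    -- Proposed sugar:  p.fullName ==> fullName p
    putStrLn (fullName p)     -- prefix, as today
    putStrLn (p & fullName)   -- postfix, (&) playing the role of the dot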
My basic idea is stolen from Bertrand Meyer (Object-Oriented Software Construction, second edition). Basically, a class *is* both a module and a type. ...
1) Are you sure that C++ classes/instances/methods are comparable enough to Haskell's? This is a very confusing area of terminology for object-oriented cp. functional languages. 2) Have you looked at GHC 7.4.1 innovations around classes-as-types and Constraint kinds?

AntC> Steve, I think that proposal has been rather superseded by
AntC> http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields, which
AntC> draws on TDNR. But SORF is best seen as an evolving design space, with precise
AntC> details yet to be clarified/agreed. I've put my own variation into the ring:
AntC> http://www.haskell.org/pipermail/glasgow-haskell-users/2011-December/021298.html
AntC> -- which seems to have fallen into a black hole :-(
AntC> One of the aspects of TDNR that wasn't so popular was that its type-directed
AntC> resolution was very similar to instance resolution, but subtly and confusingly
AntC> different.
AntC> I guess we have to be very careful about the dot. It seems to be in a
AntC> very 'crowded' syntax space, so if we implement the wrong way, we could end up
AntC> shutting the door with the keys left inside.
AntC> (...)

All this dot syntax magic frankly frightens me. Haskell, as a pure functional language, requires (and allows!) a programming style that just does not mix well with object-oriented practices. Stretching the syntax to have the dot feel a-bit-but-not-really like object-oriented programming, mainly to get IDE autocompletion in some cases, does not make much sense.

If the editor matters - and it probably does - we could rather take a more ambitious path, and work on a real semantic editor, as opposed to a plain left-to-right text editor with hacked semantic goodies to alleviate the pain. Indeed, very often in Haskell we just don't think of code left to right, or top to bottom. Emacs' ability to move point quickly certainly helps, but I'd surely welcome a drastic switch, to something allowing us to work directly on type-checked syntax trees.

-- Paul

AntC> Steve, I think that proposal has been rather superseded by
AntC> http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields, which
AntC> draws on TDNR. But SORF is best seen as an evolving design space, with precise
AntC> details yet to be clarified/agreed. I've put my own variation into the ring:
AntC> http://www.haskell.org/pipermail/glasgow-haskell-users/2011-December/021298.html
AntC> -- which seems to have fallen into a black hole :-(
AntC> One of the aspects of TDNR that wasn't so popular was that its type-directed AntC> resolution was very similar to instance resolution, but subtly and confusingly AntC> different.
AntC> I guess we have to be very careful about the dot. It seems to be in a AntC> very 'crowded' syntax space, so if we implement the wrong way, we could end up AntC> shutting the door with the keys left inside.
AntC> (...)
On 28/01/2012 13:00, Paul R wrote:

All this dot syntax magic frankly frightens me. Haskell, as a pure functional language, requires (and allows!) a programming style that just does not mix well with object-oriented practices. Stretching the syntax to have the dot feel a-bit-but-not-really like object-oriented programming, mainly to get IDE autocompletion in some cases, does not make much sense.

That's a benefit of my idea. Modular programming used the dot long before OOP became popular - OOP stole the dot from modular programming! If a record is a module, that only means that one thing can be both a module and a type (or value) at the same time. It takes little from OOP that OOP didn't already take from the more fundamental modular programming - and Haskell already has modules.
If the editor matters - and it probably does - we could rather take a more ambitious path, and work on a real semantic editor, as opposed to a plain left-to-right text editor with hacked semantic goodies to alleviate the pain.

Every programmer has their own favorite editor, usually using the same one to work in many different languages. For the moment, you'd have a hard job separating me from Notepad++.
If you really want a "semantic editor", I'd argue a rich visual language with a file format that isn't intended to be read directly. Something more like writing in Word than writing in TeX. But I don't think most programmers are ready for this, for various reasons. Version control tools and readable differences get a place near the top of that list.

On 30/01/2012 04:23, Steve Horne wrote:
AntC> Steve, I think that proposal has been rather superseded by
AntC> http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields, which
AntC> draws on TDNR. But SORF is best seen as an evolving design space, with precise
AntC> details yet to be clarified/agreed. I've put my own variation into the ring:
AntC> http://www.haskell.org/pipermail/glasgow-haskell-users/2011-December/021298.html
AntC> -- which seems to have fallen into a black hole :-(
AntC> One of the aspects of TDNR that wasn't so popular was that its type-directed AntC> resolution was very similar to instance resolution, but subtly and confusingly AntC> different.
AntC> I guess we have to be very careful about the dot. It seems to be in a AntC> very 'crowded' syntax space, so if we implement the wrong way, we could end up AntC> shutting the door with the keys left inside.
AntC> (...)
On 28/01/2012 13:00, Paul R wrote:

All this dot syntax magic frankly frightens me. Haskell, as a pure functional language, requires (and allows!) a programming style that just does not mix well with object-oriented practices. Stretching the syntax to have the dot feel a-bit-but-not-really like object-oriented programming, mainly to get IDE autocompletion in some cases, does not make much sense.

That's a benefit of my idea. Modular programming used the dot long before OOP became popular - OOP stole the dot from modular programming! If a record is a module, that only means that one thing can be both a module and a type (or value) at the same time. It takes little from OOP that OOP didn't already take from the more fundamental modular programming - and Haskell already has modules.
Sorry for replying to myself - I just thought I could explain this better.

I'm basically asserting that a record in standard Pascal (without any of that OOP Turbo Pascal 5.5+/Delphi stuff) is a module. It doesn't matter that the only names that can be held in that module are field names - it's still a container of named items and therefore a special case of a module. In the Pascal case (like C structs), the content of the module doesn't include functions or methods or whatever, it only includes fields. And the module is only accessible via the record instances, not via the record type (there's nothing like C++ member pointers).

Converting this to Haskell - well, we already use field-access functions, so why not move those to the record-instance module instead of having them pollute some existing namespace? Since naming the same thing twice (once to identify the module, once to specify the instance parameter) would be annoying, why not auto-curry that parameter? The result is still a function living in a module. And rather than lose the original function, why not move that to another scope - a module that's associated with the record type rather than the record instance? If you don't specify an instance, you can't curry that parameter - it still makes sense.

There's no inheritance here, no virtual functions, no OOP features at all - just Pascal-like records adapted for immutability by supplying a field-access function rather than e.g. a field offset. The function placed in the record-type module would be the exact same function we get now, just in a different scope.

However, once you have the idea that a record is a module, maybe it makes sense to put some other functions in there too? As a minimal solution no, but it's nice to know there's room for future expansion.

There's nothing OOP about this at all - it's really just adapting and extending what standard Pascal does. You could extend it to include OOP if you really wanted to, but the minimal solution just moves the existing Haskell access functions to another scope, and adds a pre-curried version in a further scope, associating those scopes with the record type and record instances respectively.
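[Editorial note] A minimal sketch, in today's Haskell, of the two scopes being described here: the record-type "module" would hold the plain selector (exactly the function Haskell generates now), and the record-instance "module" would hold the same selector with the instance argument already curried in. The Egg type and its fields are invented for illustration.

  data Egg = Egg { largeEnd :: Int, smallEnd :: Int }

  main :: IO ()
  main = do
    let egg = Egg { largeEnd = 7, smallEnd = 3 }
        -- what would live in the record-type module: the ordinary selector
        fromType  = smallEnd      :: Egg -> Int
        -- what would live in the record-instance module: the same selector,
        -- with the instance parameter already applied
        fromValue = smallEnd egg  :: Int
    print (fromType egg)   -- 3
    print fromValue        -- 3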

On 28/01/2012 13:00, Paul R wrote: ...
All this dot syntax magic frankly frightens me. Haskell, as a pure functional language, requires (and allows!) a programming style that just does not mix well with object-oriented practices. Stretching the syntax to have the dot feel a-bit-but-not-really like object-oriented programming, mainly to get IDE autocompletion in some cases, does not make much sense.
In the glasgow-haskell-users discussion, it has been pointed out (to little apparent effect) that the current notation for access by field name, `field record', is naturally functional and is easier to read for a functionally trained eye than a postfix `record.field' alternative. It isn't so much of an issue for OO programmers because the languages are also procedural and the expressions tend to be simpler. In a language like Haskell, an expression could switch back and forth several times between prefix (functional) and postfix (dot) notation. Like,

  yolk (separate (crack (largeEnd egg)))

becomes

  (separate (crack (egg.smallEnd))).yolk

That elementary example doesn't give me much trouble, but it sure doesn't seem to be much of an improvement in notational elegance. See how natural the transformation with function composition -

  yolk (separate (crack (largeEnd egg)))
  yolk ((separate . crack . largeEnd) egg)
  yolk (f egg)  where f = separate . crack . largeEnd

... compared to the re-shuffling necessary with postfix dot notation (assuming for the sake of discussion a functional dot notation .field = \ r -> r.field):

  (separate (crack (egg.smallEnd))).yolk
  ((separate . crack . .smallEnd) egg).yolk
  (f egg).yolk  where f = separate . crack . .smallEnd

Donn
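[Editorial note] A runnable version of the comparison above, with (&) from Data.Function standing in for the proposed postfix dot. All the types and field names (Egg, Shell, Contents, Insides, largeEnd, crack, separate, yolk) are invented for illustration.

  import Data.Function ((&))

  newtype Egg      = Egg      { largeEnd :: Shell }
  newtype Shell    = Shell    { crack    :: Contents }
  newtype Contents = Contents { separate :: Insides }
  newtype Insides  = Insides  { yolk     :: String }

  main :: IO ()
  main = do
    let egg = Egg (Shell (Contents (Insides "golden")))
    -- prefix (functional) style
    putStrLn (yolk (separate (crack (largeEnd egg))))
    -- the same, via function composition
    putStrLn ((yolk . separate . crack . largeEnd) egg)
    -- postfix style, (&) playing the role of the dot
    putStrLn (egg & largeEnd & crack & separate & yolk)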

On 30/01/2012 07:09, Donn Cave wrote:
((separate . crack . .smallEnd) egg).yolk
(f egg).yolk  where f = separate . crack . .smallEnd
Scary - that ".smallEnd" worries me. It's like a field is being referenced with some magical context from nowhere. Obviously I need to read that full proposal.

Sorry for going on about it, but this wouldn't happen in my idea. If field-access functions are just items in a module, the . separates the module identification from the item name, same as always. The only difference is how you identify the module. There are no special interactions with function composition, or whatever it is that's happening here. If you want to compose the function with some other function without knowing in advance which record value you're dealing with, just get the access function from the record-type module. If I'm correctly guessing what your code means, that reads as...

  (f egg).yolk  where f = separate . crack . (eggModule.eggType.smallEnd)

OK, in a sense specifying "eggModule.eggType" there is a bit redundant, but in general that isn't true - define f separately and it needs some clue for the type inference to decide where to look for "smallEnd", and "eggType" provides it. I'd prefer a notation that allows me to reference the field without needing type inference to determine which record type it relates to. But then again, I'm probably just not understanding the reason behind that weirdness - perhaps it wouldn't seem so weird if I did. Or maybe it's just a familiarity issue.

First thought - I've not addressed access from within a polymorphic function with type class constraints. The situation there would (without extra features) be the same as it is now, with no TDNR support. Field-access functions would have to be provided as explicit operations within the type class.

That said, it shouldn't be hard to handle. For example, a type class can explicitly state which field names it is interested in, and an instance can provide functions to access those fields. Alternatively, the instance might support using arbitrary functions (of the right type). This might allow some weirdness (fields that aren't really fields), but it may allow some useful flexibility (this particular type provides the field "daddy", that type provides "mummy", a third type has no named fields but has a function that works by pattern matching to provide the positionally-defined field - either way, this type class will refer to "parent"), so that polymorphic functions can use the dot notation, but the relationship between fields in the type class and fields in the instance type is flexible. It's really just syntactic sugar for what type classes have to do now - providing a dot notation, but still using vtable references to field-access functions to do the work.
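[Editorial note] A minimal sketch of the fallback described in that last paragraph, in today's Haskell: the class declares the accessor it needs, and each instance supplies whatever function provides it. The class and type names (HasParent, parent, Child, Company) are invented for illustration.

  class HasParent a where
    parent :: a -> String

  data Child   = Child   { childName :: String, daddy :: String, mummy :: String }
  data Company = Company { companyName :: String, owner :: String }

  -- One type provides the field "daddy", another an unrelated field;
  -- the class only ever talks about "parent".
  instance HasParent Child   where parent = daddy
  instance HasParent Company where parent = owner

  describe :: HasParent a => a -> String
  describe x = "parent: " ++ parent x

  main :: IO ()
  main = putStrLn (describe (Child "Kim" "Pat" "Alex"))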

Quoth Steve Horne
On 30/01/2012 07:09, Donn Cave wrote:
((separate . crack . .smallEnd) egg).yolk
(f egg).yolk  where f = separate . crack . .smallEnd
Scary - that ".smallEnd" worries me. It's like a field is being referenced with some magical context from nowhere.
Obviously I need to read that full proposal.
As I said:
(assuming for the sake of discussion a functional dot notation .field = \ r -> r.field)
By that, I meant to say that I just made that up. I am sure that various proposals have made some notational provision for `\ r -> r.field', but it may or may not be `.field', I don't recall. But that's all the magic there is to it. Either you have a notational shorthand for it, or you're obliged to write out `\ r -> r.field' Donn

Donn Cave
On 28/01/2012 13:00, Paul R wrote: ...
All this dot syntax magic frankly frightens me. Haskell, as a pure functional language, requires (and allows!) a programming style that just does not mix well with object-oriented practices.
In the glasgow-haskell-users discussion, it has been pointed out (to little apparent effect) that the current notation for access by field name, `field record', is naturally functional and is easier to read for a functionally trained eye than a postfix `record.field' alternative. [snip] Donn
Donn, I can see the argument "Haskell has never been afraid to be different. Just because OO does it like that, so what?" But if you read SPJ's discussion in the TDNR proposal, there's "a cultural connection to OO". My post at the head of this thread put it as "focus on the object -> look for the action".

Of course it's easy to 'fake' postfix function application:

  (.$) = flip ($)

But the trouble is that .$ binds weakly. What we want is for the dot to bind tighter even than function application. So:

  crack egg.largeEnd ==> crack (largeEnd egg)

(Where ==> means 'is syntactic sugar for'.)

We're already familiar with the tight-binding dot for qualified names. I suppose we're coping with the visual confusion with the space-surrounded dot as function composition. But I can see that "one more petit bonbon" could tip confusion over the edge. To my eye, one-sided dot application is a bonbon too far.

My proposal is that field selection functions be just ordinary functions, and dot notation be just function application (tight-binding). Then:

  object.fieldfuncmethod ==> fieldfuncmethod object

(Subject to the tight binding for the dot.) And one-sided dot application is pointless (! errk I mean 'without purpose', no different to writing the bare object or bare fieldfuncmethod).

Then you can write in your familiar style, and can use polymorphic field selectors as plain functions (same syntax as presently). Those under the influence of OO can write dot notation, until they discover the joys of pointless style.

AntC
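[Editorial note] A small sketch of the binding problem AntC points out: an ordinary operator like (.$) gives postfix application, but function application always binds tighter than any operator, so it cannot reproduce the proposed `crack egg.largeEnd` grouping. The Egg/End types and fields are invented for illustration.

  infixl 1 .$
  (.$) :: a -> (a -> b) -> b
  (.$) = flip ($)

  newtype Egg = Egg { largeEnd :: End }
  newtype End = End { crack    :: String }

  main :: IO ()
  main = do
    let egg = Egg (End "cracked large end")
    -- With the proposed tight-binding dot, crack egg.largeEnd would mean:
    putStrLn (crack (largeEnd egg))
    -- With an ordinary operator, `crack egg .$ largeEnd` would instead parse
    -- as (crack egg) .$ largeEnd, i.e. largeEnd (crack egg): a type error.
    -- A postfix chain still works; it just can't mix with prefix application:
    putStrLn (egg .$ largeEnd .$ crack)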

Quoth AntC
My proposal is that field selection functions be just ordinary functions, and dot notation be just function application(tight-binding). Then: object.fieldfuncmethod ==> fieldfuncmethod object (Subject to the tight binding for the dot.) And one-sided dot application is pointless (! errk I mean 'without purpose', no different to writing the bare object or bare fieldfuncmethod).
That's interesting! The wiki page on SORF (Simple Overloaded Record Fields, http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields) has some language that, as I interpreted it, meant that Has/Get syntactic sugar depended on the dot, so it's indispensable. Your proposal actually has some similar language, but I see you don't mean it that way. That's great news for anyone who's really dying to get somewhere with records, if it means that the functionality could in principle be introduced independently of changes to the interpretation of "." that would break a lot of code. Donn

Donn Cave
Quoth AntC
... My proposal is that field selection functions be just ordinary functions, and dot notation be just function application (tight-binding). Then: object.fieldfuncmethod ==> fieldfuncmethod object (Subject to the tight binding for the dot.) And one-sided dot application is pointless (! errk I mean 'without purpose', no different to writing the bare object or bare fieldfuncmethod).
That's interesting! The wiki page on SORF (Simple Overloaded Record Fields, http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields) has some language that, as I interpreted it, meant that Has/Get syntactic sugar depended on the dot, so it's indispensable.
Yes it does, and that's one of the things I didn't like - hence my counter-proposal. In particular, in SORF the dot notation got tied into 'virtual record selectors'. Now 'virtual record selectors' is a good idea, but SORF tied it to the field selection approach, so it had to go via a Has instance, which introduced a `set' method as well as the get, which didn't make sense, so SPJ ran into trouble. Actually the TDNR proposal was better re the "power of the dot": "works with any function".

As soon as you decide to make 'virtual record selectors' just ordinary functions (so they select but don't update), then you can see that field names are also just ordinary functions (for selection purposes). So the semantics for field 'selection' (whether or not you use dot notation) is just function application. So Type-Directed Name Resolution is just instance resolution. So it all gets much easier.
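[Editorial note] A rough sketch of the Has-style encoding being discussed, in today's GHC Haskell with type-level strings as field labels; the exact class name and parameters differ between SORF and the counter-proposal, so this is only illustrative, and the Person type is invented.

  {-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
               FunctionalDependencies, FlexibleInstances #-}
  import Data.Proxy (Proxy (..))
  import GHC.TypeLits (Symbol)

  -- "record r has a field labelled f, of type t"
  class Has r (f :: Symbol) t | r f -> t where
    get :: Proxy f -> r -> t

  data Person = Person { _firstName :: String, _lastName :: String }

  instance Has Person "firstName" String where get _ = _firstName
  instance Has Person "lastName"  String where get _ = _lastName

  -- A 'virtual record selector' is then just an ordinary function;
  -- it selects, but doesn't update:
  fullName :: Person -> String
  fullName p = get (Proxy :: Proxy "firstName") p ++ " "
            ++ get (Proxy :: Proxy "lastName") p

  main :: IO ()
  main = putStrLn (fullName (Person "Ada" "Lovelace"))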
Your proposal actually has some similar language but, I see you don't mean it that way. That's great news for anyone who's really dying to get somewhere with records, if it means that the functionality could in principle be introduced independently of ...
Yes. Actually, (IMHO) the biggest block to making some progress with the 'cottage industry' for records (and there are heaps of ideas out there) is that currently the field name appearing in data decls grabs so much of the namespace real estate. It creates a global name (selector function) that can't be overloaded. You'll see in my other posts last night (NZ time) that the first thing I think should happen is to switch off auto-creation of field selection functions. (This should have come along as an option with DisambiguateRecordFields, I think. http://www.haskell.org/pipermail/glasgow-haskell-users/2012-January/021750.html)
... changes to the interpretation of "." that would break a lot of code.
Yes, in principle we could introduce the semantics for field-selectors-as-overloaded-functions without introducing any special syntax for field selection (dot notation or whatever). But the 'Records in Haskell' thread started with a Reddit/Yesod discussion about records, and the lack of dot notation being the last major wart in Haskell. "A sentiment open to doubt" in the words of the poet. It stung SPJ enough to open up the discussion (and I guess now is timely as 7.4.1 gets put to bed).

For me, the record/field namespacing is the major wart, polymorphism only slightly less, and the notation is a side-issue. But I don't want to lose the initiative that's built up, so I'm trying to address both at the same time. AntC
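[Editorial note] A short illustration of the namespacing wart AntC means: in Haskell as it stands, each field name in a data declaration becomes a top-level selector function, so two records in one module can't share a field name and programmers fall back on prefixes. The types below are invented for illustration.

  -- What many would like to write is 'name' in both declarations;
  -- today that is rejected as a duplicate definition of the selector.
  data Person  = Person  { personName  :: String }
  data Company = Company { companyName :: String }

  main :: IO ()
  main = putStrLn (personName (Person "Ada") ++ " works at "
                ++ companyName (Company "Initech"))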

On 1/02/2012, at 11:38 AM, AntC wrote:
As soon as you decide to make 'virtual record selectors' just ordinary functions (so they select but not update), then you can see that field names are also just ordinary functions (for selection purposes). So the semantics for field 'selection' (whether or not you use dot notation) is just function application. So Type-Directed Name resolution is just instance resolution. So it all gets much easier.
I'm reminded of Pop-2, where f(x) and x.f meant exactly the same thing. Overloading was a (dynamic) property of f, not a property of dot.

Ada had two reasons for adding dot syntax, and much as I admire Ada, I'm not sure that I agree with either of them. One was to be more familiar to programmers from other languages, but since there remain interesting differences between x.f in Ada and x.f in other languages, it's not clear to me how much of a kindness that really is. The other is that x.f means basically what f(x) would have, *had f(x) been legal*; the aim was to be able to use methods without having to import everything from a module.

Now that might have some relevance to Haskell. Making f x and x.f the same is pretty appealing, but it is imaginable that the former might require importing the name of f from a module and the latter might not. That is to say, it lets f and .f have completely different meanings. Oh the joy! Oh the improved readability! -- on some other planet, maybe.

Steve> Every programmer has their own favorite editor, usually using the same
Steve> one to work in many different languages. For the moment, you'd have
Steve> a hard job separating me from Notepad++.

Main editors have very advanced customization features (though incredibly hacky most of the time). A type-directed (this word is what I like most in the proposal ;]) Haskell editing mode for them could be a good first step.

Steve> If you really want a "semantic editor", I'd argue a rich visual
Steve> language with a file format that isn't intended to be read directly.
Steve> Something more like writing in Word than writing in TeX. But I don't
Steve> think most programmers are ready for this, for various reasons.
Steve> Version control tools and readable differences get a place near the
Steve> top of that list.

Well, in the long term I don't know ... maybe plain text won't be a good representation of a program anymore. But in the short term that's not an option. However, I see no problem in just constructing this textual representation through a strictly type-directed (yeah!) syntax tree editor. Emacs, Vim, and a lot of others have "snippet" support. The workflow could be something like:

- action to create a new top-level function
- PROMPT: name (eg. map)
- PROMPT: signature (eg. (a -> b) -> [a] -> [b])
- PROMPT: parameters matching (eg. f (x:xs))
- a stub is inserted with name, signature, and "undefined" definition:

    map :: (a -> b) -> [a] -> [b]
    map f (x:xs) = undefined
    map f [] = undefined

- now we enter definition construction. You would start by adding to a 'pool' the bindings you want to combine. Some could be added to the pool automatically (function parameters, top-level definitions in the module, explicitly imported functions ...). Then you would pick one of them, and the type-directed system would offer type-correct completion. For example:

  - The pool state is { f :: a -> b, x :: a, xs :: [a], map :: (a -> b) -> [a] -> [b] } (the type variables scope over the entire pool).
  - The definition type must be [b].
  - I ask to use 'f'. It isn't [b] so I can't use it alone. The wonderful system would then reduce the pool to things that I can combine f with in order to keep the target [b] type. The result is { map f xs :: [b] }. I say ok.
  - The stub is now:

      map :: (a -> b) -> [a] -> [b]
      map f (x:xs) = map f xs
      map f [] = undefined

  - Now I ask to use (:) :: c -> [c] -> [c]. There are plenty of places where it could be used in the definition, so let's narrow the choice by associating the 'c' type with something in our expression: c == b. So (:) :: b -> [b] -> [b].
  - We have no expression typing as 'b' in the definition, but we have a single expression that types as [b], and it is 'map f xs'. So the system can safely offer:

      map :: (a -> b) -> [a] -> [b]
      map f (x:xs) = undefined : map f xs
      map f [] = undefined

  - Now let's define the first 'undefined'. Its type is b. We ask this time to use the 'x' binding (x :: a). But we are looking for a 'b'. We have f :: a -> b, so the system can offer 'f x'.

      map :: (a -> b) -> [a] -> [b]
      map f (x:xs) = f x : map f xs
      map f [] = undefined

  - The last undefined is trivial.

The user interface would certainly matter much, to have a fast and pleasant experience. But the point is that as a pure language, Haskell looks very well suited for this kind of incremental syntax-tree editing, with type-directed assistance.
I wonder, even, if some rules could be defined to construct automatically a definition that typechecks and uses all bindings in the pool :)

Back to the point of the thread, it looks like we certainly can target type-directed editing with current Haskell function notation, which has the advantage of being beautiful and consistent.

-- Paul
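[Editorial note] For reference, the finished definition from the walkthrough above is just the standard map, runnable as written (renamed map' here only to avoid shadowing the Prelude's):

  map' :: (a -> b) -> [a] -> [b]
  map' f (x:xs) = f x : map' f xs
  map' f []     = []

  main :: IO ()
  main = print (map' (+1) [1, 2, 3 :: Int])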

Steve Horne
There's a proposal at the moment to add support for TDNR to Haskell - to leverage "the power of the dot" (e.g. for intellisense).
http://hackage.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolutio...
I'm not sure whether this should really be a language feature. A smart editor together with compiler support can do this without language extensions.

The basic problem is that without the dot style you write the function before you write its argument. For an intellisense-like feature you need to write the argument before you write the function. Now in a smart editor you could write "x.", at which point the editor could examine the source file to find the actual type of 'x' as well as the expected type of the spot where you are currently writing. Once it has built a list of suitable functions, it could rewrite the "x." to "x", place the cursor in front of it and let you browse the list of suggestions:

  x._  ->  [suggestions]_ x

An even smarter editor could provide something like agda-mode's hole feature. In Agda you can write "f ?", at which point agda-mode replaces the question mark by a hole. You can then ask for the type of the term that goes into the hole as well as try to infer the value. Agda-mode doesn't provide you with a list of suggestions, but in Haskell with type inference this could certainly be possible. I would prefer holes over dot-application.

Greets,
Ertugrul

--
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://ertes.de/
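[Editorial note] As it happens, later versions of GHC grew typed holes in much this spirit (a present-day GHC sketch, not part of the proposals in this thread): leaving an underscore where an expression should go makes the compiler report the type it expects there, and with -fdefer-typed-holes that report becomes a warning instead of an error.

  mapSketch :: (a -> b) -> [a] -> [b]
  mapSketch f (x:xs) = _ : mapSketch f xs   -- GHC reports: Found hole: _ :: b
  mapSketch _ []     = []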
participants (6):
- AntC
- Donn Cave
- Ertugrul Söylemez
- Paul R
- Richard O'Keefe
- Steve Horne