
While browsing Haskell-Prime I found this: http://hackage.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolutio... This is not some April Fool's day hoax? Because, it might actually turn Haskell into a somewhat usable (and marketable) language ... well, you know what I mean. Is there 'ghc -XTypeDirectedNameResolution' yet? - J.W.

Hello Johannes, Monday, July 27, 2009, 7:58:11 PM, you wrote:
While browsing Haskell-Prime I found this: http://hackage.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolutio...
Haskell-prime is the future Haskell standard, now in development; on this wiki anyone can write up a proposal without any actual implementation. -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

2009/7/27 Johannes Waldmann
While browsing Haskell-Prime I found this: http://hackage.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolutio...
This is not some April Fool's day hoax? Because, it might actually turn Haskell into a somewhat usable (and marketable) language ... well, you know what I mean.
You would be arguing against it then? ;)
Is there 'ghc -XTypeDirectedNameResolution' yet?
I'm pretty sure there is not. I don't really care for this proposal because it involves yet more overloading of (.) which rather annoys me. Composition is by far the most common infix operator in my code, and I consider its choice for module qualification a minor, but irritating mistake. (It makes composition chains with qualified names just that much harder to scan with your eyes.)

There was a great related idea on #haskell the other day: Make explicit qualification unnecessary whenever there is a *unique* choice of module qualifications from those imported which would make the expression typecheck. Ambiguities would still need to be qualified, but I feel that this would eliminate 99% of all ugly qualified names from code. It would be especially good in the case of infix operators, which as far as I know, nobody actually enjoys qualifying explicitly.

I can see some corner cases where this might lead to a combinatorial explosion of possible qualifications to try; however, I have a feeling that such cases wouldn't really happen in practice. It is admittedly a bit like a very restricted sort of ad-hoc polymorphism, but given that the modules are compiled separately, I think the actual interactions with the workings of the type system are not really very significant. (It's just as if you tried each set of possible qualifications, compiling the whole thing each time, and checked whether there's a unique way to get the module to compile.)

This would mean that if we had, say, Data.List, Data.Map and Data.Set imported, and there was an occurrence of insert that happened to be applied to a couple of values and then something known to be a Map, it would behave as if you'd written Data.Map.insert, because that's the only thing which could possibly make sense. If there were ambiguity about which insert you meant, it would still be an error, and you might have to qualify it explicitly.

What do people think of this idea? Personally, it really annoys me whenever I'm forced to give explicit module qualifications, and I think this would really help. It would also subsume the DisambiguateRecordFields extension rather handily. - Cale
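For concreteness, here is the insert example as it must be written in today's Haskell, where the qualifier is mandatory even though the Map argument already pins down which insert is meant (an illustrative module, not from the thread):

    import qualified Data.List as List
    import qualified Data.Map  as Map
    import qualified Data.Set  as Set

    -- Today the 'Map.' prefix is required; under the proposed rule the
    -- bare name 'insert' would resolve to Data.Map.insert on its own,
    -- since no other imported 'insert' could make this typecheck.
    addEntry :: String -> Int -> Map.Map String Int -> Map.Map String Int
    addEntry = Map.insert

Under the proposal, the last line could be written simply as addEntry = insert, with all three modules imported unqualified.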

Cale Gibbard wrote:
What do people think of this idea? Personally, it really annoys me whenever I'm forced to give explicit module qualifications, and I think this would really help. It would also subsume the DisambiguateRecordFields extension rather handily.
A disadvantage - and this is not a "No" vote, just a remark - is that when trying to debug the expression

    foo bar baz quux

if I type ":t bar" I will presumably get an ambiguity error, and I may have no easy way of working out *which* bar was actually intended in this line of code. I don't know how much of a burden this is, but it feels like a burden to writing/debugging/understanding code. Jules

2009/7/27 Jules Bean
Cale Gibbard wrote:
What do people think of this idea? Personally, it really annoys me whenever I'm forced to give explicit module qualifications, and I think this would really help. It would also subsume the DisambiguateRecordFields extension rather handily.
A disadvantage - and this is not a "No" vote, just a remark - is that when trying to debug the expression:
foo bar baz quux
if I type ":t bar" I will presumably get an ambiguity error, and I may have no easy way of working out *which* bar was actually intended in this line of code.
I don't know how much of a burden this is, but it feels like a burden to writing/debugging/understanding code.
Jules
There certainly do seem like some cases where it would help the person reading the code to qualify which module you meant, so clearly if it's not very obvious which selection of modules produces the unique way to get things to typecheck, that's not very good. Perhaps there should at least be the restriction that there must exist a chain of individual choices made where there was a unique possibility at each step. This ensures that you never have to backtrack in deciding which modules things are intended to come from. Of course, in cases where it's still not obvious, it'd still be possible to make the qualification explicit. The goal is to eliminate the need to explicitly qualify in the cases where it's entirely obvious what the qualification should be. - Cale

I've spoken in favor of this many times before. But there are many who think, "Every function you write should have a unique name." Talk about needless clutter. Regards, John A. De Goes N-Brain, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101 On Jul 27, 2009, at 10:29 AM, Cale Gibbard wrote:
What do people think of this idea? Personally, it really annoys me whenever I'm forced to give explicit module qualifications, and I think this would really help. It would also subsume the DisambiguateRecordFields extension rather handily.

On Mon, Jul 27, 2009 at 9:29 AM, Cale Gibbard
2009/7/27 Johannes Waldmann
While browsing Haskell-Prime I found this:
http://hackage.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolutio...
This is not some April Fool's day hoax? Because, it might actually turn Haskell into a somewhat usable (and marketable) language ... well, you know what I mean.
You would be arguing against it then? ;)
Is there 'ghc -XTypeDirectedNameResolution' yet?
I'm pretty sure there is not.
I don't really care for this proposal because it involves yet more overloading of (.) which rather annoys me. Composition is by far the most common infix operator in my code, and I consider its choice for module qualification a minor, but irritating mistake. (It makes composition chains with qualified names just that much harder to scan with your eyes.)
There was a great related idea on #haskell the other day: Make explicit qualification unnecessary whenever there is a *unique* choice of module qualifications from those imported which would make the expression typecheck. Ambiguities would still need to be qualified, but I feel that this would eliminate 99% of all ugly qualified names from code. It would be especially good in the case of infix operators, which as far as I know, nobody actually enjoys qualifying explicitly.
My biggest fear is that of usability. If I understand you correctly, then as you change module imports you change the meaning of the code in potentially non-obvious ways. So this isn't too different than using unqualified imports and flipping between two modules that export the same function. Except that as you increase the 'automatic'ness of it, it has the potential to trip up people.

I think the worst case is this: you add a module import and suddenly you have an error in code that was previously compiling. Suddenly the auto-disambiguation stops working in a chunk of code that was fine before. When working with code that others have written this would confuse me. I think I would then start using qualified imports everywhere just to "work around" this feature :) Yes, I realize it would be an extension, but it would be an extension that I suspect I would try to avoid.
What do people think of this idea? Personally, it really annoys me whenever I'm forced to give explicit module qualifications, and I think this would really help. It would also subsume the DisambiguateRecordFields extension rather handily.
I like explicit module qualifications more or less. I certainly don't mind reading them and I don't care if it takes 2 seconds longer to type something. You asked for my thoughts and there they are :) Jason

2009/7/27 Jason Dagit
On Mon, Jul 27, 2009 at 9:29 AM, Cale Gibbard wrote:
2009/7/27 Johannes Waldmann
While browsing Haskell-Prime I found this:
http://hackage.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolutio...
This is not some April Fool's day hoax? Because, it might actually turn Haskell into a somewhat usable (and marketable) language ... well, you know what I mean.
You would be arguing against it then? ;)
Is there 'ghc -XTypeDirectedNameResolution' yet?
I'm pretty sure there is not.
I don't really care for this proposal because it involves yet more overloading of (.) which rather annoys me. Composition is by far the most common infix operator in my code, and I consider its choice for module qualification a minor, but irritating mistake. (It makes composition chains with qualified names just that much harder to scan with your eyes.)
There was a great related idea on #haskell the other day: Make explicit qualification unnecessary whenever there is a *unique* choice of module qualifications from those imported which would make the expression typecheck. Ambiguities would still need to be qualified, but I feel that this would eliminate 99% of all ugly qualified names from code. It would be especially good in the case of infix operators, which as far as I know, nobody actually enjoys qualifying explicitly.
My biggest fear is that of usability.
If I understand you correctly, then as you change module imports you change the meaning of the code in potentially non-obvious ways. So this isn't too different than using unqualified imports and flipping between two modules that export the same function. Except that as you increase the 'automatic'ness of it, it has the potential to trip up people.
Well, yes, but only insofar as you can already cause that to happen. Simply adding a new module import might force you to qualify some names, as you mention below (and as it can already force you to do), but will never cause the meaning to otherwise change, since it's not doing something like picking the first module which works (which would really be bad). It's only fixing the qualification in places where there's *exactly one* qualification that would work. Similarly, removing a module import will not cause the meaning to change, supposing that the module still compiles, and if you can do that, you can still be certain that nothing from that module was being used. Of course, put the two together in one step, and you can change semantics arbitrarily, but that's already the case, and I don't think that's necessarily something to be avoided.
I think the worst case is this: you add a module import and suddenly you have an error in code that was previously compiling. Suddenly the auto-disambiguation stops working in a chunk of code that was fine before. When working with code that others have written this would confuse me. I think I would then start using qualified imports everywhere just to "work around" this feature :)
You can already import a module and suddenly need to disambiguate expressions where a name gets used. This just relieves you of that responsibility sometimes. Consider the case where you have a module with a bunch of uses of stuff from Data.List and the Prelude, and you decide that you need Data.Map for a new function. If you import Data.Map, all of a sudden, you need to qualify half of the Prelude and Data.List stuff or it would be ambiguous. (Or import Data.Map qualified.)
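A minimal module (my example) showing the breakage Jason worries about and Cale describes here: in today's Haskell, adding one unqualified import makes a previously fine Prelude name ambiguous.

    import Data.List (sortBy)
    import Data.Map            -- adding this line breaks the function below

    -- GHC now rejects this with "Ambiguous occurrence 'filter'",
    -- because Data.Map also exports a 'filter':
    evens :: [Int] -> [Int]
    evens = filter even

The usual fixes today are 'import qualified Data.Map as Map' or hiding the clashing names; the proposal would instead notice that only Prelude's filter can typecheck at the type [Int] -> [Int].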
Yes, I realize it would be an extension, but it would be an extension that I suspect I would try to avoid.
What do people think of this idea? Personally, it really annoys me whenever I'm forced to give explicit module qualifications, and I think this would really help. It would also subsume the DisambiguateRecordFields extension rather handily.
I like explicit module qualifications more or less. I certainly don't mind reading them and I don't care if it takes 2 seconds longer to type something. You asked for my thoughts and there they are :)
Well, thanks :) I do think some thought has to be put into limiting the power of it, so that we don't end up with situations where only a computer could figure out what's going on. - Cale

Cale et al,

I have a concern about the implementation of the proposed TypeDirectedNameResolution. (I'm not familiar with the internals of any of the compilers, so it could be that my concern isn't well founded.) I'm assuming that name resolution is currently independent of type inference, and will happen before type inference. With the proposal this is no longer true, and in general some partial type inference will have to happen before conflicting unqualified names are resolved. My worry is that the proposal will require a compliant compiler to interweave name resolution and type inference iteratively.

Given the Haskell source code:

    import X
    import Y
    import Z

    a = 1 + (1 :: Integer)
    b = x a
    c = y b
    d = z c
    ...

Assume X, Y, and Z all export x, y, and z. The compiler might do something like this:

    (do stuff)
    Infer type of 'a'
    Resolve unqualified use of 'x' in "x a"
    Infer type of 'b' from "b = x a"
    Resolve unqualified use of 'y' in "y b"
    Infer type of 'c' from "c = y b"
    Resolve unqualified use of 'z' in "z c"
    etc.

If ambiguous unqualified names are used mutually recursively it may be that there's only one reasonable combination of name resolutions, but is this decidable? To my untrained eye it looks complicated and invasive, even without the mutually recursive case. Can anyone shed light on whether this would be a problem for, say, GHC?

Thanks,
John

I would find a third meaning for dot in Haskell just a little bit too many. Especially with hierarchical modules, Haskell encourages writing small modules, or at any rate no larger than they have to be. (SML goes further, of course.) So if we're doing what the software engineering books say (small highly cohesive loosely coupled modules) we shouldn't have that much of a name overloading problem in the first place, no? The alias facility for imports, and the practice that has sprung up of using single-letter abbreviations for modules, make the existing practice fairly light-weight. I'm not sure, therefore, that we have a problem that OUGHT to be "solved", still less by a method that I find less readable.

It seems to me that there's an approach that could be swiped from ML. One of the declaration forms in ML is

    local <private declarations>
    in <public declarations>
    end

Just as "where" lets you share a bunch of definitions across a bunch of clauses, so "local" makes everything in the <private declarations> visible inside the <public declarations> but not outside; from the outside only the <public declarations> can be seen. Waving my hands a bit, this is roughly equivalent to

    (x1,...,xn) = let <private declarations>
                  in let <public declarations>
                  in (x1,...,xn)

What has this to do with imports? It means that you can introduce local renamings that can be _shared_ by a group of declarations, while elsewhere in the module, another group of declarations can share different resolutions for the same names.

The advantages of that over type directed resolution include:
- what's happening is visible to the programmer
- it's basically ordinary declarations, not some new constraint satisfaction process that has to be interwoven somehow with type inference
- that makes life easier for other tools as well
- it doesn't overload dot (or my small head)
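A rough approximation of that shared-renaming idea in today's Haskell, using a top-level tuple binding (my sketch; the names are illustrative):

    import qualified Data.Map as M

    -- Two public definitions sharing a private group of renamings,
    -- loosely what ML's  local ... in ... end  would buy us.
    (histogram, frequencies) =
        let insert m w = M.insertWith (+) w (1 :: Int) m
            build ws   = foldl insert M.empty (ws :: [String])
        in  (build, M.toList . build)

It is clumsier than a real 'local' form, of course, since the tuple has to name every public binding and the bindings end up monomorphic.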

On 28/07/2009, at 6:41 AM, John Dorsey wrote:
I'm assuming that name resolution is currently independent of type inference, and will happen before type inference. With the proposal this is no longer true, and in general some partial type inference will have to happen before conflicting unqualified names are resolved.
My worry is that the proposal will require a compliant compiler to interweave name resolution and type inference iteratively.
To my untrained eye it looks complicated and invasive, even without the mutually recursive case. Can anyone shed light on whether this would be a problem for, say, GHC?
My experimental compiler DDC [1] implements TDNR almost exactly as given on the Haskell' wiki. Yes, you have to interweave name resolution with type inference, because there is no way to compute the binding dependency graph/call graph before type inference proper. This is discussed in section 3.5 of my thesis [2] (which is currently under examination). For DDC I used a constraint based inference algorithm to compute the binding dependency graph "on the fly", but I don't know how easy it would be to retrofit this method into GHC. Cheers, Ben. [1] http://www.haskell.org/haskellwiki/DDC [2] http://cs.anu.edu.au/people/Ben.Lippmeier/project/thesis/thesis-lippmeier-su...

On Mon, Jul 27, 2009 at 04:41:37PM -0400, John Dorsey wrote:
I'm assuming that name resolution is currently independent of type inference, and will happen before type inference. With the proposal this is no longer true, and in general some partial type inference will have to happen before conflicting unqualified names are resolved.
My worry is that the proposal will require a compliant compiler to interweave name resolution and type inference iteratively.
Indeed. This is my concern too. I can't see any way to implement it in jhc, at least not without some major hassle. Name resolution occurs several stages before typechecking, even before desugaring; having to intertwine it with type checking would be a complicated affair to say the least. John -- John Meacham - ⑆repetae.net⑆john⑈ - http://notanumber.net/

John Meacham wrote:
On Mon, Jul 27, 2009 at 04:41:37PM -0400, John Dorsey wrote:
I'm assuming that name resolution is currently independent of type inference, and will happen before type inference. With the proposal this is no longer true, and in general some partial type inference will have to happen before conflicting unqualified names are resolved.
My worry is that the proposal will require a compliant compiler to interweave name resolution and type inference iteratively.
Indeed. This is my concern too. I can't see any way to implement it in jhc, at least not without some major hassle. Name resolution occurs several stages before typechecking, even before desugaring; having to intertwine it with type checking would be a complicated affair to say the least.
You can still resolve the names first, while keeping the ambiguity:

    data Expr = ...
              | OverloadedVar [UniqueId]  -- after name resolution

Then the type checker checks all possible overloads, and in the end only one variable reference is left. TDNR would still complicate the typechecker, since it suddenly needs to do backtracking. Twan
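A toy standalone sketch of that pruning step (all names here are mine, not Twan's or jhc's): carry every candidate through, then filter by the type the context demands.

    -- candidates for one ambiguous name, tagged with a rendered type
    data Candidate = Candidate { candName :: String, candType :: String }
        deriving Show

    -- keep the ambiguity until the expected type is known, then prune
    resolve :: String -> [Candidate] -> Either String Candidate
    resolve expected cs =
        case filter ((== expected) . candType) cs of
            [c] -> Right c
            []  -> Left "no candidate has the expected type"
            _   -> Left "still ambiguous after type checking"

The backtracking Twan mentions arises because in a real typechecker the "expected type" is itself being inferred, so a wrong provisional choice may have to be undone.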

about qualified imports and TDNR: for x.f to work (as in the proposal), the name f must be in scope (that is, be imported unqualified)? That would be bad (unqualified imports should be discouraged). In Java, the methods of a type are automatically in scope, e.g., the ".bitCount()" works without any "import":

    System.out.println ( java.math.BigInteger.TEN.bitCount() );

see JLS 15.12.1 item 1.3 ("in all other cases...") J.W.

Jason Dagit
My biggest fear is that of usability.
If I understand you correctly, then as you change module imports you change the meaning of the code in potentially non-obvious ways. So this isn't too different than using unqualified imports and flipping between two modules that export the same function. Except that as you increase the 'automatic'ness of it, it has the potential to trip up people.
My biggest fear is seeing it actually implemented as a language switch, resulting in chaotic inference behaviour as soon as imports change. I think the Right Place to do this is on the editor/IDE level: help the programmer to use the right function by scanning through types, leave the display uncluttered (but make precise information about what function is referenced easily available), and save the source code fully qualified. Source code is not only made for human consumption but also as fodder for batch processes, re-compilation with other compilers etc., so we shouldn't put anything into it that can break way too easily, be it because type inference changes in subtle ways or some library merely happens to export an additional function. Thus is the way of Malbolge, be ye warned, ye who construeth isomorphic brainb0rkz0rage.
| Book 4 - Coding | | Thus spake the master programmer: | | ``A well-written program is its own heaven; a poorly-written program | is its own hell.'' | [...] | A program should follow the `Law of Least Astonishment'. What is this | law? It is simply that the program should always respond to the user | in the way that astonishes him least. | | A program, no matter how complex, should act as a single unit. The | program should be directed by the logic within rather than by outward | appearances. | | If the program fails in these requirements, it will be in a state of | disorder and confusion. The only way to correct this is to rewrite | the program. | [...]
etc. pp. qed.

Cale Gibbard wrote:
There was a great related idea on #haskell the other day: Make explicit qualification unnecessary whenever there is a *unique* choice of module qualifications from those imported which would make the expression typecheck. Ambiguities would still need to be qualified, but I feel that this would eliminate 99% of all ugly qualified names from code. It would be especially good in the case of infix operators, which as far as I know, nobody actually enjoys qualifying explicitly.
[...]
What do people think of this idea? Personally, it really annoys me whenever I'm forced to give explicit module qualifications, and I think this would really help. It would also subsume the DisambiguateRecordFields extension rather handily.
I think this idea would severely damage compositionality. One example of this is that it would make it substantially less likely that subexpressions could be abstracted into a separate declaration without giving a type signature to fix the type of the new declaration. Ganesh

Sittampalam, Ganesh wrote:
... it would make it substantially less likely that subexpressions could be abstracted into a separate declaration without giving a type signature to fix the type of the new declaration.
OK, then give a type signature to fix the type of (really, to document) the new declaration. I can't understand why declarative programmers insist they should be able to omit (type) declarations ... Best, J.W.

(To be clear, this about Cale's proposal, not simonpj's one) Johannes Waldmann wrote:
Sittampalam, Ganesh wrote:
... it would make it substantially less likely that subexpressions could be abstracted into a separate declaration without giving a type signature to fix the type of the new declaration.
OK, then give a type signature to fix the type of (really, to document) the new declaration.
I can't understand why declarative programmers insist they should be able to omit (type) declarations ...
I find type inference a valuable feature. Generally at some point I annotate top-level declarations, but while developing it's nice not to have to worry about it. Having to annotate every single declaration would be painful. Ganesh

2009/7/28 Sittampalam, Ganesh
Cale Gibbard wrote:
There was a great related idea on #haskell the other day: Make explicit qualification unnecessary whenever there is a *unique* choice of module qualifications from those imported which would make the expression typecheck. Ambiguities would still need to be qualified, but I feel that this would eliminate 99% of all ugly qualified names from code. It would be especially good in the case of infix operators, which as far as I know, nobody actually enjoys qualifying explicitly.
[...]
What do people think of this idea? Personally, it really annoys me whenever I'm forced to give explicit module qualifications, and I think this would really help. It would also subsume the DisambiguateRecordFields extension rather handily.
I think this idea would severely damage compositionality. One example of this is that it would make it substantially less likely that subexpressions could be abstracted into a separate declaration without giving a type signature to fix the type of the new declaration.
Ganesh
Ah, now that does seem a rather good point, the worry being that generalisation happens at the top of that new declaration, thereby suddenly making more than one of the options typecheck, even though the function/value being defined is still used at the appropriate type. That might be enough of a hindrance to kill the idea, yeah, though I wonder exactly how often it would happen relative to the annoyance of always having to make the obvious qualifications. It would be nice to have as an extension at least, I think, to get a sense for this. I wouldn't advocate putting anything in a standard which we haven't actually tried, of course. (However, I also think we should have a bit less reverence for keeping things in line with the standard, so long as a compliant implementation exists...) - Cale

Cale Gibbard wrote:
There was a great related idea on #haskell the other day: Make explicit qualification unnecessary whenever there is a *unique* choice of module qualifications from those imported which would make the expression typecheck. Ambiguities would still need to be qualified, but I feel that this would eliminate 99% of all ugly qualified names from code. It would be especially good in the case of infix operators, which as far as I know, nobody actually enjoys qualifying explicitly.
[...]
What do people think of this idea? Personally, it really annoys me whenever I'm forced to give explicit module qualifications, and I think this would really help. It would also subsume the DisambiguateRecordFields extension rather handily.
While I do agree that qualified names are annoying at times, I think that type directed name disambiguation is a Pandora's box. Furthermore, we already have a mechanism for type based disambiguation, namely good old type classes. For instance, the qualifications required when importing Data.Map are actually a sign that we are lacking proper container type classes à la Edison.

There are other possible language extensions that may make qualification easier; Pascal's with statement comes to mind.

http://freepascal.decenturl.com/with-statement-pascal

In Haskell, this would work something like this:

    histogram xs = with Data.Map (foldl' f empty xs)
        where
        f m x = case lookup x m of
            Just k  -> insert x (k+1) m
            Nothing -> insert x 1 m

In the scope of with, ambiguous qualifications default to Data.Map. Regards, apfelmus -- http://apfelmus.nfshost.com
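For comparison, the same function in today's Haskell with a qualified import (standard style, nothing hypothetical):

    import Data.List (foldl')
    import qualified Data.Map as M

    histogram :: Ord a => [a] -> M.Map a Int
    histogram = foldl' (\m x -> M.insertWith (+) x 1 m) M.empty

The with form would mainly save the 'M.' prefixes inside the function body.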

There are other possible language extension that may make qualification easier, Pascal's with statement comes to mind.
http://freepascal.decenturl.com/with-statement-pascal
In Haskell, this would work something like this:
histogram xs = with Data.Map (foldl' f empty xs)
    where
    f m x = case lookup x m of
        Just k  -> insert x (k+1) m
        Nothing -> insert x 1 m
I like both that and TDNR. It would be cool if at least one of them got accepted into Haskell Prime. David.

On Tue, Jul 28, 2009 at 1:41 AM, Heinrich Apfelmus wrote:
While I do agree that qualified names are annoying at times, I think that type directed name disambiguation is a Pandora's box.
I see where you are going, but I'm not sure I agree. Let me give an example from another language with this kind of resolution: C++. From a purely practical point of view, function overloading in C++ does what I want almost all the time. And when it doesn't do what I want, it's always been immediately obvious, and it's a sign that my design is flawed. But those cases where it does what I want have been incredibly useful.
Furthermore, we already have a mechanism for type based disambiguation, namely good old type classes. For instance, the qualifications required when importing Data.Map are actually a sign that we are lacking proper container type classes à la Edison.
Perhaps. I think containers are a great example of why you want TDNR. Many containers have *almost* the same interface, but not quite. After all, if the interface were the same, you would just find the most efficient container for that interface and call it a day. So unless you want to go in the direction of putting every single container-related function in its own typeclass, I don't think you'll be able to come up with "one container interface to rule them all".

To be fair, I'm not against the idea of putting each function in its own typeclass. For work along these lines, see the many "better numeric hierarchy" attempts. But I'd also need support for class aliases or something similar so that defining common subsets of those classes would work easily.

Along those lines, what about being able to elide class names when they can be unambiguously determined from the functions defined?

    instance _ [] where
        fmap = map
        pure x = [x]
        fs <*> xs = [ f x | f <- fs, x <- xs ]
        return x = [x]
        m >>= f = concatMap f m

This would define Functor, Applicative, and Monad for []. -- ryan

"Ryan" == Ryan Ingram
writes:
Ryan> Along those lines, what about being able to elide class Ryan> names when they can be unambiguously determined from the Ryan> functions defined? Ryan> instance _ [] where fmap = map pure x = [x] fs <*> xs = [ f Ryan> x | f <- fs, x <- xs ] return x = [x] m >>= f = concatMap f Ryan> m Ryan> This would define Functor, Applicative, and Monad for []. What happens if I define a class Foo with a method named fmap? If this were in scope then Functor would no longer be defined for []. Could this situation cause a problem? -- Colin Adams Preston Lancashire

On Jul 29, 2009, at 5:05 AM, Ryan Ingram wrote:
I see where you are going, but I'm not sure I agree. Let me give an example from another language with this kind of resolution: C++.
Right. That settles it: TDNR is a bad idea. Half fun and full earnest. I'm a fan of overloading as done in Ada, but the way C++ does it has always struck me as a mix of under-useful and over-complex, and my experience with it in practice has not been that marvellous. (C++ has far too many types that are _sort of_ compatible, but only sort of.) Interestingly, I've found that when I've thought I've wanted overloading in Haskell, what I've _really_ wanted is typeclasses, because they give me - far more confidence that my code is correct - far more _leverage_; "typeful programming" is amazing.

On Tuesday 28 July 2009 8:27:53 pm Richard O'Keefe wrote:
Right. That settles it: TDNR is a bad idea. Half fun and full earnest.
I'm a fan of overloading as done in Ada, but the way C++ does it has always struck me as a mix of under-useful and over-complex, and my experience with it in practice has not been that marvellous. (C++ has far too many types that are _sort of_ compatible, but only sort of.)
Amusingly enough, one of the major items going into C++0x was concepts, which are an effort to add type-class-like restrictions to C++'s current completely ad-hoc overloading. They were only recently dropped due to disagreements about certain details (I think I read that people couldn't agree whether programmers should be forced to declare the analogue of class instances, or whether the compiler should figure it out, but I haven't paid close attention, so that may be inaccurate). -- Dan

Ryan Ingram wrote:
Heinrich wrote:
While I do agree that qualified names are annoying at times, I think that type directed name disambiguation is a Pandora's box.
I see where you are going, but I'm not sure I agree. Let me give an example from another language with this kind of resolution: C++. From a purely practical point of view, function overloading in C++ does what I want almost all the time. And when it doesn't do what I want, it's always been immediately obvious, and it's a sign that my design is flawed.
But those cases where it does what I want have been incredibly useful.
Sure, overloading is useful. But to avoid headache in a polymorphic language, I'd prefer a principled approach to it. Hence, I'm convinced that there should be only one mechanism for overloading in Haskell; which is type classes at the moment. It appears that type directed name disambiguation can be implemented with (automatically generated) type classes? Something like this:

    class Function_lookup t where
        lookup :: t

    instance Function_lookup (k -> [(k,a)] -> Maybe a) where
        lookup = ...

    instance Function_lookup (k -> Map k a -> Maybe a) where
        lookup = ...

For each ambiguous function, the compiler creates a type class and corresponding instances, and type inference will sort out the rest (or throw a type error). Regards, apfelmus -- http://apfelmus.nfshost.com
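A filled-in version that actually compiles today (my bodies, and deliberately monomorphic types so that plain instance matching suffices; the polymorphic instances above would additionally need Eq/Ord constraints):

    {-# LANGUAGE FlexibleInstances #-}
    import Prelude hiding (lookup)
    import qualified Data.List as L
    import qualified Data.Map as M

    class Function_lookup t where
        lookup :: t

    instance Function_lookup (Int -> [(Int, String)] -> Maybe String) where
        lookup = L.lookup

    instance Function_lookup (Int -> M.Map Int String -> Maybe String) where
        lookup = M.lookup

    main :: IO ()
    main = do
        -- the type annotations are what drive instance selection,
        -- which is exactly the type-directed flavour of the idea
        print (lookup (1 :: Int) ([(1, "assoc list")] :: [(Int, String)]) :: Maybe String)
        print (lookup (1 :: Int) (M.fromList [(1, "map")] :: M.Map Int String) :: Maybe String)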

Heinrich Apfelmus schrieb:
Sure, overloading is useful. But to avoid headache in a polymorphic language, I'd prefer a principled approach to it. Hence, I'm convinced that there should be only one mechanism for overloading in Haskell; which is type classes at the moment.
It appears that type direction name disambiguation can be implemented with (automatically generated) type classes?
When thinking about how to make Haskell familiar to SQL programmers I also thought that a Haskell compiler might automatically generate a type class and corresponding instances, such that an identifier like 'field' can be used for all record types that have a field with name 'field'. I'm sure Template Haskell programmers can achieve this already with today's GHC. In the meantime I'm happy with writing qualifications, type signatures and so on. I don't know why people like to avoid them at all costs.
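Hand-written, the generated class for a shared field name might look like this (my sketch; a Template Haskell version would emit the same declarations):

    class Has_name r where
        name :: r -> String

    data Person  = Person  { personName  :: String }
    data Company = Company { companyName :: String }

    instance Has_name Person  where name = personName
    instance Has_name Company where name = companyName

    -- 'name' now works on both record types:
    greeting :: Person -> Company -> String
    greeting p c = name p ++ " works at " ++ name c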

On Tue, Jul 28, 2009 at 10:05:29AM -0700, Ryan Ingram wrote:
On Tue, Jul 28, 2009 at 1:41 AM, Heinrich Apfelmus wrote:
While I do agree that qualified names are annoying at times, I think that type directed name disambiguation is a Pandora's box.
I see where you are going, but I'm not sure I agree. Let me give an example from another language with this kind of resolution: C++. From a purely practical point of view, function overloading in C++ does what I want almost all the time. And when it doesn't do what I want, it's always been immediately obvious, and it's a sign that my design is flawed.
I would be careful about assuming aspects of C++ will translate to Haskell at the type system level. The reason is that in C++, all type information flows in one direction. For instance, in

    int foo(...) {
        ...
        int x = bar(z,y);
        ...
    }

the types of everything passed to bar and what it returns (x, y, and z) are fully specified at every call site, so 'name overloading' is a simple matter of finding the best match. However, in Haskell this isn't the case: 'bar' may affect the types used in 'foo'. There is two-way type information passing in Haskell. This radically changes the landscape for what is possible in the two type systems. John -- John Meacham - ⑆repetae.net⑆john⑈ - http://notanumber.net/
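A tiny illustration of that two-way flow (my example, not from the thread): 'read' has no fixed result type, so the type flows from the use site back into the call, something C++-style overload resolution cannot express.

    main :: IO ()
    main = do
        let n  = read "42" :: Int                    -- result type picks the parser
            ds = map read ["1.5", "2.5"] :: [Double] -- element type flows back through map
        print (n, ds)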

Cale Gibbard writes:
There was a great related idea on #haskell the other day: Make explicit qualification unnecessary whenever there is a *unique* choice of module qualifications from those imported which would make the expression typecheck.
[...]
This would mean that if we had, say, Data.List, Data.Map and Data.Set imported, and there was an occurrence of insert that happened to be applied to a couple of values and then something known to be a Map,
[...]
What do people think of this idea? Personally, it really annoys me whenever I'm forced to give explicit module qualifications, and I think this would really help. It would also subsume the DisambiguateRecordFields extension rather handily.
My favorite annoyance is repeated import lines for each library just to be able to use some unique identifiers unqualified, e.g.:

    import qualified Data.ByteString as B
    import Data.ByteString (ByteString)
    import qualified Data.Map as M
    import Data.Map (Map)

and so on. I'm all for it, if for no other reason, then just to get rid of this. But I agree about the syntax: leave the dot out of it. -k -- If I haven't seen further, it is by standing in the footprints of giants

Ketil Malde wrote:
Cale Gibbard writes:
There was a great related idea on #haskell the other day: Make explicit qualification unnecessary whenever there is a *unique* choice of module qualifications from those imported which would make the expression typecheck.
My favorite annoyance is repeated import lines for each library just to be able to use some unique identifiers unqualified, e.g.:
import qualified Data.ByteString as B
import Data.ByteString (ByteString)
import qualified Data.Map as M
import Data.Map (Map)
and so on. I'm all for it, if for no other reason, then just to get rid of this.
Note that there are alternative solutions for this particular problem. For instance, a version of qualified with different semantics will do; something like this:

    import Data.List
    import sometimes qualified Data.Map as Map

    foo :: Map k a     -- accepted without the qualifier 'Map',
                       -- because it's unambiguous

    bar m = map show m -- defaults to Data.List.map ; the 'Map'
                       -- prefix would be needed in cases of ambiguity

The idea being that names only need to be qualified when they are ambiguous, which Map and ByteString are not. Regards, apfelmus -- http://apfelmus.nfshost.com

Heinrich Apfelmus schrieb:
Note that there are alternative solution for this particular problem. For instance, a version of qualified with different semantics will do; something like this
import Data.List
import sometimes qualified Data.Map as Map
Isn't that quite the same as

    import Data.Map as Map

? But you risk breaking packages when new identifiers are added to imported modules.
foo :: Map k a     -- accepted without the qualifier 'Map',
                   -- because it's unambiguous

bar m = map show m -- defaults to Data.List.map ; the 'Map'
                   -- prefix would be needed in cases of ambiguity
The idea being that names only need to be qualified when they are ambiguous, which Map and ByteString are not.

Henning Thielemann wrote:
Heinrich Apfelmus schrieb:
Note that there are alternative solutions for this particular problem. For instance, a version of qualified with different semantics will do; something like this:
import Data.List
import sometimes qualified Data.Map as Map
Isn't that quite the same as
import Data.Map as Map
?
Not quite. The intended difference is that ambiguous names default to the module that imports them unqualified. I.e.

    import Data.List
    import sometimes qualified Data.Map as Map

    map   -- Data.List.map or Data.Map.map ?

will silently default to Data.List.map.
But you risk breaking packages when new identifiers are added to imported modules.
Yeah, that's kinda unavoidable if you don't want to qualify so many names. Regards, apfelmus -- http://apfelmus.nfshost.com

One issue I have which I haven't seen anyone mention is that it's not useful with qualified names, by which I mean always importing qualified. Of course if you have no problem always using qualified names, the problem this extension is solving doesn't exist. Though I do like short names, I'm not terribly bothered by writing Map.map and List.map. Most calls in a module are within the module after all, which is as it should be in most cases. So this extension would do nothing for me.

I like the explicitness of qualified names, and I find it hard to read someone's module when they call some function that comes somewhere out of a list of 15 imports at the top, and this extension would make it even harder to find the definition of the function... though tags would narrow down the search a lot. But with modules, often the prepended module name is all the information I need at the moment.

On the other hand, I do acknowledge that I'm pretty used to seeing x.y in an OO language and often don't mind that I need to know the type of 'x' and maybe even find the constructor call to know where to look for 'y'. So maybe it's not that big of a deal.

simonpj wrote: "What do I envy about object-oriented languages? Not state, not subtyping, not inheritance. But I do envy the power of type-based name resolution. Here's what I mean: Programmers can explore a library, with the help of an IDE, by typing "x.", at which point a list of x's methods pops up."

That feature is not really from OOP, as he also wrote. It is possible just because in Java or C++ you write the object first and then the method, connected with a dot. So we could say that most OOP languages use RPN (Reverse Polish Notation), but only for one argument: the object itself.

If Haskell had been designed for RPN, it would be natural to have the same IDE power as the OOP languages, in a more natural way. For example, as you write an integer the IDE could offer you the unary functions that can be applied to an integer, but if you enter a string after it, the IDE would offer you the functions of two arguments, one of them of type string and the other of type integer. That would be much more powerful indeed than the OOP case. It could be used not only in an IDE, but also in the interpreter, and not only with declared names, but also with implicit types like numbers, text strings, tuples, enum elements, ... It would be easier work for the compiler as well.

So, I think that it was not a good design decision not to use RPN as the basic notation for Haskell, but it is too late to change it :( .
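For what it's worth, modern Haskell does have a reverse-application operator, (&) from Data.Function, which gives exactly this value-first reading (a small illustration, my example; note (&) entered base well after this thread):

    import Data.Function ((&))

    example :: Int
    example = 5 & (+1) & (*2)   -- 12: the value flows left to right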

Enrique
So, I think that it was not a good design decision not to use RPN as the basic notation for Haskell, but it is late for changing it :( .
I don't think you want that anyway. First of all, meet van Laarhoven lenses [1]:

    x ^. field . subfield

This has the order you know from OOP, so the basic syntactic support for quick method auto-suggestion is there. You don't even need Control.Category, because the (.) is actually the regular function composition operator.

So where to go from here? One advantage of Haskell is that there is no such thing as a method. It's all functions and values, and you can actually establish a measure for how exactly a type captures another. Now just auto-suggest /all/ functions in scope sorted by how exactly their type matches. You can do that without lenses, if you have a smart editor:

    stdout `

Now it might display something like this:

    stdout `[hPutStrLn    ]
            [hSetBuffering]
            [hClose       ]
            [...          ]
            [const        ]
            [id           ]

Finally, when you select one of the functions it rewrites it to:

    hPutStrLn stdout

Of course in a real editor you would also show the type signature and probably also the module from where it was imported. I consider the record problem solved in Haskell.

Greets, Ertugrul

[1]: http://hackage.haskell.org/package/lens

-- Not to be or to be and (not to be or to be and (not to be or to be and (not to be or to be and ... that is the list monad.
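A minimal runnable example of the lens style Ertugrul refers to (my record names; uses the lens package from [1]):

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Lens

    data Address = Address { _city :: String } deriving Show
    data Person  = Person  { _name :: String, _address :: Address } deriving Show
    makeLenses ''Address
    makeLenses ''Person

    main :: IO ()
    main = do
        let p = Person "Ada" (Address "London")
        putStrLn (p ^. address . city)         -- OO-style access, left to right
        print (p & address . city .~ "Paris")  -- and an update, same order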
participants (23)
- Achim Schneider
- Ben Lippmeier
- Bulat Ziganshin
- Cale Gibbard
- Colin Paul Adams
- Dan Doel
- david48
- Enrique
- Ertugrul Söylemez
- Evan Laforge
- Heinrich Apfelmus
- Henning Thielemann
- Jason Dagit
- Johannes Waldmann
- John A. De Goes
- John Dorsey
- John Meacham
- Jules Bean
- Ketil Malde
- Richard O'Keefe
- Ryan Ingram
- Sittampalam, Ganesh
- Twan van Laarhoven