Re: TDNR without new operators or syntax changes

Yes, that is indeed what I meant. AntC seems to be replying to a much more complicated/invasive proposal than what I had intended; apologies if I wasn't clear. (I see in retrospect that I may have misunderstood the original TDNR proposal, understandably leading to confusion.)

1. If the compiler encounters a term f a, and there is more than one definition for f in scope (after following all of the usual rules for qualified imports);
2. And exactly one of these definitions matches the type of a (or the expected type of f if given);
3. Then this is the definition to use.

That is all, although point 2 could be extended to consider the return type of f or other arguments as well. Even with the extension on, it would have no effect on programs which compile without it.

This has nothing to do with ORF, which relies on magic type classes (although there is some overlap in what it can achieve). The primary use-case I had in mind is disambiguating name clashes between local and/or unqualified imports.

Cross-posting to cafe and libraries, as this doesn't seem to have attracted a lot of interest in users. Maybe it's just a boring proposal, maybe I didn't post it to the right list.
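For concreteness, the sort of clash the proposal targets already shows up with the containers library: Data.Map and Data.Set both export member. Today the clash is resolved by qualification; under the proposed rules, the argument's type would pick the definition. A runnable sketch of the status quo (assuming the containers package is available, as it ships with GHC):

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set

-- Both Data.Map and Data.Set export `member`. Imported unqualified,
-- the name would clash; today we disambiguate by qualifying. Under
-- the proposed rules, a bare `member 1 m` could resolve to Map.member
-- because the argument m has type Map Int String.
main :: IO ()
main = do
  let m = Map.fromList [(1 :: Int, "one")]
      s = Set.fromList [1 :: Int]
  print (Map.member 1 m)
  print (Set.member 2 s)
```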

On Sun, 22 May 2016, Jeremy . wrote:
1. If the compiler encounters a term f a, and there is more than one definition for f in scope (after following all of the usual rules for qualified imports);
2. And exactly one of these definitions matches the type of a (or the expected type of f if given);
3. Then this is the definition to use.
I know people are unhappy with Haskell's records and module system, but I still think that's because these language features are not used properly. Type classes are the tool to write generic code and reduce combinatoric explosion of functions, and modules are a way to collect functions per type. Following this principle you give function names that make sense together with the module name, like File.write or Channel.write. Then there is no need for the compiler to disambiguate unqualified identifiers, and you keep the type and the module issues separated.

Today, if you stick to the unqualified style, and I guess you want to import modules without an explicit identifier list too, then your code can break whenever the imported modules add identifiers. In order to prevent this you have to use tight package import version bounds in your Cabal file. Your proposal would not solve this problem. Type-driven name resolution may disambiguate identifiers in some cases but not in all of them, thus you still need to use tight version bounds.

The proposal seems to be a big complication. The compiler can no longer emit a message: "Type of function f mismatch", but it has to emit "If f means A.f, then the type error would be ..., but if it means B.f, then the type error would be ..." Or even worse: "If f means A.f and g means C.g, then the type error would be ... if f means B.f and g means C.g ... or f means A.f and g means D.g ..."

The proposal also is more error-prone, because if you make a type error this could be shadowed by a function in scope with a type that matches accidentally. A function with a very generic type and a generic name might be such an accidental match.

That said, the more complicated the proposals become, the more one should reconsider whether one is running in the right direction. I think this is the case here. The simple answer could be: Switch to a cleaner coding style!
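The style Henning recommends, generic code through a class rather than overloaded bare names, can be sketched as follows; the Sized class and the Queue/Stack types are invented for illustration, standing in for the File.write/Channel.write pattern:

```haskell
-- Generic code goes through a class; each type's functions could
-- equally live in a per-type module imported qualified. The class,
-- not any TDNR mechanism, resolves which `size` is meant.
class Sized a where
  size :: a -> Int

newtype Queue a = Queue [a]
newtype Stack a = Stack [a]

instance Sized (Queue a) where size (Queue xs) = length xs
instance Sized (Stack a) where size (Stack xs) = length xs

main :: IO ()
main = do
  print (size (Queue [1, 2, 3 :: Int]))
  print (size (Stack "ab"))
```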

Henning Thielemann wrote
I know people are unhappy with Haskell's records and module system, but I still think that's because these language features are not used properly. Type classes are the tool to write generic code and reduce combinatoric explosion of functions and modules are a way to collect functions per type. Following this principle you give function names that make sense together with the module name like File.write or Channel.write. Then there is no need for the compiler to disambiguate unqualified identifiers and you keep the type and the module issues separated.
The issue is with importing third-party modules, where you can't combine them into sensible type classes without writing your own abstraction layer. And in some cases, the similarities are in name only, so type classes wouldn't work in any case.

-- View this message in context: http://haskell.1045720.n5.nabble.com/TDNR-without-new-operators-or-syntax-ch... Sent from the Haskell - Glasgow-haskell-users mailing list archive at Nabble.com.

Jeremy . wrote:
Yes, that it indeed was I meant. AntC seems to be replying to a much more complicated/invasive proposal than what I had intended, apologies if I wasn't clear. (I see in retrospect that I may have misunderstood the original TDNR proposal, understandably leading to confusion.)
Your client broke the thread (it didn't add any References or In-Reply-To headers, and in my case I received the haskell-cafe copy first, so matching by Subject didn't work either). In any case this paragraph wouldn't make sense to anyone on the haskell-cafe or libraries mailing lists.
1. If the compiler encounters a term f a, and there is more than one definition for f in scope (after following all of the usual rules for qualified imports);
2. And exactly one of these definitions matches the type of a (or the expected type of f if given);
3. Then this is the definition to use.
I now find that Anthony's concerns are justified. The question is, what exactly does the type matching in step 2 involve? If it is a recursive call to the type-checker then you'll have a huge performance problem. If, on the other hand, you only take into account what is known about the type of a at a given time, then you need special treatment for unambiguous names or even trivial programs will fail to type-check, just as Anthony said. There's another concern: will type checking still be predictable? How do you explain type checking errors that arise from ambiguity to the user?
That is all, although point 2 could be extended to consider the return type of f or other arguments as well. Even with the extension on, it would have no effect on programs which compile without it.
Right, so what is special about the first argument? Do you have any use case in mind that goes beyond record field selectors? Cheers, Bertram

Bertram Felgenhauer-2 wrote
1. If the compiler encounters a term f a, and there is more than one definition for f in scope (after following all of the usual rules for qualified imports);
2. And exactly one of these definitions matches the type of a (or the expected type of f if given);
3. Then this is the definition to use.
I now find that Anthony's concerns are justified. The question is, what exactly does the type matching in step 2 involve? If it is a recursive call to the type-checker then you'll have a huge performance problem.
I was concerned about this, but not knowing anything about how type-checking/inference is implemented, I wouldn't know if this is actually a problem or not.
If, on the other hand, you only take into account what is known about the type of a at a given time, then you need special treatment for unambiguous names or even trivial programs will fail to type-check, just as Anthony said.
Why special treatment for unambiguous names? They shouldn't be affected at all by this.

Thanks for starting this discussion! Having spent more time thinking about record field overloading than perhaps I should, here are some things to think about... On 22/05/16 16:01, Jeremy wrote:
Bertram Felgenhauer-2 wrote
1. If the compiler encounters a term f a, and there is more than one definition for f in scope (after following all of the usual rules for qualified imports);
2. And exactly one of these definitions matches the type of a (or the expected type of f if given);
3. Then this is the definition to use.
I now find that Anthony's concerns are justified. The question is, what exactly does the type matching in step 2 involve? If it is a recursive call to the type-checker then you'll have a huge performance problem.
I was concerned about this, but not knowing anything about how type-checking/inference is implemented, I wouldn't know if this is actually a problem or not.
Unfortunately this is indeed a problem. When the type-checker encounters `f a`, it infers the type of `f`, ensures it is a function type `s -> t`, then checks that `a` has type `s`. But if there are multiple possible `f`s in scope, what should it do? Options include:

1. run the entire type-checking process for each possible type of `f` (this is almost certainly too slow);
2. give ad-hoc rules to look at type information from the context or the inferred type of the argument `a`;
3. introduce a name resolution constraint and defer it to the constraint solver (the same way that solving for typeclasses works);
4. require a type signature in a fixed location.

Both 2 and 3 would need careful specification, as they run the risk of exposing the type-checking algorithm to the user, and changing the typing rules depending on what is in scope may be problematic. In particular, if `f` has a higher-rank type then bidirectional type inference is likely to break down.

The DuplicateRecordFields approach is to make use of bidirectional type inference (a bit like 2, but without looking at the type of the argument) and otherwise require a type signature. This is carefully crafted to avoid needing to change existing typing rules. The wiki page [1] sets out some of the cases in which this does and doesn't work.

Point 3 is rather like the magic classes in the OverloadedRecordFields proposal [2] (this isn't in GHC HEAD yet, but an early version is on Phab [3]).
If, on the other hand, you only take into account what is known about the type of a at a given time, then you need special treatment for unambiguous names or even trivial programs will fail to type-check, just as Anthony said.
Why special treatment for unambiguous names? They shouldn't be effected at all by this.
There are some design choices here as well, separately from the options above. Some possibilities are:

A. leave unambiguous names alone, and only do the special thing for the ambiguous ones (this doesn't break existing code, but can lead to confusing errors if an import is changed to introduce or remove an ambiguity, especially for options 2 or 3 above);
B. do the same thing for unambiguous names as ambiguous ones (but this may well break existing code, and is likely to restrict inference too much);
C. require an explicit syntactic cue that the name should be treated specially (e.g. the original TDNR's use of dot, or the # sign in the OverloadedLabels part of ORF [4]).

As you can see, there are quite a few complex trade-offs here. I suspect this is at least part of the reason for the slow progress on TDNR-like extensions. It will be interesting to see how DuplicateRecordFields is received!

All the best,

Adam

[1] https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Duplica...
[2] https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/MagicCl...
[3] https://phabricator.haskell.org/D1687
[4] https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Overloa...

-- Adam Gundry, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/
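For a concrete feel of the bidirectional-inference compromise Adam describes, here is a minimal DuplicateRecordFields example (assumes GHC 8.0 or later); the pattern fixes the record type, so the duplicated field name resolves without any new typing rules:

```haskell
{-# LANGUAGE DuplicateRecordFields #-}

-- Two records in one module may now share a field name.
data Person  = Person  { name :: String }
data Company = Company { name :: String }

-- The Person pattern fixes the record type, so `name` is unambiguous.
personName :: Person -> String
personName Person { name = n } = n

main :: IO ()
main = putStrLn (personName Person { name = "Ada" })
```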

As a supplement, here's a series of definitions to think about. The
question is: what should happen in each section, and why? The more
detailed the answer, the better. Definitions from previous sections
are in scope in subsequent ones, for convenience. The examples are
arranged in a slippery slope, but there may be more good, confusing
examples that I missed.
--- cut ---
-- 1 --
f :: T -> Int
f t = ...
f :: U -> Int -> Char
f u = ...
t :: T
t = ...
u :: U
u = ...
i1 :: Int
i1 = f t
-- 2 --
g :: (T -> Int) -> Int
g h = h t
i2 :: Int
i2 = g f
-- 3 --
v :: T
v = t
v :: U
v = u
i3 :: Int
i3 = f v
-- 4 --
class C a where
c :: a -> Int
instance C T where
c = f
i4 :: Int
i4 = c v
-- 5 --
main = print $ f v
--- cut ---
-- Dan

Thank you for feeding my thoughts. How would this do for a slightly more detailed definition?

1. If the compiler encounters an ambiguous function, it will temporarily give it the type a -> b, or the type declared in the signature if there is one.
2. Type inference completes as normal.
3. If the inferred or declared type for an ambiguous name is sufficient to disambiguate it, it will be bound to the correct definition.

Dan Doel wrote
f :: T -> Int
f t = ...
f :: U -> Int -> Char
f u = ...
t :: T
t = ...
u :: U
u = ...
i1 :: Int
i1 = f t
Solving for everything but f, we get f :: T -> Int.
g :: (T -> Int) -> Int
g h = h t
i2 :: Int
i2 = g f
Solving for everything but f, we get f :: T -> Int.
v :: T
v = t
v :: U
v = u
i3 :: Int
i3 = f v
May not be solvable, would fail to disambiguate.

On Thu, May 26, 2016 at 5:14 AM, Peter
Solving for everything but f, we get f :: T -> Int.
So TDNR happens for things in function position (applied to something).
Solving for everything but f, we get f :: T -> Int.
So TDNR happens for things in argument position.
May not be solvable, would fail to disambiguate.
But there's exactly one combination of f and v definitions that will succeed with the right type. So why doesn't that happen? Here's an intermediate step:

i1' x = f x :: Int

What happens there? Another way to phrase the question is: why would TDNR only disambiguate based on argument types of functions and not on return types? Why would function types even be a factor at all?

And to be honest, I don't really think the description of what the type checker would be doing is detailed enough. If there are two ambiguous candidates, how does type checking proceed with 'the type in the declared signature' when there are two possible signatures which may be completely different? Is it doing backtracking search? How do you add backtracking search to GHC's inference algorithm? Etc. The later examples are designed to raise questions about this, too.

I'm rather of the opinion that TDNR just isn't a good feature for most things. Implicit in the famous "How to Make Ad-hoc Polymorphism Less Ad-hoc" is that being ad-hoc is bad. And type classes fix that by turning overloading into something that happens via an agreed upon interface, with declared conventions, and which can be abstracted over well. But TDNR has none of that, so to my mind it's not really desirable, except possibly in cases where there is no hope of being abstract (for instance, Agda does some TDNR for constructors in patterns, and there isn't much basis in the underlying theory for trying to abstract over doing induction on completely different types with similarly named pieces; so it's more acceptable).

But also, for something as far reaching as doing TDNR for every ambiguous name, it's not terribly clear to me what a good algorithm even is, unless it's only good enough to handle really simple examples, and just doesn't work most of the time (and even then, I'm not sure it's been thought through enough to say that it's a simple addition to GHC's type checking).

-- Dan

Dan Doel writes:
On Thu, May 26, 2016 at 5:14 AM, Peter wrote:
Solving for everything but f, we get f :: T -> Int.
So TDNR happens for things in function position (applied to something).
Before we get carried away, TDNR doesn't happen at all. You're speculating about what might be able to happen?
Solving for everything but f, we get f :: T -> Int.
So TDNR happens for things in argument position.
[By "things" there you mean functions.] TDNR as originally spec'd wouldn't try to solve this. (Because there's no dot-postfix invocation.) And my reading of DuplicateRecordFields is that it won't either. There's a good reason. We're trying to solve for record field accessors. So unlike Dan's examples, there's a bunch of same-named functions:

personId :: T -> Int
personId :: U -> Int
personId :: V -> Int

personId's are always Ints. There's no point trying to work 'outside in', because it won't disambiguate anything.
May not be solvable, would fail to disambiguate.
But there's exactly one combination of f and v definitions that will succeed with the right type. So why doesn't that happen?
Because in general you have to run the whole of type inference to solve this case. (That's Peter's step 2. in his earlier message.) But we can't run type inference until we've disambiguated all names. Chicken and egg.
... Another way to phrase the question is: why would TDNR only disambiguate based on argument types of functions and not on return types? ...
Because, per above, the return types are all the same (for same-named field accessors).
... Is it doing backtracking search? How do you add backtracking search to GHC's inference algorithm? Etc.
No, GHC does not currently do backtracking search, and no, it couldn't be somehow bolted on. There's no guarantee that adding backtracking could resolve any more cases than can be solved now, because we already have hugely powerful inference honed for decades, and type system design to exploit it.
... And type classes fix that by turning overloading into something that happens via an agreed upon interface, with declared conventions, and which can be abstracted over well. ...
Yes, so the full program for ORF is to make 'Magic Type Classes' seem to the type inferencer like regular class-based overloading.
But also, for something as far reaching as doing TDNR for every ambiguous name, it's not terribly clear to me what a good algorithm even is, unless it's only good enough to handle really simple examples, and just doesn't work most of the time ...
DuplicateRecordFields is aiming for the simple examples. If some case isn't simple enough, you can always add signatures until it is. AntC
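As a footnote on what 'Magic Type Classes' look like from the user's side: the idea later shipped as the constraint-solver-backed HasField class in GHC.Records, so field selection becomes ordinary class-based overloading. A small sketch (the Person record is illustrative; assumes a GHC new enough to have GHC.Records, i.e. 8.2 or later):

```haskell
{-# LANGUAGE DataKinds, TypeApplications #-}
import GHC.Records (getField)

-- An illustrative record; GHC's solver discharges the HasField
-- constraint, so `getField @"name"` resolves by ordinary class
-- machinery rather than ad-hoc name resolution.
data Person = Person { name :: String, age :: Int }

main :: IO ()
main = do
  putStrLn (getField @"name" (Person "Ada" 36))
  print (getField @"age" (Person "Ada" 36))
```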

As long as were back on this topic again (sort of), and just to
satisfy my curiosity, what would happen if you tried to do records
just like C structs? So e.g. a•b requires 'a' to be a record with a
'b' field, and is just one identifier, no functions involved, and 'b'
is not a separate value.
I could see how people might think it was not powerful enough or not
very idiomatic, but is there something about Haskell that would make
it just not work?
I didn't see that option on the increasingly all-encompassing Records wiki page.

Evan Laforge
writes:
... what would happen if you tried to do records just like C structs? So e.g. a•b requires 'a' to be a record with a 'b' field, and is just one identifier, no functions involved, and 'b' is not a separate value.
Hi Evan, um, that's the original TDNR isn't it? http://hackage.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolution As per Jeremy's ref in the first post in this thread.

If you're talking C then that blobby thing between a and b should be a dot(?) as postfix operator. And it is an operator, not "just one identifier". Then if you intend that dot bind tighter than function apply, but not as tight as Module name prefixes, you have the original TDNR.

If you intend some operator symbol other than dot, there's a risk existing code is already using it, and/or it's not ASCII. And you're introducing an operator with very non-standard binding rules. (The reason SPJ chose dot, apart from that following other languages, is there's already a precedent for it being tight-binding in Module prefixes.)
I didn't see that option on the increasingly all-encompassing Records wiki page.
It's linked as Solution 2 on that records page, called "TDNR". Surely there can't be any records approaches that haven't been aired by now? And GHC's direction is set. AntC

On Sat, May 28, 2016 at 3:13 AM, AntC wrote:
Evan Laforge writes:
... what would happen if you tried to do records just like C structs? So e.g. a•b requires 'a' to be a record with a 'b' field, and is just one identifier, no functions involved, and 'b' is not a separate value.
Hi Evan, um, that's the original TDNR isn't it? http://hackage.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolution As per Jeremy's ref in the first post in this thread.
If you're talking C then that blobby thing between a and b should be a dot(?) as postfix operator. And it is an operator, not "just one identifier".
That's why I was trying to emphasize "not an operator". TDNR is complicated because the field name is a first-class value, so it's a function applied to a record, and now we have to disambiguate the function. I didn't mean that.

I guess C (and Java and all the others) does call "." an operator because it parses as one and has precedence, but it works with identifiers, not values, and has special grammar. It's not an operator from the Haskell point of view. The C compiler isn't doing any overloading or disambiguation. As far as I understand it, the implementation is straightforward. Could GHC do something similarly straightforward? It already does for qualified module names, and in the same way: in M.a, ".a" is not a function that gets something out of an M.

Evan Laforge writes:
That's why I was trying to emphasize "not an operator". TDNR is complicated because ...

Peter voldermort writes:
A slightly more refined definition for disambiguation: ...
Hi Evan, Peter, (and even James), I'm not seeing that you're proposing anything significantly different to DuplicateRecordFields. That has the advantage we can use it now. If you are proposing something different, you need to explain in a lot more detail, so that we can see the advantages.

So [ref Evan] even though a field name is a first-class function usually, DuplicateRecordFields only gets triggered where you use the bare name.

[Ref Peter] I'm not seeing why you're talking about two passes, but that does not sound like a robust approach. (Can you be sure two passes is enough? If it is enough, why can't the second pass's logic get built into the first?)

Yes, DuplicateRecordFields is a bit of a hack. The proper solution (MagicTypeClasses) is still awaited. Can you explain why GHC should depart from that plan?

AntC

AntC wrote
I'm not seeing you're proposing anything that's significantly different to DuplicateRecordFields. That has the advantage we can use it now.
If you are proposing something different, you need to explain in a lot more detail, so that we can see the advantages.
So [ref Evan] even though a field name is a first-class function usually, DuplicateRecordFields only gets triggered where you use the bare name.
[Ref Peter] I'm not seeing why you're talking about two passes, but that does not sound like a robust approach. (Can you be sure two passes is enough? If it is enough, why can't the second pass's logic get built into the first?)
DRF only works for records, but brings polymorphism, duplicate definitions in the same module, and other goodies. This proposal will work for any function, but only helps with disambiguation. There is some overlap, but it is a different solution for different requirements.

A slightly more refined definition for disambiguation:

1) If a type signature has been supplied for an ambiguous name, GHC will attempt to disambiguate with the type signature alone.
2) If the name is a function applied to an explicit argument, and the type of the argument can be inferred without disambiguating anything (or is supplied as a type signature), then GHC will attempt to disambiguate by matching the type of the argument.

This is sufficient to cover all of the motivating examples proposed for TDNR, and every case where I have personally wanted it. It does not require backtracking (although it may require two passes to disambiguate on the inferred type of an argument), and disambiguating one name will not depend on disambiguating any other name.
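Peter's rule 2 can be modelled in miniature: resolve a name only when exactly one candidate's argument type matches the already-inferred type of the argument. Everything below, the Ty type, the candidate table, the names A.f and B.f, is invented for illustration and bears no relation to GHC's internals:

```haskell
-- Toy model of rule 2: a name is resolved iff exactly one candidate's
-- domain matches the argument's (already inferred) type. All names
-- and types here are hypothetical.
data Ty = TCon String | TArr Ty Ty deriving (Eq, Show)

disambiguate :: Ty -> [(String, Ty)] -> Maybe String
disambiguate argTy cands =
  case [n | (n, TArr dom _) <- cands, dom == argTy] of
    [n] -> Just n          -- exactly one match: resolved
    _   -> Nothing         -- zero or several: still ambiguous

main :: IO ()
main = do
  let cands = [ ("A.f", TArr (TCon "T") (TCon "Int"))
              , ("B.f", TArr (TCon "U") (TCon "Char")) ]
  print (disambiguate (TCon "T") cands)  -- one match
  print (disambiguate (TCon "V") cands)  -- no match
```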
participants (9)
-
Adam Gundry
-
AntC
-
Bertram Felgenhauer
-
Dan Doel
-
Evan Laforge
-
Henning Thielemann
-
Jeremy
-
Jeremy .
-
Peter