Overloading functions based on arguments?

Hi,

I just came across a problem like this: Suppose I've got two related functions that do similar things, and I want to call them the same... Like in:

foobar :: String -> Int -> Int
foobar :: Int -> String -> Int

(Bad example, but I hope you got the point.)

Is this kind of overloading (instead of the polymorphism-based overloading) possible in Haskell? Namely, to have two functions with the same name but different signatures so they could be distinguished by a call's parameters? I fear not... So I guess I have to name the functions differently, right?

Thanks,
Daniel

class Foobar a b where
  foobar :: a -> b -> Int

instance Foobar String Int where ...
instance Foobar Int String where ...
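(For reference, a compilable version of that sketch; it needs a couple of GHC extensions, and the instance bodies here are invented purely so the example runs:)

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

class Foobar a b where
  foobar :: a -> b -> Int

instance Foobar String Int where
  foobar s n = length s + n        -- invented body, for illustration only

instance Foobar Int String where
  foobar n s = n * length s        -- invented body, for illustration only

-- Call sites need enough type information to pick an instance,
-- e.g.  foobar "abc" (2 :: Int)  works, but  foobar "abc" 2  is ambiguous.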
2009/2/13 Daniel Kraft
Hi,
I just came across a problem like this: Suppose I've got two related functions that do similar things, and I want to call them the same... Like in:
foobar :: String -> Int -> Int
foobar :: Int -> String -> Int
(Bad example, but I hope you got the point.)
Is this kind of overloading (instead of the polymorphism based overloading) possible in Haskell? Namely to have two functions with the same name but different signatures so they could be distinguished by a call's parameters? I fear not... So I guess I have to name the functions differently, right?
Thanks, Daniel

On Fri, 2009-02-13 at 13:25 +0300, Eugene Kirpichov wrote:
class Foobar a b where
  foobar :: a -> b -> Int
instance Foobar String Int where ...
instance Foobar Int String where ...
But we typically do not do this. It's ugly. Classes work nicely when there is some kind of parametrisation going on, where a function can work with any instance of some interface. Ad-hoc overloading in the style of Java/C++ just isn't done, even though it can be encoded by the above trick. In the simple case just use a different name. If you have lots of variations then consider other approaches, like passing a data type containing some of the arguments (since a data type can encode alternatives) -- see the sketch below.

Duncan
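(A minimal sketch of that last idea, with invented constructor names and made-up function bodies:)

data FoobarArg = StringFirst String Int
               | IntFirst    Int    String

foobar :: FoobarArg -> Int
foobar (StringFirst s n) = length s + n   -- invented body, for illustration only
foobar (IntFirst    n s) = n * length s   -- invented body, for illustration only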
2009/2/13 Daniel Kraft
Hi,
I just came across a problem like this: Suppose I've got two related functions that do similar things, and I want to call them the same... Like in:
foobar :: String -> Int -> Int
foobar :: Int -> String -> Int
(Bad example, but I hope you got the point.)
Is this kind of overloading (instead of the polymorphism based overloading) possible in Haskell? Namely to have two functions with the same name but different signatures so they could be distinguished by a call's parameters? I fear not... So I guess I have to name the functions differently, right?
Thanks, Daniel

If you have two functions that do two different things, then they
certainly OUGHT to have different names.
You can of course put the two functions in different modules. Then
they do have different (qualified) names.
2009/2/13 Daniel Kraft
Hi,
I just came across a problem like this: Suppose I've got two related functions that do similar things, and I want to call them the same... Like in:
foobar :: String -> Int -> Int
foobar :: Int -> String -> Int
(Bad example, but I hope you got the point.)
Is this kind of overloading (instead of the polymorphism based overloading) possible in Haskell? Namely to have two functions with the same name but different signatures so they could be distinguished by a call's parameters? I fear not... So I guess I have to name the functions differently, right?
Thanks, Daniel

Colin Adams wrote:
If you have two functions that do two different things, then they certainly OUGHT to have different names.
Well, they do "the same thing" but for different arguments; it's like this: Table is a table of name-value pairs I want to substitute in a tree-like structure using:

substitute :: Table -> Tree -> Tree

For substituting a single name-value pair I want to define this utility routine so I don't have to construct a Table all the time in the user code:

substitute :: String -> Value -> Tree -> Tree

In this case I believe it would certainly be good to be able to name both functions the same, but I fear I cannot do so? There are languages where this is explicitly allowed (e.g. C++ or Java), so I don't think it is such a useless or evil thing.

Daniel

2009/2/13 Daniel Kraft
Colin Adams wrote:
If you have two functions that do two different things, then they certainly OUGHT to have different names.
Well, they do "the same thing" but for different arguments; it's like this:
Table is a table of name-value pairs I want to substitute in a tree-like structure using:
substitute :: Table -> Tree -> Tree
For substituting a single name-value pair I want to define this utility routine so I don't have to construct a Table all the time in the user code:
substitute :: String -> Value -> Tree -> Tree
In this case I believe it would certainly be good to be able to name both functions the same, but I fear I cannot do so? There are languages where this is explicitly allowed (e.g. C++ or Java), so I don't think it is such a useless or evil thing.
That's probably not Evil, but it's much clearer to know what something is by looking at its name (or the name of the function applied to it). So, substituteOne and substituteMany are much clearer... Thu

Hi
Table is a table of name-value pairs I want to substitute in a tree-like structure using:
substitute :: Table -> Tree -> Tree
For substituting a single name-value pair I want to define this utility routine so I don't have to construct a Table all the time in the user code:
substitute :: String -> Value -> Tree -> Tree
Why not:

substituteValue :: String -> Value -> Tree -> Tree
substituteValue x y = substitute (table1 x y)
In this case I believe it would certainly be good to be able to name both functions the same, but I fear I cannot do so? There are languages where this is explicitly allowed (e.g. C++ or Java), so I don't think it is such a useless or evil thing.
Languages like C++ and Java allow mutable state, object-orientated programming and require massively verbose code - all of which are unuseful and evil :-) I think this is a case of trying to apply C++/Java thoughts on to Haskell, you can map the concepts directly, but you really shouldn't. Try writing multiple methods with many names, or simple utility functions to convert between the cases, and it will go much nicer. Thanks Neil

Chances are the program you're using to write your e-mails was written in C++ (or at least C), so don't knock it. :-)

In any case, no one has really addressed the original poster's question: No, "name overloading" is not possible in Haskell, and surprisingly, there are no insurmountable technical reasons why that must be the case. As a result of this limitation, we end up with abuse of type classes, endless synonyms, prefixes, and suffixes, and funky operators whose meanings must be inferred from documentation rather than convention. Name overloading can certainly be abused, but in my opinion, its absence causes more problems than it eliminates.

Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101

On Feb 13, 2009, at 3:43 AM, Neil Mitchell wrote:
Hi
Table is a table of name-value pairs I want to substitute in a tree-like structure using:
substitute :: Table -> Tree -> Tree
For substituting a single name-value pair I want to define this utility routine so I don't have to construct a Table all the time in the user code:
substitute :: String -> Value -> Tree -> Tree
Why not:
substituteValue :: String -> Value -> Tree -> Tree
substituteValue x y = substitute (table1 x y)
In this case I believe it would certainly be good to be able to name both functions the same, but I fear I cannot do so? There are languages where this is explicitly allowed (e.g. C++ or Java), so I don't think it is such a useless or evil thing.
Languages like C++ and Java allow mutable state, object-orientated programming and require massively verbose code - all of which are unuseful and evil :-)
I think this is a case of trying to apply C++/Java thoughts on to Haskell, you can map the concepts directly, but you really shouldn't. Try writing multiple methods with many names, or simple utility functions to convert between the cases, and it will go much nicer.
Thanks
Neil

On Fri, 13 Feb 2009, John A. De Goes wrote:
In any case, no one has really addressed the original poster's question: No, "name overloading" is not possible in Haskell, and surprisingly, there are no blocking technical issues why this must be the case.
Prefixing names with module names is good style: http://www.haskell.org/haskellwiki/Qualified_names Where is the need for more overloading?

The signal-to-noise ratio with fully qualified names/operators goes way down -- that's the need. Go take one of your programs and fully qualify every name and every operator. Doesn't look so pretty then, does it? And it wouldn't be easy to read, either. Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101 On Feb 13, 2009, at 9:37 AM, Henning Thielemann wrote:
On Fri, 13 Feb 2009, John A. De Goes wrote:
In any case, no one has really addressed the original poster's question: No, "name overloading" is not possible in Haskell, and surprisingly, there are no blocking technical issues why this must be the case.
Prefixing names with module names is good style: http://www.haskell.org/haskellwiki/Qualified_names Where is the need for more overloading?

Why do you say "every name and operator" ? Why do you say "fully qualified" ?
When there is some clash, hiding the offending name or importing
"qualified as" is
quite satisfying imho.
Thu
2009/2/13 John A. De Goes
The signal-to-noise ratio with fully qualified names/operators goes way down -- that's the need.
Go take one of your programs and fully qualify every name and every operator. Doesn't look so pretty then, does it? And it wouldn't be easy to read, either.
Regards,
John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration
http://www.n-brain.net | 877-376-2724 x 101
On Feb 13, 2009, at 9:37 AM, Henning Thielemann wrote:
On Fri, 13 Feb 2009, John A. De Goes wrote:
In any case, no one has really addressed the original poster's question: No, "name overloading" is not possible in Haskell, and surprisingly, there are no blocking technical issues why this must be the case.
Prefixing names with module names is good style: http://www.haskell.org/haskellwiki/Qualified_names Where is the need for more overloading?

Hi
Chances are the program you're using to write your e-mails was written in C++ (or at least C), so don't knock it. :-)
Firefox (Javascript + C++) and Gmail (Python, so I think I read, no doubt with C underneath somewhere). However, I am sat writing C++ at the moment - which I think gives me the right to say that C++ is a bloated and ugly language.
In any case, no one has really addressed the original poster's question: No, "name overloading" is not possible in Haskell, and surprisingly, there are no blocking technical issues why this must be the case.
Name overloading is not possible currently. You could encode name overloading as type classes internally and add the feature, but it complicates type inference substantially. When I first started doing Haskell I remember asking why we didn't have overloaded names. Now, I ask the question why anyone could possibly want overloaded names. Having drunk the functional kool-aid I've decided they are deeply confusing :-) Thanks Neil
Hi
Table is a table of name-value pairs I want to substitute in a tree-like structure using:
substitute :: Table -> Tree -> Tree
For substituting a single name-value pair I want to define this utility routine so I don't have to construct a Table all the time in the user code:
substitute :: String -> Value -> Tree -> Tree
Why not:
substituteValue :: String -> Value -> Tree -> Tree
substituteValue x y = substitute (table1 x y)
In this case I believe it would certainly be good to be able to name both functions the same, but I fear I cannot do so? There are languages where this is explicitly allowed (e.g. C++ or Java), so I don't think it is such a useless or evil thing.
Languages like C++ and Java allow mutable state, object-orientated programming and require massively verbose code - all of which are unuseful and evil :-)
I think this is a case of trying to apply C++/Java thoughts on to Haskell, you can map the concepts directly, but you really shouldn't. Try writing multiple methods with many names, or simple utility functions to convert between the cases, and it will go much nicer.
Thanks
Neil

I come from a mathematical background (in which it is quite common to "overload" function names and operators in particular), so from my point of view, the lack of name overloading is a wart on Haskell. That such a feature would complicate type inference is more of a concern to an implementor than to an end-user of Haskell like myself.

Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101

On Feb 13, 2009, at 10:16 AM, Neil Mitchell wrote:
Hi
Chances are the program you're using to write your e-mails was written in C++ (or at least C), so don't knock it. :-)
Firefox (Javascript + C++) and Gmail (Python, so I think I read, no doubt with C underneath somewhere). However, I am sat writing C++ at the moment - which I think gives me the right to say that C++ is a bloated and ugly language.
In any case, no one has really addressed the original poster's question: No, "name overloading" is not possible in Haskell, and surprisingly, there are no blocking technical issues why this must be the case.
Name overloading is not possible currently. You could encode name overloading as type classes internally and add the feature, but it complicates type inference substantially. When I first started doing Haskell I remember asking why we didn't have overloaded names. Now, I ask the question why anyone could possibly want overloaded names. Having drunk the functional kool-aid I've decided they are deeply confusing :-)
Thanks
Neil
Hi
Table is a table of name-value pairs I want to substitute in a tree-like structure using:
substitute :: Table -> Tree -> Tree
For substituting a single name-value pair I want to define this utility routine so I don't have to construct a Table all the time in the user code:
substitute :: String -> Value -> Tree -> Tree
Why not:
substituteValue :: String -> Value -> Tree -> Tree
substituteValue x y = substitute (table1 x y)
In this case I believe it would certainly be good to be able to name both functions the same, but I fear I cannot do so? There are languages where this is explicitly allowed (e.g. C++ or Java), so I don't think it is such a useless or evil thing.
Languages like C++ and Java allow mutable state, object-orientated programming and require massively verbose code - all of which are unuseful and evil :-)
I think this is a case of trying to apply C++/Java thoughts on to Haskell, you can map the concepts directly, but you really shouldn't. Try writing multiple methods with many names, or simple utility functions to convert between the cases, and it will go much nicer.
Thanks
Neil

On Fri, 2009-02-13 at 11:12 -0700, John A. De Goes wrote:
I come from a mathematical background (in which it is quite common to "overload" function names and operators in particular)
Usually `when no ambiguity can arise', no? Plenty of mathematical practice rests on imprecision and the expectation that the human reader will understand what you mean. Haskell has to be understandable by the machine (which is less forgiving, but also more reasonable!) as well.
, so from my point of view, the lack of name overloading is a wart
What? Are you sure of your lexical choice here?
on Haskell. That such a feature would complicate type inference is more a concern to an implementor, not to an end-user of Haskell like myself.
Unless you, say, enjoy having type inference or something. jcc

On Feb 13, 2009, at 11:23 AM, Jonathan Cast wrote:
Usually `when no ambiguity can arise', no? Plenty of mathematical practice rests on imprecision and the expectation that the human reader will understand what you mean. Haskell has to be understandable by the machine (which is less forgiving, but also more reasonable!) as well.
Yes, and name overloading is decidable for machines as well, as the feature exists in numerous languages, and from time to time, we hear talk of the feature for Haskell, as well.
Unless you, say, enjoy having type inference or something.
Name overloading and type inference are not incompatible -- the issue has been discussed here before, though I'm too lazy to dig up the conversation. Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101

On Fri, 2009-02-13 at 11:29 -0700, John A. De Goes wrote:
On Feb 13, 2009, at 11:23 AM, Jonathan Cast wrote:
Usually `when no ambiguity can arise', no? Plenty of mathematical practice rests on imprecision and the expectation that the human reader will understand what you mean. Haskell has to be understandable by the machine (which is less forgiving, but also more reasonable!) as well.
Yes, and name overloading is decidable for machines as well, as the feature exists in numerous languages,
Do those languages have full HDM type inference? Do they have principal types? Are their principal types actually usable from the programmer's perspective? Those are the *bare minimum* requirements.
and from time to time, we hear talk of the feature for Haskell, as well.
I hear jabbering all the time. I try to tune most of it out.
Unless you, say, enjoy having type inference or something.
Name overloading and type inference are not incompatible -- the issue has been discussed here before,
I believe the last time it was brought up, the proposal was that type inference should fail on certain typeable terms. That doesn't count. jcc

On Feb 13, 2009, at 11:32 AM, Jonathan Cast wrote:
I believe the last time it was brought up, the proposal was that type inference should fail on certain typeable terms. That doesn't count.
I'm referring to a rather conservative proposal wherein if there is one and exactly one definition that allows an expression to type, then name overloading in the same scope is permitted. Aside from exponential performance in pathological (but unlikely) cases, what issue do you have with such a proposal? Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101

On Fri, 2009-02-13 at 11:45 -0700, John A. De Goes wrote:
On Feb 13, 2009, at 11:32 AM, Jonathan Cast wrote:
I believe the last time it was brought up, the proposal was that type inference should fail on certain typeable terms. That doesn't count.
I'm referring to a rather conservative proposal wherein if there is one and exactly one definition that allows an expression to type, then name overloading in the same scope is permitted.
Aside from exponential performance in pathological (but unlikely) cases, what issue do you have with such a proposal?
It breaks type inference. I explained this at the time. I can explain it again:

import Data.List
import Data.Set
import Data.Map

warmFuzzyThingFirstOperation = map

This gives an error currently. Quite properly. But if *any* use of `map' type-checks, with those imports, why on earth should this one fail? You don't want to remove a wart from the language, you want to introduce one!

jcc

Am Freitag, 13. Februar 2009 19:49 schrieb Jonathan Cast:
On Fri, 2009-02-13 at 11:45 -0700, John A. De Goes wrote:
On Feb 13, 2009, at 11:32 AM, Jonathan Cast wrote:
I believe the last time it was brought up, the proposal was that type inference should fail on certain typeable terms. That doesn't count.
I'm referring to a rather conservative proposal wherein if there is one and exactly one definition that allows an expression to type, then name overloading in the same scope is permitted.
Aside from exponential performance in pathological (but unlikely) cases, what issue do you have with such a proposal?
It breaks type inference. I explained this at the time. I can explain it again:
import Data.List
import Data.Set
import Data.Map
warmFuzzyThingFirstOperation = map
To do justice to the above proposal, in that situation more than one choice would typecheck (were the other imports absent or qualified), so that should also be rejected according to it. I believe what is desired is to be able to write

thingummybob :: Ord a => [a] -> (Int, Set a)
thingummybob xs = let st = fromList xs in (size st, st)

with several 'fromList's in scope. I think it wouldn't be worth the hassle to implement that, but otherwise I agree it'd not (necessarily) be a bad thing.
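(For comparison, this is how it has to be written today, with the ambiguity resolved by hand through a qualified import -- a sketch of the status quo, not of the proposal:)

import Data.Set (Set)
import qualified Data.Set as Set

thingummybob :: Ord a => [a] -> (Int, Set a)
thingummybob xs = let st = Set.fromList xs in (Set.size st, st)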
This gives an error currently. Quite properly. But if *any* use of `map' type-checks, with those imports, why on earth should this one fail? You don't want to remove a wart from the language, you want to introduce one!
jcc
Cheers, Daniel

On Fri, 2009-02-13 at 20:06 +0100, Daniel Fischer wrote:
Am Freitag, 13. Februar 2009 19:49 schrieb Jonathan Cast:
On Fri, 2009-02-13 at 11:45 -0700, John A. De Goes wrote:
On Feb 13, 2009, at 11:32 AM, Jonathan Cast wrote:
I believe the last time it was brought up, the proposal was that type inference should fail on certain typeable terms. That doesn't count.
I'm referring to a rather conservative proposal wherein if there is one and exactly one definition that allows an expression to type, then name overloading in the same scope is permitted.
Aside from exponential performance in pathological (but unlikely) cases, what issue do you have with such a proposal?
It breaks type inference. I explained this at the time. I can explain it again:
import Data.List
import Data.Set
import Data.Map
warmFuzzyThingFirstOperation = map
To do justice to the above proposal, in that situation more than one choice would typecheck (were the other imports absent or qualified), so that should also be rejected according to it.
Yeah, my objection is precisely that this trivial example is rejected. If this use of map is rejected, then I claim *every* use of map should be rejected. jcc

Am Freitag, 13. Februar 2009 20:06 schrieb Jonathan Cast:
On Fri, 2009-02-13 at 20:06 +0100, Daniel Fischer wrote:
Am Freitag, 13. Februar 2009 19:49 schrieb Jonathan Cast:
It breaks type inference. I explained this at the time. I can explain it again:
import Data.List
import Data.Set
import Data.Map
warmFuzzyThingFirstOperation = map
This gives an error currently. Quite properly. But if *any* use of `map' type-checks, with those imports, why on earth should this one fail?
To do justice to the above proposal, in that situation more than one choice would typecheck (were the other imports absent or qualified), so that should also be rejected according to it.
Yeah, my objection is precisely that this trivial example is rejected. If this use of map is rejected, then I claim *every* use of map should be rejected.
Okay, why? If warmFuzzyThingFirstOperation were accepted, it would have the type

(forall a b. (a -> b) -> [a] -> [b])
\/ (forall k a b. Ord k => (a -> b) -> Map k a -> Map k b)
\/ (forall a b. (Ord a, Ord b) => (a -> b) -> Set a -> Set b)

Looks kind of ambiguous, doesn't it? I would rather not allow that. But if we have

take 5 (map (const True) [0,1,1,2,3,5,8,13,21,34,55])

we can infer that the 'map' used here must have a type unifiable with

Num a => (b -> Bool) -> [a] -> [c]

and only one of the 'map's in scope has such a type, so we must pick that. Doesn't look sooo evil at first. However, let us remove some information: what about

take 5 . map (const True)

? Well, still easy, we must unify with (a -> b) -> c -> [d], only one possibility, fine. Or is it? What if we have another 'take' in scope? Say

take :: Int -> Set a -> Set a

? Oops. So, where to draw the line?
jcc
Bottom line, allowing that sort of overloading would at least be very ad-hoccish, and probably a bad thing. Thanks, Daniel

On Feb 13, 2009, at 1:38 PM, Daniel Fischer wrote:
? Well, still easy, we must unify with (a -> b) -> c -> [d], only one possibility, fine. Or is it? What if we have another 'take' in scope? Say take :: Int -> Set a -> Set a ? Oops. So, where to draw the line?
You draw the line exactly when you cannot perform unambiguous typing.
Bottom line, allowing that sort of overloading would at least be very ad-hoccish, and probably a bad thing.
Why? All existing programs would type check and run without modification. So if you really like type class abuse and a billion pseudonyms for '+' (among others), then you would still have the option of developing in that style. On the other hand, if you wanted the machine to do what a human can (which is, deciding in completely unambiguous cases which of several definitions to use), then you'd be able to use name overloading and make some programs a lot more readable. Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101

On Feb 13, 2009, at 11:49 AM, Jonathan Cast wrote:
It breaks type inference. I explained this at the time. I can explain it again:
import Data.List
import Data.Set
import Data.Map
warmFuzzyThingFirstOperation = map
This gives an error currently. Quite properly. But if *any* use of `map' type-checks, with those imports, why on earth should this one fail? You don't want to remove a wart from the language, you want to introduce one!
Umm, no, that would still give an error. See definition of "one and exactly one". Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101

On Fri, 2009-02-13 at 12:06 -0700, John A. De Goes wrote:
On Feb 13, 2009, at 11:49 AM, Jonathan Cast wrote:
It breaks type inference. I explained this at the time. I can explain it again:
import Data.List
import Data.Set
import Data.Map
warmFuzzyThingFirstOperation = map
This gives an error currently. Quite properly. But if *any* use of `map' type-checks, with those imports, why on earth should this one fail? You don't want to remove a wart from the language, you want to introduce one!
Umm, no, that would still give an error. See definition of "one and exactly one".
Exactly! But if it fails, why on earth should any other use of map in the module succeed? jcc

On Feb 13, 2009, at 12:07 PM, Jonathan Cast wrote:
Exactly! But if it fails, why on earth should any other use of map in the module succeed?
Because more information is known about other usages of map. Such is the nature of type inference. If you wanted to go a step further, then I suppose you could see how warmFuzzyThingFirstOperation is used and if it can be typed in exactly one way. Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101

On Fri, 2009-02-13 at 12:15 -0700, John A. De Goes wrote:
On Feb 13, 2009, at 12:07 PM, Jonathan Cast wrote:
Exactly! But if it fails, why on earth should any other use of map in the module succeed?
Because more information is known about other usages of map. Such is the nature of type inference.
No it's not. Type inference -- in Haskell --- means --- by definition! --- looking up the principal type of each sub-term, specializing it based on its use, and then generalizing to find the principal type of the overall term. Adding information can cause type inference to fail, but --- in Haskell as it exists --- it cannot cause type inference to succeed. Which is good! jcc

In your own subjective opinion, which is not shared by many other Haskellers, myself included. Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101 On Feb 13, 2009, at 1:08 PM, Jonathan Cast wrote:
On Fri, 2009-02-13 at 12:15 -0700, John A. De Goes wrote:
On Feb 13, 2009, at 12:07 PM, Jonathan Cast wrote:
Exactly! But if it fails, why on earth should any other use of map in the module succeed?
Because more information is known about other usages of map. Such is the nature of type inference.
No it's not. Type inference -- in Haskell --- means --- by definition! --- looking up the principal type of each sub-term, specializing it based on its use, and then generalizing to find the principal type of the overall term. Adding information can cause type inference to fail, but --- in Haskell as it exists --- it cannot cause type inference to succeed. Which is good!
jcc

Am Freitag, 13. Februar 2009 21:08 schrieb Jonathan Cast:
On Fri, 2009-02-13 at 12:15 -0700, John A. De Goes wrote:
On Feb 13, 2009, at 12:07 PM, Jonathan Cast wrote:
Exactly! But if it fails, why on earth should any other use of map in the module succeed?
Because more information is known about other usages of map. Such is the nature of type inference.
No it's not. Type inference -- in Haskell --- means --- by definition! --- looking up the principal type of each sub-term, specializing it based on its use, and then generalizing to find the principal type of the overall term. Adding information can cause type inference to fail, but --- in Haskell as it exists --- it cannot cause type inference to succeed.
I'm not sure about the finer distinctions between type inference and type checking as performed by Haskell implementations when compiling a module, but what about polymorphic recursion, where adding information via a type signature can be necessary to make the compilation succeed? Not what this thread is about, though.
Which is good!
Why is it good? Because using additional information to make type inference succeed would lead to an ad hoc and hard-to-reason-about type inference algorithm? Or other reasons?
jcc
Cheers, Daniel

On Fri, 2009-02-13 at 21:57 +0100, Daniel Fischer wrote:
Am Freitag, 13. Februar 2009 21:08 schrieb Jonathan Cast:
On Fri, 2009-02-13 at 12:15 -0700, John A. De Goes wrote:
On Feb 13, 2009, at 12:07 PM, Jonathan Cast wrote:
Exactly! But if it fails, why on earth should any other use of map in the module succeed?
Because more information is known about other usages of map. Such is the nature of type inference.
No it's not. Type inference -- in Haskell --- means --- by definition! --- looking up the principal type of each sub-term, specializing it based on its use, and then generalizing to find the principal type of the overall term. Adding information can cause type inference to fail, but --- in Haskell as it exists --- it cannot cause type inference to succeed.
I'm not sure about the finer distinctions between type inference and type checking as performed by Haskell implementations when compiling a module, but what about polymorphic recursion, where adding information via a type signature can be necessary to make the compilation succeed?
Um, sort of. Adding --- or relaxing --- a type signature on a function you *call* can make typing succeed when it would have failed. But take the recursion out of polymorphic recursion and it does become problematic, yes. For much the same reason the monomorphism restriction is problematic, actually.
Not what this thread is about, though.
Which is good!
Why is it good?
The compiler should fail when you tell it two mutually contradictory things, and only when you tell it two mutually contradictory things. Adding information cannot remove a contradiction from the information set available to the compiler. Therefore it should not stop the compiler from failing. And that is all I will say on this subject. jcc

On Feb 13, 2009, at 2:11 PM, Jonathan Cast wrote:
The compiler should fail when you tell it two mutually contradictory things, and only when you tell it two mutually contradictory things.
By definition, it's not a contradiction when the symbol is unambiguously typeable. Do you think math textbooks are filled with contradictions when they give '+' a different meaning for vectors than matrices or real numbers??? Type is implicitly or explicitly a part of the definition of every function. It's not the name that need be unique, but the name over a given domain. When two functions have different domains, the same name can be unambiguously used to describe both of them.
Adding information cannot remove a contradiction from the information set available to the compiler.
But it can and often does, for example, for [] or 4. What's the type of either expression without more information? Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101

On Fri, Feb 13, 2009 at 22:37, John A. De Goes
On Feb 13, 2009, at 2:11 PM, Jonathan Cast wrote:
The compiler should fail when you tell it two mutually contradictory things, and only when you tell it two mutually contradictory things.
By definition, it's not a contradiction when the symbol is unambiguously typeable. Do you think math textbooks are filled with contradictions when they give '+' a different meaning for vectors than matrices or real numbers???
I can easily imagine a book which uses some operator in an ambiguous way yet relies on the readers' intelligence to resolve the ambiguity. It is OK to do that as long as it is easy. However, it can get arbitrarily worse. I would consider any book which is hard to read because of that to be badly written. Things are quite similar with code.
Type is implicitly or explicitly a part of the definition of every function. It's not the name that need be unique, but the name over a given domain. When two functions have different domains, the same name can be unambiguously used to describe both of them.
I think the whole point is not about what is and what isn't possible to implement. For example, GHC often does just fine with undecidable instances despite the problems they may cause. A programming language should be easy to reason about for both computers and humans. The compiler should therefore disallow programming styles that are inaccessible to potential readers. Want to overload something? Well, use typeclasses to be explicit about it. All best Christopher Skrzętnicki

On Feb 13, 2009, at 6:31 PM, Krzysztof Skrzętnicki wrote:
On Fri, Feb 13, 2009 at 22:37, John A. De Goes
wrote: On Feb 13, 2009, at 2:11 PM, Jonathan Cast wrote:
The compiler should fail when you tell it two mutually contradictory things, and only when you tell it two mutually contradictory things.
By definition, it's not a contradiction when the symbol is unambiguously typeable. Do you think math textbooks are filled with contradictions when they give '+' a different meaning for vectors than matrices or real numbers???
I can easily imagine a book which uses some operator in ambiguous way yet relies on readers' intelligence in solving that issue. It is OK to do that as long as it is easy. However: it can get arbitrarily worse.
Don't overlook the advantages of using familiar operators and names: you have some intuition about '+' and 'map', so if you see them, then you'll have some idea what they do (assuming the author is neither stupid nor malicious). However, if you see some operator like '$>+' or some name like 'pp3', then you probably won't have any intuition about it. Writing good software is about conveying intentions, and part of the way we can do that is relying on what other people already know. Which means using familiar names and operators when it is helpful to do so.
I would consider any book which is hard to read because of that badly written. Things are quite similar with the code.
I consider the current state of affairs quite poor: namely, abuse of type classes and alternate names and operators that aren't very suggestive, but were chosen purely to avoid conflicts.
Programming language should be easy to reason about for both computers and humans. Compiler should therefore disallow programming style that is inaccessible for potential readers. Want to overload something? Well, use typeclasses to be explicit about it.
Type classes were not designed for name overloading. They're designed to factor out common patterns in programming. You shouldn't use a type class just because you want to use a name or operator. And as I said before, if you want to "disallow programming style that is inaccessible for potential readers," then you should disallow the current state of affairs. Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101

On Sat, Feb 14, 2009 at 7:56 AM, John A. De Goes
Don't overlook the advantages of using familiar operators and names: you have some intuition about '+' and 'map', so if you see them, then you'll have some idea what they do (assuming the author is neither stupid nor malicious). However, if you see some operator like '$>+' or some name like 'pp3', then you probably won't have any intuition about it.
Writing good software is about conveying intentions, and part of the way we can do that is relying on what other people already know. Which means using familiar names and operators when it is helpful to do so.
Keep in mind that such intuitions often have a formalization. We have an intuition about what "map" means. When we dig deep and try to write down what that intuition is, the following appears:

map id = id
map (f . g) = map f . map g

Now that we have an operation and laws, it is reasonable to use a typeclass. To me, typeclasses are at their best when you have a real abstraction to encode. If you are having trouble using a typeclass and need C++-style ad-hoc overloading, it's likely you are trying to encode a "fake" abstraction -- one that has only linguistic, rather than mathematical meaning. Haskell is not an isolated linguist. Her low tolerance for vagueness strikes again.

Luke
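(The same point as code -- this is just the familiar Functor pattern under an invented name, with the laws recorded as comments and one generic algorithm to show why the abstraction pays its way:)

class MyFunctor f where
  myMap :: (a -> b) -> f a -> f b
  -- Expected laws (not checked by the compiler):
  --   myMap id      == id
  --   myMap (f . g) == myMap f . myMap g

instance MyFunctor [] where
  myMap = map

-- Works for any instance; having such generic algorithms is the mark
-- of a real abstraction rather than a mere pun on a name:
mapTwice :: MyFunctor f => (a -> a) -> f a -> f a
mapTwice f = myMap f . myMap f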
I would
consider any book which is hard to read because of that badly written. Things are quite similar with the code.
I consider the current state of affairs quite poor: namely, abuse of type classes and alternate names and operators that aren't very suggestive, but were chosen purely to avoid conflicts.
Programming language should be easy to reason about for both computers
and humans. Compiler should therefore disallow programming style that is inaccessible for potential readers. Want to overload something? Well, use typeclasses to be explicit about it.
Type classes were not designed for name overloading. They're designed to factor out common patterns in programming. You shouldn't use a type class just because you want to use a name or operator.
And as I said before, if you want to "disallow programming style that is inaccessible for potential readers," then you should disallow the current state of affairs.
Regards,
John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration
http://www.n-brain.net | 877-376-2724 x 101
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

On Feb 14, 2009, at 2:29 PM, Luke Palmer wrote:
To me, typeclasses are at their best when you have a real abstraction to encode.
I agree.
If you are having trouble using a typeclass and need C++-style ad-hoc overloading, it's likely you are trying to encode a "fake" abstraction -- one that has only linguistic, rather than mathematical meaning.
I don't think what you're calling a "linguistic" abstraction is "fake". Words and operators have connotations in the minds of those reading them. There's a reason I choose the word 'flatten' rather than 'sdj834' to name a function -- because the word 'flatten' suggests what I am trying to accomplish with the function.

Let's try a little test:

1. If the parameter is a tree, what do you think "flatten" would do?
2. If the parameter is a list, what do you think "flatten" would do?
3. If the parameter is a Style (possibly a composite Style consisting of other styles), what do you think "flatten" would do?
4. If the parameter is a Bezier curve, what do you think "flatten" would do?

My guess is that we would come to the same conclusions for (1) - (4). The name "flatten" is a perfectly good name for all of these operations, because the domains are distinct, and because using that name suggests the correct meaning to you. (Note use of the word "suggests" -- like an analogy or parable, you're likely not going to know exactly what the function does just by reading its name, but you'll be in the ballpark and have an intuition about it, which is extremely valuable.)

In the current world, "flatten" will be given (usually non-informative) suffixes, or alternate, less-descriptive names will be chosen. Informative coding is about drawing upon our common pool of experience to mold the form and function of programs to suggest our intentions to others.

Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101

On Thu, Feb 19, 2009 at 7:09 AM, John A. De Goes
On Feb 14, 2009, at 2:29 PM, Luke Palmer wrote:
To me, typeclasses are at their best when you have a real abstraction to encode.
I agree.
If you are having trouble using a typeclass and need C++-style ad-hoc
overloading, it's likely you are trying to encode a "fake" abstraction -- one that has only linguistic, rather than mathematical meaning.
I don't think what you're calling a "linguistic" abstraction is "fake".
Please ignore the word "fake". I don't want to get into any subjective arguments based on the connotation of that word. What I mean to say is, the theory of typeclasses is good at encoding mathematical abstractions, and bad at encoding linguistic ones. Take that as you will, but I conjecture that trying to cram linguistic overloading into a typeclass is generally going to be painful. A good rule of thumb is: are there any algorithms which work for an arbitrary member of this class? I certainly cannot see any for your flatten example. I'm not saying that linguistic overloading is a bad thing. You make good arguments for it, and I find it cleans up code sometimes. Typeclasses just aren't the right tool for it, and Haskell has no good tool for it. In fact, I think it a very interesting research question to come up with a mechanism that supports linguistic overloading, and interacts with typeclasses and inference cleanly. The obvious solution (just look in your namespace for one that matches) has serious drawbacks, and nothing else is jumping to mind.

I agree that type classes should not be used for this purpose. That's part of the reason I support "linguistic overloading" -- to stop the type class abuse. Type classes should be used, as you say, when there are algorithms that work for arbitrary members -- i.e. the type class encodes structure and associated properties. Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101 On Feb 19, 2009, at 7:33 AM, Luke Palmer wrote:
On Thu, Feb 19, 2009 at 7:09 AM, John A. De Goes
wrote: On Feb 14, 2009, at 2:29 PM, Luke Palmer wrote: To me, typeclasses are at their best when you have a real abstraction to encode.
I agree.
If you are having trouble using a typeclass and need C++-style ad-hoc overloading, it's likely you are trying to encode a "fake" abstraction -- one that has only linguistic, rather than mathematical meaning.
I don't think what you're calling a "linguistic" abstraction is "fake".
Please ignore the word "fake". I don't want to get into any subjective arguments based on the connotation of that word.
What I mean to say is, the theory of typeclasses is good at encoding mathematical abstractions, and bad at encoding linguistic ones. Take that as you will, but I conjecture that trying to cram linguistic overloading into a typeclass is generally going to be painful.
A good rule of thumb is: are there any algorithms which work for an arbitrary member of this class? I certainly cannot see any for your flatten example.
I'm not saying that linguistic overloading is a bad thing. You make good arguments for it, and I find it cleans up code sometimes. Typeclasses just aren't the right tool for it, and Haskell has no good tool for it.
In fact, I think it a very interesting research question to come up with a mechanism that supports linguistic overloading, and interacts with typeclasses and inference cleanly. The obvious solution (just look in your namespace for one that matches) has serious drawbacks, and nothing else is jumping to mind.

On Feb 19, 2009, at 9:09 AM, John A. De Goes wrote:
Let's try a little test:
1. If the parameter is a tree, what do you think "flatten" would do?
I would imagine that it would be "join" on trees -- i.e. take a tree of trees and turn it into a tree. But perhaps it would be arbitrarily deep. Or perhaps it would turn a tree into a list, but I would of course prefer toList for that.
2. If the parameter is a list, what do you think "flatten" would do?
Since a list is already flat, it couldn't possibly turn it into a list -- but again, it could perhaps be "join" or it could be "deepJoin" or maybe if it was a list of numbers that represented a signal, it would smooth out peaks beyond standard deviation, but maybe it would do that treating the numbers as a time series or perhaps it would do that treating the numbers as coefficient to a series of trigonometric functions.
3. If the parameter is a Style (possibly a composite Style consisting of other styles), what do you think "flatten" would do?
If the style included colors, perhaps it would mute them? Or, maybe, it would mean, as you intended, join. And if a style may consist of other styles, then isn't it, properly speaking, a variant of a tree to begin with?
4. If the parameter is a Bezier curve, what do you think "flatten" would do?
One would imagine, produce a straight line. But would this line be between the original endpoints, or would it be a projection onto the horizontal axis?
My guess is that we would come to the same conclusions for (1) - (4). The name "flatten" is a perfectly good name for all of these operations, because the domains are distinct, and because using that name suggests the correct meaning to you. (Note use of the word "suggests" -- like an analogy or parable, you're likely not going to know exactly what the function does just by reading its name, but you'll be in the ballpark and have an intuition about it, which is extremely valuable.)
In three cases, depending on what you intended "flatten" to mean (I've been a bit provocative, but honestly I do have no idea), then the domains may not be distinct, because "join" is an operation that they do properly have in common. And no, the name doesn't suggest one clear meaning -- what does, generally, suggest a single clear meaning to me, is a good type signature, but since I could no longer query :t "flatten" in GHCI and get a single response, I would be, I'm afraid, somewhat at sea -- even more so if in some cases flatten was a typeclass operation, and in others not. Or, worse yet, if flatten was declared as a typeclass operation in one place, defined on lists in another, and in a third, lists were given an orphaned Flattenable instance. Cheers, Sterl.

-- John A. De Goes wrote:
Adding information cannot remove a contradiction from the information set available to the compiler.
But it can and often does, for example, for [] or 4. What's the type of either expression without more information?
[] :: [a]
4 :: Num a => a

Do I win something?

Take, for example, this function:

f :: [Char] -> Char
f [] = chr 0
f (c:cs) = chr (ord c + ord (f cs))

[] is typed as [Char], even though it could be typed in infinitely many other ways. Demonstrating yet again that the compiler *does* use the additional information that it gathers to assist with typing.

Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101

On Feb 13, 2009, at 6:31 PM, Robert Greayer wrote:
-- John A. De Goes wrote:
Adding information cannot remove a contradiction from the information set available to the compiler.
But it can and often does, for example, for [] or 4. What's the type of either expression without more information?
[] :: [a]
4 :: Num a => a
Do I win something?

John A. De Goes wrote:
Take, for example, this function:
f :: [Char] -> Char
f [] = chr 0
f (c:cs) = chr (ord c + ord (f cs))
[] is typed as [Char], even though it could be typed in infinitely many other ways. Demonstrating yet again, that the compiler *does* use the additional information that it gathers to assist with typing.
I'm not sure about this example, since [] occurs in a pattern here, and I don't know how typing affects patterns. However, you seem to believe that in the expression

'x' : []

the subexpression [] has type [Char]. That is not correct, though. This occurrence, and every occurrence of [], has type (forall a . [a]). This becomes clearer if one uses a calculus with explicit type abstraction and application, like GHC does in its Core language. In such a calculus, we have an uppercase lambda "/\ <type var> -> <term>" which binds type parameters, and a type application "<term> <type>", similar to the lowercase lambda "\ <var> -> <term>" and term application "<term> <term>".
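(In current GHC, the TypeApplications extension even lets one write that hidden type argument in source syntax; a minimal sketch:)

{-# LANGUAGE TypeApplications #-}

example :: String
example = 'x' : ([] @Char)   -- [] has type (forall a . [a]); here it is applied to Char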

John A. De Goes wrote:
On Feb 13, 2009, at 2:11 PM, Jonathan Cast wrote:
The compiler should fail when you tell it two mutually contradictory things, and only when you tell it two mutually contradictory things.
By definition, it's not a contradiction when the symbol is unambiguously typeable. Do you think math textbooks are filled with contradictions when they give '+' a different meaning for vectors than matrices or real numbers???
Yes. Yes, I do. It is precisely this abuse of notation which makes, for instance, statistics textbooks impossible to read (without already knowing the material). Scalars, vectors, and matrices are fundamentally different here and the operations on them should be unambiguous, regardless of context. When reading a machine learning algorithm it should *never* be a question whether something is scalar or not. Ambiguity is a bug. Substituting one kind for another is almost always wrong.

For another example, consider matrices vs their transposes. Many folks can't be bothered to type a single character to clarify when things should be transposed before multiplying. No matter how quickly someone can test the equation to verify it, leaving that information off makes the equation simply wrong. And it's not as if square matrices aren't ubiquitous.

-- Live well, ~wren

On Feb 14, 2009, at 11:28 PM, wren ng thornton wrote:
John A. De Goes wrote:
On Feb 13, 2009, at 2:11 PM, Jonathan Cast wrote:
The compiler should fail when you tell it two mutually contradictory things, and only when you tell it two mutually contradictory things.
By definition, it's not a contradiction when the symbol is unambiguously typeable. Do you think math textbooks are filled with contradictions when they give '+' a different meaning for vectors than matrices or real numbers???
Yes. Yes, I do.
If you really think you have discovered a contradiction in tens of thousands of mathematical textbooks, then you should write a paper and submit it to the AJM. Now me, I DON'T think you've discovered a contradiction. I don't even think YOU believe that. Rather, you're fixated on using a unique, precise meaning for each symbol. Somehow this is associated for you with some notion of "purity". But I'm guessing, if I take a look at all the source code you have ever written in your entire life, you will not have as many unique symbols as you have functions and operators. You probably reuse names and operators just like the rest of us.
It is precisely this abuse of notation which makes, for instance, statistics textbooks impossible to read (without already knowing the material).
Hmmm, I don't find statistics books difficult to read.
Scalars, vectors, and matrices are fundamentally different here and the operations on them should be unambiguous, regardless of context.
It's customary to use a unique typeface and/or font for each domain. So you know the type of the variables by inspection, and the meaning of the operators flows from that. Matrices, for example, are generally denoted in all uppercase (smallcaps), with an italic font, and often with the letters 'M' or 'N' and subscripts. Vectors are usually all lower-case and italic, sometimes with tiny arrows above them, and usually they're represented with the letters u, v, and w (and subscripted versions thereof). With unique domains, reuse of the symbols such as '+' for vector and matrix addition is unambiguous and perfectly sensible because it suggests that at some level, the operation reduces to scalar addition (which is correct). Compare that to using unique symbols for every possible operator and function. That would be brain overload because you would have to memorize each symbol and function separately.
For another example, consider matrices vs their transposes. Many folks can't be bothered to type a single character to clarify when things should be transposed before multiplying.
Now that's just plain sloppiness, and is quite orthogonal to this discussion. Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101

On 20 Feb 2009, at 3:24 am, John A. De Goes wrote:
It is precisely this abuse of notation which makes, for instance, statistics textbooks impossible to read (without already knowing the material).
Hmmm, I don't find statistics books difficult to read.
I'm interested in a technique called Correspondence Analysis. I recently bought a book about it. The author comes from the UK, but was trained in CA at its home in France. And the book is very nearly unreadable, thanks in part to some of the strangest overloading I've seen. I don't have it handy, but things like f(i) and f(j) having different types is just the start. (No, this is not dependent typing. Which type the result is depends on the *name* of the argument, not its value!)

"John A. De Goes"
I'm referring to a rather conservative proposal wherein if there is one and exactly one definition that allows an expression to type, then name overloading in the same scope is permitted.
Perhaps this was discussed in the context of records and field accessors? -k -- If I haven't seen further, it is by standing in the footprints of giants

On Fri, Feb 13, 2009 at 1:29 PM, John A. De Goes
On Feb 13, 2009, at 11:23 AM, Jonathan Cast wrote:
Usually `when no ambiguity can arise', no? Plenty of mathematical practice rests on imprecision and the expectation that the human reader will understand what you mean. Haskell has to be understandable by the machine (which is less forgiving, but also more reasonable!) as well.
Yes, and name overloading is decidable for machines as well, as the feature exists in numerous languages, and from time to time, we hear talk of the feature for Haskell, as well.
ML uses name overloading for + and *. Type classes were originally
invented as a more principled way of dealing with ad-hoc overloading
like that.
I suspect that you can use type classes for any example of name
overloading, if you're willing to turn on enough extensions in GHC.
If we want an expression's value to be completely determined by its
type, we can just do something like this:
class MapFunction a where
  map :: a

instance MapFunction ((a -> b) -> [a] -> [b]) where ...
instance (Ord a, Ord b) => MapFunction ((a -> b) -> Set a -> Set b) where ...
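(Filling in the blanks, a version of that sketch which compiles under GHC extensions; note how even a simple use needs the full type spelled out before an instance can be chosen:)

{-# LANGUAGE FlexibleInstances #-}

import Prelude hiding (map)
import qualified Prelude
import Data.Set (Set)
import qualified Data.Set as Set

class MapFunction a where
  map :: a

instance MapFunction ((a -> b) -> [a] -> [b]) where
  map = Prelude.map

instance (Ord a, Ord b) => MapFunction ((a -> b) -> Set a -> Set b) where
  map = Set.map

-- Only fully annotated uses resolve to an instance:
example :: [Int]
example = map ((+ 1) :: Int -> Int) [1, 2, 3 :: Int]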
--
Dave Menendez

On Fri, 2009-02-13 at 11:40 +0100, Daniel Kraft wrote:
Colin Adams wrote:
If you have two functions that do two different things, then they certainly OUGHT to have different names.
Well, they do "the same thing" but for different arguments; it's like this:
Table is a table of name-value pairs I want to substitute in a tree-like structure using:
substitute :: Table -> Tree -> Tree
For substituting a single name-value pair I want to define this utility routine so I don't have to construct a Table all the time in the user code:
substitute :: String -> Value -> Tree -> Tree
You can write it like this:

class Substitutable a where
  substitute :: a -> Tree -> Tree

instance Substitutable Table where ...
instance Substitutable (String, Value) where ...
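(A hedged completion of that sketch: Tree, Value and Table are invented stand-ins here just so the example compiles; the real definitions in Daniel's program will differ.)

{-# LANGUAGE FlexibleInstances #-}

type Value = String
type Table = [(String, Value)]

data Tree = Leaf Value | Var String | Branch Tree Tree

class Substitutable a where
  substitute :: a -> Tree -> Tree

instance Substitutable Table where
  substitute tbl = go
    where
      go (Var name)   = maybe (Var name) Leaf (lookup name tbl)
      go (Branch l r) = Branch (go l) (go r)
      go t            = t

instance Substitutable (String, Value) where
  substitute (name, val) = substitute [(name, val)]   -- reuse the Table instance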

Hi Daniel,
A more functional approach might be:
type Substitution = String -> Maybe Value
single :: String -> Value -> Substitution
table :: Table -> Substitution
substitute :: Substitution -> Tree -> Tree
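(For concreteness, a hedged sketch of how those pieces could be defined, again using invented placeholder types for Value, Table and Tree:)

type Value = String
type Table = [(String, Value)]

data Tree = Leaf Value | Var String | Branch Tree Tree

type Substitution = String -> Maybe Value

single :: String -> Value -> Substitution
single name val n = if n == name then Just val else Nothing

table :: Table -> Substitution
table tbl n = lookup n tbl

substitute :: Substitution -> Tree -> Tree
substitute s (Var name)   = maybe (Var name) Leaf (s name)
substitute s (Branch l r) = Branch (substitute s l) (substitute s r)
substitute _ t            = t

-- e.g.  substitute (single "x" "42") someTree
--       substitute (table [("x", "42"), ("y", "7")]) someTree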
For better performance and a lot more features, you could switch to
type Substitution = Data.Map.Map String Value
- Conal
On Fri, Feb 13, 2009 at 2:40 AM, Daniel Kraft
Colin Adams wrote:
If you have two functions that do two different things, then they certainly OUGHT to have different names.
Well, they do "the same thing" but for different arguments; it's like this:
Table is a table of name-value pairs I want to substitute in a tree-like structure using:
substitute :: Table -> Tree -> Tree
For substituting a single name-value pair I want to define this utility routine so I don't have to construct a Table all the time in the user code:
substitute :: String -> Value -> Tree -> Tree
In this case I believe it would certainly be good to be able to name both functions the same, but I fear I cannot do so? There are languages where this is explicitly allowed (e.g. C++ or Java), so I don't think it is such a useless or evil thing.
Daniel

Daniel Kraft wrote:
Is this kind of overloading (instead of the polymorphism based overloading) possible in Haskell? Namely to have two functions with the same name but different signatures so they could be distinguished by a call's parameters? I fear not... So I guess I have to name the functions differently, right?
Thanks for all the quick replies, I think I'm really going for different names :) But there were some nice ideas I will remember for the future, maybe they can be of some use for something else! Daniel

Daniel Kraft wrote:
Hi,
I just came across a problem like this: Suppose I've got two related functions that do similar things, and I want to call them the same... Like in:
foobar :: String -> Int -> Int
foobar :: Int -> String -> Int
(Bad example, but I hope you got the point.)
http://www.haskell.org/haskellwiki/Type_classes_are_for_reusability
participants (21)
- Colin Adams
- Conal Elliott
- Daniel Fischer
- Daniel Kraft
- David Menendez
- Duncan Coutts
- Eugene Kirpichov
- George Pollard
- Henning Thielemann
- John A. De Goes
- Jonathan Cast
- Ketil Malde
- Krzysztof Skrzętnicki
- Luke Palmer
- minh thu
- Neil Mitchell
- Richard O'Keefe
- Robert Greayer
- Sterling Clover
- Tillmann Rendel
- wren ng thornton