What does the Haskell type system do with "show (1+2)"?

What does the Haskell type system do with expressions such as these?

    show 1
    show (1+2)

The types of the subexpressions "1" and "1+2" are "ambiguous" since they have type "(Num a) => a". I'm under the assumption that before "1+2" is evaluated, the "1" and "2" must be coerced into a "concrete" type such as Int, Integer, Double, etc., and before "show 1" is evaluated, the "1" must be coerced into a "concrete" type. Is my assumption correct? If so, how does Haskell know into which type to coerce the subexpressions?

If I try to write a new function, "my_show", which converts an expression into a string representation that includes type information, I run into errors with expressions like "show 1" and "show (1+2)" because of the type ambiguity.

    class (Show a) => My_show a where
        my_show :: a -> String

    instance My_show Int where
        my_show a = show a ++ " :: Int"

    instance My_show Integer where
        my_show a = show a ++ " :: Integer"

I can avoid the errors if I change it to "my_show (1::Int)" or "my_show ((1+2)::Int)". I'm wondering what the difference is between my_show and Haskell's built-in show that causes my_show to produce an error message when it is used with ambiguous types, while Haskell's show works okay with ambiguous types.

http://www.haskell.org/onlinereport/decls.html#default-decls
http://www.haskell.org/tutorial/numbers.html#sect10.4
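In short, those sections say that an ambiguous "(Num a) => a" variable is defaulted: it is replaced by the first type in the default list (Integer, then Double, unless a module declares otherwise) that satisfies all the constraints. A minimal sketch of that behaviour (illustrative example, valid in any Haskell 98 implementation):

    -- With no annotation, the ambiguous literals below are resolved by
    -- the default list; Integer is tried first and satisfies both Num
    -- and Show, so it wins.
    main :: IO ()
    main = do
        putStrLn (show (1 + 2))          -- defaults to Integer, prints "3"
        putStrLn (show (1 + 2 :: Int))   -- annotated, no defaulting needed

    -- A module may override the candidate list with a declaration such as:
    -- default (Int, Double)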
On 1/12/06, Jeff.Harper@handheld.com wrote:
What does the Haskell type system do with expressions such as these?

    show 1
    show (1+2)

The types of the subexpressions "1" and "1+2" are "ambiguous" since they have type "(Num a) => a". I'm under the assumption that before "1+2" is evaluated, the "1" and "2" must be coerced into a "concrete" type such as Int, Integer, Double, etc., and before "show 1" is evaluated, the "1" must be coerced into a "concrete" type. Is my assumption correct? If so, how does Haskell know into which type to coerce the subexpressions?

If I try to write a new function, "my_show", which converts an expression into a string representation that includes type information, I run into errors with expressions like "show 1" and "show (1+2)" because of the type ambiguity.

    class (Show a) => My_show a where
        my_show :: a -> String

    instance My_show Int where
        my_show a = show a ++ " :: Int"

    instance My_show Integer where
        my_show a = show a ++ " :: Integer"

I can avoid the errors if I change it to "my_show (1::Int)" or "my_show ((1+2)::Int)". I'm wondering what the difference is between my_show and Haskell's built-in show that causes my_show to produce an error message when it is used with ambiguous types, while Haskell's show works okay with ambiguous types.
-- jupdike@gmail.com http://www.updike.org/~jared/ reverse ")-:"

Jared Updike wrote:
http://www.haskell.org/onlinereport/decls.html#default-decls http://www.haskell.org/tutorial/numbers.html#sect10.4
I still don't see why it works for show but not for my_show.
On 1/12/06, Jeff.Harper@handheld.com wrote: [...]

    class (Show a) => My_show a where my_show :: a -> String
If I let this be

    class My_show a where my_show :: a -> String
instance My_show Int where my_show a = show a ++ " :: Int"
instance My_show Integer where my_show a = show a ++ " :: Integer"
What is the difference from the built-in Show class?

Christian

On Friday, 13 January 2006 11:12, Christian Maeder wrote:
Jared Updike wrote:
http://www.haskell.org/onlinereport/decls.html#default-decls http://www.haskell.org/tutorial/numbers.html#sect10.4
I still don't see why it works for show but not for my_show.
Says the report:

    Ambiguities in the class Num are most common, so Haskell provides
    another way to resolve them---with a default declaration:

        default (t1, ..., tn)

    where n >= 0, and each ti must be a type for which Num ti holds. In
    situations where an ambiguous type is discovered, an ambiguous type
    variable, v, is defaultable if:

    - v appears only in constraints of the form C v, where C is a class, and
    - at least one of these classes is a numeric class (that is, Num or
      a subclass of Num), and
    - all of these classes are defined in the Prelude or a standard library
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      That's the point!
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      (Figures 6.2--6.3 show the numeric classes, and Figure 6.1 shows
      the classes defined in the Prelude.)

    Each defaultable variable is replaced by the first type in the default
    list that is an instance of all the ambiguous variable's classes. It
    is a static error if no such type is found.
On 1/12/06, Jeff.Harper@handheld.com wrote: [...]
class (Show a) => My_show a where my_show :: a -> String
If I let this be
class My_show a where my_show :: a -> String
instance My_show Int where my_show a = show a ++ " :: Int"
instance My_show Integer where my_show a = show a ++ " :: Integer"
What is the difference from the built-in Show class?
It's not declared in the prelude or standard libraries.
Christian
Now the question is, could that restriction be lifted, i.e., would it be possible/worthwhile to let defaulting also take place if user-defined classes are involved?

Cheers, Daniel
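To see the restriction in action, here is a small self-contained sketch (illustrative code, reusing Jeff's My_show definitions from above):

    class Show a => My_show a where
        my_show :: a -> String

    instance My_show Int where
        my_show a = show a ++ " :: Int"

    instance My_show Integer where
        my_show a = show a ++ " :: Integer"

    -- Show and Num are both Prelude classes, so the ambiguous variable
    -- in "show 1" is defaultable and resolves to Integer.
    ok :: String
    ok = show 1

    -- My_show is user-defined, so a constraint set like (Num a, My_show a)
    -- is not defaultable under Haskell 98; an annotation is needed to
    -- remove the ambiguity.
    annotated :: String
    annotated = my_show (1 :: Int)

    -- bad = my_show 1   -- rejected: ambiguous type variable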

On 13/01/06, Daniel Fischer wrote:
On Friday, 13 January 2006 11:12, Christian Maeder wrote:
Jared Updike wrote:
http://www.haskell.org/onlinereport/decls.html#default-decls http://www.haskell.org/tutorial/numbers.html#sect10.4
I still don't see why it works for show but not for my_show.
Says the report: Ambiguities in the class Num are most common, so Haskell provides another way to resolve them---with a default declaration:
default (t1 , ... , tn)
where n>=0, and each ti must be a type for which Num ti holds. In situations where an ambiguous type is discovered, an ambiguous type variable, v, is defaultable if:
- v appears only in constraints of the form C v, where C is a class, and
- at least one of these classes is a numeric class (that is, Num or a
  subclass of Num), and
- all of these classes are defined in the Prelude or a standard library
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  That's the point!
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  (Figures 6.2--6.3 show the numeric classes, and Figure 6.1 shows
  the classes defined in the Prelude.)

Each defaultable variable is replaced by the first type in the default list that is an instance of all the ambiguous variable's classes. It is a static error if no such type is found.
On 1/12/06, Jeff.Harper@handheld.com wrote: [...]
class (Show a) => My_show a where my_show :: a -> String
If I let this be
class My_show a where my_show :: a -> String
instance My_show Int where my_show a = show a ++ " :: Int"
instance My_show Integer where my_show a = show a ++ " :: Integer"
What is the difference from the built-in Show class?
It's not declared in the prelude or standard libraries.
Christian
Now the question is, could that restriction be lifted, i.e., would it be possible/worthwhile to let defaulting also take place if user-defined classes are involved?
So long as we're going to have a defaulting mechanism, it seems a bit odd to restrict it to Num, and to classes in the Prelude. It would be neat if this could be somewhat generalised, so that, say, the Haskell 98 defaulting behaviour could be completely specified by declarations in the Prelude, but it's a good question as to exactly how it should be generalised at all. It seems a bit tricky to come up with a way to specify the defaulting behaviour which is both general enough to express the current behaviour, and which doesn't result in conflicts in the orders in which default types are tried.

The 'obvious' thing is to have a specification somewhat like:

    default C (t1, ..., tn)

where C is the name of a single-parameter typeclass (or a multi-parameter one, with all but one of the type variables applied; there ought to be a way to generalise to multi-parameter typeclasses if we can sort out the problems here).

The problem comes when you have a type variable which is ambiguous, and to which multiple defaulting specifications apply. In which order do we try the defaults? We almost certainly wouldn't want it to depend, for example, on the order in which the class constraints in a type declaration were specified. We could pick the defaulting mechanism based on order of occurrence to some extent, but even this seems a little ugly.

Also, how do these defaults interact with modules? If we want the system to be nice and general, it would be nice to have the behaviour with respect to Num specified in the Prelude and merely exported, but this potentially opens up another can of worms with regard to exporting defaults. If we always export them, it's unclear what happens with cyclic module dependency graphs. Perhaps there should be some provision to allow for writing something like "default C" in the module export/import lists.

- Cale
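Purely to illustrate the shape of that proposal, such declarations might read as follows (hypothetical syntax only; this is not valid Haskell 98 nor any implemented extension):

    default Num     (Integer, Double)   -- the Haskell 98 behaviour, made explicit
    default My_show (Int)               -- a user-defined class gets its own defaults

    module M (default Num, ...) where   -- the suggested export-list form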

Cale Gibbard wrote:
<Snip> So long as we're going to have a defaulting mechanism, it seems a bit odd to restrict it to Num, and to classes in the Prelude.
Instead of having literals such as 1 that could be Int, Integer, or Float etc., why not just have one Number type declared as something like:

    data Number
        = NInt Int
        | NInteger Integer
        | NFloat Float
        | NDouble Double
        | NRational Integer Integer
        | NComplex Number Number

etc., so that all numeric literals would just be translated by the compiler into a Number. Arithmetic ops would then not be overloaded, but the compiler could hopefully optimize out the extra indirection caused by using Number instead of plain Int, Integer, etc.

Regards, Brian.

On 13/01/06, Brian Hulley wrote:
Cale Gibbard wrote:
<Snip> So long as we're going to have a defaulting mechanism, it seems a bit odd to restrict it to Num, and to classes in the Prelude.
Instead of having literals such as 1 that could be Int, Integer, or Float etc, why not just have one Number type declared as something like:
    data Number
        = NInt Int
        | NInteger Integer
        | NFloat Float
        | NDouble Double
        | NRational Integer Integer
        | NComplex Number Number
etc., so that all numeric literals would just be translated by the compiler into a Number. Arithmetic ops would then not be overloaded, but the compiler could hopefully optimize out the extra indirection caused by using Number instead of plain Int, Integer, etc.
Regards, Brian.
This is not very extensible though. You can't extend that type later with numeric types of your own (e.g. polynomials, power series, an arbitrary precision computable real type, or some finite field). Having a Num typeclass allows for users to add their own numeric types at will. Also, implicit coercions would be required to implement the operations on your type, which I'm not sure is such a good idea.

The real problem is that the syntax is overloaded. Somehow I don't fancy the requirement to type NInteger 5 (respectively 5 :: Integer) whenever I want to refer to 5 in an arithmetic expression, and determining which constructor to apply automatically is the same problem as the defaulting mechanism solves. (Only, with the algebraic union, it must be solved at each and every numeric literal, rather than the comparatively fewer times it would need to be solved in general to make the system of type constraints determined.)

Ambiguity is actually seldom a real problem in practice. Usually something comes along and fixes the type -- it's only really at the GHCi/Hugs prompt that this is usually a major issue. This is another reason perhaps why it hasn't been generalised.

Probably the real reason that it's restricted to Num is that instances of Num are the only types with such overloaded syntax in the language. (I can write 5, and have it mean an Integer, a Complex Double, a Rational, or a constant power series of some type I cooked up.) There are other situations where ambiguities can arise, but to be honest, I've never been too annoyed by them. Still, it sort of looks odd to have a feature which looks like it could be vastly more general, but which only applies to this one Prelude class Num, and only in contexts involving other Prelude classes. Thinking about it more is making it clearer why it is restricted to Num, though it's not obvious that it should remain that way forever.

The original question posed by Daniel is whether we should lift the restriction that all of the classes involved are in a standard library when doing defaulting. That's such a subtle and odd point that there almost certainly is a reason for it. Does anyone know it?

- Cale
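As a concrete instance of the extensibility point, here is a minimal sketch of a user-defined numeric type that picks up literal syntax through fromInteger (Poly is an illustrative name, and the instance is deliberately naive):

    -- Dense polynomials as coefficient lists, lowest degree first.
    -- Because Poly is an instance of Num, the literal 5 can mean the
    -- constant polynomial [5] -- exactly what a closed Number union
    -- would rule out.
    newtype Poly = Poly [Integer] deriving (Eq, Show)

    instance Num Poly where
        Poly xs + Poly ys = Poly (zipLong xs ys)
          where
            zipLong (a:as) (b:bs) = a + b : zipLong as bs
            zipLong as     []     = as
            zipLong []     bs     = bs
        Poly xs * Poly ys =
            foldr (\(i, c) acc -> Poly (replicate i 0 ++ map (c *) ys) + acc)
                  (Poly [])
                  (zip [0 ..] xs)
        negate (Poly xs) = Poly (map negate xs)
        abs    = error "abs: not meaningful for Poly"
        signum = error "signum: not meaningful for Poly"
        fromInteger n = Poly [n]     -- this is what makes "5 :: Poly" work

    x :: Poly
    x = Poly [0, 1]

    example :: Poly
    example = (x + 5) * (x + 1)      -- the literals become Polys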

Daniel Fischer wrote:
Now the question is, could that restriction be lifted, i.e., would it be possible/worthwhile to let defaulting also take place if user defined classes are involved?
The current defaulting mechanism in Haskell is very, very conservative. You could easily relax the restriction that all classes have to come from the Prelude. I have sometimes wished for that. But on the other hand, I try to avoid relying on defaulting anywhere, so it doesn't disturb me much. -- Lennart
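Avoiding reliance on defaulting just means pinning down the types yourself; a small sketch:

    -- Two ways to keep defaulting out of the picture: give the binding
    -- a monomorphic signature, or annotate the expression inline.
    total :: Integer
    total = 1 + 2                    -- the signature fixes the literals' type

    shown :: String
    shown = show (1 + 2 :: Double)   -- an inline annotation does the same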

On Thu, 12 Jan 2006 Jeff.Harper@handheld.com wrote:
What does the Haskell type system do with expressions such as these?

    show 1
    show (1+2)

The types of the subexpressions "1" and "1+2" are "ambiguous" since they have type "(Num a) => a". I'm under the assumption that before "1+2" is evaluated, the "1" and "2" must be coerced into a "concrete" type such as Int, Integer, Double, etc., and before "show 1" is evaluated, the "1" must be coerced into a "concrete" type. Is my assumption correct?
Yes. If you start GHC with the -Wall option, it also warns you whenever this automatic defaulting is invoked.
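For example, a module like the following triggers that warning when compiled with ghc -Wall (the exact wording of the message varies between GHC versions):

    -- Compile with: ghc -Wall Main.hs
    -- The type-defaults warning fires on the literals below, which are
    -- defaulted to Integer.
    main :: IO ()
    main = putStrLn (show (1 + 2))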
participants (8)

- Brian Hulley
- Cale Gibbard
- Christian Maeder
- Daniel Fischer
- Henning Thielemann
- Jared Updike
- Jeff.Harper@handheld.com
- Lennart Augustsson