Proposal: Remove Show and Eq superclasses of Num

Hi all,

I would like to propose that we remove the Show and Eq superclasses from Num, i.e. change class (Eq a, Show a) => Num a where [...] to class Num a where [...]

The first 2 attached patches (for base and ghc respectively) remove the Show constraint. I'm not aware of any justification for why this superclass makes sense.

The next 2 patches (for base and unix respectively) remove the Eq constraint. Here there's some justification for the superclass, as it makes f 5 = ... work for any Num type, rather than also needing an Eq constraint, but personally I would be in favour of removing this superclass too. Noteworthy is that Bits now needs an Eq superclass for the default methods for 'testBit' and 'popCount'.

The fifth patch (for base) is what prompted me to get around to sending this proposal. It lets us de-orphan the Show Integer instance.

Any opinions?

Suggested discussion deadline: 12th October

Thanks, Ian
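A sketch of what the change amounts to in code (the module, instance and example function here are mine, for illustration only):

```haskell
{-# LANGUAGE NoImplicitPrelude #-}
module NumSketch where

import Prelude (Integer)
import qualified Prelude as P

-- The proposed Num class: the Haskell 98 method list, but without
-- the (Eq a, Show a) superclass constraint.
class Num a where
  (+), (-), (*)       :: a -> a -> a
  negate, abs, signum :: a -> a
  fromInteger         :: Integer -> a

-- Integer remains an instance, delegating to the Prelude.
instance Num Integer where
  (+)         = (P.+)
  (-)         = (P.-)
  (*)         = (P.*)
  negate      = P.negate
  abs         = P.abs
  signum      = P.signum
  fromInteger = P.id

-- A function needing equality must now ask for it explicitly:
isThree :: (P.Eq a, Num a) => a -> P.Bool
isThree x = x P.== fromInteger 3
```

With the superclass in place, the (P.Eq a) constraint on isThree would be redundant; without it, it is required.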

On 16 September 2011 08:58, Ian Lynagh
Hi all,
I would like to propose that we remove the Show and Eq superclasses from Num, i.e. change class (Eq a, Show a) => Num a where [...] to class Num a where [...]
+1 -- Ivan Lazar Miljenovic Ivan.Miljenovic@gmail.com IvanMiljenovic.wordpress.com

+1 from me.
The primary concern is of course that this means loss of compatibility with the current Haskell Report.
foo :: Num a => a -> String
foo = show
would cease to type check.
-Edward
On Sep 15, 2011, at 6:58 PM, Ian Lynagh
Hi all,
I would like to propose that we remove the Show and Eq superclasses from Num, i.e. change class (Eq a, Show a) => Num a where [...] to class Num a where [...]
The first 2 attached patches (for base and ghc respectively) remove the Show constraint. I'm not aware of any justification for why this superclass makes sense.
The next 2 patches (for base and unix respectively) remove the Eq constraint. Here there's some justification for the superclass, as it makes f 5 = ... work for any Num type, rather than also needing an Eq constraint, but personally I would be in favour of removing this superclass too. Noteworthy is that Bits now needs an Eq superclass for the default methods for 'testBit' and 'popCount'.
The fifth patch (for base) is what prompted me to get around to sending this proposal. It lets us de-orphan the Show Integer instance.
Any opinions?
Suggested discussion deadline: 12th October
Thanks Ian
<0003-Remove-the-Show-superclass-of-Num.patch> <0001-Follow-the-removal-of-the-Show-superclass-of-Num.patch> <0004-Remove-the-Eq-superclass-of-Num.patch> <0001-Follow-the-removal-of-the-Eq-superclass-of-Num.patch> <0005-De-orphan-the-Show-Integer-instance.patch> _______________________________________________ Libraries mailing list Libraries@haskell.org http://www.haskell.org/mailman/listinfo/libraries

Ian Lynagh writes:
I would like to propose that we remove the Show and Eq superclasses from Num, i.e. change class (Eq a, Show a) => Num a where [...] to class Num a where [...]
This will break client code, but will not fix other defects of Num, namely the inclusion of abs/signum, and tying (*) to (+). I think the right approach is to refactor the numeric classes along algebraic lines: http://hackage.haskell.org/package/yap Leave Num with its broken interface for backward compatibility (for clients), but provide a clean Ring superclass for new code.

On Fri, Sep 16, 2011 at 12:47:18AM +0100, Paterson, Ross wrote:
Ian Lynagh writes:
I would like to propose that we remove the Show and Eq superclasses from Num, i.e. change class (Eq a, Show a) => Num a where [...] to class Num a where [...]
This will break client code, but will not fix other defects of Num,
It doesn't solve everything, but I hope we can agree it is an incremental step in the right direction. I don't think a revolutionary change fixing all the issues is feasible. This particular blemish was already being described as "largely historical" more than a decade ago: http://www.haskell.org/pipermail/haskell/2000-October/006147.html Thanks Ian

On Sep 15, 2011, at 9:41 PM, Ian Lynagh wrote:
It doesn't solve everything, but I hope we can agree it is an incremental step in the right direction. I don't think a revolutionary change fixing all the issues is feasible. This particular blemish was already being described as "largely historical" more than a decade ago: http://www.haskell.org/pipermail/haskell/2000-October/006147.html
+1 This is relatively painless and makes things much better. We should do it sooner rather than later. --S

+1
On Sep 16, 2011 10:42 AM, "Ian Lynagh"
On Fri, Sep 16, 2011 at 12:47:18AM +0100, Paterson, Ross wrote:
Ian Lynagh writes:
I would like to propose that we remove the Show and Eq superclasses from Num, i.e. change class (Eq a, Show a) => Num a where [...] to class Num a where [...]
This will break client code, but will not fix other defects of Num,
It doesn't solve everything, but I hope we can agree it is an incremental step in the right direction. I don't think a revolutionary change fixing all the issues is feasible. This particular blemish was already being described as "largely historical" more than a decade ago: http://www.haskell.org/pipermail/haskell/2000-October/006147.html
Thanks Ian

On Fri, 2011-09-16 at 02:41 +0100, Ian Lynagh wrote:
I would like to propose that we remove the Show and Eq superclasses from Num, i.e. change class (Eq a, Show a) => Num a where [...] to class Num a where [...]
This will break client code, but will not fix other defects of Num,
It doesn't solve everything, but I hope we can agree it is an incremental step in the right direction. I don't think a revolutionary change fixing all the issues is feasible. This particular blemish was already being described as "largely historical" more than a decade ago: http://www.haskell.org/pipermail/haskell/2000-October/006147.html
+1 Just as a side note: I also dislike that the Data.Bits.Bits type-class has Num as its superclass; if I need something to be an instance of the Bits class for the bit-ops, I don't usually want to be forced to provide multiplication and addition operations as well...

On Fri, 16 Sep 2011, Herbert Valerio Riedel wrote:
Just as a side note: I also dislike that the Data.Bits.Bits type-class has Num as its superclass; If I need something to be an instance of the Bits class for the bit-ops, I don't usually want to be forced to provide multiplication and addition operations as well...
Me too. For instance when working with flag sets like in [1], addition, multiplication, absolute value, number literals make no sense. [1] http://hackage.haskell.org/package/enumset

I also really dislike that superclass of bits, there is no need for it.
John
On Fri, Sep 16, 2011 at 5:12 AM, Henning Thielemann
On Fri, 16 Sep 2011, Herbert Valerio Riedel wrote:
Just as a side note: I also dislike that the Data.Bits.Bits type-class has Num as its superclass; If I need something to be an instance of the Bits class for the bit-ops, I don't usually want to be forced to provide multiplication and addition operations as well...
Me too. For instance when working with flag sets like in [1], addition, multiplication, absolute value, number literals make no sense.
[1] http://hackage.haskell.org/package/enumset

That and it prevents the obvious instance for Bool.
On Fri, Sep 16, 2011 at 8:51 PM, John Meacham
I also really dislike that superclass of bits, there is no need for it.
John
On Fri, Sep 16, 2011 at 5:12 AM, Henning Thielemann
wrote: On Fri, 16 Sep 2011, Herbert Valerio Riedel wrote:
Just as a side note: I also dislike that the Data.Bits.Bits type-class has Num as its superclass; If I need something to be an instance of the Bits class for the bit-ops, I don't usually want to be forced to provide multiplication and addition operations as well...
Me too. For instance when working with flag sets like in [1], addition, multiplication, absolute value, number literals make no sense.
[1] http://hackage.haskell.org/package/enumset

I found this proposal interesting. Indeed, as far as I can remember, almost every Num instance I wrote forced me to write a couple more instances (Eq and Show) that I had never intended to write. Sometimes there was even no reasonable way to define them. Well, this is just another opinion. Sincerely, Daniel Díaz.

Ian Lynagh writes:
It doesn't solve everything, but I hope we can agree it is an incremental step in the right direction. I don't think a revolutionary change fixing all the issues is feasible. This particular blemish was already being described as "largely historical" more than a decade ago: http://www.haskell.org/pipermail/haskell/2000-October/006147.html
All these blemishes have been described as historical for a long time. But I'm suggesting that we address them by going in a different direction.

Under this proposal the 7 numeric classes lose the Show superclass, and Num, Fractional and Floating lose the Eq superclass. That breaks compatibility with Haskell 98, and will break functions that have numeric classes in their signatures and use equality or show in their implementations (e.g. the patches need to change the signatures of 27 functions in ghc and the core libs). If we're considering that, we ought also to consider alternative trade-offs between interface improvement and breakage.

The alternative approach is to refactor the numeric classes internally (as in the yap package). That will break packages too, probably more, but that will leave us with much more sensible and useful classes. And the breakage falls on different users:

- If we remove superclasses of Num, it falls on (some) people who put numeric classes in their type signatures.
- If we refactor the numeric classes internally, it falls on those who define instances of numeric classes. (I would argue that many of those instances are kludged today because of the flawed classes.)

Those who merely use the numeric classes in type signatures or call their methods should be unaffected.
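A rough sketch of the kind of refactored hierarchy being described (the layout loosely follows the yap package, but the names and method lists here are illustrative, not necessarily yap's exact API):

```haskell
{-# LANGUAGE NoImplicitPrelude #-}
module RingSketch where

import Prelude (Integer)
import qualified Prelude as P

-- An additive abelian group; Ring then adds multiplication on top,
-- leaving abs/signum out of the algebraic core entirely.
class AbelianGroup a where
  zero   :: a
  (+)    :: a -> a -> a
  negate :: a -> a

class AbelianGroup a => Ring a where
  (*)         :: a -> a -> a
  fromInteger :: Integer -> a

instance AbelianGroup Integer where
  zero   = 0
  (+)    = (P.+)
  negate = P.negate

instance Ring Integer where
  (*)         = (P.*)
  fromInteger = P.id

-- Client code asks only for the structure it actually uses:
square :: Ring a => a -> a
square x = x * x
```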

On Sat, Sep 17, 2011 at 8:37 AM, Paterson, Ross
Those who merely use the numeric classes in type signatures or call their methods should be unaffected.
Then Eq and Show must be superclasses of Num forever, which is the wart Ian wants to address. I think that these superclasses should be removed *and* the classes should be restructured along the lines of yap. I don't care in which order, so +1 from me. Sebastian

Sebastian Fischer writes:
Then Eq and Show must be superclasses of Num forever, which is the wart Ian wants to address. I think that these superclasses should be removed *and* the classes should be restructured along the lines of yap. I don't care in which order, so +1 from me.
If we had a Ring class, the superclasses of Num would be of much less consequence. We could do both, but they each have significant costs in breakage, which would need to be justified. If we're considering breaking changes to the basic Haskell 98 classes, we should examine the range of options and work out which gives the best value in terms of benefits over breakage.

On Sat, Sep 17, 2011 at 7:33 PM, Paterson, Ross
Sebastian Fischer writes:
Then Eq and Show must be superclasses of Num forever, which is the wart Ian wants to address. I think that these superclasses should be removed *and* the classes should be restructured along the lines of yap. I don't care in which order, so +1 from me.
If we had a Ring class, the superclasses of Num would be of much less consequence.
The problem that comes in is that at the next proposal someone will come along and say the same thing. "Now you have rings, but we really need semirings", so we can make a Bool instance that folks understand rather than the ugly boolean ring. "Now you have semirings, but we really need right seminearrings" to model recognizing parsers.

Moreover, to properly factor out abs and signum you need MPTCs and, regardless, any instance of Ring for Float and Double is lying to you, due to the fact that associativity just flat out doesn't hold, so I would have reservations about introducing it as a superclass for Num. At least in its current lawless state I don't feel guilty making instances for IEEE floats.

I favor this proposal because it is simple, has easily understood consequences, and doesn't require people who implement Num to do anything different; it merely requires a few extra annotations in places where people were using Num but also wanted to be able to show or compare, while letting far more Num instances be defined without lying.

-Edward

Edward Kmett writes:
The problem that comes in is that at the next proposal someone will come along and just say the same thing.
Not quite the same thing. I'm saying that Num is broken (which isn't a novel claim).
"Now you have rings, but we really need semirings", so we can make a Bool instance that folks understand rather than the ugly boolean ring.
"Now you have semirings, but we really need right seminearrings" to model recognizing parsers.
That's an argument for never refactoring, and indeed for never defining classes in the first place.
Moreover, to properly factor out abs and signum you need MPTCs and, regardless, any instance of Ring for Float and Double is lying to you, due to the fact that associativity just flat out doesn't hold, so I would have reservations about introducing it as a superclass for Num.
It's true that vector spaces, modules, etc. are more difficult, but we don't need to fix everything to get the benefits of a more sensible structure. As for the lawlessness of Float and Double, you could say the same of the Eq instance for those types, and several Monad instances. It's a pity, but it's peripheral.
I favor this proposal because it is simple, has easily understood consequences, and doesn't require people who implement Num to do anything different, merely requires a few extra annotations in places where people were using Num but also wanted to be able to show or compare, while letting far more Num instances be defined without lying.
But it's the instances of Num that are broken, usually lying about abs/signum too, and need to be changed, not the uses.

The good thing about removing the superclasses as an intermediate step is that it lets you write forwards- and backwards-compatible code: just include (Eq a, Num a) in your type signatures; a Haskell 98 compiler won't complain, and you'll still be compatible with the new classes.
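A sketch of that trick (the functions are hypothetical; the point is the explicit constraint lists, which are merely redundant under Haskell 98 but required once the superclasses are gone):

```haskell
module Compat where

-- Under Haskell 98 the Eq/Show constraints are implied by Num and
-- thus redundant (but legal); with the superclasses removed they
-- become necessary.  Either way the code compiles.
isZero :: (Eq a, Num a) => a -> Bool
isZero x = x == 0

describe :: (Show a, Num a) => a -> String
describe x = "value: " ++ show x
```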
John
On Fri, Sep 16, 2011 at 4:37 PM, Paterson, Ross
Ian Lynagh writes:
It doesn't solve everything, but I hope we can agree it is an incremental step in the right direction. I don't think a revolutionary change fixing all the issues is feasible. This particular blemish was already being described as "largely historical" more than a decade ago: http://www.haskell.org/pipermail/haskell/2000-October/006147.html
All these blemishes have been described as historical for a long time. But I'm suggesting that we address them by going in a different direction.
Under this proposal the 7 numeric classes lose the Show superclass, and Num, Fractional and Floating lose the Eq superclass. That breaks compatibility with Haskell 98, and will break functions that have numeric classes in their signatures and use equality or show in their implementations (e.g. the patches need to change the signatures of 27 functions in ghc and the core libs). If we're considering that, we ought also to consider alternative trade-offs between interface improvement and breakage.
The alternative approach is to refactor the numeric classes internally (as in the yap package). That will break packages too, probably more, but that will leave us with much more sensible and useful classes. And the breakage falls on different users: - If we remove superclasses of Num, it falls on (some) people who put numeric classes in their type signatures. - If we refactor the numeric classes internally, it falls on those who define instances of numeric classes. (I would argue that many of those instances are kludged today because of the flawed classes.) Those who merely use the numeric classes in type signatures or call their methods should be unaffected.

On September 15, 2011 19:47:18 Paterson, Ross wrote:
This will break client code, but will not fix other defects of Num, namely the inclusion of abs/signum, and tying (*) to (+). I think the right approach is to refactor the numeric classes along algebraic lines:
http://hackage.haskell.org/package/yap
Leave Num with its broken interface for backward compatibility (for clients), but provide a clean Ring superclass for new code.
I was looking through YAP a bit and was not certain about what was there only for compatibility as opposed to because it was also still good. Am I correct in understanding that, if you were starting from scratch:

- Num, Fractional, Real, and Integral would get axed, and
- EuclideanDomain, Ring, and Field would form the base?

Would you then leave RealFrac, Floating, and RealFloat with superclasses

- RealFrac: Field
- Floating: Field
- RealFloat: RealFrac, Floating

or overhaul them too somehow? The choice of functions for Floating has always seemed somewhat arbitrary. Not so much that I'm saying the choice of basic transcendentals was bad, but the fact a choice had to be made seems bad. Perhaps there should be classes for groups of related functions instead? Thanks! -Tyson

Tyson Whitehead writes:
I was looking through YAP a bit and was not certain about what was there only for compatibility as opposed to because it was also still good.
Am I correct in understanding, if you were starting from scratch:
- Num, Fractional, Real, and Integral would get axed, and - EuclideanDomain, Ring, and Field would form the base.
Fractional would be superfluous, but people would probably still want to overload abs, signum, toRational, quot, rem and toInteger for the standard numeric types. Maybe the classes would have different names.
Would you then leave RealFrac, Floating, and RealFloat with superclasses
- RealFrac: Field - Floating: Field - RealFloat: RealFrac, Floating
or overhaul them too somehow? The choice of functions for Floating has always seemed somewhat arbitrary. Not so much that I'm saying the choice of basic transcendentals was bad, but the fact a choice had to be made seems bad.
I don't know how those classes should be changed. Certainly the problems aren't as glaring.

On October 13, 2011 18:47:48 Paterson, Ross wrote:
Tyson Whitehead writes:
I was looking through YAP a bit and was not certain about what was there only for compatibility as opposed to because it was also still good.
Am I correct in understanding, if you were starting from scratch:
- Num, Fractional, Real, and Integral would get axed, and - EuclideanDomain, Ring, and Field would form the base.
Fractional would be superfluous, but people would probably still want to overload abs, signum, toRational, quot, rem and toInteger for the standard numeric types. Maybe the classes would have different names.
Thanks for getting back to me on this, Ross. I was also wondering about putting fromInteger and fromRational in Ring and Field. I would have imagined that not all rings and fields would have reasonable injections from the set of integers and rational numbers. Is this not the case? Thanks! -Tyson

On Mon, Oct 17, 2011 at 8:05 PM, Tyson Whitehead
On October 13, 2011 18:47:48 Paterson, Ross wrote:
Fractional would be superfluous, but people would probably still want to overload abs, signum, toRational, quot, rem and toInteger for the standard numeric types. Maybe the classes would have different names.
Thanks for getting back to me on this Ross. I was also wondering about putting fromInteger and fromRational in Ring and Field.
I would have imagined that not all rings and fields would have reasonable injections from the set of integers and rational numbers.
Is this not the case?
Thanks! -Tyson
Rings with unity have a canonical map, actually a ring homomorphism (but not necessarily injection) from the integers, namely for the natural integer N, you add together the unit element with itself N times. For negative N, you take the additive inverse.

For fields, you would try to extend this to rationals; however, it seems that because of the non-injectivity of the above, this won't always work. Example: finite fields. In a finite field of order P, we would have f(N/P) = f(N)/f(P) = f(N)/0 which is not defined.

So, unless I made some stupid mistake, fromInteger in Ring is OK, but fromRational in Field is not. However it probably would make sense to separate rings with unity from rings without unity, and put fromInteger in the former class.

Balazs
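The canonical map just described can be written out directly; a minimal sketch, assuming a bare-bones ring class (the class and its method names here are hypothetical, chosen to avoid clashing with the Prelude):

```haskell
module CanonicalMap where

-- A minimal ring-with-unity class, for illustration only.
class Ring a where
  zero, one :: a
  add       :: a -> a -> a
  neg       :: a -> a

-- The canonical homomorphism from the integers: add the unit
-- element to itself N times, taking the additive inverse for
-- negative N.  Not an injection in general (e.g. modular rings).
fromInteger' :: Ring a => Integer -> a
fromInteger' n
  | n < 0     = neg (fromInteger' (negate n))
  | n == 0    = zero
  | otherwise = add one (fromInteger' (n - 1))

-- Sanity check: on Integer itself the map is the identity.
instance Ring Integer where
  zero = 0
  one  = 1
  add  = (+)
  neg  = negate
```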

Balazs Komuves writes:
Rings with unity have a canonical map, actually a ring homomorphism (but not necessarily injection) from the integers, namely for the natural integer N, you add together the unit element with itself N times. For negative N, you take the additive inverse.
For fields, you would try to extend this to rationals; however, it seems that because of the non-injectivity of the above, this won't always work. Example: finite fields. In a finite field of order P, we would have f(N/P) = f(N)/f(P) = f(N)/0 which is not defined.
Good point. Mind you, we already have this with Ratio Int and friends.

On October 17, 2011 19:19:18 Paterson, Ross wrote:
Balazs Komuves writes:
Rings with unity have a canonical map, actually a ring homomorphism (but not necessarily injection) from the integers, namely for the natural integer N, you add together the unit element with itself N times. For negative N, you take the additive inverse.
For fields, you would try to extend this to rationals; however, it seems that because of the non-injectivity of the above, this won't always work. Example: finite fields. In a finite field of order P, we would have f(N/P) = f(N)/f(P) = f(N)/0 which is not defined.
Good point. Mind you we already have this with Ratio Int and friends.
Yes. It doesn't really strike me as such an issue. Really it's just another statement that "recip zero" is not defined. Cheers! -Tyson PS: Thanks for the info on the canonical map, Balazs. Very nice.

On October 17, 2011 19:19:18 Paterson, Ross wrote:
Balazs Komuves writes:
Rings with unity have a canonical map, actually a ring homomorphism (but not necessarily injection) from the integers, namely for the natural integer N, you add together the unit element with itself N times. For negative N, you take the additive inverse.
For fields, you would try to extend this to rationals; however, it seems that because of the non-injectivity of the above, this won't always work. Example: finite fields. In a finite field of order P, we would have f(N/P) = f(N)/f(P) = f(N)/0 which is not defined.
Good point. Mind you we already have this with Ratio Int and friends.
I was also wondering about the EuclideanDomain "associate" and "unit" functions. YAP requires, for all x:

x = associate x * unit x (where unit x has a multiplicative inverse)
associate (unit x) = unit (associate x) = 1

http://hackage.haskell.org/packages/archive/yap/0.1/doc/html/Data-YAP-Algebra.html#t:EuclideanDomain

These seem very much like generalized absolute value and signum functions. The Wikipedia page on Euclidean domains, however, doesn't mention such functions. Rather it works with the concept of a "euclidean" function, which gives a mapping onto the non-negative integers such that

euclidean (mod a b) < euclidean b whenever mod a b /= 0

http://en.wikipedia.org/wiki/Euclidean_domain#Definition

Are these two viewpoints connected? Does a Euclidean domain's euclidean function somehow give rise to the "associate" and "unit" functions? Thanks! -Tyson

Tyson Whitehead writes:
I was also wondering about the EuclideanDomain "associate" and "unit" functions. YAP requires, for all x
x = associate x * unit x (where unit x has a multiplicative inverse) associate (unit x) = unit (associate x) = 1
http://hackage.haskell.org/packages/archive/yap/0.1/doc/html/Data-YAP-Algebra.html#t:EuclideanDomain
These seem very much like generalized absolute value and signum functions.
The two concepts are similar for Int and Integer, except that signum 0 = 0 while unit 0 = 1, but the meanings of abs and signum on other types generalize in a different way. These functions are pretty common in computer algebra systems.
The Wikipedia page on Euclidean domains, however, doesn't mention such functions. Rather it works with the concept of a "euclidean" function, which gives a mapping onto the non-negative integers such that
euclidean (mod a b) < euclidean b whenever mod a b /= 0
http://en.wikipedia.org/wiki/Euclidean_domain#Definition
Are these two viewpoints connected? Does a Euclidean domain's euclidean function somehow give rise to the "associate" and "unit" functions?
The purpose of the EuclideanDomain class is to allow us to use Euclid's algorithm, defining

gcd :: EuclideanDomain a => a -> a -> a

and with that

instance EuclideanDomain a => Field (Ratio a)

The Euclidean function is the measure function into a well-ordered set (the naturals) used in the proof that gcd terminates. You don't actually use it in the code. On the other hand you do need to be able to get a canonical associate.
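A minimal sketch of that use of the class (only a mod method is assumed here; yap's actual class is larger, and this cut-down version is mine):

```haskell
module EuclidSketch where

import Prelude hiding (gcd, mod)
import qualified Prelude as P

-- Just enough of a EuclideanDomain class for Euclid's algorithm.
class (Eq a, Num a) => EuclideanDomain a where
  mod :: a -> a -> a

instance EuclideanDomain Integer where
  mod = P.mod

-- Euclid's algorithm.  The euclidean function never appears in
-- the code; it is only needed to prove that this terminates.
gcd :: EuclideanDomain a => a -> a -> a
gcd a b
  | b == 0    = a
  | otherwise = gcd b (a `mod` b)
```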

On November 2, 2011 18:00:05 Paterson, Ross wrote:
The two concepts are similar for Int and Integer, except that signum 0 = 0 while unit 0 = 1, but the meanings of abs and signum on other types generalize in a different way.
These functions are pretty common in computer algebra systems.
<snip>
The Euclidean function is the measure function into a well-ordered set (the naturals) used in the proof that gcd terminates. You don't actually use it in the code. On the other hand you do need to be able to get a canonical associate.
Am I correct in understanding then that there could actually be Euclidean domains that don't have good definitions of unit and associate? Thanks! -Tyson

Tyson Whitehead writes:
Am I correct in understanding then that there could actually be Euclidean domains that don't have good definitions of unit and associate?
The properties make sense for any integral domain; there can always be a definition. Of course there may be some integral domains for which the operations are not computable, just as other operations might not be.

One caveat with the names involved here is that since we're working in a constructive setting, we only have access to the non-Noetherian analogues of these ideas. We can only talk about finitely generated ideals, etc., so the proper names drift into slightly more exotic territory: unique factorization domains become rather redundantly named GCD domains, constructive principal ideal domains are Bézout domains, Dedekind domains weaken to Prüfer domains, etc.

It becomes annoyingly easy to trip up and mention something that was designed in a classical setting, and some of the constructive analogues you need lack traditional names.
-Edward
On Wed, Nov 2, 2011 at 6:56 PM, Paterson, Ross
Tyson Whitehead writes:
Am I correct in understanding then that there could actually be Euclidean domains that don't have good definitions of unit and associate?
The properties make sense for any integral domain; there can always be a definition. Of course there may be some integral domains for which the operations are not computable, just as other operations might not be.

On November 2, 2011 18:56:41 Paterson, Ross wrote:
Tyson Whitehead writes:
Am I correct in understanding then that there could actually be Euclidean domains that don't have good definitions of unit and associate?
The properties make sense for any integral domain; there can always be a definition. Of course there may be some integral domains for which the operations are not computable, just as other operations might not be.
That's okay. I wasn't so interested in whether it was computable or not. I was just trying to get a feel for the nature of the structure.

I see an integral domain is just a commutative ring with no zero divisors (and every Euclidean domain is also an integral domain):

http://en.wikipedia.org/wiki/Integral_domain

If I'm understanding you then this is sufficient structure to tell us that an associate and unit decomposition exists, even if we can't compute it. I spent some time last night trying to figure out what about this structure guarantees such a decomposition. I didn't have much luck though. Any hints? Thanks! -Tyson

Tyson Whitehead writes:
I see an integral domain is just a commutative ring with no zero divisors (and every euclidean domain is also an integral domain)
If I'm understanding you then this is sufficient structure to tell us that an associate and unit decomposition exists, even if we can't compute it.
I spent some time last night trying to figure out what about this structure guarantees such a decomposition. I didn't have much luck though. Any hints?
Units are invertible elements, and two elements are associates if they're factors of each other. So association is an equivalence relation; in particular the associates of 1 are the units, and the only associate of 0 is itself. Now choose a member from each association equivalence class to be the canonical associate for all the members of that class, choosing 1 as the canonical associate for the unit class. Because there are no zero divisors, that uniquely determines the canonical unit for each element.
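For Integer this construction can be spelled out concretely (a sketch; the function names follow the thread, and the choice of the non-negative representative is the conventional one, not forced by the construction):

```haskell
module AssociateSketch where

-- For Integer the association classes are {0} and {n, -n} for n > 0.
-- Choosing the non-negative member as the canonical associate then
-- uniquely determines the canonical unit.
associate :: Integer -> Integer
associate = abs

-- Note unit 0 = 1, unlike signum 0 = 0, as pointed out earlier
-- in the thread.
unit :: Integer -> Integer
unit x = if x < 0 then -1 else 1

-- The law x == associate x * unit x holds for every x.
```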

On Thu, Nov 3, 2011 at 4:11 PM, Paterson, Ross
Tyson Whitehead writes:
I see an integral domain is just a commutative ring with no zero divisors (and every euclidean domain is also an integral domain)
If I'm understanding you then this is sufficient structure to tell us that an associate and unit decomposition exists, even if we can't compute it.
I spent sometime last night trying to figure out what about this structure guarantees such a decomposition. I didn't have much luck though. Any hints?
Units are invertible elements, and two elements are associates if they're factors of each other. So association is an equivalence relation; in particular the associates of 1 are the units, and the only associate of 0 is itself.
Now choose a member from each association equivalence class to be the canonical associate for all the members of that class, choosing 1 as the canonical associate for the unit class. Because there are no zero divisors, that uniquely determines the canonical unit for each element.
Huh, that seems a bit arbitrary, but I can see how it would be useful. The documentation and/or laws should probably be altered to prevent the definitions 'unit = const 1' and 'associate = id' :)

Ben Millwood writes:
The documentation and/or laws should probably be altered to prevent the definitions 'unit = const 1' and 'associate = id' :)
Good point. It should say that if x and y are factors of each other then associate x == associate y.

On Thu, Nov 3, 2011 at 5:11 PM, Paterson, Ross
Units are invertible elements, and two elements are associates if they're factors of each other. So association is an equivalence relation; in particular the associates of 1 are the units, and the only associate of 0 is itself.
Now choose a member from each association equivalence class to be the canonical associate for all the members of that class, choosing 1 as the canonical associate for the unit class. Because there are no zero divisors, that uniquely determines the canonical unit for each element.
It seems to me that a typical Euclidean domain does not have any kind of meaningful canonical associate / unit map. Examples:
- The Gaussian integers Z[i] (units are 1, -1, i, -i; what would be the associated element of 5+7i?)
- Formal power series K[[x]] over a field (units are exactly the series with a nonzero constant coefficient),
- and probably just about any other interesting structure satisfying the definition.
A function "a -> a" in a type class suggests to me a canonical mapping. Thus, I would advocate against putting associate/unit into such a Euclidean domain type class.
(Independently of this, I also find the name "unit" a bit confusing for something which would be better called "an associated unit"; "unit" is already a very overloaded word.)
Balazs

On Thursday 03 November 2011, 20:01:25, Balazs Komuves wrote:
It seems to me that a typical Euclidean domain does not have any kind of meaningful canonical associate / unit map.
I agree.
Examples:
- The Gaussian integers Z[i] (units are 1,-1,i,-i; what would be the associated element of 5+7i ?)
- Formal power series K[[x]] over a field (units are every series with nonzero constant coefficients),
This one has a fairly canonical representative for the classes of associated series: X^n, where n is the index of the first nonzero coefficient.
- and probably just about any other interesting structure satisfying the definition.
A function "a -> a" in a type class suggests to me a canonical mapping. Thus, I would advocate against putting associate/unit into such a Euclidean domain type class.
True, but I think we'd need such functions to have well-defined "canonical" factorisations for example.
(Independently of this, I also find the name "unit" a bit confusing for something which would be better called "an associated unit";
Except here, where 'associated' means 'equal up to multiplication with a unit'.
"unit" is already a very overloaded word)
Yes.

On Thu, Nov 3, 2011 at 9:14 PM, Daniel Fischer <daniel.is.fischer@googlemail.com> wrote:
Examples:
- The Gaussian integers Z[i] (units are 1,-1,i,-i; what would be the associated element of 5+7i ?)
- Formal power series K[[x]] over a field (units are every series with nonzero constant coefficients),
This one has a fairly canonical representative for the classes of associated series: X^n, where n is the index of the first nonzero coefficient.
While this may seem "canonical" to a human eye, I would argue that it is rather ad-hoc. (for example a change of variable x -> (x+1) will change the notion of "canonical"?)
- and probably just about any other interesting structure satisfying the definition.
A function "a -> a" in a type class suggests to me a canonical mapping. Thus, I would advocate against putting associate/unit into such a Euclidean domain type class.
True, but I think we'd need such functions to have well-defined "canonical" factorisations for example.
But if there is no canonical factorization, why do we want to force it? A normalized factorization in an algebra system is something between a design choice and an implementation detail, from my viewpoint.
(Independently of this, I also find the name "unit" a bit confusing for something which would be better called "an associated unit";
Except here, where 'associated' means 'equal up to multiplication with a unit'.
Actually, this seems to be consistent to me :) (since two "associated units" differ by a multiplication with a unit) Balazs

Balazs Komuves writes:
It seems to me that a typical Euclidean domain does not have any kind of meaningful canonical associate / unit map. Examples:
- The Gaussian integers Z[i] (units are 1,-1,i,-i; what would be the associated element of 5+7i ?)
Prelude Data.Algebra Data.Complex> associate (5 :+ 7) :: Complex Integer
5 :+ 7
Prelude Data.Algebra Data.Complex> unit (5 :+ 7) :: Complex Integer
1 :+ 0
Prelude Data.Algebra Data.Complex> associate (5 :+ (-7)) :: Complex Integer
7 :+ 5
Prelude Data.Algebra Data.Complex> unit (5 :+ (-7)) :: Complex Integer
0 :+ (-1)

Doing it by quadrants seems a reasonable choice. It's arbitrary to some extent, but then so are div and mod on Gaussian integers. But when we write a function (e.g. gcd), we have to provide _an_ answer, even if it's not distinguished in some obvious way.
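Ross's quadrant-based choice can be written out as a standalone sketch. The type GI and the function names below are mine, purely illustrative (base's Complex type is avoided because its numeric instances assume RealFloat); the canonical quadrant chosen here is re > 0, im >= 0, with 0 as its own associate.

```haskell
-- Gaussian integers a + b*i, with a hand-rolled multiplication.
data GI = GI Integer Integer
  deriving (Eq, Show)

mulGI :: GI -> GI -> GI
mulGI (GI a b) (GI c d) = GI (a*c - b*d) (a*d + b*c)

-- Rotate by i until we land in the canonical quadrant (re > 0, im >= 0);
-- exactly one of the four associates z, iz, -z, -iz lies there.
associateGI :: GI -> GI
associateGI (GI 0 0) = GI 0 0
associateGI z@(GI a b)
  | a > 0 && b >= 0 = z
  | otherwise       = associateGI (mulGI z (GI 0 1))

-- The unit is whichever of 1, -1, i, -i recovers z from its associate.
unitGI :: GI -> GI
unitGI (GI 0 0) = GI 1 0
unitGI z = head [ u | u <- [GI 1 0, GI (-1) 0, GI 0 1, GI 0 (-1)]
                    , mulGI (associateGI z) u == z ]
```

This reproduces the session above: associateGI (GI 5 (-7)) gives GI 7 5 and unitGI (GI 5 (-7)) gives GI 0 (-1).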

On November 3, 2011 15:01:25 Balazs Komuves wrote:
- The Gaussian integers Z[i] (units are 1,-1,i,-i; what would be the associated element of 5+7i ?)
Wouldn't the associated elements be just (5+7i) * { 1, -1, i, and -i } (i.e., the set generated by multiplying an element by each element of the units)? Cheers! -Tyson

On November 3, 2011 12:11:19 Paterson, Ross wrote:
Tyson Whitehead writes:
I see an integral domain is just a commutative ring with no zero divisors (and every Euclidean domain is also an integral domain)
http://en.wikipedia.org/wiki/Integral_domain
If I'm understanding you then this is sufficient structure to tell us that an associate and unit decomposition exists, even if we can't compute it.
I spent some time last night trying to figure out what about this structure guarantees such a decomposition. I didn't have much luck though. Any hints?
Units are invertible elements, and two elements are associates if they're factors of each other. So association is an equivalence relation; in particular the associates of 1 are the units, and the only associate of 0 is itself.
Now choose a member from each association equivalence class to be the canonical associate for all the members of that class, choosing 1 as the canonical associate for the unit class. Because there are no zero divisors, that uniquely determines the canonical unit for each element.
To see if I've got this correct:

1. All invertible elements are associates of 1:
   - for all x and x' inverses, x * 1 = x, x' * 1 = x', and x * x' = 1, so
   - 1, x, and x' are associates (x * x'^2 = x' and x^2 * x' = x).

2. The only associate of 0 is 0:
   - an associate of 0 would require 0 to be a factor, so
   - the lack of zero divisors means only 0 itself satisfies this requirement.

3. Fixing canonical associates uniquely determines the canonical units:
   - for all x /= 0 we have a canonical associate x# /= 0, which means
   - x and x# are factors of each other, so there are unique a and b such that
   - x = x# * a and x# = x * b, which in turn gives
   - x = x# * a = x * b * a, which means
   - b * a = 1, which means a and b are units (invertible), so
   - x = x# * a satisfies the decomposition (and x# determines a).

It would, therefore, seem to me that the key observation this encoding is expressing is that two elements being factors of each other implies they are necessarily related by an invertible multiple.

I guess then that the set of associates is also completely characterized as any of its elements multiplied by all unit (invertible) elements.

Thanks very much for the clarification.

Cheers! -Tyson

Tyson Whitehead writes:
It would, therefore, seem to me that the key observation this encoding is expressing is that two elements being factors of each other implies they are necessarily related by an invertible multiple.
I guess then that the set of associates is also completely characterized as any of its elements multiplied by all unit (invertible) elements.
Yes. It's the arbitrariness in the selection of the representative that some object to. (But we do need a representative.)

The first 2 attached patches (for base and ghc respectively) remove the Show constraint. I'm not aware of any justification for why this superclass makes sense.
I'm strongly in favour of removing the Show constraint.
The next 2 patches (for base and unix respectively) remove the Eq constraint. Here there's some justification for the superclass, as it makes f 5 = ... work for any Num type, rather than also needing an Eq constraint,
I am undecided whether removing the Eq constraint is a good thing or not. In principle, yes, it makes perfect sense. But in practice, the example you give, of pattern-matching a literal now requiring an Eq context, smells slightly wrong to me. Pattern-matching on any other kind of constructor does not require an Eq context: it feels like some of the magic of literal numbers (I know they aren't really literals, they just appear that way) is wearing off and the underlying reality (where they are not constructors at all) is becoming visible. Regards, Malcolm

Malcolm Wallace
The first 2 attached patches (for base and ghc respectively) remove the Show constraint. I'm not aware of any justification for why this superclass makes sense.
I'm strongly in favour of removing the Show constraint.
The next 2 patches (for base and unix respectively) remove the Eq constraint. Here there's some justification for the superclass, as it makes f 5 = ... work for any Num type, rather than also needing an Eq constraint,
I am undecided whether removing the Eq constraint is a good thing or not. In principle, yes, it makes perfect sense. But in practice, the example you give, of pattern-matching a literal now requiring an Eq context, smells slightly wrong to me.
But pattern matching for Double or Float is a Bad Thing, so wouldn’t the solution to this be to put the Eq constraint somewhere else, such as Integral where it would be less improper? -- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk

+1 on removing the Show constraint. I abstain on the Eq constraint - either way is fine with me. Jon Fairbairn wrote:
But pattern matching for Double or Float is a Bad Thing, so wouldn’t the solution to this be to put the Eq constraint somewhere else, such as Integral where it would be less improper?
The Badness comes from the Eq instance itself. Once that instance exists, using it for pattern matching isn't any worse than equating with a Num literal in an expression. If we are going to have an Eq constraint at all, it might as well be on Num. Thanks, Yitz

On Sun, Sep 18, 2011 at 5:19 AM, Yitzchak Gale
+1 on removing the Show constraint. I abstain on the Eq constraint - either way is fine with me.
Jon Fairbairn wrote:
But pattern matching for Double or Float is a Bad Thing, so wouldn’t the solution to this be to put the Eq constraint somewhere else, such as Integral where it would be less improper?
The Badness comes from the Eq instance itself. Once that instance exists, using it for pattern matching isn't any worse than equating with a Num literal in an expression. If we are going to have an Eq constraint at all, it might as well be on Num.
Well, in the case of Integral values, I know I can convert them to an Integer, so I have a meaningful notion of equality that comes built in; and Real imparts an Ord instance, both via the current instance and inherently, due to the ability to convert to a Rational. So if you require much more than Num, you'd get Eq anyway. Why burden Num with it, where it locks out useful instances or makes users lie to get them? -Edward

Yitzchak Gale
+1 on removing the Show constraint. I abstain on the Eq constraint - either way is fine with me.
Jon Fairbairn wrote:
But pattern matching for Double or Float is a Bad Thing, so wouldn’t the solution to this be to put the Eq constraint somewhere else, such as Integral where it would be less improper?
The Badness comes from the Eq instance itself. Once that instance exists, using it for pattern matching isn't any worse than equating with a Num literal in an expression. If we are going to have an Eq constraint at all, it might as well be on Num.
I don’t see the logic of that. The problem is that having an Eq constraint on Num forces people to give Eq instances for types that do not naturally have them, such as Real types. Certainly there’s little difference between using (==) and pattern matching, but the issue was that removing the Eq constraint from Num would make programmes that test equality (either way) on integers require the programmer to add “Eq t => …” somewhere in a programme that previously didn’t require it. Moving the Eq constraint to Integral would mean that programmers declaring instances would only be required to add Eq when making something an instance of Integral, a class that one would naturally expect to have equality, while programmers simply using Integer or Int &c would need to make no change (and programmers using pattern matching on Double would get the error they deserve). -- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk
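Jon's suggestion can be sketched with primed class names to avoid clashing with the Prelude. Num' and Integral' are hypothetical, cut down to a couple of methods each; this is only an illustration of where the Eq constraint would sit, not a proposed API.

```haskell
-- Num itself carries no Eq...
class Num' a where
  fromInteger' :: Integer -> a
  plus, times  :: a -> a -> a

-- ...Eq moves to Integral, where conversion to Integer always yields a
-- sensible equality, so the constraint costs nothing to satisfy.
class (Eq a, Num' a) => Integral' a where
  toInteger' :: a -> Integer

instance Num' Integer where
  fromInteger' = id
  plus  = (+)
  times = (*)

instance Integral' Integer where
  toInteger' = id
```

A type without any sensible equality could then be Num' without supplying an Eq instance at all, since only Integral' demands one.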

On Sun, Sep 18, 2011 at 4:54 AM, Jon Fairbairn
But pattern matching for Double or Float is a Bad Thing, so wouldn’t the solution to this be to put the Eq constraint somewhere else, such as Integral where it would be less improper?
Great call, since you are already ensured you can convert to Integer -Edward

On 15/09/2011 23:58, Ian Lynagh wrote:
The next 2 patches (for base and unix respectively) remove the Eq constraint. Here there's some justification for the superclass, as it makes f 5 = ... work for any Num type, rather than also needing an Eq constraint, but personally I would be in favour of removing this superclass too. Noteworthy is that Bits now needs an Eq superclass for the default methods for 'testBit' and 'popCount'.
I have a vague recollection that there are some generic implementations of sqrt that rely on equality tests with 0. I can't spot any in the Haskell library source, though. Otherwise +1; one significant benefit of this change would be for EDSLs which currently have to have broken Eq instances to implement Num. Cheers, Ganesh
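Ganesh's EDSL point can be made concrete. Below is a tiny, hypothetical expression type; with the Eq superclass gone, it gets a Num instance without a bogus Eq instance (structural equality is usually the wrong notion for syntax trees).

```haskell
-- A minimal expression EDSL. Under the old class head,
-- 'instance Num Expr' would also have forced an Eq Expr instance.
data Expr = Lit Integer
          | Add Expr Expr
          | Mul Expr Expr
          | Neg Expr
  deriving Show

-- Numeric literals and operators now build syntax trees.
instance Num Expr where
  fromInteger = Lit
  (+)    = Add
  (*)    = Mul
  negate = Neg
  abs    = error "Expr: abs not supported"
  signum = error "Expr: signum not supported"
```

For example, 1 + 2 * 3 :: Expr builds the tree Add (Lit 1) (Mul (Lit 2) (Lit 3)) rather than computing 7.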

On Fri, Sep 16, 2011 at 12:58 AM, Ian Lynagh
Hi all,
I would like to propose that we remove the Show and Eq superclasses from Num, i.e. change class (Eq a, Show a) => Num a where [...] to class Num a where [...]
The first 2 attached patches (for base and ghc respectively) remove the Show constraint. I'm not aware of any justification for why this superclass makes sense.
The next 2 patches (for base and unix respectively) remove the Eq constraint. Here there's some justification for the superclass, as it makes f 5 = ... work for any Num type, rather than also needing an Eq constraint, but personally I would be in favour of removing this superclass too. Noteworthy is that Bits now needs an Eq superclass for the default methods for 'testBit' and 'popCount'.
The fifth patch (for base) is what prompted me to get around to sending this proposal. It lets us de-orphan the Show Integer instance.
Any opinions?
Suggested discussion deadline: 12th October
Thanks Ian
As a thought experiment, with the ConstraintKinds extension coming up, what would it take to make this change fully backwards compatible? With ConstraintKinds, we could write:

    type Num a = (Show a, Eq a, Num a) -- ???

...okay, maybe not. We could define separate Nums in separate modules, though:

    module OldNum where

    import qualified NewNum

    type Num a = (Show a, Eq a, NewNum.Num a)

and have the Num exported from the Prelude be the one from OldNum. That way, code relying on a Num context also implying Show and Eq doesn't break. But instance declarations break instead, which is probably worse.

If we also have http://hackage.haskell.org/trac/ghc/wiki/DefaultSuperclassInstances (more specifically, the "Multi-headed instance declarations" part from the end), then writing

    instance OldNum.Num Foo where
      (+) = ...
      ... etc. ...

would distribute the method definitions for OldNum.Num to Eq, Show, and NewNum.Num. That gets us tantalizingly close. The problem is that (a) instances for Num in existing code won't include definitions for Eq and Show, and (b) separate instances for Eq and Show will be in scope. The naive implementation of the feature will just emit instances for Eq and Show with undefined method bodies, resulting forthwith in a duplicate instance conflict.

One straightforward solution would be to refrain from emitting an instance if (a) an instance for that class (for the given type) is already in scope, and (b) the instance declaration for the constraint-tuple doesn't include any definitions pertaining to that class. Perhaps accompanied by a warning. I'm not sure if that's palatable, though.
It would mainly be problematic with classes for which it's reasonable to declare instances with an empty body (maybe because it's using the new Generics feature, for example): the user might be expecting to declare a new instance and be surprised to find that an existing one is being used instead, especially if the 'existing one' is introduced later, somewhere higher up the import hierarchy. Warnings would at least let her know it's happening, though. Is there any other way to do it? In any case, the question is whether a solution allowing full backwards compatibility is possible and whether it has any likelihood of being implemented in GHC in the near future. (Half of it is already there, with ConstraintKinds.) If it's straight-up impossible or is unlikely to be implemented, I see no reason not to +1 this proposal. Otherwise, it might make sense to wait. If it's been more than a decade, after all, what's another year? -- Work is punishment for failing to procrastinate effectively.
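The constraint-synonym half of the idea already works with ConstraintKinds. A minimal sketch (NumCompat is a hypothetical name, sidestepping the cyclic `type Num a = (..., Num a)` attempt from the message above):

```haskell
{-# LANGUAGE ConstraintKinds #-}

-- A constraint synonym bundling the old superclasses: code written
-- against the old class head keeps type checking if it asks for
-- NumCompat where it previously asked for Num.
type NumCompat a = (Show a, Eq a, Num a)

foo :: NumCompat a => a -> String
foo x = if x == 0 then "zero" else show x
```

The hard part, as the message notes, is the instance-declaration side, which the synonym alone does nothing for.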

On Thu, Sep 15, 2011 at 11:58:21PM +0100, Ian Lynagh wrote:
I would like to propose that we remove the Show and Eq superclasses from Num, i.e. change class (Eq a, Show a) => Num a where [...] to class Num a where [...]
10+ people indicated support for the proposal, while I think only 1 advocated an alternative course of action instead, so I have gone ahead and made the change. Thanks to all for your input.

A few significant points made in the discussion:

John Meacham pointed out you can write code in such a way that it works both before and after this change.

Regarding the Eq constraint (for which there is perhaps some justification), Jón Fairbairn and Ganesh Sittampalam pointed out that if we remove it, people wouldn't have to create useless Eq instances for types for which it is not possible to define a proper instance. A couple of people suggested putting an Eq constraint on the Integral class instead, as the toInteger method means it is always possible to define a sensible Eq instance. I haven't made this change, and I'm not sure what the motivation for it is, but if anyone thinks it would be worth doing then I suggest making a new proposal for it.

Removing the Num superclass of Bits was also mentioned, but that would need its own proposal.

Thanks Ian
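John Meacham's compatibility point, as I read it: spell out the constraints you actually use, and the code compiles whether or not Num implies them. A hedged sketch (showSucc is an illustrative name):

```haskell
-- Works both with the old class head (the Show and Eq constraints are
-- then redundant but harmless) and with the new one (they are required).
showSucc :: (Show a, Eq a, Num a) => a -> String
showSucc x = if x == 0 then "zero" else show (x + 1)
```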
participants (21)
- Balazs Komuves
- Ben Millwood
- Brent Yorgey
- Daniel Díaz Casanueva
- Daniel Fischer
- Edward Kmett
- Ganesh Sittampalam
- Gábor Lehel
- Henning Thielemann
- Herbert Valerio Riedel
- Ian Lynagh
- Ivan Lazar Miljenovic
- Johan Tibell
- John Meacham
- Jon Fairbairn
- Malcolm Wallace
- Paterson, Ross
- Sebastian Fischer
- Sterling Clover
- Tyson Whitehead
- Yitzchak Gale