Sum and Product do not respect the Monoid laws

The Monoid instances for Sum and Product do not respect the Monoid laws. The instances are:

    instance Num a => Monoid (Sum a)
    instance Num a => Monoid (Product a)

Float and Double are instances of the Num typeclass; however, floating point addition and multiplication are not associative. Here's an example:
    (Sum 1234.567 `mappend` Sum 45.67834) `mappend` Sum 0.0004
    Sum {getSum = 1280.2457399999998}

    Sum 1234.567 `mappend` (Sum 45.67834 `mappend` Sum 0.0004)
    Sum {getSum = 1280.24574}
Shouldn't these instances be constrained to Integral types?

Jason
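(For readers following along: the broken law can be stated as a testable property. This is only an illustrative sketch, assuming the QuickCheck package is available; prop_sumAssoc is a made-up name.)

    import Data.Monoid (Sum (..))
    import Test.QuickCheck (quickCheck)

    -- Associativity of mappend, stated as a property for Sum Double.
    prop_sumAssoc :: Double -> Double -> Double -> Bool
    prop_sumAssoc x y z =
      (Sum x `mappend` Sum y) `mappend` Sum z
        == Sum x `mappend` (Sum y `mappend` Sum z)

    -- ghci> quickCheck prop_sumAssoc
    -- is expected to report a counterexample, since the law fails for Double.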

Restricting to Integral only would be overconstraining - Sum Rational
is perfectly well behaved as a monoid. I could go either way on
whether Double and Float should be excluded - I'm actually slightly
sympathetic to the notion that they shouldn't even be Num.
On Fri, Sep 19, 2014 at 6:58 AM, Jason Choy wrote:

Indeed,
Also, floating point numbers are *NOT* real numbers; they are approximate points on the real line that we pretend are exact reals, but they really are a very different geometry altogether! :)
If you want to get extra precision in a floating point computation in a way that avoids the "discrepancy" when permuting the numbers, the compensated library by Edward Kmett (http://hackage.haskell.org/package/compensated) lets you easily double or quadruple the number of bits of precision you get in sums and products, which makes a lot of these problems go away quite nicely!
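(As a sketch of the underlying idea, here is plain hand-rolled Kahan/compensated summation. This is not the compensated package's API, just the textbook technique it builds on; kahanSum is an illustrative name.)

    -- Kahan (compensated) summation: carry a running correction term so that
    -- low-order bits lost in each addition are fed back into later additions.
    kahanSum :: [Double] -> Double
    kahanSum = go 0 0
      where
        go s _ []     = s
        go s c (x:xs) =
          let y  = x - c        -- apply the correction from the previous step
              t  = s + y        -- this addition may drop low-order bits of y
              c' = (t - s) - y  -- recover (an estimate of) what was dropped
          in go t c' xs

    -- e.g. kahanSum [1234.567, 45.67834, 0.0004]; the accumulated rounding
    -- error stays within a couple of units in the last place, independent of
    -- the length of the input list.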
Floats and Doubles are not exact numbers; don't use them when you expect things to behave "exactly". NB that even if you have *exact* numbers, the exact same precision issues will still apply to pretty much any computation that's interesting (ignoring what the definition of interesting is). Try defining things like \x -> SquareRoot x or \x -> e^x on the rational numbers! Questions of precision still creep in!
So I guess, phrased another way, a lot of the confusion / challenge in writing floating point programs lies in understanding the representation, its limits, and the ways in which it will implicitly manage that precision-tracking bookkeeping for you.
Computational mathematics is a really rich and complicated topic! There are many more (valid and different) constructive models of different classes of numbers than you'd expect!
On Fri, Sep 19, 2014 at 7:31 AM, David Thomas wrote:

On 20/09/2014, at 5:26 AM, Carter Schonwald wrote:
Indeed,
Also Floating point numbers are NOT real numbers, they are approximate points on the real line that we pretend are exact reals but really are a very different geometry all together! :)
Floating point numbers are *PRECISE* numbers with approximate *OPERATIONS*. This is the way they are defined in the IEEE standard and its successors; this is the way they are defined in LIA-1 and its successors. If you do not understand that it is the OPERATIONS that are approximate, not the numbers, you have not yet begun to understand floating point arithmetic.
Floats and Doubles are not exact numbers, dont use them when you expect things to behave "exact". NB that even if you have *exact* numbers, the exact same precision issues will still apply to pretty much any computation thats interesting (ignoring what the definition of interesting is). Try defining things like \ x -> SquareRoot x or \x-> e^x on the rational numbers! Questions of precision still creep in!
You're not talking about precision here but about approximation. And you can simply work with finite representations of algebraic numbers. I have a Smalltalk class that implements QuadraticSurds so that I can represent (1 + sqrt 5)/2 *exactly*. You can even compare QuadraticSurds with the same surd exactly. (This all works so much better in Haskell, where you can make the "5" part a type-level parameter.) Since e is not a rational number, it's not terribly interesting that e^x (usually) isn't when x is rational. If we want to compute with a richer range of numbers than the rationals, we can do that. We could, for example, compute with continued fractions. Haskell makes that a small matter of programming, and I expect someone has already done it.
So I guess phrased another way, a lot of the confusion / challenge in writing floating point programs lies in understanding the representation, its limits, and the ways in which it will implicitly manage that precision tracking book keeping for you.
Exactly so. There are even people who think the representation is approximate instead of the operations! For things like doubled-double, it really matters that the numbers are precise and combine in predictable ways. Interestingly, while floating point addition and multiplication are not associative, they are not wildly or erratically non-associative, and it is possible to reason about the results of operations.
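(To make the type-level idea concrete, here is a minimal Haskell sketch of an exact quadratic-surd type with the radicand carried as a type-level parameter. QuadraticSurd, QS and goldenRatio are illustrative names, not Richard's actual Smalltalk class.)

    {-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}
    import Data.Proxy (Proxy (..))
    import GHC.TypeLits (KnownNat, Nat, natVal)

    -- Numbers of the form a + b * sqrt d, with the radicand d in the type.
    data QuadraticSurd (d :: Nat) = QS Rational Rational
      deriving (Eq, Show)   -- this Eq is exact, unlike (==) on Double

    instance KnownNat d => Num (QuadraticSurd d) where
      QS a b + QS a' b' = QS (a + a') (b + b')
      QS a b - QS a' b' = QS (a - a') (b - b')
      QS a b * QS a' b' = QS (a * a' + d * b * b') (a * b' + a' * b)
        where d = fromInteger (natVal (Proxy :: Proxy d))
      negate (QS a b)   = QS (negate a) (negate b)
      fromInteger n     = QS (fromInteger n) 0
      abs    = error "abs: not meaningful for QuadraticSurd"
      signum = error "signum: not meaningful for QuadraticSurd"

    -- (1 + sqrt 5) / 2, represented exactly.
    goldenRatio :: QuadraticSurd 5
    goldenRatio = QS (1/2) (1/2)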

Well said! And thanks for taking the time to clarify some of the imprecision in how I was trying to answer the earlier points.
On Tue, Sep 23, 2014 at 3:06 AM, Richard A. O'Keefe wrote:

Well said, and thanks for taking the time to clarify some of the imprecision in what I was saying!
On Tue, Sep 23, 2014 at 3:06 AM, Richard A. O'Keefe wrote:

Thanks for the responses.
I appreciate that floating point arithmetic is imprecise, and anyone using it should be aware of the ramifications; however, it still feels strange to me that Sum Double claims to be a monoid. Is this a case where convenience is favoured over correctness? Are there any other monoids with an "almost associative" binary operator?
I think I would feel a little less uneasy if the types were called ImpreciseSum and ImpreciseProduct when the operation is not associative.
On 23 September 2014 17:30, Carter Schonwald wrote:

No. It is associative, just not in the way you expect. 😄
On Thursday, September 25, 2014, Jason Choy wrote:

On 25 September 2014 18:54, Carter Schonwald wrote:
No. It is associative, just not in the way you expect. 😄
From What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg: "Due to roundoff errors, the associative laws of algebra do not necessarily hold for floating-point numbers. For example, the expression (x+y)+z has a totally different answer than x+(y+z) when x = 10^30, y = -10^30 and z = 1 (it is 1 in the former case, 0 in the latter)."
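(For concreteness, here are Goldberg's numbers checked at Double precision in Haskell; assocGap is just an illustrative name.)

    -- (x + y) + z versus x + (y + z) for x = 1e30, y = -1e30, z = 1.
    assocGap :: (Double, Double)
    assocGap = ((x + y) + z, x + (y + z))
      where
        x = 1e30
        y = -1e30
        z = 1
    -- assocGap evaluates to (1.0, 0.0): adding 1 to -1e30 is absorbed
    -- entirely by rounding, while x + y cancels exactly to 0.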

On 26/09/2014, at 10:33 PM, Jason Choy wrote:
Knuth Volume 2, second edition:

    (u [*] v) [*] w = uvw(1+d1)(1+d2)
    u [*] (v [*] w) = uvw(1+d3)(1+d4)

where each |di| < 1/2 b**(1-p), so

    (u [*] v) [*] w
    --------------- = 1 + d
    u [*] (v [*] w)

where |d| < 2 b**(1-p)/(1 - 1/2 b**(1-p))**2 = 2 ulp/(1 - 1/2 ulp)**2

subject to certain caveats. It's not unfair to say that floating point multiplication is (nearly) associative "within a few ulp". Goldberg's example involves addition, not multiplication. (Checking addition of numbers *with the same sign* is left as an exercise for the reader.)

subject to certain caveats. It's not unfair to say that floating point multiplication is (nearly) associative "within a few ulp".
I'm not disputing this. However, you can't deny that this monoid law is broken for the floating point operations:

    mappend x (mappend y z) = mappend (mappend x y) z

Perhaps I'm being pedantic, but this law should hold for all x, y, z, and it clearly doesn't.

For equational laws to be sensible requires a sensible notion of equality; the Eq instance for floating point numbers is meant for handling corner cases (e.g. "am I about to divide by zero?"), not semantic/denotational equivalence.
Exact equality is fundamentally incorrect for finite precision mathematical
computation.
You typically want to have something like

    nearlyEq tolerance a b = distance a b <= tolerance
Floating point is geometry, not exact things. https://hackage.haskell.org/package/ieee754-0.7.3/docs/Data-AEq.html is a package that provides an approximate-equality notion.
Basically, floating points work the way they do because it's a compromise that works decently for those who really need it. If you don't need to use floating point, don't! :)
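(A self-contained sketch of what this looks like for the Sum example from the start of the thread; nearlyEq and the 1e-9 tolerance are illustrative choices, not the ieee754 package's API.)

    import Data.Monoid (Sum (..))

    nearlyEq :: Double -> Double -> Double -> Bool
    nearlyEq tol a b = abs (a - b) <= tol

    lhs, rhs :: Sum Double
    lhs = (Sum 1234.567 `mappend` Sum 45.67834) `mappend` Sum 0.0004
    rhs = Sum 1234.567 `mappend` (Sum 45.67834 `mappend` Sum 0.0004)

    -- Exact equality sees the broken law ...
    exactLaw :: Bool
    exactLaw = lhs == rhs                               -- False

    -- ... while a tolerance-based comparison does not.
    fuzzyLaw :: Bool
    fuzzyLaw = nearlyEq 1e-9 (getSum lhs) (getSum rhs)  -- True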
On Fri, Sep 26, 2014 at 9:28 AM, Jason Choy wrote:

I guess what I'm saying is that while you have a valid perspective, such design changes don't really improve things for people who don't need floating point tools, and immediately make things a lot more onerous for those who DO need them. I think a more interesting matter is the lack of good userland visibility into choices of rounding modes, and having nice tools for estimating forwards/backwards error of computations. Many computations (even with "exact" types) still have similar issues. But that's a fun topic about relative vs absolute error bounds etc. that can wait for another time! :)

On Fri, Sep 26, 2014 at 3:41 PM, Carter Schonwald <carter.schonwald@gmail.com> wrote:

for equational laws to be sensible requires a sensible notion of equality, the Eq for Floating point numbers is
Could it not be, then, that for floating points to be a monoid you must specify a satisfying notion of equality? (Well, I guess nothing is stopping anyone from doing that themselves; and your point is that simply not having floats as a monoid is somehow "bad"?)

On Sat, Sep 27, 2014 at 5:41 AM, Carter Schonwald <carter.schonwald@gmail.com> wrote:
--
Noon Silk, ن
https://sites.google.com/site/noonsilk/
"Every morning when I wake up, I experience an exquisite joy — the joy of being this signature."

For anything to be a Monoid (or any type class with laws), you implicitly have a definition of equivalence you want your laws to use. And for many classes, those laws are using an equivalence not definable using Eq. A good example is Monad! You cannot define Eq instances for arbitrary a -> m b. We can still define and talk about lawful monads.
Point of order, though: no Num a instance has a Monoid a instance. Instead, Sum a and Product a are the Monoid instances.
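(For reference, the instances under discussion look roughly like this in Data.Monoid at the time of this thread, before the Semigroup/Monoid split; paraphrased, not copied verbatim from base.)

    newtype Sum a     = Sum     { getSum     :: a } deriving (Eq, Ord, Show)
    newtype Product a = Product { getProduct :: a } deriving (Eq, Ord, Show)

    instance Num a => Monoid (Sum a) where
      mempty                = Sum 0
      Sum x `mappend` Sum y = Sum (x + y)

    instance Num a => Monoid (Product a) where
      mempty                        = Product 1
      Product x `mappend` Product y = Product (x * y)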
On Sep 26, 2014 9:46 PM, "Noon Silk" wrote:

On Fri, Sep 26, 2014 at 11:38:03PM -0400, Carter Schonwald wrote:
Definable in Haskell or not, this supposed fuzzy equality for Double won't be transitive, will it? It sounds like it will just raise another problem.

Tom

It's approximately transitive. Distances always obey the triangle inequality. Good enough for geometry. Also, the emphasis is on the geometry / distance.

On 14-09-27 10:02 AM, Carter Schonwald wrote:
You've got me curious. How do we define "approximately transitive"? (I am not one of those who want to rid floating point of Eq and Ord, or get rid of floating point altogether.)

To replicate the explanation I gave on IRC (using subscripting in pseudo-Haskell):

For any type where we can define some sort of distance (induced via a norm, via dist a b = norm (a - b), ignoring overflow issues), let's define a quantitative version of equality:

    a ==_{r} b = dist a b <= r

Then we use the triangle inequality (dist a c <= dist a b + dist b c) to get the following quantitative analogue of transitivity:

    a ==_{r1} b && b ==_{r2} c  IMPLIES  a ==_{r1 + r2} c

This is a bit more general (and weaker) than the notion of equality that we're accustomed to, but still a pretty natural idea. You can consider more general things than using the + function too, like min/max/sum of squares etc. But I leave that as a fun exercise for the reader.

This gets into reasoning about things using tools from analysis rather than algebra, and that sort of modeling is pretty powerful. http://en.wikipedia.org/wiki/Normed_vector_space and the associated page on Hilbert spaces can be a useful starting point, I guess. My point is that analysis is a very powerful, far-reaching mathematical tool, and only considering models that elide that is ... :)
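(The same relation written out for Double; eqWithin is an illustrative name.)

    -- Carter's a ==_{r} b, specialised to Double.
    eqWithin :: Double -> Double -> Double -> Bool
    eqWithin r a b = abs (a - b) <= r

    -- Quantitative transitivity via the triangle inequality
    --   |a - c| <= |a - b| + |b - c|:
    -- if   eqWithin r1 a b  and  eqWithin r2 b c
    -- then eqWithin (r1 + r2) a c
    -- (up to the rounding of the differences and of r1 + r2 themselves).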
On Sat, Sep 27, 2014 at 1:12 PM, Albert Y. C. Lai wrote:
participants (7)

- Albert Y. C. Lai
- Carter Schonwald
- David Thomas
- Jason Choy
- Noon Silk
- Richard A. O'Keefe
- Tom Ellis