I guess what I'm saying is that while you have a valid perspective, such design changes don't really improve things for people who don't need floating point tools, and they immediately make things a lot more onerous for those who DO need them.

I think a more interesting matter is the lack of good userland visibility into the choice of rounding mode, and the lack of nice tools for estimating the forward/backward error of computations.

Many computations, even with "exact" types, still have similar issues. But that's a fun topic about relative vs absolute error bounds etc. that can wait for another time! :)

On Fri, Sep 26, 2014 at 3:41 PM, Carter Schonwald <carter.schonwald@gmail.com> wrote:
For equational laws to be sensible requires a sensible notion of equality. The Eq instance for floating point numbers is
meant for handling corner cases (e.g.: am I about to divide by zero?), not "semantic/denotational equivalence".
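A minimal sketch of the kind of corner-case check meant here (the name `safeDivide` is mine, not from any library):

```haskell
-- Guard against division by zero *before* it happens. This is the
-- "corner case" use of (==) on floats: a defensive check, not a
-- test of semantic equality between computed results.
safeDivide :: Double -> Double -> Maybe Double
safeDivide _ 0 = Nothing
safeDivide n d = Just (n / d)
```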

Exact equality is fundamentally incorrect for finite-precision mathematical computation.
You typically want something like

nearlyEq tolerance a b = distance a b <= tolerance

Floating point is geometry, not exact arithmetic.
https://hackage.haskell.org/package/ieee754-0.7.3/docs/Data-AEq.html
is a package that provides an approximate-equality notion.
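As a self-contained sketch of the idea (I'm assuming plain absolute difference for the `distance` above; real code may prefer a relative or ulp-based metric, as Data.AEq does):

```haskell
-- Approximate equality with an explicit tolerance.
-- "distance" here is absolute difference, the simplest choice.
nearlyEq :: Double -> Double -> Double -> Bool
nearlyEq tolerance a b = abs (a - b) <= tolerance

main :: IO ()
main = do
  print (0.1 + 0.2 == (0.3 :: Double))   -- False: exact Eq is too strict
  print (nearlyEq 1e-9 (0.1 + 0.2) 0.3)  -- True: within tolerance
```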

Basically, floating point works the way it does because it's a compromise that works decently for those who really need it.
If you don't need to use floating point, don't! :)



On Fri, Sep 26, 2014 at 9:28 AM, Jason Choy <jjwchoy@gmail.com> wrote:
subject to certain caveats.  It's not unfair to say that
floating point multiplication is (nearly) associative
"within a few ulp".

I'm not disputing this.

However, you can't deny that this monoid law is broken for floating point operations:

mappend x (mappend y z) = mappend (mappend x y) z

Perhaps I'm being pedantic, but this law should hold for all x, y, z, and it clearly doesn't.
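For concreteness (my illustration, not part of the original mail): taking `mappend` to be the `Sum` monoid's operation over `Double`, the classic counterexample is 0.1, 0.2, 0.3:

```haskell
import Data.Monoid (Sum(..), getSum)

-- The two associations give different Double results:
left, right :: Double
left  = getSum (mappend (Sum 0.1) (mappend (Sum 0.2) (Sum 0.3)))  -- 0.6
right = getSum (mappend (mappend (Sum 0.1) (Sum 0.2)) (Sum 0.3))  -- 0.6000000000000001

main :: IO ()
main = print (left == right)  -- False: associativity fails exactly
```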