Proposal: Add log1p and expm1 to GHC.Float.Floating

log1p and expm1 are C standard library functions that are important for work with exponentials and logarithms. I propose adding them to the Floating class where it is defined in GHC.Float. We do not have to export these from Prelude. My knee-jerk reaction is that we probably shouldn't. The names are kind of awful, but are standard across the rest of the industry. We already have a precedent for not exporting clutter from a class: Data.Functor contains (<$), but it does not currently come into scope from the Prelude.

They are critical for any reasonably precise work with logarithms of values near 1 and with exponentials of small values of *x*, and it is somewhat embarrassing explaining to someone coming from the outside with a numerical background, who expects them to exist, why we don't have them. These arise all over the place in any work on probabilities in log-scale.

Backgrounder: Consider

>>> exp 0.0000003
1.000000300000045

As the argument x gets small, this gets very close to 1 + x. However, that leading 1 means you've consumed most of the precision of the floating point number you are using. Six decimal places is ~18 bits of your significand that are just gone because of bad math. If we subtract out the leading term after it has destroyed all of our precision, it is too late:

>>> exp 0.0000003 - 1
3.0000004502817035e-7

has lost a lot of precision relative to:

>>> expm1 0.0000003
3.000000450000045e-7

Now every decimal place we get closer to 0 doesn't destroy a decimal place of precision! Similar issues arise with logs of probabilities near 1. If you are forced to use log, then as your probability gets closer to 1 from below, you throw away most of your accuracy just by encoding the argument to the function, with the same kind of error rate.

Here is straw-man documentation ripped from my log-domain package that is probably way too technical, but serves as a starting point for discussion:

class ... => Floating a where
  -- | The Taylor series for @'exp' x@ is given by
  --
  -- @
  -- 'exp' x = 1 + x + x^2/2! + x^3/3! ...
  -- @
  --
  -- When @x@ is small, the leading @1@ consumes virtually all of the available precision,
  -- because subsequent terms are very small.
  --
  -- This computes:
  --
  -- @
  -- exp x - 1 = x + x^2/2! + ..
  -- @
  --
  -- For many types this can afford you a great deal of additional precision if you move
  -- things around algebraically to provide the 1 by other means.
  expm1 :: Floating a => a -> a
  expm1 x = exp x - 1

  -- | Computes @log (1 + x)@
  --
  -- This is away from 0 so the Taylor series is defined, but it also provides an inverse to 'expm1'.
  --
  -- This can provide much more accurate answers for logarithms of numbers close to 1 (@x@ near 0).
  --
  -- @
  -- log1p (expm1 x) = log (1 + exp x - 1) = log (exp x)
  -- @
  log1p :: Floating a => a -> a
  log1p x = log (1 + x)

They can be given definitions in terms of the standard C library functions for CFloat, CDouble, Float and Double, either by foreign import or by adding a pair of foreign prims.

Finally, here is a robust implementation for Data.Complex from the same package that properly deals with the subtleties involved in not losing precision:

expm1 x@(a :+ b)
  | a*a + b*b < 1, u <- expm1 a, v <- sin (b/2), w <- -2*v*v = (u*w+u+w) :+ (u+1)*sin b
  | otherwise = exp x - 1
{-# INLINE expm1 #-}

log1p x@(a :+ b)
  | abs a < 0.5 && abs b < 0.5, u <- 2*a+a*a+b*b = log1p (u/(1+sqrt (u+1))) :+ atan2 (1 + a) b
  | otherwise = log (1 + x)
{-# INLINE log1p #-}

Discussion Period: 2 weeks

-Edward Kmett
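(To make the backgrounder concrete, here is a small, self-contained sketch. The names 'naive' and 'series' are purely illustrative, and the three-term Taylor series stands in for a real expm1; it is only adequate for very small x.)

    naive, series :: Double -> Double
    naive  x = exp x - 1            -- cancels against the leading 1
    series x = x + x*x/2 + x*x*x/6  -- exp x - 1 = x + x^2/2! + x^3/3! + ...

    main :: IO ()
    main = do
      let x = 3.0e-7 :: Double
      print (naive x)   -- ~3.0000004502817035e-7; the trailing digits are noise
      print (series x)  -- ~3.000000450000045e-7, agreeing with expm1 above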

On 14-04-17 01:15 PM, Edward Kmett wrote:
log1p and expm1 are C standard library functions that are important for work with exponentials and logarithms.
I propose adding them to the Floating class where it is defined in GHC.Float.
We do not have to export these from Prelude. My knee-jerk reaction is that we probably shouldn't. The names are kind of awful, but are standard across the rest of the industry. We already have a precedent of not exporting clutter in the classes in the existing Data.Functor containing (<$), but it does not currently come into scope from the Prelude.
+1 to this proposal as a whole.

Jacques

+1 :)
On Thu, Apr 17, 2014 at 1:29 PM, Jacques Carette
On 14-04-17 01:15 PM, Edward Kmett wrote:
log1p and expm1 are C standard library functions that are important for work with exponentials and logarithms.
I propose adding them to the Floating class where it is defined in GHC.Float.
We do not have to export these from Prelude. My knee-jerk reaction is that we probably shouldn't. The names are kind of awful, but are standard across the rest of the industry. We already have a precedent of not exporting clutter in the classes in the existing Data.Functor containing (<$), but it not currently coming into scope from the Prelude.
+1 to this proposal as a whole.
Jacques

Am 17.04.2014 19:15, schrieb Edward Kmett:
log1p and expm1 are C standard library functions that are important for work with exponentials and logarithms.
I propose adding them to the Floating class where it is defined in GHC.Float.
No question, these functions are useful. But I think there should be two proposals:

1) Add log1pFloat, log1pDouble, expm1Float, expm1Double to GHC.Float.
2) Extend the Floating class with log1p and expm1 methods.

I think the first item is unproblematic, since it is a simple addition. Since FPUs sometimes directly implement log1p and expm1, I wonder whether GHC should also support the corresponding machine instructions. E.g. x86 has F2XM1 and FYL2XP1, and the good old MC68882 had FETOXM1 and FLOGNP1.

The second item means altering the Floating class, which affects all custom Floating instances. I think one should add default implementations. They don't have a numerical advantage, but they save programmers from code breakage.
We do not have to export these from Prelude. My knee-jerk reaction is that we probably shouldn't.
Not exporting them from Prelude still means exporting them from GHC.Float, right? I mean, users must be able to implement these methods for custom types, like the extended-precision floating point numbers provided by libqd. But there should also be a non-GHC module that exports the full Floating class.
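(A minimal sketch of item (1) along the foreign-import route Edward mentioned: expm1/log1p and their float variants expm1f/log1pf are standard C99 math.h functions. The module name and the choice of unsafe calls are illustrative only; the Haskell names follow the ones proposed above.)

    {-# LANGUAGE ForeignFunctionInterface #-}
    module GHCFloatSketch where

    -- Bindings to the C99 <math.h> primitives for the two precisions.
    foreign import ccall unsafe "math.h expm1"  expm1Double :: Double -> Double
    foreign import ccall unsafe "math.h log1p"  log1pDouble :: Double -> Double
    foreign import ccall unsafe "math.h expm1f" expm1Float  :: Float  -> Float
    foreign import ccall unsafe "math.h log1pf" log1pFloat  :: Float  -> Float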

On Thu, Apr 17, 2014 at 2:48 PM, Henning Thielemann < schlepptop@henning-thielemann.de> wrote:
Am 17.04.2014 19:15, schrieb Edward Kmett:
log1p and expm1 are C standard library functions that are important for
work with exponentials and logarithms.
I propose adding them to the Floating class where it is defined in GHC.Float.
No question, these functions are useful. But I think there should be two proposals:
1) Add log1pFloat, log1pDouble, expm1Float, expm1Double to GHC.Float 2) Extend Floating class with log1p and expm1 methods.
I think the first item is unproblematic since it is a simple addition. Since FPUs sometimes directly implement log1p and expm1 functions, I wonder whether GHC also should support the according machine instructions. E.g. x86 has F2XM1 and FYL2XP1 and good old MC68882 had FETOXM1 and FLOGNP1.
The second item means to alter the Floating class which affects all custom Floating instances. I think one should add default implementations. They don't have an numerical advantage but they save programmers from code breakage.
I included the default definitions in the code snippet in the proposal, so user code that remains unaware of them would be unaffected, while packages like compensated, or a wrapper around libqd, could implement them as needed.

expm1 :: Floating a => a -> a
expm1 x = exp x - 1

log1p :: Floating a => a -> a
log1p x = log (1 + x)

My proposal is *very much* about adding them to the class for Floating. Since they have legal default definitions, it really costs nothing to just do the right thing here.

Adding the primops is trivially done by end-users in a library and has been my solution up until now, but if that were all that was done, you would have to overload between them by making some needless extra class, and 'whether you are using decent numerics' leaks into your type. Not doing 2) would force long-term needless hairsplitting and code duplication for no reason. Consequently, I deliberately did not split up the proposal in this manner.

We do not have to export these from Prelude. My knee-jerk reaction is that we probably shouldn't.
Not exporting them from Prelude still means to export them from GHC.Float, right? I mean, users must be able to implement these methods for custom types like extended precision floating point numbers as provided by libqd. But there should also be a non-GHC module that exports the full Floating class.
Yes, I am proposing including them in the export of the class from GHC.Float. I'm also fully on board with exporting the full Floating class from some non-GHC module in base, say, something like Numeric.Floating.

-Edward

On 2014-04-17 15:08, Edward Kmett wrote:
On Thu, Apr 17, 2014 at 2:48 PM, Henning Thielemann
<schlepptop@henning-thielemann.de> wrote: I think one should add default implementations. They don't have an numerical advantage but they save programmers from code breakage.
I included the default definitions in code snippet in the proposal, so user code that remains unaware of them would be unaffected, while packages like compensated, or a wrapper around libqd could implement them as needed.
expm1 :: Floating a => a -> a
expm1 x = exp x - 1

log1p :: Floating a => a -> a
log1p x = log (1 + x)

On the contrary, code that explicitly uses these functions is likely to need the precision. Defaults would cause subtle breakage.
-- Scott

With the defaults, the code is never worse than it is forced to be right now, and users do not need to create CPP-blocked code to work around this addition.

Without the defaults this becomes a much bigger request, as I'd be asking _every_ author of a Floating instance to add CPP to their packages for a feature they have never heard of and probably will never use, and in that situation we'd have to export it from Prelude.

Given the trade-off between those factors, I tend to favor adding the defaults over not.

-Edward

On Sat, Apr 19, 2014 at 5:42 AM, Scott Turner <2haskell@pkturner.org> wrote:
On 2014-04-17 15:08, Edward Kmett wrote:
On Thu, Apr 17, 2014 at 2:48 PM, Henning Thielemann < schlepptop@henning-thielemann.de> wrote:
I think one should add default implementations. They don't have an numerical advantage but they save programmers from code breakage.
I included the default definitions in code snippet in the proposal, so user code that remains unaware of them would be unaffected, while packages like compensated, or a wrapper around libqd could implement them as needed.
expm1 :: Floating a => a -> a
expm1 x = exp x - 1

log1p :: Floating a => a -> a
log1p x = log (1 + x)
On the contrary, code that explicitly uses these functions is likely to need the precision. Defaults would cause subtle breakage.
-- Scott

On Sat, Apr 19, 2014 at 5:49 AM, Edward Kmett
With the defaults the code is never worse than it is forced to be right now and users do not need to create CPP blocked code to work around this addition.
I usually like defaults, and avoiding CPP is good; however, with the defaults users will expect better code than they get. We aren't doing anyone any favors by introducing the possibility of silent floating-point precision loss from 'expm1'. An "error" default would be better.

Besides, the code would be worse than it's forced to be now. At least now users who care about this run headlong into the issue. If we provide expm1 and log1p, users who use those functions should get the advertised behavior, not loss of precision (I realize not all types would lose precision, but some will).
Without the defaults this becomes a much bigger request, as I'd be asking _every_ author of Floating to add CPP to their packages for a feature they never heard of and probably will never use, and in that situation we'd have to export it from Prelude.
It's perfectly fine to leave some methods blank; IMHO the resulting run-time error is better than an incorrect default. Plus, it's useful for library authors to know that the class has changed; if a default is provided, everything will build properly and there is no compile-time indication that library authors should adjust their code.

Originally I was +1 for everything except the defaults, but I'm reconsidering. If this is something that most Floating instance authors don't know about and probably won't ever use, do these functions really belong in that class? Why not make a separate class for fused algorithms?

John L.
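(For reference, the separate class being floated here might look something like the sketch below; the class name and its Floating superclass are invented purely for illustration, not proposed by anyone in the thread.)

    -- Hypothetical alternative: leave Floating untouched and segregate the
    -- precision-sensitive operations into their own class.
    class Floating a => FloatingFused a where
      expm1 :: a -> a
      log1p :: a -> a

This is essentially the hair-splitting split that Edward argues against below.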
-Edward
On Sat, Apr 19, 2014 at 5:42 AM, Scott Turner <2haskell@pkturner.org>wrote:
On 2014-04-17 15:08, Edward Kmett wrote:
On Thu, Apr 17, 2014 at 2:48 PM, Henning Thielemann < schlepptop@henning-thielemann.de> wrote:
I think one should add default implementations. They don't have an numerical advantage but they save programmers from code breakage.
I included the default definitions in code snippet in the proposal, so user code that remains unaware of them would be unaffected, while packages like compensated, or a wrapper around libqd could implement them as needed.
expm1 :: Floating a => a -> a
expm1 x = exp x - 1

log1p :: Floating a => a -> a
log1p x = log (1 + x)
On the contrary, code that explicitly uses these functions is likely to need the precision. Defaults would cause subtle breakage.
-- Scott

The proposal is worded the way it is to get a strict monotonic improvement over the status quo.

With them in the class, it becomes possible to get the instances fixed. With them outside of the class, in some needless extra hair-splitting class added on later like we have to use today, someone who would otherwise just use them is needlessly hoist on the dilemma of either using a more restrictive class and just accepting the fact that they can't work with third-party numeric types for the most part at all, or reverting to the poor version of the numerics to widen their audience.

This leads to the equivalent of needless divisions between 'traverse' vs. 'mapM' forever.

With defaults you are never worse off than you are today, but without defaults you *always* have to worry about whether you should use them.

Let's look at it another way. By putting in defaults, the costs of the proposal are borne by the people who want to use the new feature.

Moreover, if we should decide to adopt wren's half-suggestion of continuing to expand support for other numerical primitives that have broad support, we could do so without a great deal of fanfare, and the handful of people who actually do numeric computation can talk to the handful of people who write numeric instances that high up the food chain to get the important ones fixed in packages like vector-space, linear, diagrams, etc.

Without defaults, everyone who ever wrote a Floating instance by hand would need to know about log1p or wren's log1mexp, and they would be forced into using CPP in their code to work around a feature they don't care about; and if they couldn't be bothered, then the user who wanted a bit of extra precision now just starts crashing. The risk-averse would simply take the path with worse precision or get shoved back into the world of code duplication and 'mapM' vs. 'traverse'.

I know for me personally it would force me to double the amount of numeric code I write, just to maximize my audience. I really don't want to go there. I just want to be able to call the function I mean, and to be able to talk to the right people to make it do the right thing.
-Edward
On Sun, Apr 20, 2014 at 7:32 PM, John Lato
On Sat, Apr 19, 2014 at 5:49 AM, Edward Kmett
wrote: With the defaults the code is never worse than it is forced to be right now and users do not need to create CPP blocked code to work around this addition.
I usually like defaults, and avoiding CPP is good, however with the defaults users will expect better code than they get. We aren't doing anyone any favors by introducing the possibility of silent floating-point precision loss from 'exp1m'. An "error" default would be better.
Besides, the code would be worse than it's forced to be now. At least now users who care about this run headlong into the issue. If we provide exp1m and log1p, users who use those functions should get the advertised behavior, not loss of precision (I realize not all types would lose precision, but some will).
Without the defaults this becomes a much bigger request, as I'd be asking _every_ author of Floating to add CPP to their packages for a feature they never heard of and probably will never use, and in that situation we'd have to export it from Prelude.
It's perfectly fine to leave some methods blank; IMHO the resulting run-time error is better than an incorrect default. Plus, it's useful for library authors to know that the class has changed; if a default is provided everything will build properly and there is no compile-time indication that library authors should adjust their code.
Originally I was +1 for everything except the defaults, but I'm reconsidering. If this is something that most Floating instance authors don't know about and probably won't ever use, do these functions really belong in that class? Why not make a separate class for fused algorithms?
John L.
-Edward
On Sat, Apr 19, 2014 at 5:42 AM, Scott Turner <2haskell@pkturner.org>wrote:
On 2014-04-17 15:08, Edward Kmett wrote:
On Thu, Apr 17, 2014 at 2:48 PM, Henning Thielemann < schlepptop@henning-thielemann.de> wrote:
I think one should add default implementations. They don't have an numerical advantage but they save programmers from code breakage.
I included the default definitions in code snippet in the proposal, so user code that remains unaware of them would be unaffected, while packages like compensated, or a wrapper around libqd could implement them as needed.
expm1 :: Floating a => a -> a
expm1 x = exp x - 1

log1p :: Floating a => a -> a
log1p x = log (1 + x)
On the contrary, code that explicitly uses these functions is likely to need the precision. Defaults would cause subtle breakage.
-- Scott

On Sun, Apr 20, 2014 at 5:20 PM, Edward Kmett
The proposal is worded the way it is to get a strict monotonic improvement over the status quo.
With them in the class it becomes possible to get the instances fixed. With them outside of the class in some needless extra hair-splitting class added on later like we have to use today, then someone who would otherwise just use them is needlessly hoist on the dilemma of using a more restrictive class and just accepting the fact that they can't work with third party numeric types for the most part at all, or reverting to the poor version of the numerics to widen their audience.
This leads to the equivalent of needless divisions between 'traverse' vs. 'mapM' forever.
I don't think a separate class is ideal, I just think it's better than your original proposal.
With defaults you are never worse off than you are today, but defaults you *always* have to worry about whether you should use them.
This isn't correct. Today, I don't have expm1. I have no expectation that any type will use an appropriate fused algorithm, nor that I won't lose precision. If expm1 is defined with defaults as proposed, and I use expm1 for a type that doesn't define it, I may lose precision, leading to compounding errors in my code, *even though I used the right function*.
Let's look at it another way.
By putting in defaults the costs of the proposal are borne by the people who want to use the new feature.
Yes. When users use the new feature with types that don't implement it and get an incorrect answer, there will certainly be high costs involved.

Let's look at it another way. Do you want to track down bugs due to expm1 not implementing the appropriate fused algorithm? Or, alternatively, do you want to implement a default function that's not even guaranteed to work as documented? With a silent failure mode? So library authors don't have to fix up their libraries? That seems very wrong to me.
Moreover, if we should decide to adopt wren's half-suggestion of continuing to expand support for other numerical primitives that have broad support we could do so without great deal of fanfare, and the handful of people who actually do numeric computation can talk to the handful of people who write numeric instances that high up the foodchain to get the important ones fixed in packages like vector-space, linear, diagrams, etc.
Without defaults everyone who ever wrote a Floating instance by hand would need to know about log1p or wren's log1mexp and they would be forced into using CPP in their code to work around a feature they don't care about and if they couldn't be bothered then the user who wanted a bit of extra precision now just starts crashing. The risk averse would simply take the path with worse precision or get shoved back into the world of code duplication and 'mapM' vs 'traverse'.
Are you arguing for a separate class? Because that's what it sounds like. Besides, if you aren't familiar with precision issues, you have no business writing a Floating instance by hand that does anything more than lift over more fundamental types.

I think it's better that expm1 crash than that it not give extra precision, since the extra precision is the whole point of the function. When I call a function, I want to get the function I mean.

I'm not actually arguing for a separate class. I think these belong in Floating as well. I'm just arguing against a default that doesn't work as specified.
I know for me personally it would force me to double the amount of numeric code I write, just to maximize my audience. I really don't want to go there. I just want to be able to call the function I mean, and to be able to talk to the right people to make it do the right thing.
expm1 = error "Go bug some library author to implement expm1"

would accomplish that even more efficiently, since it will directly point users to the right people.
-Edward
On Sun, Apr 20, 2014 at 7:32 PM, John Lato
wrote: On Sat, Apr 19, 2014 at 5:49 AM, Edward Kmett
wrote: With the defaults the code is never worse than it is forced to be right now and users do not need to create CPP blocked code to work around this addition.
I usually like defaults, and avoiding CPP is good, however with the defaults users will expect better code than they get. We aren't doing anyone any favors by introducing the possibility of silent floating-point precision loss from 'exp1m'. An "error" default would be better.
Besides, the code would be worse than it's forced to be now. At least now users who care about this run headlong into the issue. If we provide exp1m and log1p, users who use those functions should get the advertised behavior, not loss of precision (I realize not all types would lose precision, but some will).
Without the defaults this becomes a much bigger request, as I'd be asking _every_ author of Floating to add CPP to their packages for a feature they never heard of and probably will never use, and in that situation we'd have to export it from Prelude.
It's perfectly fine to leave some methods blank; IMHO the resulting run-time error is better than an incorrect default. Plus, it's useful for library authors to know that the class has changed; if a default is provided everything will build properly and there is no compile-time indication that library authors should adjust their code.
Originally I was +1 for everything except the defaults, but I'm reconsidering. If this is something that most Floating instance authors don't know about and probably won't ever use, do these functions really belong in that class? Why not make a separate class for fused algorithms?
John L.
-Edward
On Sat, Apr 19, 2014 at 5:42 AM, Scott Turner <2haskell@pkturner.org>wrote:
On 2014-04-17 15:08, Edward Kmett wrote:
On Thu, Apr 17, 2014 at 2:48 PM, Henning Thielemann < schlepptop@henning-thielemann.de> wrote:
I think one should add default implementations. They don't have an numerical advantage but they save programmers from code breakage.
I included the default definitions in code snippet in the proposal, so user code that remains unaware of them would be unaffected, while packages like compensated, or a wrapper around libqd could implement them as needed.
expm1 :: Floating a => a -> a
expm1 x = exp x - 1

log1p :: Floating a => a -> a
log1p x = log (1 + x)
On the contrary, code that explicitly uses these functions is likely to need the precision. Defaults would cause subtle breakage.
-- Scott

On Sun, Apr 20, 2014 at 8:49 PM, John Lato
On Sun, Apr 20, 2014 at 5:20 PM, Edward Kmett
wrote: I know for me personally it would force me to double the amount of numeric code I write, just to maximize my audience. I really don't want to go there. I just want to be able to call the function I mean, and to be able to talk to the right people to make it do the right thing.
exp1m = error "Go bug some library author to implement exp1m"
would accomplish that even more efficiently, since it will directly point users to the right people.
FWIW, because log1p/expm1 etc. are not part of the current API, this too is a strictly monotonic improvement over the current state of things: users who don't know/care about the new functions don't need to change anything, and users who do care are beholden to see that they are implemented correctly. No need for CPP. And no misleading implementations of the fused functions.

I'm +1 for these defaults.

-- Live well, ~wren

The problem there is that now you can't just replace log (1 + x) with log1p in your code; you have to reason through, case by case, an open universe of instances you don't own to know whether it is a safe transformation. I use type classes to quotient that sort of case-by-case reasoning out of my thinking.
On Apr 20, 2014, at 9:25 PM, wren romano
wrote: On Sun, Apr 20, 2014 at 8:49 PM, John Lato
wrote: On Sun, Apr 20, 2014 at 5:20 PM, Edward Kmett
wrote: I know for me personally it would force me to double the amount of numeric code I write, just to maximize my audience. I really don't want to go there. I just want to be able to call the function I mean, and to be able to talk to the right people to make it do the right thing. exp1m = error "Go bug some library author to implement exp1m"
would accomplish that even more efficiently, since it will directly point users to the right people.
FWIW, because log1p/expm1 etc are not part of the current API, this too is a strictly monotonic improvement over the current state of things: users who don't know/care about the new functions don't need to change anything, and users who do care are beholden to see that they are implemented correctly. No need for CPP. And no misleading implementations of the fused functions.
I'm +1 for these defaults.
-- Live well, ~wren

If you really want to push for uniform availability of the new precision, with everyone paying for supplying it, then we really should bite the bullet, export the full class with them from Prelude, let it warn library authors about missing implementations, and just be done with it. I personally do not think the extra precision is worth that pain, and my guiding rule for 7.10 from a change perspective had been that the changes we are pushing out for AMP etc. should be able to be worked around sans CPP, but I could be won over to that viewpoint.

Regardless, upon reviewing all the numeric code I have that uses them, your original comment about log1pexp and log1mexp actually exposed some opportunities for better precision in my code. I'd like to expand the proposal to incorporate them. This would be absolutely painless under my variant, but if we're going down the other road, we should at least incur all the pain at once. They are at least all unary combinators, like most everything in the class, so whatever lifting the library author needs will likely be similar.
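(For concreteness, a minimal sketch of naive default definitions for the two additional combinators, in the same style as the expm1/log1p defaults above. The names follow wren's suggestion, and the bodies are just the obvious compositions assuming log1p from the proposal is in scope; they are not precision-tuned implementations.)

    log1pexp :: Floating a => a -> a
    log1pexp x = log1p (exp x)            -- log (1 + exp x)

    log1mexp :: Floating a => a -> a
    log1mexp x = log1p (negate (exp x))   -- log (1 - exp x)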
On Apr 20, 2014, at 9:25 PM, wren romano
wrote: On Sun, Apr 20, 2014 at 8:49 PM, John Lato
wrote: On Sun, Apr 20, 2014 at 5:20 PM, Edward Kmett
wrote: I know for me personally it would force me to double the amount of numeric code I write, just to maximize my audience. I really don't want to go there. I just want to be able to call the function I mean, and to be able to talk to the right people to make it do the right thing. exp1m = error "Go bug some library author to implement exp1m"
would accomplish that even more efficiently, since it will directly point users to the right people.
FWIW, because log1p/expm1 etc are not part of the current API, this too is a strictly monotonic improvement over the current state of things: users who don't know/care about the new functions don't need to change anything, and users who do care are beholden to see that they are implemented correctly. No need for CPP. And no misleading implementations of the fused functions.
I'm +1 for these defaults.
-- Live well, ~wren

On Apr 20, 2014, at 8:49 PM, John Lato
wrote: On Sun, Apr 20, 2014 at 5:20 PM, Edward Kmett
wrote: The proposal is worded the way it is to get a strict monotonic improvement over the status quo. With them in the class it becomes possible to get the instances fixed. With them outside of the class in some needless extra hair-splitting class added on later like we have to use today, then someone who would otherwise just use them is needlessly hoist on the dilemma of using a more restrictive class and just accepting the fact that they can't work with third party numeric types for the most part at all, or reverting to the poor version of the numerics to widen their audience.
This leads to the equivalent of needless divisions between 'traverse' vs. 'mapM' forever.
I don't think a separate class is ideal, I just think it's better than your original proposal.
I think reasonable people can disagree and come down on either side of this issue.
With defaults you are never worse off than you are today, but defaults you always have to worry about whether you should use them.
This isn't correct. Today, I don't have exp1m. I have no expectation that any type will use an appropriate fused algorithm, nor that I won't lose precision. If exp1m is defined with defaults as proposed, and I use exp1m for a type that doesn't define it, I may lose precision, leading to compounding errors in my code, *even though I used the right function*.
I'm coming at this from the perspective that I should never be worse off having called expm1 than I would be in the world before it existed. Your way, I just crash, which makes me much worse off. I'm asking for extra bits of precision if the type I'm using can offer them. Nothing more.
Let's look at it another way.
By putting in defaults the costs of the proposal are borne by the people who want to use the new feature.
Yes. When users use the new feature with types that don't implement it and get an incorrect answer, there will certainly be high costs involved.
Let's look at it another way.
Do you want to track down bugs due to exp1m not implementing the appropriate fused algorithm?
Or alternatively, do you want to implement a default function that's not even guaranteed to work as documented? With a silent failure mode? So library authors don't have to fix up their libraries? That seems very wrong to me.
Moreover, if we should decide to adopt wren's half-suggestion of continuing to expand support for other numerical primitives that have broad support we could do so without great deal of fanfare, and the handful of people who actually do numeric computation can talk to the handful of people who write numeric instances that high up the foodchain to get the important ones fixed in packages like vector-space, linear, diagrams, etc.
Without defaults everyone who ever wrote a Floating instance by hand would need to know about log1p or wren's log1mexp and they would be forced into using CPP in their code to work around a feature they don't care about and if they couldn't be bothered then the user who wanted a bit of extra precision now just starts crashing. The risk averse would simply take the path with worse precision or get shoved back into the world of code duplication and 'mapM' vs 'traverse'.
Are you arguing for a separate class? Because that's what it sounds like. Besides, if you aren't familiar with precision issues you have no business writing a Floating instance by hand that does anything more than lift over more fundamental types.
I think it's better that exp1m crash than that it not give extra precision, since the extra precision is the whole point of the function. When I call a function, I want to get the function I mean.
I'm not actually arguing for a separate class. I think these belong in Floating as well. I'm just arguing against a default that doesn't work as specified.
If expm1 crashes, I'm back to duplicating code, and this proposal does nothing to improve my life over doing exactly what I can do now but do not wish to continue doing, which is to maintain a separate code path entirely, with no effective way to transparently switch when greater precision is available.
I know for me personally it would force me to double the amount of numeric code I write, just to maximize my audience. I really don't want to go there. I just want to be able to call the function I mean, and to be able to talk to the right people to make it do the right thing.
exp1m = error "Go bug some library author to implement exp1m"
would accomplish that even more efficiently, since it will directly point users to the right people.
And in exchange, every library author, even though the vast majority of them will never have a user who cares about this feature, needs to care or get a warning, or else we silently cover up behind their back a real error that should be a warning, and no user can trust that it is safe to call the function.

I don't want to duplicate all my code and I don't want to randomly crash. I want to eke out a few bits of mantissa if they are available, and not be worse off than I am today for that privilege.

-Edward
-Edward
On Sun, Apr 20, 2014 at 7:32 PM, John Lato
wrote: On Sat, Apr 19, 2014 at 5:49 AM, Edward Kmett
wrote: With the defaults the code is never worse than it is forced to be right now and users do not need to create CPP blocked code to work around this addition. I usually like defaults, and avoiding CPP is good, however with the defaults users will expect better code than they get. We aren't doing anyone any favors by introducing the possibility of silent floating-point precision loss from 'exp1m'. An "error" default would be better.
Besides, the code would be worse than it's forced to be now. At least now users who care about this run headlong into the issue. If we provide exp1m and log1p, users who use those functions should get the advertised behavior, not loss of precision (I realize not all types would lose precision, but some will).
Without the defaults this becomes a much bigger request, as I'd be asking _every_ author of Floating to add CPP to their packages for a feature they never heard of and probably will never use, and in that situation we'd have to export it from Prelude.
It's perfectly fine to leave some methods blank; IMHO the resulting run-time error is better than an incorrect default. Plus, it's useful for library authors to know that the class has changed; if a default is provided everything will build properly and there is no compile-time indication that library authors should adjust their code.
Originally I was +1 for everything except the defaults, but I'm reconsidering. If this is something that most Floating instance authors don't know about and probably won't ever use, do these functions really belong in that class? Why not make a separate class for fused algorithms?
John L.
-Edward
On Sat, Apr 19, 2014 at 5:42 AM, Scott Turner <2haskell@pkturner.org> wrote:
On 2014-04-17 15:08, Edward Kmett wrote: On Thu, Apr 17, 2014 at 2:48 PM, Henning Thielemann
wrote: > I think one should add default implementations. They don't have an numerical advantage but they save programmers from code breakage. I included the default definitions in code snippet in the proposal, so user code that remains unaware of them would be unaffected, while packages like compensated, or a wrapper around libqd could implement them as needed.
expm1 :: Floating a => a -> a
expm1 x = exp x - 1

log1p :: Floating a => a -> a
log1p x = log (1 + x)

On the contrary, code that explicitly uses these functions is likely to need the precision. Defaults would cause subtle breakage.
-- Scott

On Sun, Apr 20, 2014 at 6:37 PM, Edward Kmett
On Apr 20, 2014, at 8:49 PM, John Lato
wrote: On Sun, Apr 20, 2014 at 5:20 PM, Edward Kmett
wrote: The proposal is worded the way it is to get a strict monotonic improvement over the status quo.
With them in the class it becomes possible to get the instances fixed. With them outside of the class in some needless extra hair-splitting class added on later like we have to use today, then someone who would otherwise just use them is needlessly hoist on the dilemma of using a more restrictive class and just accepting the fact that they can't work with third party numeric types for the most part at all, or reverting to the poor version of the numerics to widen their audience.
This leads to the equivalent of needless divisions between 'traverse' vs. 'mapM' forever.
I don't think a separate class is ideal, I just think it's better than your original proposal.
I think reasonable people can disagree and come down on either side of this issue.
Sure.
With defaults you are never worse off than you are today, but defaults you
*always* have to worry about whether you should use them.
This isn't correct. Today, I don't have exp1m. I have no expectation that any type will use an appropriate fused algorithm, nor that I won't lose precision. If exp1m is defined with defaults as proposed, and I use exp1m for a type that doesn't define it, I may lose precision, leading to compounding errors in my code, *even though I used the right function*.
I'm coming at this from the perspective that I should never be worse off having called expm1 than I would be in the world before it existed. Your way I just crash making me much worse off. I'm asking for extra bits of precision if the type I'm using can offer them. Nothing more.
And I'm saying that if you ask for extra bits of precision, and the type could offer them but doesn't, a crash is better than not giving extra precision. FP algorithms can be highly sensitive to precision, and it's a good bet that if somebody is asking for specialized behavior there's a reason why. I think it's better to fail loudly and point a finger than to fail silently. If I'm using log1p because my algorithm requires that precision, replacing log1p with log (1+x) is not a safe transformation. But that's what your default instance does.
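(A self-contained illustration for Double of why the rewrite is unsafe; the values in the comments are ordinary IEEE double behaviour.)

    main :: IO ()
    main = do
      let x = 1.0e-18 :: Double
      print (1 + x == 1)   -- True: x is far below the spacing of doubles near 1
      print (log (1 + x))  -- 0.0; every bit of x was discarded before log ran
      -- a correct log1p x would return ~1.0e-18 here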
Let's look at it another way.
By putting in defaults the costs of the proposal are borne by the people who want to use the new feature.
Yes. When users use the new feature with types that don't implement it and get an incorrect answer, there will certainly be high costs involved.
Let's look at it another way.
Do you want to track down bugs due to exp1m not implementing the appropriate fused algorithm?
Or alternatively, do you want to implement a default function that's not even guaranteed to work as documented? With a silent failure mode? So library authors don't have to fix up their libraries? That seems very wrong to me.
Moreover, if we should decide to adopt wren's half-suggestion of continuing to expand support for other numerical primitives that have broad support we could do so without great deal of fanfare, and the handful of people who actually do numeric computation can talk to the handful of people who write numeric instances that high up the foodchain to get the important ones fixed in packages like vector-space, linear, diagrams, etc.
Without defaults everyone who ever wrote a Floating instance by hand would need to know about log1p or wren's log1mexp and they would be forced into using CPP in their code to work around a feature they don't care about and if they couldn't be bothered then the user who wanted a bit of extra precision now just starts crashing. The risk averse would simply take the path with worse precision or get shoved back into the world of code duplication and 'mapM' vs 'traverse'.
Are you arguing for a separate class? Because that's what it sounds like. Besides, if you aren't familiar with precision issues you have no business writing a Floating instance by hand that does anything more than lift over more fundamental types.
I think it's better that exp1m crash than that it not give extra precision, since the extra precision is the whole point of the function. When I call a function, I want to get the function I mean.
I'm not actually arguing for a separate class. I think these belong in Floating as well. I'm just arguing against a default that doesn't work as specified.
If expm1 crashes I'm back to duplicating code and this proposal does nothing to improve my life over doing exactly what I can do now, but do not wish to continue doing, which is maintain a separate code path entirely with no effective way to transparently switch when greater precision is available.
You wouldn't duplicate code. You would go to the author of the type that doesn't implement that function and ask them to implement it. Isn't that exactly what you said you wanted? To get the function you mean and know who to talk to in order to get it implemented? Your proposal doesn't even provide the function you mean! Also, I note you neglected to answer my rhetorical question :) Introducing bugs whereby functions don't behave according to standards is really, really poor design. I don't see how saving some library authors some work is worth that cost to users.
I know for me personally it would force me to double the amount of numeric code I write, just to maximize my audience. I really don't want to go there. I just want to be able to call the function I mean, and to be able to talk to the right people to make it do the right thing.
exp1m = error "Go bug some library author to implement exp1m"
would accomplish that even more efficiently, since it will directly point users to the right people.
And in exchange, ever library author even the vast majority of whom will never have a user who cares about this feature needs to care or get a warning or we silently cover up a real error that should be a warning behind their back, and no user can trust that it is safe to call the function.
I think just providing implementations for Float/Double will cover >90% of use cases and convince users that it's safe to call the function. GND will probably cover another 5-8% of uses. I think it's a quite small tail we're discussing here. And I'll even admit that, since for *some* types log1p x = log (1+x) will work correctly, it's an even smaller group of users I'm concerned about. But I still think it's an unreasonable price to pay.
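(To illustrate the GND point: a newtype over Double can derive the whole class and pick up whatever log1p/expm1 the Double instance supplies. The LogProb type and module name are made up for the example.)

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    module LogProbSketch where

    -- Deriving Floating via the underlying Double instance means log1p and
    -- expm1 come along for free, with no work from this module's author.
    newtype LogProb = LogProb Double
      deriving (Eq, Ord, Show, Num, Fractional, Floating)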
I don't want to duplicate all my code and I don't want to randomly crash, I want to eke out a few bits of mantissa if they are available and not be worse off than I am today for that privilege.
If you wrote code that crashed under an error default, that same code would be worse off than it is today, because users would expect that it does the right thing and it fails silently.

The point of these functions isn't just to provide convenient algebraic shortcuts. It's to provide extra precision for numerically-sensitive computations. If users don't know about it, they'll just use exp/log and be ok. But users who require that extra precision should either get it or be informed that it's not available. Ideally by a compile-time error, but I don't know a reasonable way to implement that, so a run-time error is the next best thing.

I simply do not understand why you think it's appropriate to provide a function that explicitly doesn't do what it's supposed to. But we're unlikely to sway each other here without further input, so I guess:

+0.1 to the OP
+0.5 for error defaults
+1 for no defaults

John L.
-Edward
-Edward
On Sun, Apr 20, 2014 at 7:32 PM, John Lato
wrote: On Sat, Apr 19, 2014 at 5:49 AM, Edward Kmett
wrote: With the defaults the code is never worse than it is forced to be right now and users do not need to create CPP blocked code to work around this addition.
I usually like defaults, and avoiding CPP is good, however with the defaults users will expect better code than they get. We aren't doing anyone any favors by introducing the possibility of silent floating-point precision loss from 'exp1m'. An "error" default would be better.
Besides, the code would be worse than it's forced to be now. At least now users who care about this run headlong into the issue. If we provide exp1m and log1p, users who use those functions should get the advertised behavior, not loss of precision (I realize not all types would lose precision, but some will).
Without the defaults this becomes a much bigger request, as I'd be asking _every_ author of Floating to add CPP to their packages for a feature they never heard of and probably will never use, and in that situation we'd have to export it from Prelude.
It's perfectly fine to leave some methods blank; IMHO the resulting run-time error is better than an incorrect default. Plus, it's useful for library authors to know that the class has changed; if a default is provided everything will build properly and there is no compile-time indication that library authors should adjust their code.
Originally I was +1 for everything except the defaults, but I'm reconsidering. If this is something that most Floating instance authors don't know about and probably won't ever use, do these functions really belong in that class? Why not make a separate class for fused algorithms?
John L.
-Edward
On Sat, Apr 19, 2014 at 5:42 AM, Scott Turner <2haskell@pkturner.org>wrote:
On 2014-04-17 15:08, Edward Kmett wrote:
On Thu, Apr 17, 2014 at 2:48 PM, Henning Thielemann < schlepptop@henning-thielemann.de> wrote:
I think one should add default implementations. They don't have an numerical advantage but they save programmers from code breakage.
I included the default definitions in code snippet in the proposal, so user code that remains unaware of them would be unaffected, while packages like compensated, or a wrapper around libqd could implement them as needed.
expm1 :: Floating a => a -> a
expm1 x = exp x - 1

log1p :: Floating a => a -> a
log1p x = log (1 + x)
On the contrary, code that explicitly uses these functions is likely to need the precision. Defaults would cause subtle breakage.
-- Scott

Don't "no instance" and error "foo" defaults both only trigger an error at runtime?
On Sun, Apr 20, 2014 at 10:49 PM, John Lato
On Sun, Apr 20, 2014 at 6:37 PM, Edward Kmett
wrote:
On Apr 20, 2014, at 8:49 PM, John Lato
wrote: On Sun, Apr 20, 2014 at 5:20 PM, Edward Kmett
wrote: The proposal is worded the way it is to get a strict monotonic improvement over the status quo.
With them in the class it becomes possible to get the instances fixed. With them outside of the class in some needless extra hair-splitting class added on later like we have to use today, then someone who would otherwise just use them is needlessly hoist on the dilemma of using a more restrictive class and just accepting the fact that they can't work with third party numeric types for the most part at all, or reverting to the poor version of the numerics to widen their audience.
This leads to the equivalent of needless divisions between 'traverse' vs. 'mapM' forever.
I don't think a separate class is ideal, I just think it's better than your original proposal.
I think reasonable people can disagree and come down on either side of this issue.
Sure.
With defaults you are never worse off than you are today, but defaults
you *always* have to worry about whether you should use them.
This isn't correct. Today, I don't have exp1m. I have no expectation that any type will use an appropriate fused algorithm, nor that I won't lose precision. If exp1m is defined with defaults as proposed, and I use exp1m for a type that doesn't define it, I may lose precision, leading to compounding errors in my code, *even though I used the right function*.
I'm coming at this from the perspective that I should never be worse off having called expm1 than I would be in the world before it existed. Your way I just crash making me much worse off. I'm asking for extra bits of precision if the type I'm using can offer them. Nothing more.
And I'm saying that if you ask for extra bits of precision, and the type could offer them but doesn't, a crash is better than not giving extra precision. FP algorithms can be highly sensitive to precision, and it's a good bet that if somebody is asking for specialized behavior there's a reason why. I think it's better to fail loudly and point a finger than to fail silently.
If I'm using log1p because my algorithm requires that precision, replacing log1p with log (1+x) is not a safe transformation. But that's what your default instance does.
Let's look at it another way.
By putting in defaults the costs of the proposal are borne by the people who want to use the new feature.
Yes. When users use the new feature with types that don't implement it and get an incorrect answer, there will certainly be high costs involved.
Let's look at it another way.
Do you want to track down bugs due to exp1m not implementing the appropriate fused algorithm?
Or alternatively, do you want to implement a default function that's not even guaranteed to work as documented? With a silent failure mode? So library authors don't have to fix up their libraries? That seems very wrong to me.
Moreover, if we should decide to adopt wren's half-suggestion of continuing to expand support for other numerical primitives that have broad support we could do so without great deal of fanfare, and the handful of people who actually do numeric computation can talk to the handful of people who write numeric instances that high up the foodchain to get the important ones fixed in packages like vector-space, linear, diagrams, etc.
Without defaults everyone who ever wrote a Floating instance by hand would need to know about log1p or wren's log1mexp and they would be forced into using CPP in their code to work around a feature they don't care about and if they couldn't be bothered then the user who wanted a bit of extra precision now just starts crashing. The risk averse would simply take the path with worse precision or get shoved back into the world of code duplication and 'mapM' vs 'traverse'.
Are you arguing for a separate class? Because that's what it sounds like. Besides, if you aren't familiar with precision issues you have no business writing a Floating instance by hand that does anything more than lift over more fundamental types.
I think it's better that exp1m crash than that it not give extra precision, since the extra precision is the whole point of the function. When I call a function, I want to get the function I mean.
I'm not actually arguing for a separate class. I think these belong in Floating as well. I'm just arguing against a default that doesn't work as specified.
If expm1 crashes I'm back to duplicating code and this proposal does nothing to improve my life over doing exactly what I can do now, but do not wish to continue doing, which is maintain a separate code path entirely with no effective way to transparently switch when greater precision is available.
You wouldn't duplicate code. You would go to the author of the type that doesn't implement that function and ask them to implement it. Isn't that exactly what you said you wanted? To get the function you mean and know who to talk to in order to get it implemented? Your proposal doesn't even provide the function you mean!
Also, I note you neglected to answer my rhetorical question :) Introducing bugs whereby functions don't behave according to standards is really, really poor design. I don't see how saving some library authors some work is worth that cost to users.
I know for me personally it would force me to double the amount of numeric code I write, just to maximize my audience. I really don't want to go there. I just want to be able to call the function I mean, and to be able to talk to the right people to make it do the right thing.
exp1m = error "Go bug some library author to implement exp1m"
would accomplish that even more efficiently, since it will directly point users to the right people.
And in exchange, ever library author even the vast majority of whom will never have a user who cares about this feature needs to care or get a warning or we silently cover up a real error that should be a warning behind their back, and no user can trust that it is safe to call the function.
I think just providing implementations for Float/Double will cover >90% of use cases and convince users that it's safe to call the function. GND will probably cover another 5-8% of uses. I think it's a quite small tail we're discussing here. And I'll even admit that, since for *some* types log1p x = log (1+x) will work correctly, it's an even smaller group of users I'm concerned about. But I still think it's an unreasonable price to pay.
I don't want to duplicate all my code and I don't want to randomly crash, I want to eke out a few bits of mantissa if they are available and not be worse off than I am today for that privilege.
If you wrote code that crashed under an error default, that same code would be worse off than it is today because users would expect that it does the right thing and it fails silently.
The point of these functions isn't just to provide convenient algebraic shortcuts. It's to provide extra precision for numerically-sensitive computations. If users don't know about it, they'll just use exp/log and be ok. But users who require that extra precision should either get it or be informed that it's not available. Ideally by a compile-time error, but I don't know a reasonable way to implement that, so a run-time error is the next best thing.
I simply do not understand why you think it's appropriate to provide a function that explicitly doesn't do what it's supposed to. But we're unlikely to sway each other here without further input, so I guess
+0.1 to the OP +0.5 for error defaults +1 for no defaults
John L.
-Edward
-Edward
On Sun, Apr 20, 2014 at 7:32 PM, John Lato
wrote: On Sat, Apr 19, 2014 at 5:49 AM, Edward Kmett
wrote: With the defaults the code is never worse than it is forced to be right now and users do not need to create CPP blocked code to work around this addition.
I usually like defaults, and avoiding CPP is good, however with the defaults users will expect better code than they get. We aren't doing anyone any favors by introducing the possibility of silent floating-point precision loss from 'exp1m'. An "error" default would be better.
Besides, the code would be worse than it's forced to be now. At least now users who care about this run headlong into the issue. If we provide exp1m and log1p, users who use those functions should get the advertised behavior, not loss of precision (I realize not all types would lose precision, but some will).
Without the defaults this becomes a much bigger request, as I'd be asking _every_ author of Floating to add CPP to their packages for a feature they never heard of and probably will never use, and in that situation we'd have to export it from Prelude.
It's perfectly fine to leave some methods blank; IMHO the resulting run-time error is better than an incorrect default. Plus, it's useful for library authors to know that the class has changed; if a default is provided everything will build properly and there is no compile-time indication that library authors should adjust their code.
Originally I was +1 for everything except the defaults, but I'm reconsidering. If this is something that most Floating instance authors don't know about and probably won't ever use, do these functions really belong in that class? Why not make a separate class for fused algorithms?
John L.
-Edward
On Sat, Apr 19, 2014 at 5:42 AM, Scott Turner <2haskell@pkturner.org> wrote:
On 2014-04-17 15:08, Edward Kmett wrote:
On Thu, Apr 17, 2014 at 2:48 PM, Henning Thielemann <schlepptop@henning-thielemann.de> wrote:
> I think one should add default implementations. They don't have a numerical advantage, but they save programmers from code breakage.
I included the default definitions in the code snippet in the proposal, so user code that remains unaware of them would be unaffected, while packages like compensated, or a wrapper around libqd, could implement them as needed.
expm1 :: Floating a => a -> a
expm1 x = exp x - 1

log1p :: Floating a => a -> a
log1p x = log (1 + x)
On the contrary, code that explicitly uses these functions is likely to need the precision. Defaults would cause subtle breakage.
-- Scott

On Sun, Apr 20, 2014 at 8:12 PM, Carter Schonwald <carter.schonwald@gmail.com> wrote:
Don't no-instance and error "foo" defaults both only trigger an error at runtime?
Yes, but with no instance you get better error messages, since it will specify the offending type. It also triggers a compile-time warning for the library author, which is the real reason I think it's better.
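A toy sketch of the difference being described, using a hypothetical class rather than the real Floating (the GHC diagnostics mentioned in the comments are roughly what you get today for a missing method with no default):

class MyFloating a where
  myLog1p :: a -> a              -- no default supplied

instance MyFloating Double       -- GHC warns here: no explicit implementation for 'myLog1p'

oops :: Double
oops = myLog1p 1e-10             -- at runtime: a "no instance nor default method" error naming
                                 -- the class operation, rather than a silently imprecise answer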
On Sun, Apr 20, 2014 at 10:49 PM, John Lato wrote:
On Sun, Apr 20, 2014 at 6:37 PM, Edward Kmett wrote:
On Apr 20, 2014, at 8:49 PM, John Lato wrote:
On Sun, Apr 20, 2014 at 5:20 PM, Edward Kmett wrote:
The proposal is worded the way it is to get a strict monotonic improvement over the status quo.
With them in the class it becomes possible to get the instances fixed. With them outside of the class, in some needless extra hair-splitting class added on later like we have to use today, someone who would otherwise just use them is needlessly stuck with a dilemma: use a more restrictive class and accept that they can't work with third-party numeric types for the most part at all, or revert to the poor version of the numerics to widen their audience.
This leads to the equivalent of needless divisions between 'traverse' vs. 'mapM' forever.
I don't think a separate class is ideal, I just think it's better than your original proposal.
I think reasonable people can disagree and come down on either side of this issue.
Sure.
With defaults you are never worse off than you are today, but without defaults you *always* have to worry about whether you should use them.
This isn't correct. Today, I don't have expm1. I have no expectation that any type will use an appropriate fused algorithm, nor that I won't lose precision. If expm1 is defined with defaults as proposed, and I use expm1 for a type that doesn't define it, I may lose precision, leading to compounding errors in my code, *even though I used the right function*.
I'm coming at this from the perspective that I should never be worse off having called expm1 than I would be in the world before it existed. Your way I just crash making me much worse off. I'm asking for extra bits of precision if the type I'm using can offer them. Nothing more.
And I'm saying that if you ask for extra bits of precision, and the type could offer them but doesn't, a crash is better than not giving extra precision. FP algorithms can be highly sensitive to precision, and it's a good bet that if somebody is asking for specialized behavior there's a reason why. I think it's better to fail loudly and point a finger than to fail silently.
If I'm using log1p because my algorithm requires that precision, replacing log1p with log (1+x) is not a safe transformation. But that's what your default instance does.
Let's look at it another way.
By putting in defaults the costs of the proposal are borne by the people who want to use the new feature.
Yes. When users use the new feature with types that don't implement it and get an incorrect answer, there will certainly be high costs involved.
Let's look at it another way.
Do you want to track down bugs due to expm1 not implementing the appropriate fused algorithm?
Or alternatively, do you want to implement a default function that's not even guaranteed to work as documented? With a silent failure mode? So library authors don't have to fix up their libraries? That seems very wrong to me.
Moreover, if we should decide to adopt wren's half-suggestion of continuing to expand support for other numerical primitives that have broad support, we could do so without a great deal of fanfare, and the handful of people who actually do numeric computation can talk to the handful of people who write numeric instances that high up the food chain to get the important ones fixed in packages like vector-space, linear, diagrams, etc.
Without defaults, everyone who ever wrote a Floating instance by hand would need to know about log1p or wren's log1mexp, and they would be forced into using CPP in their code to work around a feature they don't care about; and if they couldn't be bothered, then the user who wanted a bit of extra precision now just starts crashing. The risk-averse would simply take the path with worse precision, or get shoved back into the world of code duplication and 'mapM' vs. 'traverse'.
Are you arguing for a separate class? Because that's what it sounds like. Besides, if you aren't familiar with precision issues you have no business writing a Floating instance by hand that does anything more than lift over more fundamental types.
I think it's better that expm1 crash than that it not give extra precision, since the extra precision is the whole point of the function. When I call a function, I want to get the function I mean.
I'm not actually arguing for a separate class. I think these belong in Floating as well. I'm just arguing against a default that doesn't work as specified.
If expm1 crashes I'm back to duplicating code and this proposal does nothing to improve my life over doing exactly what I can do now, but do not wish to continue doing, which is maintain a separate code path entirely with no effective way to transparently switch when greater precision is available.
You wouldn't duplicate code. You would go to the author of the type that doesn't implement that function and ask them to implement it. Isn't that exactly what you said you wanted? To get the function you mean and know who to talk to in order to get it implemented? Your proposal doesn't even provide the function you mean!
Also, I note you neglected to answer my rhetorical question :) Introducing bugs whereby functions don't behave according to standards is really, really poor design. I don't see how saving some library authors some work is worth that cost to users.

On Sun, Apr 20, 2014 at 10:49 PM, John Lato wrote:
On Sun, Apr 20, 2014 at 6:37 PM, Edward Kmett wrote:
On Apr 20, 2014, at 8:49 PM, John Lato wrote:
With defaults you are never worse off than you are today, but without defaults you *always* have to worry about whether you should use them.
This isn't correct. Today, I don't have expm1. I have no expectation that any type will use an appropriate fused algorithm, nor that I won't lose precision. If expm1 is defined with defaults as proposed, and I use expm1 for a type that doesn't define it, I may lose precision, leading to compounding errors in my code, *even though I used the right function*.
I'm coming at this from the perspective that I should never be worse off having called expm1 than I would be in the world before it existed. Your way I just crash making me much worse off. I'm asking for extra bits of precision if the type I'm using can offer them. Nothing more.
And I'm saying that if you ask for extra bits of precision, and the type could offer them but doesn't, a crash is better than not giving extra precision. FP algorithms can be highly sensitive to precision, and it's a good bet that if somebody is asking for specialized behavior there's a reason why. I think it's better to fail loudly and point a finger than to fail silently.
If I'm using log1p because my algorithm requires that precision, replacing log1p with log (1+x) is not a safe transformation. But that's what your default instance does.
I use log1p as a better log (1 + x). It lets me pick up a few decimal places worth of accuracy opportunistically.
Let's look at it another way.
By putting in defaults the costs of the proposal are borne by the people who want to use the new feature.
Yes. When users use the new feature with types that don't implement it and get an incorrect answer, there will certainly be high costs involved.
Let's look at it another way.
Do you want to track down bugs due to expm1 not implementing the appropriate fused algorithm?
Or alternatively, do you want to implement a default function that's not even guaranteed to work as documented? With a silent failure mode? So library authors don't have to fix up their libraries? That seems very wrong to me.
Moreover, if we should decide to adopt wren's half-suggestion of continuing to expand support for other numerical primitives that have broad support, we could do so without a great deal of fanfare, and the handful of people who actually do numeric computation can talk to the handful of people who write numeric instances that high up the food chain to get the important ones fixed in packages like vector-space, linear, diagrams, etc.
Without defaults, everyone who ever wrote a Floating instance by hand would need to know about log1p or wren's log1mexp, and they would be forced into using CPP in their code to work around a feature they don't care about; and if they couldn't be bothered, then the user who wanted a bit of extra precision now just starts crashing. The risk-averse would simply take the path with worse precision, or get shoved back into the world of code duplication and 'mapM' vs. 'traverse'.
Are you arguing for a separate class? Because that's what it sounds like. Besides, if you aren't familiar with precision issues you have no business writing a Floating instance by hand that does anything more than lift over more fundamental types.
I think it's better that expm1 crash than that it not give extra precision, since the extra precision is the whole point of the function. When I call a function, I want to get the function I mean.
I'm not actually arguing for a separate class. I think these belong in Floating as well. I'm just arguing against a default that doesn't work as specified.
If expm1 crashes I'm back to duplicating code and this proposal does nothing to improve my life over doing exactly what I can do now, but do not wish to continue doing, which is maintain a separate code path entirely with no effective way to transparently switch when greater precision is available.
You wouldn't duplicate code. You would go to the author of the type that doesn't implement that function and ask them to implement it. Isn't that exactly what you said you wanted? To get the function you mean and know who to talk to in order to get it implemented? Your proposal doesn't even provide the function you mean!
I have code that works with Floating today for everything. It just works poorly. I have other code, built on a custom class, that works accurately but only for a very limited set of types. I can have both of these things even in the absence of this proposal. If expm1 goes into the class but randomly crashes, then I am _worse_ off than I am today, because before I at least had the class to tell me if it would work.
Also, I note you neglected to answer my rhetorical question :) Introducing bugs whereby functions don't behave according to standards is really, really poor design. I don't see how saving some library authors some work is worth that cost to users.
As it was a rhetorical question, I figured it didn't seek an actual reply. =P =)
I know for me personally it would force me to double the amount of numeric code I write, just to maximize my audience. I really don't want to go there. I just want to be able to call the function I mean, and to be able to talk to the right people to make it do the right thing.
expm1 = error "Go bug some library author to implement expm1"
would accomplish that even more efficiently, since it will directly point users to the right people.
And in exchange, every library author, even the vast majority of whom will never have a user who cares about this feature, needs to care or get a warning, or we silently cover up a real error that should be a warning behind their back, and no user can trust that it is safe to call the function.
I think just providing implementations for Float/Double will cover >90% of use cases and convince users that it's safe to call the function. GND will probably cover another 5-8% of uses. I think it's a quite small tail we're discussing here. And I'll even admit that, since for *some* types log1p x = log (1+x) will work correctly, it's an even smaller group of users I'm concerned about. But I still think it's an unreasonable price to pay.
The main cases that are left over are libraries like linear or vector-space that provide Floating instances for vector space types, and the Complex case, which I explicitly covered in the original proposal. Given that, your numbers leave ~2% of the libraries, which would have no worse performance than they have today and which can be talked to individually, vs. a situation where I have to live in fear of calling a combinator because I'll bottom out in my code with no way to detect it until runtime.
I don't want to duplicate all my code and I don't want to randomly crash, I want to eke out a few bits of mantissa if they are available and not be worse off than I am today for that privilege.
If you wrote code that would crash under an error default, that same code under a silent default would be worse off than it is today, because users would expect it to do the right thing and it would fail silently.
The point of these functions isn't just to provide convenient algebraic shortcuts. It's to provide extra precision for numerically-sensitive computations. If users don't know about it, they'll just use exp/log and be ok. But users who require that extra precision should either get it or be informed that it's not available. Ideally by a compile-time error, but I don't know a reasonable way to implement that, so a run-time error is the next best thing.
I simply do not understand why you think it's appropriate to provide a function that explicitly doesn't do what it's supposed to. But we're unlikely to sway each other here without further input, so I guess
If we're hiding this from the end user, in an extension most users never have to see that is exported by Numeric, I'm pretty strongly against having the defaults fail. If we instead choose to suck it up and expose them in the class from Prelude, and just add the members directly, sans default, and make everyone deal with them, then we're at least being fully honest, and you can have your perfect precision, and we can know that everyone has considered the case or been bugged about it by GHC.
> +0.1 to the OP
> +0.5 for error defaults
> +1 for no defaults
I'm +1 for the OP.
I'm +0.5 for no defaults, with a full export from the Prelude.
I'm -1 on error defaults, as it leaves me worse off than I am today.
-Edward

On Sun, Apr 20, 2014 at 8:25 PM, Edward Kmett wrote:
On Sun, Apr 20, 2014 at 10:49 PM, John Lato wrote:
On Sun, Apr 20, 2014 at 6:37 PM, Edward Kmett wrote:
On Apr 20, 2014, at 8:49 PM, John Lato wrote:
With defaults you are never worse off than you are today, but without defaults you *always* have to worry about whether you should use them.
This isn't correct. Today, I don't have expm1. I have no expectation that any type will use an appropriate fused algorithm, nor that I won't lose precision. If expm1 is defined with defaults as proposed, and I use expm1 for a type that doesn't define it, I may lose precision, leading to compounding errors in my code, *even though I used the right function*.
I'm coming at this from the perspective that I should never be worse off having called expm1 than I would be in the world before it existed. Your way I just crash making me much worse off. I'm asking for extra bits of precision if the type I'm using can offer them. Nothing more.
And I'm saying that if you ask for extra bits of precision, and the type could offer them but doesn't, a crash is better than not giving extra precision. FP algorithms can be highly sensitive to precision, and it's a good bet that if somebody is asking for specialized behavior there's a reason why. I think it's better to fail loudly and point a finger than to fail silently.
If I'm using log1p because my algorithm requires that precision, replacing log1p with log (1+x) is not a safe transformation. But that's what your default instance does.
I use log1p as a better log (1 + x).
It lets me pick up a few decimal places worth of accuracy opportunistically.
That doesn't address my point, which is that for users who call log1p, replacing it with log (1+x) is not safe. Providing some opportunistic accuracy seems less important than giving wrong answers to other users.
I know for me personally it would force me to double the amount of numeric code I write, just to maximize my audience. I really don't want to go there. I just want to be able to call the function I mean, and to be able to talk to the right people to make it do the right thing.
expm1 = error "Go bug some library author to implement expm1"
would accomplish that even more efficiently, since it will directly point users to the right people.
And in exchange, every library author, even the vast majority of whom will never have a user who cares about this feature, needs to care or get a warning, or we silently cover up a real error that should be a warning behind their back, and no user can trust that it is safe to call the function.
I think just providing implementations for Float/Double will cover >90% of use cases and convince users that it's safe to call the function. GND will probably cover another 5-8% of uses. I think it's a quite small tail we're discussing here. And I'll even admit that, since for *some* types log1p x = log (1+x) will work correctly, it's an even smaller group of users I'm concerned about. But I still think it's an unreasonable price to pay.
The main cases that are left over are libraries like linear or vector-space that provide floating instances for vector space types, and the Complex case which I explicitly covered in the original proposal.
Given that, your numbers leave ~2% of the libraries that would have no worse performance than they have today and which can be talked to individually, vs. a situation where I have to live in fear of calling a combinator for fear that I'll bottom out in my code with no way to detect it until runtime.
You keep saying that these cases are no worse off than they are today. As I see it, today everything works as expected, and under this proposal they'll have a function that gives an incorrect answer. Wouldn't that make them worse off?
John

I was just wondering, why not simply use numerically robust algorithms as defaults for these functions? No crashes, no errors, no loss of precision; everything would just work. They aren't particularly complicated, so the performance should even be reasonable.
John L.
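For reference, a sketch of the kind of "robust" fallback being suggested here, in the classic Kahan style and written out for Double (illustrative only; note that it leans on (==) and on the rounding behaviour of the concrete type, which a generic default in Floating cannot assume):

log1pKahan :: Double -> Double
log1pKahan x
  | u == 1    = x                      -- 1 + x rounded to 1: x itself is the best answer
  | otherwise = log u * x / (u - 1)    -- cancel the rounding error made in computing u
  where
    u = 1 + x

expm1Kahan :: Double -> Double
expm1Kahan x
  | u == 1        = x                  -- exp x rounded to 1
  | u - 1 == (-1) = -1                 -- exp x underflowed to 0
  | otherwise     = (u - 1) * x / log u -- rescale to cancel the rounding error in u
  where
    u = exp x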

On 21 April 2014 09:38, John Lato wrote:
> I was just wondering, why not simply use numerically robust algorithms as defaults for these functions? No crashes, no errors, no loss of precision; everything would just work. They aren't particularly complicated, so the performance should even be reasonable.
I think it's the best option. log1p and expm1 come with guarantees about precision. A log (1+x) default makes it impossible to depend on such guarantees; it will silently give the wrong answer.

I didn't want to make everyone pay for this feature, but since a number of folks feel strongly about it, let's explore the following modified concrete proposal as an alternative.

* Add log1p, expm1, log1pexp, log1mexp to Floating with the defaults, but extend the MINIMAL pragma for Floating to include them, so everyone gets told to implement them.
* Just export the additional functions from Prelude like everything else in Floating.

Why incorporate log1pexp and log1mexp? They have to be in the class, or they can't be properly implemented pointwise for vector spaces.

Pros:
* Anyone who doesn't implement them gets warnings during their builds.
* Code can reliably upgrade to expm1 and log1p and not be worse off than today.
* If you can compile sans warnings you have nothing to fear. If you do get warnings, you know precisely which types will have degraded back to the old precision at *compile* time, not runtime.
* The costs are mostly borne by folks who write libraries for things like linear algebra or AD, who are the very kinds of people who would care about these additions in the first place.

Cons:
* Folks who do more than just GeneralizedNewtypeDeriving for Floating need to work around the addition of these using CPP or get spammed with warnings, but they are real warnings about needless loss of precision.

I'd almost prefer this to the original proposal, but it impacts more people, so I could be swayed either way.

-Edward
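To make the interaction between the proposed defaults and an extended MINIMAL pragma concrete, here is a toy sketch; FloatingExtras is a stand-in class, not the real Floating, and only the method names and defaults come from the proposal above:

class Floating a => FloatingExtras a where
  log1p, expm1, log1pexp, log1mexp :: a -> a
  log1p x    = log (1 + x)            -- naive default, may lose precision near 0
  expm1 x    = exp x - 1              -- naive default, may lose precision near 0
  log1pexp x = log1p (exp x)          -- log (1 + exp x)
  log1mexp x = log1p (negate (exp x)) -- log (1 - exp x)
  {-# MINIMAL log1p, expm1, log1pexp, log1mexp #-}

-- An instance that supplies nothing still compiles and quietly falls back to
-- the naive defaults, but the MINIMAL pragma makes GHC emit a missing-methods
-- warning at this declaration, which is the compile-time signal this
-- alternative is after.
instance FloatingExtras Double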

There's one part of this alternative proposal I don't understand:
On Mon, Apr 21, 2014 at 5:04 AM, Edward Kmett wrote:
* If you can compile sans warnings you have nothing to fear. If you do get warnings, you can know precisely what types will have degraded back to the old precision at *compile* time, not runtime.
I don't understand the mechanism by which this happens (maybe I'm misunderstanding the MINIMAL pragma?). If a module has e.g.
import DodgyFloat (DodgyFloat) -- defined in a 3rd-party package, doesn't implement log1p etc.
x = log1p 1e-10 :: DodgyFloat
I don't understand why this would generate a warning (i.e. I don't believe it will generate a warning). So the user is in the same situation as with the original proposal. John L.

This does work.
MINIMAL is checked based on the definitions supplied locally in the instance, not based on the total definitions that contribute to the instance.
Otherwise we couldn't have the very poster-child example of this from the documentation for MINIMAL:
class Eq a where
  (==) :: a -> a -> Bool
  (/=) :: a -> a -> Bool
  x == y = not (x /= y)
  x /= y = not (x == y)
  {-# MINIMAL (==) | (/=) #-}

I think you may have interpreted me as saying something I didn't try to say.
To clarify, what I was indicating was that during the compilation of your 'DodgyFloat'-supplying package, a bunch of warnings about unimplemented methods would scroll by.
-Edward

Ah. Indeed, that was not what I thought you meant. But the user may not be compiling DodgyFloat; it may be provided via apt/rpm or similar.
Could you clarify one other thing? Do you think that \x -> log (1+x) behaves the same as log1p?

On Wed, Apr 23, 2014 at 8:16 PM, John Lato wrote:
Ah. Indeed, that was not what I thought you meant. But the user may not be compiling DodgyFloat; it may be provided via apt/rpm or similar.
That is a fair point.
Could you clarify one other thing? Do you think that \x -> log (1+x) behaves the same as log1p?
I believe that \x -> log (1 + x) is a passable approximation of log1p in the absence of a better alternative, and I'd rather have the user get something no worse than they get today when they refactor their code to take advantage of the extra capability we are exposing, than wind up in a situation where they have to choose between trying to use it, because the types say they should be able to call it, and getting unexpected bottoms they can't protect against, so that the new functionality can't be used in a way that can be detected at compile time.

At this point we're just going around in circles.

Under your version of things we put them into a class in a way that everyone has to pay for it, and nobody, including me, gets to have enough faith that it won't crash when invoked to actually call it.

-Edward

Let's try taking a step back here.
There are clearly two different audiences in mind here, and the parameters for debate are too narrow for us to achieve consensus. Maybe we can try changing the problem a bit and see if we can get there by another avenue.
Your audience wants a claim that these functions do everything in their power to preserve accuracy.
My audience wants to be able to opportunistically grab accuracy without leaking it into the type and destroying the usability of their libraries for the broadest set of users.
In essence, your audience is the one seeking an extra guarantee/law. Extra guarantees are the sort of thing one often denotes through a class. However, putting them in a separate class destroys the utility of this proposal for me.
As a straw-man / olive-branch / half-baked idea:
Could we get you what you want by simply making an extra class to indicate the claim that the guarantee holds, and get what I want by placing these methods in the existing Floating with the defaults?
I rather don't like that solution, but I'm just trying to broaden the scope of debate, at least expose where the fault lines lie in the design space, and find a way for us to stop speaking past each other.
-Edward

On Wed, Apr 23, 2014 at 11:55 PM, Edward Kmett wrote:
As a straw-man / olive-branch / half-baked idea:
Could we get you what you want by simply making an extra class to indicate the claim that the guarantee holds, and get what I want by placing these methods in the existing Floating with the defaults?
Another alternative along the same lines and just as ugly:
* Introduce a new class for these high-precision methods.
* Define a `newtype Dodgy a = Dodgy a` which gives the obvious but not necessarily precise implementations.
Pros:
* we don't have a class without methods
* the provenance of improved precision is made clear in the type (pro jlato/myself)
* can use the interface even without improved precision (pro ekmett)
Cons:
* newtype noise whenever fallback is necessary. Individual functions are not affected by this noise, however the noise may leak into the rest of the program (contra ekmett, at least in spirit)
* the desire for improved precision leaks into the types (contra ekmett)
* can use the interface even without improved precision (this may seem contra the spirit of my/jlato's goals, but since the fallback is documented in the types it is therefore okay by me)
-- Live well, ~wren
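A minimal sketch of the newtype wren describes, assuming a hypothetical class FloatingPrecise for the high-precision methods (all names here are illustrative, not from any existing library):

{-# LANGUAGE GeneralizedNewtypeDeriving #-}

newtype Dodgy a = Dodgy a
  deriving (Num, Fractional, Floating)

-- Hypothetical home for the high-precision operations.
class Floating a => FloatingPrecise a where
  log1p :: a -> a
  expm1 :: a -> a

-- The "obvious but not necessarily precise" implementations: anything wrapped
-- in Dodgy gets the naive fallbacks, and the wrapper records that fact in the
-- type.
instance Floating a => FloatingPrecise (Dodgy a) where
  log1p x = log (1 + x)
  expm1 x = exp x - 1

A real Double instance of FloatingPrecise (not shown) would call the actual primitives, while Dodgy Double gets the interface with only naive precision; the downgrade is visible in the types.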

I don't think that works for my problem. In the end it just means that I wind up opted out of using the feature, and forced to reason instance by instance about whether it's safe. Worse, it's not just me stuck reasoning instance by instance; it's the users of my libraries that could otherwise leverage these benefits.
It is a passable solution for the end user who is writing an application for a fixed instance or two, but as someone who produces a library consumed by other users, we're just accumulating invariants that don't show up in the types that they have to reason about, which strikes me as poor engineering.
-Edward

I expect the largest audience involved here is the group that doesn't know or care about these functions, but definitely wants their code to work.
As such, I'm opposed to anything that would break code that doesn't need the benefits of these functions. Two specific scenarios come to mind:
- Floating instances written for DSLs or AST-like types (the only common example of Floating instances not already mentioned) needing implementations of functions their author may not have heard of in order to compile without warnings. This could be mitigated by good documentation and by providing "defaultFoo" functions (see the sketch just below this message) suitable for implementations that don't or can't do anything useful with these functions anyway.
- Programmers who don't actually need the extra precision using these functions anyway due to having a vague sense that they're "better". Yes, in my experience this sort of thing is a common mentality among programmers. Silently introducing runtime exceptions in this scenario seems completely unacceptable to my mind, and I'm strongly opposed to any proposal involving that.
As far as I can see, together these rule out any proposal that would directly add functions to an existing class unless default implementations no worse than the status quo (in some sense) are provided.
For default implementations, I would prefer the idea suggested earlier of a (probably slower) algorithm that does preserve precision rather than a naive version, if that's feasible. This lives up to the claimed benefit of higher precision, and as a general rule I feel that any pitfalls left for the unwary should at worst provide the correct result more slowly.
Also, of all the people who might be impacted by this, I suspect that the group who really do need both speed and precision for number crunching are the most likely to know what they're doing and be aware of potential pitfalls.
- C.
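A sketch of the "defaultFoo" idea from the first point above: exported helpers (names hypothetical) that a symbolic or AST-style Floating instance could wire in explicitly, so the choice to accept the naive behaviour is visible in its source rather than silently inherited.

-- Naive fallbacks for instances that can't do anything better; using them is
-- an explicit, documented choice by the instance author.
defaultLog1p :: Floating a => a -> a
defaultLog1p x = log (1 + x)

defaultExpm1 :: Floating a => a -> a
defaultExpm1 x = exp x - 1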

On Thu, Apr 24, 2014 at 10:17 AM, Casey McCann wrote:
For default implementations, I would prefer the idea suggested earlier of a (probably slower) algorithm that does preserve precision rather than a naive version, if that's feasible. This lives up to the claimed benefit of higher precision, and as a general rule I feel that any pitfalls left for the unwary should at worst provide the correct result more slowly.
Now I'm wondering about a low-rent solution: Debug.Trace around the default, so they get a runtime warning of loss of precision. (Since you can't always resolve the instance at compile time, it can't be done with a compile-time pragma, which would otherwise be my preferred solution.)
-- brandon s allbery kf8nh, sine nomine associates
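A sketch of that "low-rent" default: the naive definition, but emitting a runtime warning when it is evaluated. Purely illustrative; whether trace belongs in a default method is exactly the question.

import Debug.Trace (trace)

-- Naive fallback that at least announces itself at runtime.
log1pTraced :: Floating a => a -> a
log1pTraced x =
  trace "log1p: falling back to log (1 + x); precision may be lost"
        (log (1 + x))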

The idea of a slower, more careful default doesn't work. Without ratcheting up the requirements to RealFloat you don't have any notion of precision to play with, and moving it there rules out many important cases like Complex, Quaternion, etc.
On Thu, Apr 24, 2014 at 10:17 AM, Casey McCann
I expect the largest audience involved here are the group that doesn't know or care about these functions, but definitely wants their code to work.
As such, I'm opposed to anything that would break code that doesn't need the benefits of these functions. Two specific scenarios come to mind:
- Floating instances written for DSLs or AST-like types (the only common example of Floating instances not already mentioned) needing implementations of functions their author may not have heard of in order to compile without warnings. This could be mitigated by good documentation and providing "defaultFoo" functions suitable for implementations that don't or can't do anything useful with these functions anyway.
- Programmers who don't actually need the extra precision using these functions anyway due to having a vague sense that they're "better". Yes, in my experience this sort of thing is a common mentality among programmers. Silently introducing runtime exceptions in this scenario seems completely unacceptable to my mind and I'm strongly opposed to any proposal involving that.
As far as I can see, together these rule out any proposal that would directly add functions to an existing class unless default implementations no worse than the status quo (in some sense) are provided.
For default implementations, I would prefer the idea suggested earlier of a (probably slower) algorithm that does preserve precision rather than a naive version, if that's feasible. This lives up to the claimed benefit of higher precision, and as a general rule I feel that any pitfalls left for the unwary should at worst provide the correct result more slowly.
Also, of all the people who might be impacted by this, I suspect that the group who really do need both speed and precision for number crunching are the most likely to know what they're doing and be aware of potential pitfalls.
- C.
Let's try taking a step back here.
There are clearly two different audiences in mind here and that the parameters for debate here are too narrow for us to achieve consensus.
Maybe we can try changing the problem a bit and see if we can get there by another avenue.
Your audience would wants a claim that these functions do everything in their power to preserve accuracy.
My audience wants to be able to opportunistically grab accuracy without leaking it into the type and destroying the usability of their libraries for the broadest set of users.
I essence here it is your audience is the one seeking an extra guarantee/law.
Extra guarantees are the sort of thing one often denotes through a class.
However, putting them in a separate class destroys the utility of this proposal for me.
As a straw-man / olive-branch / half-baked idea:
Could we get you what you want by simply making an extra class to indicate the claim that the guarantee holds, and get what I want by placing these methods in the existing Floating with the defaults?
I rather don't like that solution, but I'm just trying to broaden the scope of debate, and at least expose where the fault lines lie in the design space, and find a way for us to stop speaking past each other.
-Edward
On Wed, Apr 23, 2014 at 11:38 PM, Edward Kmett wrote:

> On Wed, Apr 23, 2014 at 8:16 PM, John Lato wrote:
>> Ah. Indeed, that was not what I thought you meant. But the user may not be compiling DodgyFloat; it may be provided via apt/rpm or similar.
>
> That is a fair point.
>
>> Could you clarify one other thing? Do you think that \x -> log (1+x) behaves the same as log1p?
>
> I believe that \x -> log (1 + x) is a passable approximation of log1p in the absence of a better alternative, and that I'd rather have the user get something no worse than they get today if they refactored their code to take advantage of the extra capability we are exposing, than wind up in a situation where they have to choose between trying to use it because the types say they should be able to call it and getting unexpected bottoms they can't protect against, so that the new functionality can't be used in a way that can be detected at compile time.

On Wed, Apr 23, 2014 at 11:55 PM, Edward Kmett wrote:

> At this point we're just going around in circles.
>
> Under your version of things we put them into a class in a way that everyone has to pay for it, and nobody, including me, gets to have enough faith that it won't crash when invoked to actually call it.
>
> -Edward
On Wed, Apr 23, 2014 at 5:08 PM, Edward Kmett wrote:

> I think you may have interpreted me as saying something I didn't try to say.
>
> To clarify, what I was indicating was that during the compilation of your 'DodgyFloat'-supplying package a bunch of warnings about unimplemented methods would scroll by.
>
> -Edward

On Wed, Apr 23, 2014 at 8:06 PM, Edward Kmett wrote:

> This does work.
>
> MINIMAL is checked based on the definitions supplied locally in the instance, not based on the total definitions that contribute to the instance.
>
> Otherwise we couldn't have the very poster-child example of this from the documentation for MINIMAL:
>
>     class Eq a where
>       (==) :: a -> a -> Bool
>       (/=) :: a -> a -> Bool
>       x == y = not (x /= y)
>       x /= y = not (x == y)
>       {-# MINIMAL (==) | (/=) #-}
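For concreteness, a sketch of how the same arrangement would read for Floating; the spelled-out MINIMAL set, the naive default bodies, and the wording of the warning are illustrative assumptions, not an agreed patch:

    module FloatingSketch where

    import Prelude hiding (Floating (..))

    class Fractional a => Floating a where
      pi :: a
      exp, log, sin, cos, asin, acos, atan,
        sinh, cosh, asinh, acosh, atanh :: a -> a
      -- (tan, tanh, sqrt, (**), logBase omitted here; they keep their
      -- usual defaults)
      log1p, expm1 :: a -> a
      log1p x = log (1 + x)   -- naive default; may lose precision
      expm1 x = exp x - 1     -- naive default; may lose precision
      {-# MINIMAL pi, exp, log, sin, cos, asin, acos, atan,
                  sinh, cosh, asinh, acosh, atanh, log1p, expm1 #-}

    -- An instance that omits log1p/expm1 still compiles (the naive
    -- defaults kick in), but GHC's missing-methods warning fires where
    -- the instance is defined, e.g.
    --
    --   instance Floating DodgyFloat where ...
    --     Warning: No explicit implementation for 'log1p' and 'expm1'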
On Wed, Apr 23, 2014 at 7:57 PM, John Lato wrote:

> There's one part of this alternative proposal I don't understand:
>
> On Mon, Apr 21, 2014 at 5:04 AM, Edward Kmett wrote:
>> * If you can compile sans warnings you have nothing to fear. If you do get warnings, you can know precisely what types will have degraded back to the old precision at compile time, not runtime.
>
> I don't understand the mechanism by which this happens (maybe I'm misunderstanding the MINIMAL pragma?). If a module has e.g.
>
>     import DodgyFloat (DodgyFloat) -- defined in a 3rd-party package, doesn't implement log1p etc.
>
>     x = log1p 1e-10 :: DodgyFloat
>
> I don't understand why this would generate a warning (i.e. I don't believe it will generate a warning). So the user is in the same situation as with the original proposal.
>
> John L.
>
> On Mon, Apr 21, 2014 at 5:24 AM, Aleksey Khudyakov wrote:
>> On 21 April 2014 09:38, John Lato wrote:
>>> I was just wondering, why not simply use numerically robust algorithms as defaults for these functions? No crashes, no errors, no loss of precision, everything would just work. They aren't particularly complicated, so the performance should even be reasonable.
>>
>> I think it's the best option. log1p and expm1 come with guarantees about precision. A log (1+p) default makes it impossible to depend on such guarantees. They will silently give the wrong answer.

Agreed. (I tried thinking this through myself; Floating doesn't provide enough structure to give you any notion of precision.)
On Thu, Apr 24, 2014 at 10:37 AM, Edward Kmett wrote:
> The idea of a slower more careful default doesn't work. Without ratcheting up the requirements to RealFloat you don't have any notion of precision to play with, and moving it there rules out many important cases like Complex, Quaternion, etc.

Yes, I realized this after I suggested it. I don't think it's possible to preserve higher precision in any meaningful way.

On Thu, Apr 24, 2014 at 8:16 AM, Carter Schonwald <carter.schonwald@gmail.com> wrote:
> Agreed. (I tried thinking this through myself; Floating doesn't provide enough structure to give you any notion of precision.)

Let me try to be a bit concrete here.

Are there _any_ implementations of Floating outside of base that we know of, which we are _concretely_ worried will not implement log1p and thus cause algos to lose precision? If anybody knows of these implementations, let them speak!

Furthermore, if we do know of them, then can't we submit patch requests for them? Unless there are too many of course, but I know of only one type of "typical" implementation of Floating outside of base. That implementation is constructive, arbitrary-precision reals, and in that case the default implementation should be fine.

(Outside of that, I know of perhaps two other implementations outside of base, one by edwardk, and he as well as the other author are fine with adding log1p.)

Also, in general, I don't care what happens to Floating, because it is a silly class with a hodgepodge of methods anyway (plenty of which potentially apply to things that aren't 'floating point' in any meaningful sense), although RealFloat is even sillier. (By the way, did you know that RealFloat has a defaulted "atan2" method? Whatever we do, it won't be worse than that.)

Anyway, +1 for the original proposal, and also +1 for adding this to RealFloat instead if that's acceptable, because I'm sure everyone could agree that class couldn't possibly get much worse, and there's precedent there anyway.

Also, I should add, as a rule, I think it is near-impossible to write numerical code where you genuinely care both about performance and accuracy in such a way as to be actually generic over the concrete representations involved.

Cheers,
Gershom

well said! +1 to those points as well
(i was about to write a similar email, and Gershom said the points much better than I)
+1!
On Thu, Apr 24, 2014 at 12:46 AM, Gershom Bazerman wrote:
> Let me try to be a bit concrete here. [...]

No, but due to the open nature of type classes I don't think a lack of specific instances should hinder good engineering principles. And it would be much simpler to find them if they crash when people try to use them instead of failing silently.

Anyway, I think my position is pretty clear by now. If the majority thinks incorrect behavior is better than bottoming out I won't say any more against it.
On Wed, Apr 23, 2014 at 9:46 PM, Gershom Bazerman wrote:
> Let me try to be a bit concrete here. [...]

Fwiw, I have two Floating instances in my code that I have checked out
on this laptop: one is for a dual number type, and the other is for a
numeric type augmented with an infinity. I don't think either of these
is of much significance, but if I have two, I'm sure there are quite a
lot out there.
My sympathies lie with John Lato's concern regarding the dangers of
suggesting precision that isn't there, but I'd be happy with something
that issued a warning at compile time.
Anthony
On Thu, Apr 24, 2014 at 12:46 AM, Gershom Bazerman wrote:
> Let me try to be a bit concrete here. [...]

On Thu, Apr 24, 2014 at 12:46 AM, Gershom Bazerman wrote:
> Let me try to be a bit concrete here.
>
> Are there _any_ implementations of Floating outside of base that we know of, which we are _concretely_ worried will not implement log1p and thus cause algos to lose precision? If anybody knows of these implementations, let them speak!
I would like to avoid shoving this up to RealFloat for the simple pragmatic reason that it takes us right back where we started in many ways: the naive version of the algorithm would have weaker constraints, and so the thing that shouldn't need to exist would have an artificial lease on life as a result.
> Furthermore, if we do know of them, then can't we submit patch requests for them? Unless there are too many of course, but I know of only one type of "typical" implementation of Floating outside of base. That implementation is constructive, arbitrary-precision reals, and in that case the default implementation should be fine.
The major implementations beyond base are in linear, vector-space, ad, numbers' CReal, and debug-simplereflect.

There may be a dozen other automatic differentiation implementations out there, e.g. fad, my old rad, Conal's, old versions of monoids, etc.

That said, on this John's point is true: it is an open universe, and there can be a lot of them out there.

That _also_ said, if we went with something like the MINIMAL pragma with defaults approach that we were just discussing, those 'private application instances' would be the things people build locally that *would* blast them with warnings.

So that might suggest the concrete implementation strategy:

Add the methods to Floating with defaults, but include them in MINIMAL as in my previous modified proposal, but also commit to going through hackage looking for existing instances and reaching out to authors to patch / with patches.

That pass over hackage might spackle over John's objection to default + MINIMAL in that it doesn't catch _everything_ for folks who install via a package manager, as the stuff that gets installed by such means after all starts out in the world of hackage.

... and with that we could all move on to other things. ;)

-Edward

sounds good to me! (otoh, i generally agree with your proposals)
On Thu, Apr 24, 2014 at 1:06 AM, Edward Kmett wrote:
> So that might suggest the concrete implementation strategy: [...]

I'm not entirely sure how I feel about the default + MINIMAL proposal; I've tried to take some time to think about it. There are still a few cracks for the unwary from my POV, but they're pretty small. There won't be any old code using log1p etc., and any new instances will be suitably warned. I have no reservations about choosing this path.

I would feel better about selecting it as an implementation strategy if there was a bit more feedback from others, especially as there was already some support for the OP.

John
On Wed, Apr 23, 2014 at 10:06 PM, Edward Kmett wrote:
> So that might suggest the concrete implementation strategy: [...]

Default + MINIMAL is at least just the OP + better warnings for users who are doing something that would make you uncomfortable, so in that sense it is a fairly conservative change.

-Edward
On Thu, Apr 24, 2014 at 1:35 AM, John Lato wrote:
> I'm not entirely sure how I feel about the default + MINIMAL proposal; I've tried to take some time to think about it. [...]

On 24.04.2014 06:46, Gershom Bazerman wrote:

> Let me try to be a bit concrete here.
>
> Are there _any_ implementations of Floating outside of base that we know of, which we are _concretely_ worried will not implement log1p and thus cause algos to lose precision? If anybody knows of these implementations, let them speak!

There are libqd bindings, and I have implementations of arbitrary-precision arithmetic based on Rational and on lazy lists of digits. However, you wanted to know examples of libraries where log1p and expm1 might not be implemented, and I assume that if we knew such a library, then we would be one step closer to fixing it.

> Also, in general, I don't care what happens to Floating, because it is a silly class with a hodgepodge of methods anyway (plenty of which potentially apply to things that aren't 'floating point' in any meaningful sense), although RealFloat is even sillier. (By the way, did you know that RealFloat has a defaulted "atan2" method? Whatever we do, it won't be worse than that.)

I follow the discussion in order to find out what to do with numeric-prelude. I might add log1p and expm1 methods to the Transcendental class. However, I could do this completely independently of 'base' by calling log1p and expm1 from glibc or GSL.

> Anyway, +1 for the original proposal, and also +1 for adding this to RealFloat instead if that's acceptable, because I'm sure everyone could agree that class couldn't possibly get much worse, and there's precedent there anyway.

RealFloat would exclude the Complex instance.

> Also, I should add, as a rule, I think it is near-impossible to write numerical code where you genuinely care both about performance and accuracy in such a way as to be actually generic over the concrete representations involved.

That's certainly true, thus a new Fused class might not be the worst solution. As I understand it, such classes already exist in Hackage packages.

On Mon, Apr 21, 2014 at 8:04 AM, Edward Kmett wrote:
> I didn't want to make everyone pay for this feature, but as a number of folks feel strongly about it, then let's explore the following modified concrete proposal as an alternative.
>
> * Add log1p, expm1, log1pexp, log1mexp to Floating with the defaults, but extend the MINIMAL pragma for Floating to include them, so everyone gets told to implement them.

Giving the (\x -> log (1 + x)) etc. defaults but using the MINIMAL pragma to ensure that relying on this default causes compile-time warnings seems like a good way to resolve the current dispute.

Personally, I'd prefer the approach of crashing loudly rather than silently losing precision. Because I almost always prefer things to crash loudly rather than fail silently; but then, I delight in the hair shirt.

However, without getting a few more voices in here, I'm not sure we're going to get anywhere with the debate. John, Edward, and I all know where each other stand, and we all admit it's something of an aesthetic debate. I wonder if any of the other folks from the numerical haskell community [1][2] would like to offer their perspectives.

[1] #numerical-haskell on freenode
[2] https://groups.google.com/forum/#!forum/numericalhaskell

-- Live well, ~wren

On Sun, Apr 20, 2014 at 11:25:36PM -0400, Edward Kmett wrote:
> I use log1p as a better log (1 + x).
> It lets me pick up a few decimal places worth of accuracy opportunistically.

But isn't it a bit unfortunate to take a function name that is known and used in numerical circles to get a higher accuracy, when in the Haskell world it only might give you a higher accuracy?

I'm not that much into numerics, but are you really using log1p without also really needing it? If you're using log1p to possibly get a higher accuracy but don't really need it, why are you using it in the first place?

Both solutions - a log1p without a higher accuracy and throwing an error - seem to be somehow unsatisfactory. It might be nice to be able to define a warning for default implementations that might not do what you expect, but this also just seems like another workaround.

Is it completely out of the question to have another type class for these higher accuracy functions? Sure, if you have some code that's generic on Floating, which should just work with and without these higher accuracy functions, then this is out of the question. But then at least IMHO the function names should somehow indicate that you can't be sure that they return a higher accuracy.

Greetings,
Daniel

On 21.04.2014 10:34, Daniel Trstenjak wrote:
On Sun, Apr 20, 2014 at 11:25:36PM -0400, Edward Kmett wrote:
I use log1p as a better log (1 + x).
It lets me pick up a few decimal places worth of accuracy opportunistically.
But isn't it a bit unfortunate to take a function name that is known and used in numerical circles to get higher accuracy, while in the Haskell world it only might give you higher accuracy?
I guess in other languages expm1 and log1p are fixed to Double and Float. If you stay with Double or Float in Haskell then you will get the higher precision; for other types the precision might be better than the naive expressions, but never worse.
I'm not that much into numerics, but are you really using log1p without also really needing it?
If you're using log1p to possibly get higher accuracy but don't really need it, why are you using it in the first place?
So far I have not needed log1p and expm1 at all, although I use log and exp regularly. I guess every programmer can mechanically replace log (1+x) and (exp x - 1) by (log1p x) and (expm1 x), respectively, and never make things worse.
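To illustrate that mechanical replacement on a log-scale probability example (the function names are mine; the local log1p is a naive placeholder so the sketch stands alone, standing in for the accurate class method or an FFI binding):

-- Probability, in log space, of "at least one success in n independent
-- trials", given log p for one trial:
--   log (1 - (1 - p)^n)
-- The second definition is the purely mechanical rewrite of the first,
--   log (1 + x)  ~>  log1p x,   exp x - 1  ~>  expm1 x
-- and is never less accurate.

logAtLeastOneNaive :: Double -> Int -> Double
logAtLeastOneNaive logP n =
  log (1 - exp (fromIntegral n * log (1 - exp logP)))

logAtLeastOne :: Double -> Int -> Double
logAtLeastOne logP n =
  log1p (negate (exp (fromIntegral n * log1p (negate (exp logP)))))
  where
    -- placeholder so this compiles today; with the proposal accepted this
    -- would be the precision-preserving class method
    log1p x = log (1 + x)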
Is it completely out of the question to have another type class for these higher-accuracy functions?
I am uncertain. Since other fused functions were mentioned, a new Fused or HighPrecision class might be sensible.
Sure, if you have some code that's generic on Floating, which should just work with and without these higher-accuracy functions, then this is out of the question.
But then, at least IMHO, the function names should somehow indicate that you can't be sure that they return higher accuracy.
This seems to lead to a solution where log1p and expm1 are added to Floating with default implementations, and a new Fused class containing the methods Fused.log1p and Fused.expm1 with no default implementations. log1p and expm1 can be used if you would like to benefit from higher precision but do not rely on it, and Fused.log1p and Fused.expm1 if you need the higher precision.
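A rough sketch of that two-class arrangement (all names here are placeholders for the idea, not a worked-out proposal):

module Numeric.Fused where   -- hypothetical module name

-- No defaults: an instance exists only where someone has actually written a
-- precision-preserving implementation, so a Fused constraint is a guarantee,
-- not a hope.
class Floating a => Fused a where
  log1p :: a -> a   -- accurate log (1 + x) for x near 0
  expm1 :: a -> a   -- accurate exp x - 1 for x near 0

-- Code that merely benefits from extra precision keeps using the Floating
-- defaults; code that needs it asks for Fused:
logit :: Fused a => a -> a
logit p = log p - log1p (negate p)   -- log (p / (1 - p))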

On Sun, Apr 20, 2014 at 11:25 PM, Edward Kmett
On Sun, Apr 20, 2014 at 10:49 PM, John Lato
wrote:
+0.1 to the OP
+0.5 for error defaults
+1 for no defaults
I'm +1 for the OP. I'm +0.5 for no defaults, with a full export from the Prelude. I'm -1 on error defaults, as it leaves me worse off than I am today.
I believe we've reached a consensus that no defaults is superior to error defaults. And I agree! So that's one bird down :) -- Live well, ~wren

On Thu, Apr 24, 2014 at 5:38 AM, wren romano
I believe we've reached a consensus that no defaults is superior to error defaults. And I agree! So that's one bird down :)
If I understand correctly, that's going to break user code without a
deprecation cycle and as such gets an enthusiastic -1 from me.
G
--
Gregory Collins

On Thu, Apr 24, 2014 at 10:29 AM, Gregory Collins
On Thu, Apr 24, 2014 at 5:38 AM, wren romano
wrote: I believe we've reached a consensus that no defaults is superior to error defaults. And I agree! So that's one bird down :)
If I understand correctly, that's going to break user code without a deprecation cycle and as such gets an enthusiastic -1 from me.
If that's the case, -1 from me too. Breaking user code for a change which is useful to a tiny minority is not worth it. Otherwise I'm +1 having the functions in general.

It sounds like we're converging on a solution whereby they do get defaults, but you also get a MINIMAL pragma warning you that you probably want to override them for manual Floating instances, so users will get a warning, not an error, if they omit implementations, and switching over will not hurt existing code.
That warning should serve much the same role as the deprecation cycle would, and no existing user code will break.
On Thu, Apr 24, 2014 at 4:38 AM, Johan Tibell
On Thu, Apr 24, 2014 at 10:29 AM, Gregory Collins
wrote:
On Thu, Apr 24, 2014 at 5:38 AM, wren romano
wrote: I believe we've reached a consensus that no defaults is superior to error defaults. And I agree! So that's one bird down :)
If I understand correctly, that's going to break user code without a deprecation cycle and as such gets an enthusiastic -1 from me.
If that's the case, -1 from me too. Breaking user code for a change which is useful to a tiny minority is not worth it.
Otherwise I'm +1 having the functions in general.

hi,
It sounds like we're converging on a solution whereby they do get defaults, but you also get a MINIMAL pragma warning you that you probably want to override them for manual Floating instances, so users will get a warning, not an error, if they omit implementations, and switching over will not hurt existing code.
That warning should serve much the same role as the deprecation cycle would, and no existing user code will break.
i assume "they" above means all proposed functions (not only log1p and expm1). if so, i am +1, if not, i am still +1. cheers, tobias florek

Greg, Johan, that's not the proposal at the end of the thread as of last night :-)
On Thursday, April 24, 2014, Johan Tibell
On Thu, Apr 24, 2014 at 10:29 AM, Gregory Collins
wrote:
On Thu, Apr 24, 2014 at 5:38 AM, wren romano
wrote:
I believe we've reached a consensus that no defaults is superior to error defaults. And I agree! So that's one bird down :)
If I understand correctly, that's going to break user code without a deprecation cycle and as such gets an enthusiastic -1 from me.
If that's the case, -1 from me too. Breaking user code for a change which is useful to a tiny minority is not worth it.
Otherwise I'm +1 having the functions in general.

I'm not advocating that we use the opcodes themselves. That is pretty much tantamount to numerical precision suicide as most of them have pretty arcane restrictions on their valid input ranges that vary by platform and expect you to play games with scaling or newton-raphson as well. You need a fair bit of wrapping around the available floating point opcodes.

e.g. in glibc on x86 expm1 looks like:

.text
ENTRY(__expm1)
    fldl    4(%esp)         // x
    fxam                    // Is NaN or +-Inf?
    fstsw   %ax
    movb    $0x45, %ch
    andb    %ah, %ch
    cmpb    $0x40, %ch
    je      3f              // If +-0, jump.
#ifdef PIC
    LOAD_PIC_REG (dx)
#endif
    cmpb    $0x05, %ch
    je      2f              // If +-Inf, jump.
    fldt    MO(l2e)         // log2(e) : x
    fmulp                   // log2(e)*x
    fld     %st             // log2(e)*x : log2(e)*x
    frndint                 // int(log2(e)*x) : log2(e)*x
    fsubr   %st, %st(1)     // int(log2(e)*x) : fract(log2(e)*x)
    fxch                    // fract(log2(e)*x) : int(log2(e)*x)
    f2xm1                   // 2^fract(log2(e)*x)-1 : int(log2(e)*x)
    fscale                  // 2^(log2(e)*x)-2^int(log2(e)*x) : int(log2(e)*x)
    fxch                    // int(log2(e)*x) : 2^(log2(e)*x)-2^int(log2(e)*x)
    fldl    MO(one)         // 1 : int(log2(e)*x) : 2^(log2(e)*x)-2^int(log2(e)*x)
    fscale                  // 2^int(log2(e)*x) : int(log2(e)*x) : 2^(log2(e)*x)-2^int(log2(e)*x)
    fsubrl  MO(one)         // 1-2^int(log2(e)*x) : int(log2(e)*x) : 2^(log2(e)*x)-2^int(log2(e)*x)
    fstp    %st(1)          // 1-2^int(log2(e)*x) : 2^(log2(e)*x)-2^int(log2(e)*x)
    fsubrp  %st, %st(1)     // 2^(log2(e)*x)
    ret
2:  testl   $0x200, %eax    // Test sign.
    jz      3f              // If positive, jump.
    fstp    %st
    fldl    MO(minus1)      // Set result to -1.0.
3:  ret

I don't think we want to get into replicating that logic. =)

-Edward

On Thu, Apr 17, 2014 at 2:48 PM, Henning Thielemann <schlepptop@henning-thielemann.de> wrote:
On 17.04.2014 19:15, Edward Kmett wrote:
log1p and expm1 are C standard library functions that are important for
work with exponentials and logarithms.
I propose adding them to the Floating class where it is defined in GHC.Float.
No question, these functions are useful. But I think there should be two proposals:
1) Add log1pFloat, log1pDouble, expm1Float, expm1Double to GHC.Float
2) Extend the Floating class with log1p and expm1 methods.
I think the first item is unproblematic since it is a simple addition. Since FPUs sometimes directly implement log1p and expm1 functions, I wonder whether GHC should also support the corresponding machine instructions. E.g. x86 has F2XM1 and FYL2XP1, and good old MC68882 had FETOXM1 and FLOGNP1.
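For item 1, a plausible shape for those primitives is a set of thin FFI bindings to the C library (a sketch only: the C names are the standard C99 libm ones, and whether GHC would use ccall imports or foreign prims is exactly the open question):

{-# LANGUAGE ForeignFunctionInterface #-}
module GHC.Float.Log1p where   -- placeholder module name for the sketch

-- Thin bindings to the C99 functions; "unsafe" is fine because the calls are
-- short, non-blocking, and never call back into Haskell.
foreign import ccall unsafe "math.h log1p"  log1pDouble :: Double -> Double
foreign import ccall unsafe "math.h expm1"  expm1Double :: Double -> Double
foreign import ccall unsafe "math.h log1pf" log1pFloat  :: Float  -> Float
foreign import ccall unsafe "math.h expm1f" expm1Float  :: Float  -> Float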
The second item means altering the Floating class, which affects all custom Floating instances. I think one should add default implementations. They don't have a numerical advantage, but they save programmers from code breakage.
We do not have to export these from Prelude. My knee-jerk reaction is
that we probably shouldn't.
Not exporting them from Prelude still means exporting them from GHC.Float, right? I mean, users must be able to implement these methods for custom types like the extended-precision floating point numbers provided by libqd. But there should also be a non-GHC module that exports the full Floating class.

On 17.04.2014 23:02, Edward Kmett wrote:
I'm not advocating that we use the opcodes themselves.
That is pretty much tantamount to numerical precision suicide as most of them have pretty arcane restrictions on their valid input ranges that vary by platform and expect you to play games with scaling or newton-raphson as well. You need a fair bit of wrapping around the available floating point opcodes.
e.g. in glibc on x86 expm1 looks like:
.text
ENTRY(__expm1)
    fldl    4(%esp)         // x
    fxam                    // Is NaN or +-Inf?
    fstsw   %ax
    movb    $0x45, %ch
    andb    %ah, %ch
    cmpb    $0x40, %ch
    je      3f              // If +-0, jump.
#ifdef PIC
    LOAD_PIC_REG (dx)
#endif
    cmpb    $0x05, %ch
    je      2f              // If +-Inf, jump.
    fldt    MO(l2e)         // log2(e) : x
    fmulp                   // log2(e)*x
    fld     %st             // log2(e)*x : log2(e)*x
    frndint                 // int(log2(e)*x) : log2(e)*x
    fsubr   %st, %st(1)     // int(log2(e)*x) : fract(log2(e)*x)
    fxch                    // fract(log2(e)*x) : int(log2(e)*x)
    f2xm1                   // 2^fract(log2(e)*x)-1 : int(log2(e)*x)
    fscale                  // 2^(log2(e)*x)-2^int(log2(e)*x) : int(log2(e)*x)
    fxch                    // int(log2(e)*x) : 2^(log2(e)*x)-2^int(log2(e)*x)
    fldl    MO(one)         // 1 : int(log2(e)*x) : 2^(log2(e)*x)-2^int(log2(e)*x)
    fscale                  // 2^int(log2(e)*x) : int(log2(e)*x) : 2^(log2(e)*x)-2^int(log2(e)*x)
    fsubrl  MO(one)         // 1-2^int(log2(e)*x) : int(log2(e)*x) : 2^(log2(e)*x)-2^int(log2(e)*x)
    fstp    %st(1)          // 1-2^int(log2(e)*x) : 2^(log2(e)*x)-2^int(log2(e)*x)
    fsubrp  %st, %st(1)     // 2^(log2(e)*x)
I guess this is actually 2^(log2(e)*x)-1, otherwise the effort was unnecessary. :-) But I see that there is enough code to justify a subroutine.

On 17 Apr 2014, at 18:15, Edward Kmett wrote:
log1p and expm1 are C standard library functions that are important for work with exponentials and logarithms.
I propose adding them to the Floating class where it is defined in GHC.Float.
+1. I don't know anything about these functions per se, but am persuaded by Edward's case that they are standard and expected by numerical analysts. My only reservation is that GHC.Float is too implementation-specific a location for them. If these functions are important, they are important enough to make standard. Let's have them in module Numeric, or Numeric.Floating, or whatever. Regards, Malcolm
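For what that could mean concretely, a sketch of the re-export Malcolm suggests (Numeric.Floating is his hypothetical module name, not an actual base module; the class itself would still be defined in GHC.Float):

-- Hypothetical portable home for the full class, per the suggestion above.
module Numeric.Floating
  ( Floating(..)   -- the whole class, including log1p and expm1
  ) where

import GHC.Float (Floating(..))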

+1 to the proposal as it stands.

I can verify that these functions are essential for numerical work and that, though horrible, the names are standard and expected. In the logfloat package I link to libc in order to use them for precisely this reason.

However, I'm not entirely sure how I feel about the proposal. There are a number of other fused algorithms like log1mexp and log1pexp [1] which are related to log1p/expm1, or like fma which often has CPU support. These fused algorithms are just as important for numerical work, but not included in the proposal. From the numerical side of things I'm all for throwing in the whole kitchen sink, though from a type class perspective I'm not sure.

The real benefit of including these things in the core libraries, IMO, would be to make them primops, which would reduce the overhead of using them. The type class layer above the primops is just (much needed) sugar.

Also, as Scott Turner mentions, from the numerical perspective I'd much rather know that a type has a correct (i.e., precise) implementation of these fused algorithms than just have a default implementation which may be imprecise. Though this is more a matter of taste, either way.

[1] http://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf

-- Live well, ~wren
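For reference, log1pexp and log1mexp are themselves definable from log1p/expm1; the sketch below follows the piecewise scheme described in the cited note [1]. The Double cutoffs (-37, 18, 33.3 and log 2) are my recollection of the values given there, so treat them as assumptions, and the primed names just avoid clashing with any library versions of these functions:

-- log1p'/expm1' stand in for accurate implementations (e.g. FFI bindings to
-- libm, or the class methods once this proposal lands); they are defined
-- naively here only so the sketch stands alone.
log1p', expm1' :: Double -> Double
log1p' x = log (1 + x)
expm1' x = exp x - 1

-- log (1 + exp x), without overflow for large x or precision loss for very
-- negative x; cutoffs assumed from the note cited above.
log1pexp' :: Double -> Double
log1pexp' x
  | x <= -37  = exp x
  | x <= 18   = log1p' (exp x)
  | x <= 33.3 = x + exp (negate x)
  | otherwise = x

-- log (1 - exp x), defined for x <= 0; switches branches at -log 2.
log1mexp' :: Double -> Double
log1mexp' x
  | x > negate (log 2) = log (negate (expm1' x))
  | otherwise          = log1p' (negate (exp x))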
participants (18): Aleksey Khudyakov, Anthony Cowley, Brandon Allbery, Carter Schonwald, Casey McCann, Daniel Trstenjak, Edward Kmett, Gershom Bazerman, Gregory Collins, Henning Thielemann, Jacques Carette, Johan Tibell, John Lato, Malcolm Wallace, Michael Snoyman, Scott Turner, Tobias Florek, wren romano