
On Sun, Apr 20, 2014 at 8:25 PM, Edward Kmett wrote:
On Sun, Apr 20, 2014 at 10:49 PM, John Lato wrote:
On Sun, Apr 20, 2014 at 6:37 PM, Edward Kmett wrote:
On Apr 20, 2014, at 8:49 PM, John Lato wrote:
With defaults you are never worse off than you are today, but without defaults you *always* have to worry about whether you should use them.
This isn't correct. Today, I don't have expm1. I have no expectation that any type will use an appropriate fused algorithm, nor that I won't lose precision. If expm1 is defined with defaults as proposed, and I use expm1 with a type that doesn't define it, I may lose precision, leading to compounding errors in my code, *even though I used the right function*.
I'm coming at this from the perspective that I should never be worse off having called expm1 than I would be in the world before it existed. Your way I just crash, making me much worse off. I'm asking for extra bits of precision if the type I'm using can offer them. Nothing more.
And I'm saying that if you ask for extra bits of precision, and the type could offer them but doesn't, a crash is better than not giving extra precision. FP algorithms can be highly sensitive to precision, and it's a good bet that if somebody is asking for specialized behavior there's a reason why. I think it's better to fail loudly and point a finger than to fail silently.
If I'm using log1p because my algorithm requires that precision, replacing log1p with log (1+x) is not a safe transformation. But that's what your default instance does.
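To make the precision point concrete, here is a small sketch (my own illustration, not code from the proposal) showing how log (1 + x) destroys low-order bits of a small x before log ever sees them, and how a corrected version recovers them:

```haskell
-- Naive version: the addition 1 + x rounds away most of x's
-- low-order bits when x is tiny, so the result can be off by
-- roughly 11% for x = 1e-15.
naiveLog1p :: Double -> Double
naiveLog1p x = log (1 + x)

-- A classic correction trick (Goldberg / HP-15C style): use the
-- rounded sum u = 1 + x, but rescale by x / (u - 1) to undo the
-- rounding error. Illustrative only, not a production log1p.
correctedLog1p :: Double -> Double
correctedLog1p x
  | u == 1    = x                    -- x vanished entirely in the sum
  | otherwise = log u * x / (u - 1)
  where u = 1 + x

main :: IO ()
main = do
  let x = 1e-15 :: Double
  -- The true value is 1e-15 - 5e-31 + ..., i.e. 1e-15 to 15 digits.
  putStrLn $ "log (1 + x)    = " ++ show (naiveLog1p x)
  putStrLn $ "correctedLog1p = " ++ show (correctedLog1p x)
```

The point is exactly the one above: an algorithm that calls log1p because it needs those bits gets silently wrong answers from the log (1 + x) default.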
I use log1p as a better log (1 + x).
It lets me pick up a few decimal places worth of accuracy opportunistically.
That doesn't address my point, which is that for users who call log1p, replacing it with log (1+x) is not safe. Providing some opportunistic accuracy seems less important than not giving wrong answers to other users. I know for me personally it would force me to double the amount of numeric code I write, just to maximize my audience. I really don't want to go there. I just want to be able to call the function I mean, and to be able to talk to the right people to make it do the right thing.
expm1 = error "Go bug some library author to implement expm1"

would accomplish that even more efficiently, since it will directly point users to the right people.
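For contrast, the two competing defaults could be sketched against a hypothetical extension class (the class name and structure here are illustrative, not the proposal's actual code):

```haskell
-- Hypothetical class for illustration; the real proposal adds these
-- methods to Floating itself.
class Floating a => FloatingExt a where
  expm1 :: a -> a
  log1p :: a -> a

  -- Opportunistic defaults: always usable, but silently lossy for
  -- types that don't override them.
  expm1 x = exp x - 1
  log1p x = log (1 + x)

  -- The alternative argued for above would instead be:
  -- expm1 = error "Go bug some library author to implement expm1"
  -- log1p = error "Go bug some library author to implement log1p"

-- An empty instance picks up the defaults (a real Double instance
-- would call out to C's expm1/log1p).
instance FloatingExt Double

main :: IO ()
main = print (expm1 (1e-3 :: Double))
```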
And in exchange, every library author, even the vast majority of whom will never have a user who cares about this feature, needs to care or get a warning; or we silently cover up, behind their back, a real error that should have been a warning; and no user can trust that it is safe to call the function.
I think just providing implementations for Float/Double will cover >90% of use cases and convince users that it's safe to call the function. GND will probably cover another 5-8% of uses. I think it's a quite small tail we're discussing here. And I'll even admit that, since for *some* types log1p x = log (1+x) will work correctly, it's an even smaller group of users I'm concerned about. But I still think it's an unreasonable price to pay.
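For reference, the GND (GeneralizedNewtypeDeriving) route mentioned above works because a newtype over Double inherits Double's instances wholesale, so any specialized expm1/log1p would come along for free. A minimal sketch (the newtype name is made up for illustration):

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- A newtype that borrows all of Double's numeric instances via GND,
-- including Floating; if Floating grows expm1/log1p, the specialized
-- Double versions carry over automatically.
newtype Probability = Probability Double
  deriving (Show, Eq, Num, Fractional, Floating)

main :: IO ()
main = print (sqrt (Probability 4))  -- dispatches to Double's sqrt
```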
The main cases that are left over are libraries like linear or vector-space that provide floating instances for vector space types, and the Complex case which I explicitly covered in the original proposal.
Given that, your numbers leave ~2% of the libraries that would have no worse performance than they have today and which can be talked to individually, vs. a situation where I have to live in fear of calling a combinator lest I bottom out in my code with no way to detect it until runtime.
You keep saying that these cases are no worse off than they are today. As I see it, today everything works as expected, and under this proposal they'll have a function that gives an incorrect answer. Wouldn't that make them worse off?

John