
Let's try taking a step back here.
There are clearly two different audiences in mind here, and the
parameters for debate are too narrow for us to achieve consensus.
Maybe we can try changing the problem a bit and see if we can get there by
another avenue.
Your audience wants a claim that these functions do everything in
their power to preserve accuracy.
My audience wants to be able to opportunistically grab accuracy without
leaking it into the type and destroying the usability of their libraries
for the broadest set of users.
In essence, your audience is the one seeking an extra
guarantee/law.
Extra guarantees are the sort of thing one often denotes through a class.
However, putting them in a separate class destroys the utility of this
proposal for me.
As a straw-man / olive-branch / half-baked idea:
Could we get you what you want by simply making an extra class to indicate
the claim that the guarantee holds, and get what I want by placing these
methods in the existing Floating with the defaults?
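To make the straw-man concrete, here is a rough sketch of what I mean. All the names below are invented for illustration; in the actual proposal log1p and expm1 would go into Floating itself, and the helper class exists only so the sketch stands alone:

class Floating a => Log1pExpm1 a where
  log1p :: a -> a
  log1p x = log (1 + x)   -- naive default: no worse than what users write today
  expm1 :: a -> a
  expm1 x = exp x - 1     -- naive default

-- Instances may lean on the defaults and still type-check everywhere.
instance Log1pExpm1 Double
instance Log1pExpm1 Float

-- A separate, method-free class whose instances assert the extra
-- guarantee/law: "these methods are implemented to full working precision."
class Log1pExpm1 a => AccurateLog1pExpm1 a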
I rather don't like that solution, but I'm just trying to broaden the scope
of debate, and at least expose where the fault lines lie in the design
space, and find a way for us to stop speaking past each other.
-Edward
On Wed, Apr 23, 2014 at 11:38 PM, Edward Kmett wrote:
On Wed, Apr 23, 2014 at 8:16 PM, John Lato wrote:
Ah. Indeed, that was not what I thought you meant. But the user may not be compiling DodgyFloat; it may be provided via apt/rpm or similar.
That is a fair point.
Could you clarify one other thing? Do you think that \x -> log (1+x) behaves the same as log1p?
I believe that \x -> log (1 + x) is a passable approximation of log1p in the absence of a better alternative. I'd rather have users who refactor their code to take advantage of the extra capability we are exposing get something no worse than they get today, than wind up in a situation where the types say they should be able to call it, yet calling it risks unexpected bottoms they can't protect against, so that the new functionality can't be used in any way that can be checked at compile time.
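To put a number on "passable", here is a quick, self-contained comparison. The Taylor-series stand-in below is only there to have something accurate to compare against; I am not proposing it:

naiveLog1p :: Double -> Double
naiveLog1p x = log (1 + x)           -- the proposed default

taylorLog1p :: Double -> Double      -- three-term series, accurate near zero
taylorLog1p x = x - x*x/2 + x*x*x/3

main :: IO ()
main = do
  let x = 1e-10 :: Double
  print (naiveLog1p x)   -- roughly 1.00000008e-10 on an IEEE double, ~8e-9 relative error
  print (taylorLog1p x)  -- roughly 9.9999999995e-11, correct to full precision

So the default is usable, just several digits short of what a real log1p gives for small arguments.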
At this point we're just going around in circles.
Under your version of things we put them into a class in a way that everyone has to pay for, and nobody, including me, can have enough faith that it won't crash when invoked to actually call it.
-Edward
On Wed, Apr 23, 2014 at 5:08 PM, Edward Kmett wrote:
I think you may have interpreted me as saying something I didn't try to say.
To clarify, what I was indicating was that during the compilation of your 'DodgyFloat' supplying package a bunch of warnings about unimplemented methods would scroll by.
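Roughly, the setup I have in mind looks like the sketch below, with stand-in names since today's Floating doesn't carry these methods; the only point is where the warning fires:

{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Stand-in for the proposed extended class.
class Num a => ProposedFloating a where
  pLog   :: a -> a
  pLog1p :: a -> a
  pLog1p x = pLog (1 + x)        -- default keeps every instance usable
  {-# MINIMAL pLog, pLog1p #-}   -- listing pLog1p is what triggers the warning

newtype DodgyFloat = DodgyFloat Double deriving (Show, Num)

-- This instance omits pLog1p, so GHC warns *here*, while compiling the
-- package that defines DodgyFloat (something along the lines of "No explicit
-- implementation for pLog1p"), not in downstream modules that merely call it.
instance ProposedFloating DodgyFloat where
  pLog (DodgyFloat d) = DodgyFloat (log d)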
-Edward
On Wed, Apr 23, 2014 at 8:06 PM, Edward Kmett wrote:
This does work.
MINIMAL is checked based on the definitions supplied locally in the instance, not based on the total definitions that contribute to the instance.
Otherwise we couldn't have the very poster-child example of this from the documentation for MINIMAL:
class Eq a where
    (==) :: a -> a -> Bool
    (/=) :: a -> a -> Bool
    x == y = not (x /= y)
    x /= y = not (x == y)
    {-# MINIMAL (==) | (/=) #-}
On Wed, Apr 23, 2014 at 7:57 PM, John Lato wrote:
There's one part of this alternative proposal I don't understand:
On Mon, Apr 21, 2014 at 5:04 AM, Edward Kmett wrote:
* If you can compile sans warnings you have nothing to fear. If you do get warnings, you can know precisely what types will have degraded back to the old precision at *compile* time, not runtime.
I don't understand the mechanism by which this happens (maybe I'm misunderstanding the MINIMAL pragma?). If a module has e.g.
import DodgyFloat (DodgyFloat) -- defined in a 3rd-party package, doesn't implement log1p etc.
x = log1p 1e-10 :: DodgyFloat
I don't understand why this would generate a warning (i.e. I don't believe it will generate a warning). So the user is in the same situation as with the original proposal.
John L.
On Mon, Apr 21, 2014 at 5:24 AM, Aleksey Khudyakov <alexey.skladnoy@gmail.com> wrote:
> On 21 April 2014 09:38, John Lato wrote:
> > I was just wondering, why not simply use numerically robust algorithms
> > as defaults for these functions? No crashes, no errors, no loss of
> > precision, everything would just work. They aren't particularly
> > complicated, so the performance should even be reasonable.
>
> I think it's the best option. log1p and expm1 come with guarantees
> about precision. The log(1+p) default makes it impossible to depend on
> such guarantees. They will silently give wrong answers.
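(For reference, the sort of "numerically robust default" being suggested here is short. The usual trick, generally credited to Kahan, looks roughly like this, modulo edge cases; the names are invented:)

-- Robust log1p in terms of log alone: form u = 1+x once, then rescale log u
-- by x/(u-1) to compensate for the rounding error made in forming u.
robustLog1p :: Double -> Double
robustLog1p x =
  let u = 1 + x
  in if u == 1
       then x                        -- x below half an ulp of 1: log1p x ~ x
       else log u * (x / (u - 1))

-- The analogous expm1 in terms of exp.
robustExpm1 :: Double -> Double
robustExpm1 x =
  let u = exp x
  in if u == 1
       then x                        -- x below half an ulp: expm1 x ~ x
       else if u == 0
              then -1                -- exp underflowed: expm1 x ~ -1
              else (u - 1) * (x / log u)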