Re: [Haskell-cafe] Re: A question about "monad laws"

Hello,

On a tangent, probably: On Thursday 14 February 2008 10:24, Roman Leshchinskiy wrote:
... Hmm. Personally, I've never seen an algorithm where comparing for exact equality was algorithmically necessary. Sometimes (rarely) it is acceptable, but necessary? Do you know of one?
Finding the "machine epsilon", perhaps, that is, the smallest (floating point, surely) number for which 1.0+machine_eps==1.0 would be a candidate?
...
Best regards,
Thorkil
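The search Thorkil describes — halving a candidate until adding it to 1.0 no longer changes the result — could be sketched in Haskell as below. This is a minimal sketch, not code from the thread; the name machineEps is chosen here, and it computes the gap between 1.0 and the next representable Double, which is the usual reading of "machine epsilon".

```haskell
-- Halve a candidate until 1.0 + e/2 rounds back to 1.0; the last e
-- that still made a difference is the machine epsilon (2^-52 for Double).
machineEps :: Double
machineEps = go 1.0
  where
    go e
      | 1.0 + e / 2 == 1.0 = e
      | otherwise          = go (e / 2)

main :: IO ()
main = print machineEps  -- about 2.22e-16 for Double
```

Note that the loop's termination test is exactly the kind of exact floating-point equality the thread is discussing: here it is algorithmically essential, since any tolerance would change the answer.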

G'day all.
Quoting Thorkil Naur:
Finding the "machine epsilon", perhaps, that is, the smallest (floating point, surely) number for which 1.0+machine_eps==1.0 would be a candidate?
The machine epsilon is the smallest relative separation between two adjacent normalised floating point numbers. (The largest is the machine epsilon multiplied by the radix, more or less.) So as I understand it, if you're thinking relative error, this test:

    (fabs(x1-x2) < machine_eps * fabs(x1))

should be true if and only if x1 == x2, assuming that x1 and x2 are nonzero and normalised.

I've always had the impression that using the machine epsilon for pseudo-equality testing is fairly useless, especially if you can work out a meaningful problem-specific tolerance. What seems to be more useful is using the machine epsilon to compute an estimate of how much relative error your algorithm accumulates. I've seen this in a lot of Serious Numeric Code(tm), and I've done it myself (probably inexpertly) a few times.

I haven't tried this, but I imagine that a computed relative error estimate could be useful for assisting your approximate-equality tests under some circumstances. Richard might know of some circumstances where this sort of thing would be useful.

Cheers,
Andrew Bromage
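The relative-error test quoted above could be rendered in Haskell as follows. This is an illustrative sketch, not code from the post: the names relClose and eps are chosen here, and eps is hard-coded to the Double machine epsilon rather than computed.

```haskell
-- Machine epsilon for Double: radix 2, 53-bit significand, so the gap
-- between 1.0 and the next representable value is 2^-52.
eps :: Double
eps = 2 ^^ (-52)

-- Relative-error closeness test, as in the C-style snippet above:
-- |x1 - x2| < tol * |x1|, meaningful for nonzero, normalised x1.
relClose :: Double -> Double -> Double -> Bool
relClose tol x1 x2 = abs (x1 - x2) < tol * abs x1

main :: IO ()
main = do
  -- With tol = eps, the test accepts only exact equality, matching
  -- Andrew's observation that it is true iff x1 == x2:
  print (relClose eps 1.0 1.0)             -- True
  print (relClose eps 1.0 (1.0 + eps))     -- False
  -- A problem-specific tolerance accepts genuinely nearby values:
  print (relClose 1e-9 1.0 (1.0 + 1e-12))  -- True
```

The middle case illustrates the point about pseudo-equality testing with the machine epsilon: two adjacent Doubles already fail the test, so the tolerance buys essentially nothing over ==.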