
Hi folks. I'm trying to write a Mandelbrot generator, and I'm having a few problems with precision.

First of all, currently I'm just using Double to represent coordinates. Can anybody tell me what the smallest value you can represent with one is? (Not including denormals.)

(I've built a function that checks for cycles in the orbit - by looking for previous points that are sufficiently "near" to the current one. But I want to know how much precision a Double gives me so I can figure out how near is "near". I've put a rough sketch of what I mean at the end of this message.)

Next up, at some point I'm going to need more precision than a Double can give me. (But, obviously, I still want the thing to go as fast as humanly possible.) In particular, I do NOT want to go into "infinite" precision. (E.g., exact arithmetic with arbitrary fractions.) I want to be able to make the precision arbitrarily high, but still finite and fixed. (The idea being that the further you zoom in, the more the computer turns up the precision.) Is there anything useful for this in the standard libraries? Or will I have to write something?
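Here's the sketch of the cycle check, just to make the question concrete. The tolerance in `near` is a made-up placeholder - choosing it properly is exactly what I'm asking about:

    import Data.Complex

    -- Placeholder tolerance: how near is "near"? This is the question.
    near :: Double -> Double -> Bool
    near a b = abs (a - b) < 1e-9

    -- Two orbit points are "near" if both coordinates are near.
    nearC :: Complex Double -> Complex Double -> Bool
    nearC z w = near (realPart z) (realPart w)
             && near (imagPart z) (imagPart w)

    -- Does the orbit of c (approximately) revisit a previous point
    -- within n iterations? If so, treat c as inside the set.
    hasCycle :: Int -> Complex Double -> Bool
    hasCycle n c = go n [] 0
      where
        go 0 _    _ = False
        go k seen z
          | any (nearC z') seen = True
          | otherwise           = go (k - 1) (z' : seen) z'
          where z' = z * z + c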

G'day.
Quoting Andrew Coppin
First of all, currently I'm just using Double to represent coordinates. Can anybody tell me what the smallest value you can represent with one is? (Not including denormals.)
Remember that floating point numbers are stored in three parts: a sign, an exponent and a mantissa. Assuming that floatRadix n == 2, normalised numbers can be represented as

    s * m * 2^e

where s is 1 or -1, m is in the range [1/2,1) and is stored using floatDigits n bits of precision, and e is in the range floatRange n.

The reason why this is significant is that the precision of a floating point number is relative, not absolute. The relative error is measured by a number called "epsilon":

    -- The recursion is just so that we don't need -fglasgow-exts; feel
    -- free to use lexically-scoped type variables instead if you like.
    epsilon :: (RealFloat a) => a
    epsilon = let eps = encodeFloat 1 (1 - floatDigits eps) in eps

epsilon is the smallest number such that 1.0 + epsilon /= 1.0. It measures the minimum possible relative separation between two adjacent normalised floating-point numbers. That is, if both a and b are normalised floating-point numbers (this also means that they're not zero), and a /= b, then abs (a-b) / abs a >= epsilon. Similarly, the maximum possible relative separation is epsilon * 2. (In general it's epsilon * floatRadix n, but AFAIK no architecture that any Haskell implementation has been ported to has a floatRadix other than 2, so this is a safe assumption.)
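You can check these claims directly; here's a small demo (a sketch, assuming the usual IEEE 754 binary64 representation, which is what GHC uses for Double):

    -- Demo of epsilon for Double; the values in the comments assume
    -- IEEE 754 binary64.
    main :: IO ()
    main = do
        let eps = epsilon :: Double
        print eps                     -- 2.220446049250313e-16
        print (floatDigits eps)       -- 53 (bits of mantissa)
        print (floatRange eps)        -- (-1021,1024)
        print (1.0 + eps /= 1.0)      -- True: epsilon is visible next to 1.0
        print (1.0 + eps / 4 /= 1.0)  -- False: a change that small rounds away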
(I've built a function that checks for cycles in the orbit - by looking for previous points that are sufficiently "near" to the current one. But I want to know how much precision a Double gives me so I can figure out how near is "near".)
While epsilon is the minimum relative separation of RealFloat numbers, it is usually much smaller than the error of any nontrivial bunch of operations. For something as simple as a Mandelbrot iteration, it's usually okay to simply use epsilon multiplied by some factor which represents the accumulated error of a bunch of operations. If you were doing Serious Numeric Analysis(tm), you'd go through each step and work out the possible rounding error, and come up with a tight bound. But something like this will probably do:

    tolerance :: Double
    tolerance = 100 * epsilon

    near :: Double -> Double -> Bool
    near a b
        | a == 0 || isDenormalized a = abs (a - b) < tolerance          -- absolute test near zero
        | otherwise                  = abs (a - b) < tolerance * abs a  -- relative test elsewhere

Cheers,
Andrew Bromage
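To plug this into your orbit check, you'd compare complex points componentwise; something like this (a sketch - nearC is just an illustrative name, not a library function):

    import Data.Complex

    -- Two orbit points are "near" if both coordinates are near.
    nearC :: Complex Double -> Complex Double -> Bool
    nearC z w = near (realPart z) (realPart w)
             && near (imagPart z) (imagPart w)

Because the test is relative, "near" stays meaningful whatever the magnitude of the orbit values, so you shouldn't need to retune the tolerance as you zoom.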

On Thu, Aug 09, 2007 at 06:47:04PM +0100, Andrew Coppin wrote:
First of all, currently I'm just using Double to represent coordinates. Can anybody tell me what the smallest value you can represent with one is? (Not including denormals.)
The smallest normalised double is around 2.2e-308, but the smallest fractional difference (which is probably what's biting you) is around 2.2e-16.
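You can see both limits from GHCi (assuming the usual IEEE 754 doubles):

    ghci> (2 :: Double) ^^ (-1022)      -- smallest normalised double
    2.2250738585072014e-308
    ghci> 1.0 + 2 ^^ (-52) == (1.0 :: Double)
    False                               -- a fractional change of ~2.2e-16 is visible
    ghci> 1.0 + 2 ^^ (-54) == (1.0 :: Double)
    True                                -- anything much smaller is rounded away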
Next up, at some point I'm going to need more precision than a Double can give me. (But, obviously, I still want the thing to go as fast as humanly possible.) In particular, I do NOT want to go into "infinite" precision. (E.g., exact arithmetic with arbitrary fractions.) I want to be able to make the precision arbitrarily high, but still finite and fixed. (The idea being that the further you zoom in, the more the computer turns up the precision.) Is there anything useful for this in the standard libraries? Or will I have to write something?
I suspect you can achieve this by changing your algorithm and data structures, rather than your floating point type. If you store relative positions rather than absolute positions, then as your fractal gets smaller and smaller, your precision should continue to be good. Danger only arises when you try to compare (or subtract) two doubles whose difference is much smaller than their magnitude. By definition, fractal geometries don't require you to do this.

--
David Roundy
Department of Physics
Oregon State University
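One way to act on this (a sketch of the idea with made-up names, essentially the "perturbation" trick some deep-zoom renderers use): compute one reference orbit, and track each pixel only as a small offset from it. Writing z = z0 + dz and c = c0 + dc, the iteration z' = z^2 + c turns into dz' = 2*z0*dz + dz^2 + dc, which never subtracts two nearly-equal absolute coordinates:

    import Data.Complex

    -- Sketch: iterate a pixel as an offset dc from a reference point c0,
    -- given the reference orbit [0, c0, c0^2+c0, ...] (which you could
    -- compute once, at higher precision if need be). From z = z0 + dz:
    --   dz' = 2*z0*dz + dz^2 + dc
    deltaOrbit :: [Complex Double]   -- reference orbit, starting from 0
               -> Complex Double     -- dc: the pixel's offset from c0
               -> [Complex Double]   -- dz at each iteration
    deltaOrbit z0s dc = scanl step 0 z0s
      where
        step dz z0 = 2 * z0 * dz + dz * dz + dc

    -- The pixel escapes at step n when magnitude (z0 + dz) > 2, taking
    -- z0 and dz from the n-th elements of the two orbits.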