
David Roundy wrote:
> Why not look for a heuristic that gets the common cases right, rather than going with an elegant wrong solution? After all, these enumerations are most often used by people who neither care nor know how they're implemented, but who most likely would prefer it if Haskell worked as well as MatLab, Python, etc.

Although MatLab has a lot of bad heuristics, they fortunately didn't try to be too clever with respect to rounding errors. Floating-point enumerations have the same problems in MatLab as in all other languages.
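For concreteness, the kind of rounding artifact under discussion shows up directly in GHCi (a small sketch; the exact elements depend on the Prelude's numericEnumFromThenTo, here the Haskell 98 behavior as implemented in GHC):

```haskell
-- With the Haskell 98 style enumeration, elements are produced by
-- repeated addition and the list continues while elements are
-- <= to + step/2, so the last element can overshoot the stop value.
main :: IO ()
main = do
  let xs = [0, 0.1 .. 0.3] :: [Double]
  print (length xs)  -- 4 elements
  print (last xs)    -- slightly more than 0.3 due to accumulated error
```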

On Wed, Oct 15, 2008 at 11:25:57PM +0200, Henning Thielemann wrote:
> David Roundy wrote:
>> Why not look for a heuristic that gets the common cases right, rather than going with an elegant wrong solution? After all, these enumerations are most often used by people who neither care nor know how they're implemented, but who most likely would prefer it if Haskell worked as well as MatLab, Python, etc.
> Although MatLab has a lot of bad heuristics, they fortunately didn't try to be too clever with respect to rounding errors. Floating-point enumerations have the same problems in MatLab as in all other languages.

I presume you say this because you haven't tried using MatLab? I don't know what their algorithm is, but MatLab gives:

    sprintf('%.20f\n', (0:0.1:0.3), 0.1*3, 0.1+0.1+0.1, 0.3)
    ans =
    0.00000000000000000000
    0.10000000000000000555
    0.19999999999999998335
    0.29999999999999998890
    0.30000000000000004441
    0.30000000000000004441
    0.29999999999999998890

from which you can clearly see that MatLab does have special handling for its [0,0.1..0.3] syntax. For what it's worth, Octave has the same behavior:

    octave:1> sprintf('%.20f\n', (0:0.1:0.3), 0.1*3, 0.1+0.1+0.1, 0.3)
    ans =
    0.00000000000000000000
    0.10000000000000000555
    0.20000000000000001110
    0.29999999999999998890
    0.30000000000000004441
    0.30000000000000004441
    0.29999999999999998890

I don't know what they're doing, but obviously they're doing something clever to make this common case work. They presumably use different algorithms, since Octave gives a different answer for the 0.2 element than MatLab does. MatLab's value there is actually less than 0.2, and it's also less than 2*0.1, which is a bit odd. Both agree that the final element in the sequence is 0.3. The point is that other languages *do* put care into how they define their sequences, and I see no reason why Haskell should be sloppier.

David
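One plausible shape for such a heuristic can be sketched in Haskell. This is only a guess at the behavior shown above, not MatLab's or Octave's actual algorithm: compute each element as an integer multiple of the step (avoiding accumulated error from repeated addition), and use the requested stop value verbatim as the final element:

```haskell
-- Hypothetical sketch of a MatLab/Octave-like range. The name
-- rangeBy is invented for illustration. This is only sensible when
-- `to` is (approximately) a whole number of steps from `from`;
-- a real implementation would need a tolerance check before
-- snapping the endpoint.
rangeBy :: Double -> Double -> Double -> [Double]
rangeBy from step to =
    [from + fromIntegral i * step | i <- [0 .. n - 1]] ++ [to]
  where
    n = round ((to - from) / step) :: Integer

main :: IO ()
main = mapM_ print (rangeBy 0 0.1 0.3)
```

For 0, 0.1, 0.3 this reproduces the two properties David observed: the intermediate elements are exact multiples of the stored step, and the final element is exactly the literal 0.3.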

On Wed, 15 Oct 2008, David Roundy wrote:
> On Wed, Oct 15, 2008 at 11:25:57PM +0200, Henning Thielemann wrote:
>> David Roundy wrote:
>>> Why not look for a heuristic that gets the common cases right, rather than going with an elegant wrong solution? After all, these enumerations are most often used by people who neither care nor know how they're implemented, but who most likely would prefer it if Haskell worked as well as MatLab, Python, etc.
>> Although MatLab has a lot of bad heuristics, they fortunately didn't try to be too clever with respect to rounding errors. Floating-point enumerations have the same problems in MatLab as in all other languages.
> I presume you say this because you haven't tried using MatLab?

I had to use MatLab in the past and remember problems with rounding errors. What you show indicates that they try to be more clever, though. But they can't turn floating-point numbers into precise rationals.
    length(0:1/10:0.9999999999999999)
    ans = 11

    length((0:1:9.999999999999999)/10)
    ans = 10

    zeros(1,10/77*77)
    Warning: Size vector should be a row vector with integer elements.
    ans = 0 0 0 0 0 0 0 0 0

I suspect that all algorithms that try to work around the problems of floating-point numbers will do something unexpected in certain circumstances. So I think it is better if they do it in a way that can be predicted easily. I feel much safer with an enumeration of integers which are converted to floating-point numbers than with a heuristic for floating-point numbers.

Haskell or the underlying math library also has heuristics that I don't like:

    Prelude> (-1)**2 :: Double
    1.0
    Prelude> (-1)**(2 + 1e-15 - 1e-15) :: Double
    NaN

So I think (**) should be limited to positive bases.
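Henning's preference, enumerating integers and then converting, can be written directly in Haskell. The step count is decided entirely in integer arithmetic, so it is exact by construction (the name tenths is illustrative):

```haskell
-- Enumerate exact integers, then map into the floating-point domain.
-- The length of the list cannot gain or lose an element to roundoff,
-- because no floating-point comparison decides where the list ends.
tenths :: [Double]
tenths = map ((/ 10) . fromInteger) [0 .. 9]

main :: IO ()
main = do
  print (length tenths)  -- always 10
  print (last tenths)
```

Each element still carries the usual representation error of n/10, but the sequence's length and structure are fully predictable.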

On Thu, Oct 16, 2008 at 05:36:35PM +0200, Henning Thielemann wrote:
> On Wed, 15 Oct 2008, David Roundy wrote:
>> On Wed, Oct 15, 2008 at 11:25:57PM +0200, Henning Thielemann wrote:
>>> David Roundy wrote:
>>>> Why not look for a heuristic that gets the common cases right, rather than going with an elegant wrong solution? After all, these enumerations are most often used by people who neither care nor know how they're implemented, but who most likely would prefer it if Haskell worked as well as MatLab, Python, etc.
>>> Although MatLab has a lot of bad heuristics, they fortunately didn't try to be too clever with respect to rounding errors. Floating-point enumerations have the same problems in MatLab as in all other languages.
>> I presume you say this because you haven't tried using MatLab?
> I had to use MatLab in the past and remember problems with rounding errors. What you show indicates that they try to be more clever, though. But they can't turn floating-point numbers into precise rationals.

Of course not. That doesn't mean we shouldn't try to make the common cases work.
> I suspect that all algorithms that try to work around the problems of floating-point numbers will do something unexpected in certain circumstances. So I think it is better if they do it in a way that can be predicted easily. I feel much safer with an enumeration of integers which are converted to floating-point numbers than with a heuristic for floating-point numbers.
I agree that it's nice to have functions whose behavior can be predicted easily. This isn't always possible with floating-point numbers, due to roundoff error. The proposed change to the Prelude removes this "easy-predicting" behavior. The Prelude was written such that it was easy to predict what the result would be unless the stop value is a half-integer number of steps away; this change makes the behavior hard to predict when the stop value is an integer number of steps away. I assert that an integer number of steps is a more common situation than a half-integer number of steps, and therefore the Haskell 98 Prelude's behavior was better. It isn't the best solution, but it's better than the proposed alternative. And with simple syntax, I think simple behavior is best, which means it should not depend on the details of roundoff error.

Another alternative would be to say

    import Data.Ratio ( (%) )

    xxx m n p = map (scale . fromRational . (% num)) [0 .. num]
      where
        num = round ((p - m) / (n - m))
        scale f = m * (1 - f) + p * f

This gives up even approximating the property that consecutive elements of the output differ by (n-m) (which was never true in any of the proposed implementations, and is impossible in any case). In exchange, we always get a sequence with the requested beginning and ending values. It has the downside that (map fromInteger [0,2..9]) :: [Float] gives a different result from [0,2..9] :: [Float]. But we gain the new feature that (map fromInteger [1,2..10000000000000000]) :: [Float] gives the same result as [1,2..10000000000000000] :: [Float].

David
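As a self-contained sanity check of the endpoint claim (repeating the xxx definition so it compiles on its own): scale 0 contributes p*0 and scale 1 contributes m*0, and adding an exact zero leaves the other operand unchanged, so the first and last elements are m and p verbatim:

```haskell
import Data.Ratio ( (%) )

-- David's proposed alternative, repeated so this runs standalone:
-- interpolate between the endpoints instead of stepping repeatedly.
xxx :: Double -> Double -> Double -> [Double]
xxx m n p = map (scale . fromRational . (% num)) [0 .. num]
  where
    num = round ((p - m) / (n - m))          -- number of steps, as an Integer
    scale f = m * (1 - f) + p * f            -- exact at f = 0 and f = 1

main :: IO ()
main = do
  let xs = xxx 0 0.1 0.3
  print xs
  print (head xs == 0 && last xs == 0.3)  -- endpoints are hit exactly
```

The intermediate elements are only as accurate as the rounded fractions 1/num, 2/num, ..., but the endpoints no longer depend on roundoff at all.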
participants (3)
- David Roundy
- Henning Thielemann
- Henning Thielemann