
On 23/09/2011, at 4:06 PM, Chris Smith wrote:
> On Fri, 2011-09-23 at 11:02 +1200, Richard O'Keefe wrote:
>> I do think that '..' syntax for Float and Double could be useful,
>> but the actual definition is such that, well, words fail me.
>> [1.0..3.5] => [1.0,2.0,3.0,4.0] ????
>> Why did anyone ever think _that_ was a good idea?
>
> In case you meant that as a question, the reason is this:
>
> Prelude> [0.1, 0.2 .. 0.3]
> [0.1,0.2,0.30000000000000004]
That shows why it is a *BAD* idea. The Double nearest to 0.3 is 0.29999999999999998890, so the final value, 0.30000000000000004, is clearly and unambiguously *outside* the requested range.
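(For concreteness, the overshoot can be checked directly in GHCi; this assumes IEEE 754 Double arithmetic:)

Prelude> last [0.1, 0.2 .. 0.3 :: Double]
0.30000000000000004
Prelude> last [0.1, 0.2 .. 0.3 :: Double] > 0.3
True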
> Because of rounding error, an implementation that meets your proposed law would have left out 0.3 from that sequence, when of course it was intended to be there.
But the output shown does NOT include 0.3 in the sequence. 0.3 `elem` [0.1, 0.2 .. 0.3] is False.
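(Indeed, one can check both the membership test and, assuming IEEE 754 Double, the value actually stored for the literal 0.3:)

Prelude> 0.3 `elem` [0.1, 0.2 .. 0.3 :: Double]
False
Prelude> toRational (0.3 :: Double)
5404319552844595 % 18014398509481984

That rational is 0.29999999999999998889..., so the list's final element, 0.30000000000000004, cannot possibly equal it.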
> This is messy for the properties you want to state, but it's almost surely the right thing to do in practice.
I flatly deny that. I have access to several programming languages that offer 'REAL DO', including Fortran, R, and Smalltalk. They all do the same thing; NONE of them overshoots the mark. If I *wanted* the range to be enlarged a little bit, I would enlarge it myself: [0.1, 0.2 .. 0.3+0.001] perhaps.
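(A rough sketch of the non-overshooting behaviour those languages provide; realRange is only an illustrative name, it handles a positive step only, and it uses repeated addition, so it is a sketch rather than a careful implementation:)

-- Hypothetical non-overshooting enumeration in the spirit of
-- Fortran/R/Smalltalk REAL DO: step by (m - n), never pass lim.
realRange :: Double -> Double -> Double -> [Double]
realRange n m lim = takeWhile (<= lim) (iterate (+ (m - n)) n)

-- realRange 0.1 0.2 0.3 ==> [0.1,0.2]
-- (0.3 is not reached: the computed third element,
--  0.30000000000000004, already exceeds the stated limit.)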
> If the list is longer, then the most likely way to get it right is to follow the behavior as currently specified.
I don't see the length of the list as having much relevance; if the bug shows up in a list of length 3, it is clearly not likely to be any better for longer lists. This is NOT a feature by any stretch of the imagination; it is a BUG. If you have used REAL DO in almost any other programming language, you will be shocked and dismayed by its behaviour in Haskell. Programming constructs that are implemented to do what you would probably have meant if you were an idiot, instead of what you *asked* for, are dangerous.
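(For reference, the behaviour "as currently specified" is, roughly, the Haskell Report's rule of continuing until the elements exceed the limit plus half the increment. A simplified sketch, positive step only and not the exact Prelude code:)

-- Roughly the Report's rule for Float/Double ranges (positive step only):
-- keep elements while they are <= lim + (m - n) / 2.
reportRange :: Double -> Double -> Double -> [Double]
reportRange n m lim = takeWhile (<= lim + (m - n) / 2) (iterate (+ (m - n)) n)

-- reportRange 0.1 0.2 0.3 ==> [0.1,0.2,0.30000000000000004]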
> If you can clear this up with a better explanation of the properties, great! But if you can't, then we ought to reject the kind of thinking that would remove useful behavior when it doesn't fit some theoretical properties that looked nice until you consider the edge cases.
I don't see any useful behaviour here. I see an implausibly motivated bug, and while I _have_ written REAL DO in the past (because some languages offer only one numeric type), I cannot imagine wishing to do so in Haskell, thanks to this bug. What I want now is a compiler option, on by default, to assure me that I am *not* using floating-point enumeration in Haskell.
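(In the meantime, the usual way to avoid floating-point enumeration altogether, shown here only as an illustration, is to enumerate exact integers and scale:)

-- Produce 0.1, 0.2, 0.3 without ever enumerating Doubles:
-- enumerate exact Ints and convert each one once.
grid :: [Double]
grid = [fromIntegral i / 10 | i <- [1 .. 3 :: Int]]

-- grid ==> [0.1,0.2,0.3]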