
On Mon, 2011-09-26 at 18:53 +0200, Lennart Augustsson wrote:
> If you do [0.1, 0.2 .. 0.3] it should leave out 0.3. These are floating-point numbers, and if you don't understand them, then don't use them. The current behaviour of .. for floating point is totally broken, IMO.
I'm curious: do you have even a single example where the current behavior doesn't do what you really wanted? Why would you write an upper bound of 0.3 on a list if you don't expect it to be included in the result? I understand that you can construct surprising examples with ranges no one would really write... but when would you actually *want* behavior that pretends floating-point numbers are an exact type and splits hairs?

I'd suggest that if you write code that depends on whether 0.1 + 0.1 + 0.1 <= 0.3, for any reason other than to demonstrate rounding error, you're writing broken code. So I don't understand the proposal to change this notation in a way that creates a bunch of extra broken code.

-- Chris
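P.S. For concreteness, here is roughly what a GHCi session shows today (a sketch, assuming the usual defaulting to Double; the exact printed digits may vary by implementation, but the shape is standard IEEE double behavior):

    Prelude> [0.1, 0.2 .. 0.3] :: [Double]
    [0.1,0.2,0.30000000000000004]
    Prelude> 0.1 + 0.1 + 0.1 <= (0.3 :: Double)
    False

The Report's enumFromThenTo for Fractional types keeps taking elements while they are at most the limit plus half the step, which is why the last element lands slightly past 0.3 rather than being cut off just short of it. That half-step allowance is exactly the behavior under debate here.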