On Tue, Sep 10, 2013 at 6:14 PM, Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
On 10 September 2013 22:49, Brandon Allbery <allbery.b@gmail.com> wrote:
> On Tue, Sep 10, 2013 at 5:11 PM, Oscar Benjamin <oscar.j.benjamin@gmail.com>
> wrote:
>>
>> What do you mean when you say that floating point can't be captured in
>> a simple functional description?
>
> *You* try describing the truncation behavior of Intel FPUs (they use 80 bits
> internally but only store 64, for (double)). "Leaving aside" isn't an
> option; it's visible in the languages that use them.

> However, for the same CPU and the same pair of inputs floatadd(A, B)
> returns the same result, right? The result may differ from one CPU to

In isolation, probably. Once combined with other operations, the result depends on how the compiler optimizes and orders those operations.

> through computation). What is it about functional programming
> languages that makes this difficult, as you implied earlier?
 
Only the expectation differs; programmers in, say, C generally ignore such things, although there are obscure compiler options that try to control what happens, and C doesn't promise much about the behavior anyway. In pure functional programming, people get used to things behaving in nice, theoretically characterized ways... and then they run into the bit-size limit on Int, or the somewhat erratic behavior of Float and Double, and suddenly the nice abstractions fall apart.

--
brandon s allbery kf8nh                               sine nomine associates
allbery.b@gmail.com                                  ballbery@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net