
Hi, If I have a Haskell wrapper (with unsafe...) over a function that's never going to return different values and is always side-effect free, but can change depending on compile-time options of its library; my program is running, and then the version of my library is updated by my distribution's smart installation system, which does update versions of libraries in use; is it possible that I get wrong behavior from my program? I do not understand enough about package management to understand how running programs or libraries are updated, and less about how linking works between Haskell and libraries in other languages, so I don't know if my program is guaranteed to stay with a single version of a library for each run. (Sure, this is a weird situation, but I do like to think about worst cases.) Thanks, Maurício

Mauricio wrote:
Hi,
If I have a Haskell wrapper (with unsafe...) over a function that's never going to return different values and is always side-effect free, but can change depending on compile-time options of its library; my program is running, and then the version of my library is updated by my distribution's smart installation system, which does update versions of libraries in use; is it possible that I get wrong behavior from my program?
I do not understand enough about package management to understand how running programs or libraries are updated, and less about how linking works between Haskell and libraries in other languages, so I don't know if my program is guaranteed to stay with a single version of a library for each run.
(Sure this is a weird situation, but I do like to think about worst cases.)
In practice that is fine, with current RTSes and so on. In principle it's not fine. A 'constant' should be constant over all time, not just constant over a particular library version or sub-version or a particular program invocation or OS or.... Who knows, maybe some future haskell runtime will be able to perform the trickery you describe and will cause this to break ;) Jules

On Tue, Oct 14, 2008 at 02:05:02PM +0100, Jules Bean wrote:
Mauricio wrote:
If I have a Haskell wrapper (with unsafe...) over a function that's never going to return different values and is always side-effect free, but can change depending on compile-time options of its library; my program is running, and then the version of my library is updated by my distribution's smart installation system, which does update versions of libraries in use; is it possible that I get wrong behavior from my program?
If you're talking about an FFI function, the best option (in my opinion) would be to not import it in the IO monad, if possible. Which you couldn't do if it does something like allocate a C string... but in many cases this would be the cleanest approach.
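David's suggestion, sketched below with `sin` from the C math library standing in for whatever function the original poster is wrapping. Because the foreign import's result type is `Double` rather than `IO Double`, the programmer (not the compiler) is asserting that the call is pure:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

-- Importing a side-effect-free C function at a pure type.
-- The purity claim is ours to guarantee; GHC does not check it.
foreign import ccall unsafe "math.h sin"
  c_sin :: Double -> Double
```

With this import, `c_sin 0` is an ordinary pure expression usable anywhere a `Double` is expected, with no `unsafePerformIO` needed.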
I do not understand enough about package management to understand how running programs or libraries are updated, and less about how linking works between Haskell and libraries in other languages, so I don't know if my program is guaranteed to stay with a single version of a library for each run.
(Sure this is a weird situation, but I do like to think about worst cases.)
In practice that is fine, with current RTSes and so on.
In principle it's not fine. A 'constant' should be constant over all time, not just constant over a particular library version or sub-version or a particular program invocation or OS or....
No, constants don't have to be constant over all time. E.g. it's perfectly fine for compilers to implement System.Info, whose sole purpose is to provide constants that are different in different library versions and OSs. http://haskell.org/ghc/docs/latest/html/libraries/base/System-Info.html#v:os
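For reference, the constants in question really are plain top-level `String`s in `System.Info`, fixed when the library was built:

```haskell
import System.Info (arch, compilerName, os)

-- os, arch and compilerName are ordinary pure String constants,
-- baked in at build time -- which is exactly the design point
-- under debate in this thread.
sysDescription :: String
sysDescription = os ++ "/" ++ arch ++ ", built with " ++ compilerName
```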
Who knows, maybe some future haskell runtime will be able to perform the trickery you describe and will cause this to break ;)
No, that sort of trickery breaks referential transparency, and itself is unsafe. It is in fact safe to assume that the RTS will never break referential transparency. David

David Roundy wrote:
(Sure this is a weird situation, but I do like to think about worst cases.) In practice that is fine, with current RTSes and so on.
In principle it's not fine. A 'constant' should be constant over all time, not just constant over a particular library version or sub-version or a particular program invocation or OS or....
No, constants don't have to be constant over all time. E.g. it's perfectly fine for compilers to implement System.Info, whose sole purpose is to provide constants that are different in different library versions and OSs.
http://haskell.org/ghc/docs/latest/html/libraries/base/System-Info.html#v:os
I entirely disagree. That API is broken. All those things should be in the IO monad. I might have code which migrates at runtime between different OSes. Of course I can't, and even if I did, it would probably return something different like 'virtual haskell migration pseudo-OS', but that's not the point. Constants are mathematical and universal, like pi. That is what the semantics of haskell say. However, I don't claim this is terribly important. Or even a very interesting debate ;) Jules

On Tue, Oct 14, 2008 at 04:05:23PM +0100, Jules Bean wrote:
David Roundy wrote:
(Sure this is a weird situation, but I do like to think about worst cases.) In practice that is fine, with current RTSes and so on.
In principle it's not fine. A 'constant' should be constant over all time, not just constant over a particular library version or sub-version or a particular program invocation or OS or....
No, constants don't have to be constant over all time. E.g. it's perfectly fine for compilers to implement System.Info, whose sole purpose is to provide constants that are different in different library versions and OSs.
http://haskell.org/ghc/docs/latest/html/libraries/base/System-Info.html#v:os
I entirely disagree.
That API is broken. All those things should be in the IO monad.
I might have code which migrates at runtime between different OSes. Of course I can't, and even if I did, it would probably return something different like 'virtual haskell migration pseudo-OS', but that's not the point.
Constants are mathematical and universal, like pi. That is what the semantics of haskell say.
Where do the semantics of haskell say this? How does it interact with fixing bugs (which means changing mathematical and universal constant functions--since all functions are constants)? David

Where do the semantics of haskell say this? How does it interact with fixing bugs (which means changing mathematical and universal constant functions--since all functions are constants)?
What semantics of haskell?-) But if there was one, it might not talk about separate compilation (it should, though), or packages. And if you consider cross-package inlining, separate package updates, or dynamic linking, etc., you might want to flag all variable constants as {-# NOINLINE c #-}, and perhaps even switch off any CSE-style optimizations. The usual bag of tricks for unsafePerformIO constants. Claus
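The bag of tricks Claus refers to usually looks something like the following sketch. The environment variable name here is made up for illustration; an FFI-backed build-time constant would put a foreign call inside the `unsafePerformIO` instead:

```haskell
import System.Environment (lookupEnv)
import System.IO.Unsafe (unsafePerformIO)
import Text.Read (readMaybe)

-- NOINLINE stops GHC from inlining the definition and thereby
-- duplicating the unsafePerformIO action at every use site; as a
-- CAF it is then evaluated at most once per program run.
{-# NOINLINE maxBigObjects #-}
maxBigObjects :: Int
maxBigObjects = unsafePerformIO $ do
  mv <- lookupEnv "MAX_BIG_OBJECTS"  -- hypothetical configuration source
  pure (maybe 1024 id (mv >>= readMaybe))
```

Compiling the defining module with -fno-cse is the other half of the trick, so that common-subexpression elimination cannot merge two such "constants" that happen to have identical right-hand sides.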

On Tue, Oct 14, 2008 at 04:39:38PM +0100, Claus Reinke wrote:
Where do the semantics of haskell say this? How does it interact with fixing bugs (which means changing mathematical and universal constant functions--since all functions are constants)?
What semantics of haskell?-) But if there was one, it might not talk about separate compilation (it should, though), or packages. And if you consider cross-package inlining, separate package updates, or dynamic linking, etc., you might want to flag all variable constants as {-# NOINLINE c #-}, and perhaps even switch off any CSE-style optimizations. The usual bag of tricks for unsafePerformIO constants.
But in this case you'd also have to use the same flags for any functions you might want to edit later. It's always the job of the compiler and/or build system to ensure that the compile is consistent. But claiming that it's wrong to edit your source code is downright silly, even if it is true that the compiler could then be more aggressive in its optimizations... David

David Roundy wrote:
On Tue, Oct 14, 2008 at 04:05:23PM +0100, Jules Bean wrote:
David Roundy wrote: Constants are mathematical and universal, like pi. That is what the semantics of haskell say.
Where do the semantics of haskell say this?
You'd do better to ask 'which semantics?'. The semantics in which a value of type "Int -> Int" is denoted by a mathematical function from Int to Int. In that semantics a value of type "Int" denotes a specific Int. And that denotation is, of course, entirely independent of compiler or OS or package or dynamic loading or any concern like that. This is, to my mind, the "often assumed but never written down" semantics of haskell. It's certainly the semantics *I* want haskell to have.
How does it interact with fixing bugs (which means changing mathematical and universal constant functions--since all functions are constants)?
That's fine. Changing a program changes its denotation. Running a program on a different interpreter or compiler had better not change its denotation, otherwise it [the denotation] is not much use as a basis for reasoning. Jules

On Tue, Oct 14, 2008 at 05:20:35PM +0100, Jules Bean wrote:
David Roundy wrote:
On Tue, Oct 14, 2008 at 04:05:23PM +0100, Jules Bean wrote:
Constants are mathematical and universal, like pi. That is what the semantics of haskell say.
Where do the semantics of haskell say this?
You'd do better to ask 'which semantics?'.
The semantics in which a value of type "Int -> Int" is denoted by a mathematical function from Int to Int. In that semantics a value of type "Int" denotes a specific Int. And that denotation is, of course, entirely independent of compiler or OS or package or dynamic loading or any concern like that.
This is, to my mind the "often assumed but never written down" semantics of haskell. It's certainly the semantics *I* want haskell to have.
How does it interact with fixing bugs (which means changing mathematical and universal constant functions--since all functions are constants)?
That's fine. Changing a program changes its denotation.
Running a program on a different interpreter or compiler had better not change its denotation, otherwise it [the denotation] is not much use as a basis for reasoning.
But you're saying above that we can't change programs, right? You probably won't be surprised to hear that different compilers are different programs. And different packages are also different programs. Are you the only one who's allowed to fix bugs? I personally feel that it's a valuable feature that we're allowed to implement distinct libraries with a shared API, which is a feature that you claim violates the semantics you want haskell to have. I would say that putting constants that are known at compile time into the IO monad is an abuse of the IO monad, since those constants have nothing to do with IO, and are in fact *constants* which cannot vary at run time. David

On Tue, 2008-10-14 at 12:31 -0400, David Roundy wrote:
On Tue, Oct 14, 2008 at 05:20:35PM +0100, Jules Bean wrote:
David Roundy wrote:
On Tue, Oct 14, 2008 at 04:05:23PM +0100, Jules Bean wrote:
Constants are mathematical and universal, like pi. That is what the semantics of haskell say.
Where do the semantics of haskell say this?
You'd do better to ask 'which semantics?'.
The semantics in which a value of type "Int -> Int" is denoted by a mathematical function from Int to Int. In that semantics a value of type "Int" denotes a specific Int. And that denotation is, of course, entirely independent of compiler or OS or package or dynamic loading or any concern like that.
This is, to my mind the "often assumed but never written down" semantics of haskell. It's certainly the semantics *I* want haskell to have.
How does it interact with fixing bugs (which means changing mathematical and universal constant functions--since all functions are constants)?
That's fine. Changing a program changes its denotation.
Running a program on a different interpreter or compiler had better not change its denotation, otherwise it [the denotation] is not much use as a basis for reasoning.
But you're saying above that we can't change programs, right? You probably won't be surprised to hear that different compilers are different programs.
This `problem' is already solved by the theory of logical relations. jcc

Jonathan Cast wrote:
David Roundy wrote:
Jules Bean wrote:
David Roundy wrote:
How does it interact with fixing bugs (which means changing mathematical and universal constant functions--since all functions are constants)?
That's fine. Changing a program changes its denotation.
Running a program on a different interpreter or compiler had better not change its denotation, otherwise it [the denotation] is not much use as a basis for reasoning.
But you're saying above that we can't change programs, right? You probably won't be surprised to hear that different compilers are different programs.
This `problem' is already solved by the theory of logical relations.
Could you say more about this? -- _jsn

David Roundy wrote:
On Tue, Oct 14, 2008 at 05:20:35PM +0100, Jules Bean wrote:
Running a program on a different interpreter or compiler had better not change its denotation, otherwise it [the denotation] is not much use as a basis for reasoning.
But you're saying above that we can't change programs, right? You probably won't be surprised to hear that different compilers are different programs. And different packages are also different programs. Are you the only one who's allowed to fix bugs?
No. I think we must be at cross purposes. I'm saying that we can change programs, and that changes their denotation, and that's fine, and anyone can do that. But the denotation of a program is supposed to be something independent of a particular compiler or OS or MAC address or RAM size or any of the millions of other things which probably don't change during the single run of a program. Putting these things into the IO monad is not an abuse of the IO monad. It is simply an acknowledgement that they are runtime things, and not denotational constructs. Jules

On Tue, Oct 14, 2008 at 2:14 PM, Jules Bean wrote:
David Roundy wrote:
On Tue, Oct 14, 2008 at 05:20:35PM +0100, Jules Bean wrote:
Running a program on a different interpreter or compiler had better not change its denotation, otherwise it [the denotation] is not much use as a basis for reasoning.
But you're saying above that we can't change programs, right? You probably won't be surprised to hear that different compilers are different programs. And different packages are also different programs. Are you the only one who's allowed to fix bugs?
No. I think we must be at cross purposes.
I'm saying that we can change programs, and that changes their denotation, and that's fine, and anyone can do that. But the denotation of a program is supposed to be something independent of a particular compiler or OS or MAC address or RAM size or any of the millions of other things which probably don't change during the single run of a program.
Putting these things into the IO monad is not an abuse of the IO monad. It is simply an acknowledgement that they are runtime things, and not denotational constructs.
Jules
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
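The distinction Jules is drawing shows up directly in the types. The second signature below is the one he argues System.Info should have had; both bodies here just delegate to the existing constant, purely for illustration:

```haskell
import qualified System.Info

-- A universal constant: the same value in every run of this binary,
-- on every machine, forever. This is the current System.Info design.
osNamePure :: String
osNamePure = System.Info.os

-- A fact about the environment the program finds itself in,
-- observed at run time. A real implementation might call uname(2).
osNameIO :: IO String
osNameIO = pure System.Info.os
```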
But they aren't runtime constructs, they are compile-time constructs, just like how (+) has a different meaning when you are on Windows than it does when you are on Linux: they have different ways of allocating memory, adding two numbers, putting the result somewhere, etc. Really, they still denote the same thing ("add two numbers"), they just abstract over system-dependent details. Alex

Jules Bean wrote:
I'm saying that we can change programs, and that changes their denotation, and that's fine, and anyone can do that. But the denotation of a program is supposed to be something independent of a particular compiler or OS or MAC address or RAM size or any of the millions of other things which probably don't change during the single run of a program.
And native Int size. Should all Ints be in the IO monad? Should all floating point numerals be in the IO monad? Does your proposal push all non-portability into IO, making the non-IO subset of Haskell a portable virtual machine specification? Regards, John

On Wednesday 15 October 2008 05:21:04 John Dorsey wrote:
Should all floating point numerals be in the IO Monad?
I'm deviating from the thread's topic, but I tend to agree with this one. Maybe not IO directly, but some kind of STM-style monad, at least (that is, FP operations are composable but ultimately they must be evaluated in IO). Floating point operations, at least by IEEE754, depend on environmental settings like the current rounding mode. They may modify state, like the sticky bits that indicate an exception occurred. They may jump nonlocally if a trap handler has been enabled. None of these help in making an expression like (a + b) + c == a + (b + c) :: Bool any more referentially transparent than getChar : getChar : [] :: [Char] would be if it was legal. Anyway, enough rant for tonight. Sorry for the hijack. We now resume our regular transmissions. -- Ariel J. Birnbaum

On Thu, 2008-10-16 at 01:24 +0200, Ariel J. Birnbaum wrote:
On Wednesday 15 October 2008 05:21:04 John Dorsey wrote:
Should all floating point numerals be in the IO Monad?
I'm deviating from the thread's topic, but I tend to agree with this one. Maybe not IO directly, but some kind of STM-style monad, at least (that is, FP operations are composable but ultimately they must be evaluated in IO).
Floating point operations, at least by IEEE754, depend on environmental settings like the current rounding mode. They may modify state, like the sticky bits that indicate an exception occurred. They may jump nonlocally if a trap handler has been enabled.
It is an interesting question: can IEEE floating point be done purely while preserving the essential features? I've not looked very far so I don't know how far people have looked into this before.

Haskell currently only supports a subset of the IEEE FP api. One can assume that that's mainly because the native api for the extended features is imperative. But need it be so? Rounding modes sound to me like an implicit parameter. Traps and exception bits sound like the implementation mechanism of a couple of standard exception handling strategies. The interesting thing here is that the exception handling strategy is actually an input parameter.

So part of the issue is a functional model of the FP api, but the other part is what compiler support would be needed to make a functional api efficient. For example, if the rounding mode is an implicit parameter to each operation like + - * etc., then we need some mechanism to make sure that we don't have to actually set the FP rounding mode before each FP instruction, but only at points where we know it can change, like passing a new value for the implicit parameter, or calling into a thunk that uses FP instructions.

There's also the issue that if the Float/Double operations take an implicit parameter, can they actually be instances of Num? Is that allowed? I don't know.

Duncan
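Duncan's implicit-parameter idea can be sketched with GHC's ImplicitParams extension. Everything below is made up for illustration: a real `addR` would have to arrange for the hardware rounding mode to match `?rounding` before the addition, which is exactly the compiler-support problem he describes; this version only models the plumbing of the parameter.

```haskell
{-# LANGUAGE ImplicitParams #-}

data RoundingMode = ToNearest | TowardZero | TowardPosInf | TowardNegInf

-- Hypothetical rounded addition: the rounding mode travels as an
-- implicit parameter instead of as mutable global FPU state.
addR :: (?rounding :: RoundingMode) => Double -> Double -> Double
addR x y = x + y  -- a real implementation would honour ?rounding

example :: Double
example = let ?rounding = TowardZero in addR 0.1 0.2
```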

On 2008 October 16 Thursday, Duncan Coutts wrote:
On Thu, 2008-10-16 at 01:24 +0200, Ariel J. Birnbaum wrote:
Floating point operations, at least by IEEE754, depend on environmental settings like the current rounding mode. They may modify state, like the sticky bits that indicate an exception occurred.
It is an interesting question: can IEEE floating point be done purely while preserving the essential features.
The trouble is that the best numerical algorithms have been written using the imperative-style IEEE operations for more than 20 years. If Haskell had a floating point monad, then those algorithms could be coded in Haskell. But that doesn't seem like an interesting and fruitful approach. Haskell can access those algorithms using FFI. The test of making IEEE floating point accessible in pure Haskell code is whether it stirs any interest in the numerical analysis community.

It is an interesting question: can IEEE floating point be done purely while preserving the essential features. I've not looked very far so I don't know how far people have looked into this before.
Not sure. My doubts are mainly on interference between threads. If a thread can keep its FP state changes 'local' then maybe it could be done. I still think FP operations should be combined in a monad though --- after all, they depend on the evaluation order.
Haskell currently only supports a subset of the IEEE FP api. One can assume that that's mainly because the native api for the extended features is imperative. But need it be so?
Rounding modes sound to me like an implicit parameter. Traps and exception bits sound like the implementation mechanism of a couple standard exception handling strategies. The interesting thing here is that the exception handling strategy is actually an input parameter.
Reader? =)
So part of the issue is a functional model of the FP api but the other part is what compiler support would be needed to make a functional api efficient. For example if the rounding mode is an implicit parameter to each operation like + - * etc then we need some mechanism to make sure that we don't have to actually set the FP rounding mode before each FP instruction, but only at points where we know it can change, like passing a new value for the implicit parameter, or calling into a thunk that uses FP instructions.
This one seems like a tough one to figure. Again, I'd vouch for a solution like STM --- composition of FP operations is allowed at a certain level (maybe enforcing some settings to remain constant here), while it takes a stronger level to connect them to their surroundings.
There's also the issue that if the Float/Double operations take an implicit parameter, can they actually be instances of Num? Is that allowed? I don't know.
Technically I guess they could, just like (Num a) => (b->a) can be made an instance. It would look more like State though, IMO. Or Cont. Doesn't even look like Oleg-fu is needed.
Should they? That's a horse of a different colour. There are certain properties most programmers come to expect of such instances (regardless of whether the Report demands them or not), such as associativity of (+) and (==) being an equivalence, that break miserably for floating point. Floating point is a hairy area for programming in general, but I think it's also one where Haskell can shine with an elegant, typesafe model. -- Ariel J. Birnbaum
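The function instance Ariel alludes to can be written directly; it lifts arithmetic pointwise, exactly the Reader-flavoured reading he suggests:

```haskell
-- Pointwise arithmetic on functions: (f + g) x = f x + g x.
instance Num a => Num (r -> a) where
  f + g         = \x -> f x + g x
  f - g         = \x -> f x - g x
  f * g         = \x -> f x * g x
  negate f      = negate . f
  abs f         = abs . f
  signum f      = signum . f
  fromInteger n = const (fromInteger n)
```

With this instance in scope, `(sin + cos) 0` evaluates to `1.0`. Whether such an instance *should* exist is, as Ariel says, a horse of a different colour.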

I'd like to direct folks' attention to the ieee-utils package on Hackage [1], which Matt Morrow started and I have made a few additions to. There are bindings to set and check the rounding mode, as well as check and clear the exception register. On top of that I've built a very experimental monadic wrapper (so experimental that I just noticed a typo in the documentation). The monad is essentially a newtype over IO, which enforces a single IEEE state using an MVar propagated through the program as an implicit parameter (as opposed to created with top-level unsafePerformIO). Strictness could probably be enforced in a more thoroughgoing fashion, but for now it is explicitly introduced with "calculate", which is a lifted "evaluate".

The perturb function is pretty neat -- it uses polymorphism to prevent memoization, such that the same pure calculation can be performed over different rounding modes, to test for numeric stability. I couldn't think of a sane way to deal with fancier traps to the IEEE registers, but obviously a slower but sane implementation of exception traps could be built on top of the existing machinery. With a bit of duct tape, perturb could no doubt be combined with QuickCheck to prove some relatively interesting properties.

Matt and I did this mainly out of curiosity and to fill a gap, as neither of us has a real need for this sort of control over IEEE state at the moment. As such, I don't have a good idea of what is good or bad in the API or could be more convenient. However, I'd urge folks with an itch to scratch to give this all a try and maybe provide some feedback, use-cases, implementations of algorithms that need this sort of thing, and of course patches, etc.

[1] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/ieee-utils-0.4.0

Cheers,
Sterl.

Hi,
If I have a Haskell wrapper (with unsafe...) over a function that's never going to return different values and is always side-effect free, but can change depending on compile-time options of its library; my program is running, and then the version of my library is updated by my distribution's smart installation system, which does update versions of libraries in use; is it possible that I get wrong behavior from my program?
I do not understand enough about package management to understand how running programs or libraries are updated, and less about how linking works between Haskell and libraries in other languages, so I don't know if my program is guaranteed to stay with a single version of a library for each run.
(Sure this is a weird situation, but I do like to think about worst cases.)
In practice that is fine, with current RTSes and so on.
In principle it's not fine. A 'constant' should be constant over all time, not just constant over a particular library version or sub-version or a particular program invocation or OS or....
Who knows, maybe some future haskell runtime will be able to perform the trickery you describe and will cause this to break ;)
What I actually want to use that way are build-time configs. For instance, 'isThisLibraryThreadSafe' or 'maximumNumberOfBigObjects'. Actually, I don't know why people allow build-time options at all. We always use the "best set of options", and the alternatives are there just to compel us to check for them :) Maurício

Mauricio wrote:
What I actually want to use that way are build-time configs. For instance, 'isThisLibraryThreadSafe' or 'maximumNumberOfBigObjects'. Actually, I don't know why people allow build-time options at all. We always use the "best set of options", and the alternatives are there just to compel us to check for them :)
Maurício
Why not just have a Haskell module (that defines a number of CAFs) be your build-time config? Unless there's an important reason to be using the FFI, this seems like a much simpler approach. Xmonad and Yi use a similar technique even for load-time constant configurations. -- Live well, ~wren
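wren's suggestion, spelled out with the option names from Maurício's message (the values here are placeholders, not anything from a real library):

```haskell
-- Config.hs: ordinary constants, edited or generated at build time.
-- A build system could write this file from its configure flags.

isThisLibraryThreadSafe :: Bool
isThisLibraryThreadSafe = True    -- placeholder value

maximumNumberOfBigObjects :: Int
maximumNumberOfBigObjects = 1024  -- placeholder value
```

Since these are plain CAFs baked into the binary at compile time, the original worry about a library upgrade changing them mid-run cannot arise.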
participants (13)
- Alexander Dunlap
- Ariel J. Birnbaum
- Claus Reinke
- David Roundy
- Duncan Coutts
- Jason Dusek
- John Dorsey
- Jonathan Cast
- Jules Bean
- Mauricio
- Scott Turner
- Sterling Clover
- wren ng thornton