Lazy in either argument?

I am trying to get my feet wet with Haskell threads with the following problem, inspired by a recent post (http://www.haskell.org/pipermail/haskell-cafe/2007-July/029408.html) saying that:
Since there's no way to have a function be lazy in both arguments, the implicit convention is to make functions strict in the first arguments and, if applicable, lazy in the last arguments. In other words, the convention is
True || _|_ = True but not _|_ || True = True
1 + _|_ = Succ _|_ but not _|_ + 1 = Succ _|_
Regards, apfelmus
Maybe not lazy in both arguments, but how about lazy in either argument? The idea is to fork a thread for each of the two functions, (||) and flip (||), pick the winner, then kill off both threads. I can wrap this up in a pure function using unsafePerformIO (given the proof obligation that the results of both threads will always be equal where both are defined).

The code below seems to work, except for the following problems:

1) Commenting out the type annotation f :: Bool makes the program hang
2) If I replace f = f by f = undefined, I get an annoying print of "LazyOr: Prelude.undefined" before it returns the correct value.

Does anyone know why the type annotation is needed in #1, and/or how to suppress the error message in #2?

Dan Weston

-----------------------------------------------------------

import Control.Monad(when)
import Control.Concurrent(forkIO,killThread)
import Control.Concurrent.Chan(newChan,readChan,writeChan,isEmptyChan)
import Foreign(unsafePerformIO)

f :: Bool
f = f

main = putStrLn . show $ lazyBinaryOp (||) f True

lazyBinaryOp p x y = unsafePerformIO $ do
  c  <- newChan
  p2 <- forkIO (lazyBinaryOpThread c p x y)
  p1 <- forkIO (lazyBinaryOpThread c p y x)
  z  <- readChan c
  killThread p1
  killThread p2
  return z
  where lazyBinaryOpThread c p x y =
          case p x y of
            True  -> writeChan c True
            False -> writeChan c False
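[Editorial note: a minimal sketch, not part of the original thread, of the asymmetry apfelmus describes. (||) pattern-matches its first argument, so True || _|_ returns immediately, while forcing _|_ || True throws when the first argument is undefined:]

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  -- (||) is lazy in its second argument: the undefined is never forced.
  print (True || undefined)
  -- ...but strict in its first: forcing (undefined || True) throws.
  r <- try (evaluate (undefined || True)) :: IO (Either SomeException Bool)
  case r of
    Left _  -> putStrLn "diverged/threw, as the convention predicts"
    Right b -> print b
```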

Dan Weston wrote:
1) Commenting out the type annotation f :: Bool makes the program hang
Turning on code optimizations (e.g., ghc -O) helps. I don't know why.
2) If I replace f = f by f = undefined, I get an annoying print of "LazyOr: Prelude.undefined" before it returns the correct value.
The error message is a manifestation of an unhandled exception. Look for exception handling tools in Control.Exception and use one to your liking. You should do this to be very general anyway, since _|_ can be infinite loops or exceptions.

Beware that parallelizing the two arguments (making them compete) is still different from laziness in either argument. Laziness does not only mean waiting less; it also means leaving thunks untouched. Competition certainly leads to waiting less, but it also forces both thunks. A user may or may not want this.

That said, parallel disjunction is interesting in its own right, mainly because it restores the symmetry found in logical disjunction that has been lost in short-circuiting. There was a paper using it for programming language semantics, but I have long forgotten it.
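[Editorial note: a sketch, not from the thread, of Dan's lazyBinaryOp specialized to (||) with the exception handling suggested above. Each worker catches any exception from forcing its disjunction, so an undefined argument no longer escapes as a top-level error message; the names parOr and worker are hypothetical:]

```haskell
import Control.Concurrent (forkIO, killThread)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Exception (SomeException, evaluate, try)
import System.IO.Unsafe (unsafePerformIO)

-- Symmetric "parallel or": returns True if either argument is True,
-- even when the other argument diverges or throws.
parOr :: Bool -> Bool -> Bool
parOr x y = unsafePerformIO $ do
  c  <- newChan
  t1 <- forkIO (worker c (x || y))
  t2 <- forkIO (worker c (y || x))
  z  <- readChan c            -- take whichever result arrives first
  killThread t1
  killThread t2
  return z
  where
    -- Force the disjunction; swallow exceptions instead of letting them
    -- print "Prelude.undefined" at the top level.
    worker c e = do
      r <- try (evaluate e) :: IO (Either SomeException Bool)
      case r of
        Right b -> writeChan c b
        Left _  -> return ()  -- an undefined side simply stays silent

main :: IO ()
main = print (parOr undefined True)
```

This uses the modern Control.Exception API (SomeException); the 2007-era API spelled the types slightly differently, but the structure is the same.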

The non-termination is (probably) due to the fact that you can have
uninterruptible threads in ghc.
If a thread never allocates it will never be preempted. :(
-- Lennart
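[Editorial note: a minimal sketch, not from the thread, of the workaround Lennart's remark suggests. GHC's scheduler preempts a thread only at allocation points, so a tight non-allocating loop (like the un-annotated f = f compiled without -O) can starve every other thread; making the loop allocate, or inserting an explicit yield, keeps it preemptible. Exact behaviour depends on GHC version and RTS flags:]

```haskell
import Control.Concurrent (forkIO, threadDelay, yield)
import Control.Monad (forever)

main :: IO ()
main = do
  -- This busy loop cooperates with the scheduler via an explicit yield,
  -- so the main thread still gets scheduled and the program terminates.
  _ <- forkIO (forever yield)
  threadDelay 100000
  putStrLn "done"
```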
On 7/24/07, Dan Weston
I am trying to get my feet wet with Haskell threads with the following problem, inspired by a recent post (http://www.haskell.org/pipermail/haskell-cafe/2007-July/029408.html) saying that:
[...]

On 7/26/07, Lennart Augustsson
The non-termination is (probably) due to the fact that you can have uninterruptible threads in ghc. If a thread never allocates it will never be preempted. :(
To elaborate on that, the different behavior between the two versions of Dan's code, one with and one without a type signature, happens because f compiles like so if the type signature isn't given (this is the STG code):

  f_ri5 = \u []
      let-no-escape {
        f1_sPY = NO_CCS[] \u [] f1_sPY;
      } in f1_sPY;
  SRT(f_ri5): []

and like so if the type signature is given:

  f_ri5 = \u srt:(0,*bitmap*) [] f_ri5;
  SRT(f_ri5): [f_ri5]

If you look at the resulting asm code, the second version of f_ri5 compiles to a loop that allocates on each iteration, whereas the body of the let in the first version of f_ri5 compiles to just:

  sPY_info:
      jmp sPY_info

(Adding f to the export list so that its SRT is empty doesn't change anything, btw.) This is all with -Onot.

So I find this a little confusing. Why is

  f = let f_1 = f_1 in f_1

compiled so differently from f = f? It seems like f = f should also compile to a loop that doesn't allocate anything. And from the user's perspective, it seems somewhat strange that adding or removing a type signature changes the semantics of their code (although I guess you could make the argument that if you're dealing with threads and nonterminating code, all bets are off). Can someone better acquainted with the code generator than me explain the rationale behind this?

Cheers,
Tim

--
Tim Chevalier * catamorphism.org * Often in error, never in doubt
"Religion is just a fancy word for the Stockholm Syndrome." -- lj user="pure_agnostic"

On 7/26/07, Tim Chevalier
To elaborate on that, the different behavior between the two versions of Dan's code, one with and one without a type signature, happens because f compiles like so if the type signature isn't given (this is the STG code):
  f_ri5 = \u []
      let-no-escape {
        f1_sPY = NO_CCS[] \u [] f1_sPY;
      } in f1_sPY;
  SRT(f_ri5): []
Also (talking to myself), in the lambda-form that is the rhs of f1_sPY above, shouldn't f1_sPY be contained in the free-variable list for itself?

Cheers,
Tim

--
Tim Chevalier * catamorphism.org * Often in error, never in doubt
"Programming is like sex; one mistake and you have to support for a lifetime." -- anonymous
participants (4)
-
Albert Y. C. Lai
-
Dan Weston
-
Lennart Augustsson
-
Tim Chevalier