Hiding side effects in a data structure

While thinking about how to generate unique integer IDs on demand without using a state variable, I came up with an interesting design pattern. It's a way of doing side-effecting computation outside IO. Referential transparency is preserved by making the side effects spatial rather than temporal: by hiding side effects behind lazy thunks in a data structure, they can be disguised as the output of a single, apparently nondeterministic IO function used to create the data structure. This lets pure code use nondeterministic computation without the monadic plumbing required to maintain state.

The getContents function works this way, but I came up with a more interesting example. The code below is a source of unique integer IDs that is modeled after the RandomGen class. It uses unsafeInterleaveIO under the hood, preserving referential transparency but not determinism.

It seems to work. However, I'm not entirely sure how safe my use of unsafeInterleaveIO is. In particular, could the two branches of the tree get CSE'd? I'm also curious what people think about the general design pattern.

module Unique where

import Control.Monad(liftM)
import Data.IORef
import System.IO.Unsafe

-- The goal is to produce an infinite tree of integers where each node in the
-- tree has a unique value.
type Unique = Int
data Supply = Supply Unique Supply Supply

-- The tree can be used in a stateful manner as a source of unique integers.
getUnique :: Supply -> (Unique, Supply)
getUnique (Supply u s1 _) = (u, s1)

-- The tree can also be split into independent sources of unique integers.
split :: Supply -> (Supply, Supply)
split (Supply _ s1 s2) = (s1, s2)

-- The catch is, the tree will probably be visited very sparsely, with most of
-- it being skipped.  Assigning every node its own integer is very bad, because
-- that will waste most of the 2^32 available integers very quickly.  In fact,
-- it can get used up in just 32 calls to getUnique.
--
-- Instead, we'll create a tree where integers magically appear only in places
-- where they are actually used.

-- First, we need an IO-bound supply of integers.
newtype IOSupply = IOSupply (IORef Unique)

newIOSupply :: IO IOSupply
newIOSupply = liftM IOSupply $ newIORef 0

getUniqueIO :: IOSupply -> IO Unique
getUniqueIO (IOSupply s) = do
    u <- readIORef s
    writeIORef s $ u+1
    return u

-- Now we'll use the IO-bound supply to create a tree having the desired
-- properties.
{-# NOINLINE getPureSupply #-}
getPureSupply :: IOSupply -> IO Supply
getPureSupply s = do
    s1 <- unsafeInterleaveIO $ getPureSupply s
    s2 <- unsafeInterleaveIO $ getPureSupply s
    n  <- unsafeInterleaveIO $ getUniqueIO s
    return $ Supply n s1 s2
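Here is a minimal usage sketch of the pattern, assuming the Unique module above is in scope; the Tree type, labelTree and main are made-up names for illustration. Pure code threads the Supply by splitting it at every node, and the side effects stay hidden in the lazy thunks built by getPureSupply.

-- A hedged sketch, not part of the module above: Tree, labelTree and main
-- are hypothetical names chosen for this example.
import Unique

data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Attach a unique integer to every node, threading the Supply purely.
labelTree :: Supply -> Tree a -> Tree (a, Unique)
labelTree _ Leaf         = Leaf
labelTree s (Node l x r) =
    let (u, s')  = getUnique s
        (sl, sr) = split s'
    in Node (labelTree sl l) (x, u) (labelTree sr r)

main :: IO ()
main = do
    ioSupply <- newIOSupply
    supply   <- getPureSupply ioSupply
    let t = Node (Node Leaf 'a' Leaf) 'b' (Node Leaf 'c' Leaf)
    print (labels (labelTree supply t))
  where
    labels Leaf         = []
    labels (Node l x r) = labels l ++ [x] ++ labels r

Each run may print different numbers depending on evaluation order, but within a run every label is distinct, which is all the pattern promises.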

Good idea.  GHC uses it:
http://darcs.haskell.org/ghc/compiler/basicTypes/UniqSupply.lhs

Lennart Augustsson and friends invented it:

@techreport{Augustsson92a,
  author   = {L Augustsson and M Rittri and D Synek},
  title    = {Splitting infinite sets of unique names by hidden state changes},
  type     = {Report 67, Programming Methodology Group, Chalmers University},
  month    = may,
  year     = {1992},
  keywords = {name supply, monad plumbing, gensym, unique names}
}

Simon

Simon Peyton-Jones wrote:
Good idea. GHC uses it http://darcs.haskell.org/ghc/compiler/basicTypes/UniqSupply.lhs
Lennart Augustsson and friends invented it @techreport{Augustsson92a,
... You know what would be really nice? A summary of "here are all the really cool tricks we use in the bowels of GHC and its core libraries". Like a GHC code-review for the interested haskell programmer. Maybe an introductory task for an intern who's working on ghc internals? ;) Jules

I realise belatedly that my message might have sounded dismissive. My apologies; it wasn't intended to be. Good ideas are just that: good. Reinventing them is a sign of good taste.

As to documenting GHC, we try to do that by writing papers. That's easy to motivate because we get research brownie points for papers. We also put quite a bit of effort into the Commentary, but it's hard to keep up to date. The Commentary is a Wiki though, so anyone who discovers some coolness can add a description to the Wiki. Please do!

Simon

Simon Peyton-Jones
I realise belatedly that my message might have sounded dismissive. My apologies; it wasn't intended to be. Good ideas are just that: good. Reinventing them is a sign of good taste.
As to documenting GHC, we try to do that by writing papers. That's easy to motivate because we get research brownie points for papers.
One of the irritating effects of this process is not that the reports are research papers, but that they are on-line sporadically and only very rarely html. The overhead of having to download a big file (or search through one's own copies) and fire up some other viewer (for .ps or .pdf) -- or worse, find a printed copy, or fork out £55.97 to read something online -- is a significant obstacle when all one wants to do is check the syntax of something or look up a short bit of code.

A hyperlink of the form

  <a href="http://.../long-research-paper.html#interesting-paragraph">interesting bit</a>

is far more useful than one of the form

  <a href="http://.../long-research-paper.pdf">look for section 49.7.3</a>.

It may not seem significant, but when one is attempting to learn some new part of Haskell it's really off-putting.

-- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk

Jon Fairbairn wrote:
A hyperlink of the form <a href="http://.../long-research-paper.html#interesting-paragraph"> interesting bit</a> is far more useful than one of the form <a href="http://.../long-research-paper.pdf">look for section 49.7.3</a>. It may not seem significant, but when one is attempting to learn some new part of Haskell it's really off-putting.
Pdfs are not that bad. You can hyperlink into them too. It would look like:

  <a href="http://.../long-research-paper.pdf#page=45"> ...

to open the pdf and position you on page 45 of it, or like:

  <a href="http://.../long-research-paper.pdf#anchorName"> ...

to open the pdf and position you on anchor anchorName. You can do it from the command line too:

  acrord32 /A page=45 long-research-paper.pdf
  acrord32 /A anchorName long-research-paper.pdf

This of course requires the source to give you a more precise link. But here there is no difference from html, only ... possibly ... more people know about html linking than pdf linking. The above definitely works OK on windows, not sure about linux pdf viewers. Unfortunately I cannot find now how you can look at all anchors defined in a pdf (so that you can use something better than page=<num>).

Peter.

Peter Hercek
Jon Fairbairn wrote:
A hyperlink of the form <a href="http://.../long-research-paper.html#interesting-paragraph"> interesting bit</a> is far more useful than one of the form <a href="http://.../long-research-paper.pdf">look for section 49.7.3</a>. It may not seem significant, but when one is attempting to learn some new part of Haskell it's really off-putting.
Pdfs are not that bad.
No, they (or at least links to them) typically are that bad! Mind you, as far as fragment identification is concerned, so are a lot of html pages. But even if the links do have fragment ids, pdfs still impose a significant overhead: I don't want stuff swapped out just so that I can run a pdf viewer; a web browser uses up enough resources as it is. And will Hoogle link into pdfs?
The above definitely works OK on windows, not sure about linux pdf viewers.
Works perfectly on my Fedora 7 systems. While this would be a definite improvement over having to search through the pdf, the delay and the fact that pdfs aren't as good as html for on-line viewing are still enough of an overhead that it's discouraging. If I'm using PHP (an execrable language), I can type the name (or something like the name) of any function into the search box on the PHP manual webpage and get useful (albeit often extremely irritating from a Haskell programmer's point of view) results straight back. Even including my language designer's distaste for PHP, this can make writing a wee bit of PHP a less onerous event than writing the same thing in Haskell -- definitely not what we want! -- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk

Yes, htmls are better than pdfs (more lightweight, easier to work with if exact page layout is not important). I just wanted to point out that it is possible to link into some particular place of a pdf document, so linking availability should not be the argument by itself. I would prefer html too, but if pdf is required otherwise, it would be nice if link suppliers would provide more precise links. To spread the information that they can do so is the main reason I responded.

Peter.

On Oct 21, 2007, at 6:29 , Jon Fairbairn wrote:
No, they (or at least links to them) typically are that bad! Mind you, as far as fragment identification is concerned, so are a lot of html pages. But even if the links do have fragment ids, pdfs still impose a significant overhead: I don't want stuff swapped out just so that I can run a pdf viewer; a web browser uses up enough resources as it is. And will Hoogle link into pdfs?
I prefer HTML for online viewing and PDF for offline.

BTW, you might consider a trick: look up the PDF on google, use the HTML view. This is generally poor for documents with significant graphics, but works reasonably well for most Haskell papers (modulo math I usually can't figure out anyway, lacking the background many Haskellers have in set theory / rings / groups/semigroups etc.).

-- brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allbery@kf8nh.com
system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu
electrical and computer engineering, carnegie mellon university KF8NH

"Brandon S. Allbery KF8NH"
On Oct 21, 2007, at 6:29 , Jon Fairbairn wrote:
No, they (or at least links to them) typically are that bad! Mind you, as far as fragment identification is concerned, so are a lot of html pages. But even if the links do have fragment ids, pdfs still impose a significant overhead: I don't want stuff swapped out just so that I can run a pdf viewer; a web browser uses up enough resources as it is. And will Hoogle link into pdfs?
I prefer HTML for online viewing and PDF for offline.
BTW, you might consider a trick: look up the PDF on google, use the HTML view.
That sort of misses my point. Given the length of time I've been involved with it, I hardly need encouragement to use Haskell, but if even I find getting to the documentation off-putting, having to know a trick to do it isn't exactly going to draw the reluctant programmer away from its bad language habits. -- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk http://www.chaos.org.uk/~jf/Stuff-I-dont-want.html (updated 2007-05-07)

On Tue, Oct 23, 2007 at 10:01:37AM +0100, Jon Fairbairn wrote:
That sort of misses my point. Given the length of time I've been involved with it, I hardly need encouragement to use Haskell, but if even I find getting to the documentation off-putting, having to know a trick to do it isn't exactly going to draw the reluctant programmer away from its bad language habits.
Heh, the plethora of pdf papers on Haskell is part of what originally brought me to respect it. Something about that metafont painted cmr just makes me giddy as a grad student. A beautifully rendered type inference table is a masterful work of art. Did I mention I was a little odd? I am sure I did. John -- John Meacham - ⑆repetae.net⑆john⑈

On 10/26/07, John Meacham
Heh, the plethora of pdf papers on Haskell is part of what originally brought me to respect it. Something about that metafont painted cmr just makes me giddy as a grad student. A beautifully rendered type inference table is a masterful work of art.
Did I mention I was a little odd? I am sure I did.
http://xkcd.com/242/ Actually, yeah I kindof agree. It's quite interesting to see a language developed using rigorous, formal mathematical principles.

"Hugh Perkins"
On 10/26/07, John Meacham
wrote: Heh, the plethora of pdf papers on Haskell is part of what originally brought me to respect it. Something about that metafont painted cmr just makes me giddy as a grad student. A beautifully rendered type inference table is a masterful work of art.
Did I mention I was a little odd? I am sure I did.
Actually, yeah I kindof agree. It's quite interesting to see a language developed using rigorous, formal mathematical principles.
Lest anyone think otherwise, I most thoroughly approve of Haskell relying on rigorous foundations. I just want the foundations to be easy to inspect! -- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk

On Thu, 2007-11-01 at 21:42 +0000, Jon Fairbairn wrote:
"Hugh Perkins"
writes: On 10/26/07, John Meacham
wrote: Heh, the plethora of pdf papers on Haskell is part of what originally brought me to respect it. Something about that metafont painted cmr just makes me giddy as a grad student. A beautifully rendered type inference table is a masterful work of art.
Did I mention I was a little odd? I am sure I did.
Actually, yeah I kindof agree. It's quite interesting to see a language developed using rigorous, formal mathematical principles.
Lest anyone think otherwise, I most thoroughly approve of Haskell relying on rigorous foundations. I just want the foundations to be easy to inspect!
It doesn't strike me that math in HTML is easier to inspect than PDF. jcc

On 21/10/2007, Jon Fairbairn
No, they (or at least links to them) typically are that bad! Mind you, as far as fragment identification is concerned, so are a lot of html pages. But even if the links do have fragment ids, pdfs still impose a significant overhead: I don't want stuff swapped out just so that I can run a pdf viewer; a web browser uses up enough resources as it is. And will Hoogle link into pdfs?
Swapped out!? What PDF viewer are you running on what machine? Currently, with a 552 page book open (Hatcher's algebraic topology), my PDF viewer (Evince) uses about 36MiB, which is around 3.6% of my available memory, a rather pedestrian 1 GiB. Other documents produce very similar results. The largest I was able to make it with a PDF which wasn't pathologically constructed was about 42MiB, with a PDF that had lots of diagrams. Firefox uses about twice that on an average day. If your PDF viewer uses significantly more than that, I recommend looking for a new one. ;) - Cale

"Cale Gibbard"
On 21/10/2007, Jon Fairbairn
wrote: No, they (or at least links to them) typically are that bad! Mind you, as far as fragment identification is concerned, so are a lot of html pages. But even if the links do have fragment ids, pdfs still impose a significant overhead: I don't want stuff swapped out just so that I can run a pdf viewer; a web browser uses up enough resources as it is. And will Hoogle link into pdfs?
Swapped out!? What PDF viewer are you running on what machine? Currently, with a 552 page book open (Hatcher's algebraic topology), my PDF viewer (Evince) uses about 36MiB,
If loading another 36MiB doesn't cause swapping, you're obviously not running enough haskell programmes. -- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk

I've done something similar, I think. Often, I want to output some kind of progress indicator, just to show that the program is working. Typically, the program works by lazily evaluating a list (lines from an input file, say); each element of the list is wrapped with an IO action that outputs the status when evaluated -- which typically happens lazily from pure code.
countIO :: String -> String -> Int -> [a] -> IO [a]
countIO msg post step xs = sequence $ map unsafeInterleaveIO ((blank >> outmsg (0::Int) >> c):cs)
    where (c:cs) = ct 0 xs
          output = hPutStr stderr
          blank  = output ('\r':take 70 (repeat ' '))
          outmsg x = output ('\r':msg++show x) >> hFlush stderr
          ct s ys = let (a,b) = splitAt (step-1) ys
                        next  = s+step
                    in case b of
                         [b1] -> map return a ++ [outmsg (s+step) >> hPutStr stderr post >> return b1]
                         []   -> map return (init a) ++ [outmsg (s+length a) >> hPutStr stderr post >> return (last a)]
                         _    -> map return a ++ [outmsg s >> return (head b)] ++ ct next (tail b)
-k -- If I haven't seen further, it is by standing in the footprints of giants
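To make the behaviour concrete, here is a hedged usage sketch (the name main and the message strings are made up; it assumes countIO is in scope together with imports of System.IO and System.IO.Unsafe):

main :: IO ()
main = do
    xs <- countIO "Processed: " " done\n" 100 [1 .. 100000 :: Int]
    -- The progress count appears on stderr as 'sum' forces the list
    -- elements, not when countIO itself returns.
    print (sum xs)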

Some time ago, I posted this code:
countIO :: String -> String -> Int -> [a] -> IO [a]
countIO msg post step xs = sequence $ map unsafeInterleaveIO ((blank >> outmsg (0::Int) >> c):cs)
    where (c:cs) = ct 0 xs
          output = hPutStr stderr
          blank  = output ('\r':take 70 (repeat ' '))
          outmsg x = output ('\r':msg++show x) >> hFlush stderr
          ct s ys = let (a,b) = splitAt (step-1) ys
                        next  = s+step
                    in case b of
                         [b1] -> map return a ++ [outmsg (s+step) >> hPutStr stderr post >> return b1]
                         []   -> map return (init a) ++ [outmsg (s+length a) >> hPutStr stderr post >> return (last a)]
                         _    -> map return a ++ [outmsg s >> return (head b)] ++ ct next (tail b)
It wraps a list with IO operations, so that progress can be reported while evaluating the list elements. Unfortunately, there seems to be a "strictness leak" here - and consequently, it does not work on an infinite list. I'm not sure why this happens; can anybody else see it?

-k
-- If I haven't seen further, it is by standing in the footprints of giants

Hello,
countIO :: String -> String -> Int -> [a] -> IO [a]
countIO msg post step xs = sequence $ map unsafeInterleaveIO ((blank >> outmsg (0::Int) >> c):cs)
    where (c:cs) = ct 0 xs
          output = hPutStr stderr
          blank  = output ('\r':take 70 (repeat ' '))
          outmsg x = output ('\r':msg++show x) >> hFlush stderr
          ct s ys = let (a,b) = splitAt (step-1) ys
                        next  = s+step
                    in case b of
                         [b1] -> map return a ++ [outmsg (s+step) >> hPutStr stderr post >> return b1]
                         []   -> map return (init a) ++ [outmsg (s+length a) >> hPutStr stderr post >> return (last a)]
                         _    -> map return a ++ [outmsg s >> return (head b)] ++ ct next (tail b)
It wraps a list with IO operations, so that progress can be reported while evaluating the list elements. Unfortunately, there seems to be a "strictness leak" here - and consequently, it does not work on an infinite list.
Besides anything else, sequence will diverge on an infinite list. This can be seen directly from the type:

  sequence :: Monad m => [m a] -> m [a]

It is necessary to compute all of the computations in the list before returning any of the pure resulting list.

-Jeff
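A minimal sketch of that behaviour in the IO monad (the name main is only for illustration):

main :: IO ()
main = do
    xs <- sequence (map return [0 ..] :: [IO Integer])  -- infinitely many actions
    print (take 10 xs)  -- never reached: sequence tries to run every action first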

You mean for the IO monad, right?

  take 10 $ execWriter $ sequence $ repeat $ tell ([3]::[Int])

/ Emil

Jeff Polakow
Besides anything else, sequence will diverge on an infinite list.
Argh, of course. Thanks!
It is necessary to compute all of the computations in the list before returning any of the pure resulting list.
Replacing sequence with sequence', given as:
sequence' ms = foldr k (return []) ms
    where k m m' = do { x <- m; xs <- unsafeInterleaveIO m'; return (x:xs) }
seems to solve it. -k -- If I haven't seen further, it is by standing in the footprints of giants
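A quick check of the difference, as a hedged sketch (main is a made-up name; it assumes sequence' is in scope and System.IO.Unsafe is imported):

main :: IO ()
main = do
    xs <- sequence' (map return [0 ..] :: [IO Integer])  -- infinitely many actions
    print (take 10 xs)  -- prints [0,1,2,3,4,5,6,7,8,9]; the rest is never run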

Ketil Malde wrote:
I've done something similar, I think. Often, I want to output some kind of progress indicator, just to show that the program is working. Typically, the program works by lazily evaluating a list (lines from an input file, say); each element of the list is wrapped with an IO action that outputs the status when evaluated -- which typically happens lazily from pure code.
countIO :: String -> String -> Int -> [a] -> IO [a]
countIO msg post step xs = sequence $ map unsafeInterleaveIO ((blank >> outmsg (0::Int) >> c):cs)
    where (c:cs) = ct 0 xs
          output = hPutStr stderr
          blank  = output ('\r':take 70 (repeat ' '))
          outmsg x = output ('\r':msg++show x) >> hFlush stderr
          ct s ys = let (a,b) = splitAt (step-1) ys
                        next  = s+step
                    in case b of
                         [b1] -> map return a ++ [outmsg (s+step) >> hPutStr stderr post >> return b1]
                         []   -> map return (init a) ++ [outmsg (s+length a) >> hPutStr stderr post >> return (last a)]
                         _    -> map return a ++ [outmsg s >> return (head b)] ++ ct next (tail b)
-k
Your use of unsafeInterleaveIO is just not quite correct. A quick series of examples: Let me define this function:
unsafeSequenceIO :: [ IO a ] -> IO [a]
unsafeSequenceIO [] = return []
unsafeSequenceIO (x:xs) = unsafeInterleaveIO $ do
    this <- x
    rest <- unsafeSequenceIO xs
    return (this:rest)
And an infinite [ IO a ]
todo :: [ IO Int ]
todo = map return [0..]
These diverge
*Main> liftM (take 10) (sequence todo)
*Main> liftM (take 10) (sequence (map unsafeInterleaveIO todo))
This is finite:
*Main> liftM (take 10) (unsafeSequenceIO todo)
[0,1,2,3,4,5,6,7,8,9]
An alternate definition of unsafeSequenceIO, which is not quite the same but still works, is
unsafeSequenceIO' :: [ IO a ] -> IO [a]
unsafeSequenceIO' [] = return []
unsafeSequenceIO' (x:xs) = do
    this <- x
    rest <- unsafeInterleaveIO (unsafeSequenceIO' xs)
    return (this:rest)
The two definitions differ only very slightly in how lazily the very first element is handled:
*Main> (unsafeSequenceIO (error "boom":[]) >> print "ok")
"ok"
*Main> (unsafeSequenceIO' (error "boom":[]) >> print "ok")
*** Exception: boom
participants (14)
-
Brandon S. Allbery KF8NH
-
C Rodrigues
-
Cale Gibbard
-
ChrisK
-
Emil Axelsson
-
Hugh Perkins
-
Jeff Polakow
-
John Meacham
-
Jon Fairbairn
-
Jonathan Cast
-
Jules Bean
-
Ketil Malde
-
Peter Hercek
-
Simon Peyton-Jones