The Workflow package does logging (and recovery), and you can orchestrate the process using ordinary do notation. You can inspect the log after, or during, the process, and you can also stop and restart it.

main = exec1 "weeklystuff" $ do
     result  <- step mylittleprocess1
     result2 <- step mylittleprocess2
     ...

At the end you have a single log with the intermediate results (in the folder .tcachedata/workflows). If the program is interrupted at any step, re-executing it continues from the following step.

If you use Read/Show instances for the intermediate results, the log will be textual and you can inspect it with a text editor.
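For example, the use case below could end up looking something like this (just a sketch, reusing exec1 and step from the snippet above; queryWeek, updateWeek and WeekSync are invented for illustration, and any extra serialization constraints that Workflow places on step results are glossed over):

  import Control.Workflow (exec1, step)   -- module name as in the Workflow package

  -- An intermediate result with Read/Show instances, so the log kept
  -- under .tcachedata/workflows is plain text you can open in an editor.
  data WeekSync = WeekSync { weekId :: Int, rowsChanged :: Int }
    deriving (Read, Show)

  -- Hypothetical stand-ins for the small database actions.
  queryWeek  :: Int -> IO WeekSync
  queryWeek w = return (WeekSync w 0)

  updateWeek :: WeekSync -> IO ()
  updateWeek _ = return ()

  syncWeek :: Int -> IO ()
  syncWeek w = exec1 "syncWeek" $ do
      r <- step (queryWeek w)    -- the result is logged before continuing
      step (updateWeek r)        -- an interrupted run resumes from here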


2013/11/25 Bryan Vicknair <bryanvick@gmail.com>
Hello,

I have a bunch of little database IO functions.  Each does something to the
database, and returns a log string describing what it did, and possibly a
meaningful result from the database.

  query  :: IO (String, a)
  update :: a -> IO (String, ())

...and a few functions that orchestrate all the little functions into doing
useful work.

  syncWeek :: Week -> IO ()
  syncAll  :: IO ()


I don't want the individual functions to know what is done with the log string
describing what they did.  Top-level orchestrating functions should make that
decision, which can be one of:

1) Collect and print all to a log once all computations are done.
2) Print to stdout *as each computation is run*.
3) Ignore them.

Here is my understanding of how common monads would handle these requirements:

Writer: 1 and 3 are easy.  This is what I originally attempted to use, but I
        couldn't figure out how to accomplish #2.
Reader: 2 and 3 can be accomplished if each function reads a shouldLog config
        variable from the reader and does a putStrLn depending on the value.
        Very ugly, as now each function has to know how to log output.
State:  Not sure, but the Writer docs in the transformers package point to
        this monad as maybe solving requirement #2 above.
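For requirements 1 and 3, a minimal Writer sketch looks like this, using
WriterT from Control.Monad.Writer in mtl (the query action here is a
stand-in for one of the real database functions):

  import Control.Monad.IO.Class (liftIO)
  import Control.Monad.Writer (WriterT, runWriterT, tell)

  -- Stand-in for one of the small database actions: do the IO, tell the
  -- log what happened, return the meaningful result.
  queryW :: WriterT [String] IO Int
  queryW = do
      n <- liftIO (return 42)          -- imagine a real database call here
      tell ["queried the database"]
      return n

  -- Requirement 1: run everything first, print the whole log afterwards.
  runThenLog :: IO Int
  runThenLog = do
      (result, logLines) <- runWriterT queryW
      mapM_ putStrLn logLines
      return result

  -- Requirement 3: run everything and throw the log away.
  runSilently :: IO Int
  runSilently = fmap fst (runWriterT queryW)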

The use case is that when I call the top-level functions from a command-line
script, I want to see logging happen in real time on stdout.  But I may call
the same top-level functions from a larger application that logs somewhere
other than stdout, and from yet another larger application that doesn't want
anything to be logged at all.

How can I glue together a bunch of smaller computations, which may call
each other, and decide at a higher level what to do with the logging result of
each computation?  Seems like a perfect fit for Writer, except for the
requirement to be able to print to stdout at each step.


Bryan Vicknair



--
Alberto.