Hi all,

I have posted the following question on Stack Overflow, but so far I have
not received an answer:

http://stackoverflow.com/questions/29039815/distributing-haskell-on-a-cluste...

I have a piece of code that processes files:

    processFiles :: [FilePath] -> (FilePath -> IO ()) -> IO ()

This function spawns an async process that executes an IO action. This IO
action must be submitted to a cluster through a job scheduling system
(e.g. Slurm).

Because I must go through the job scheduling system, it is not possible to
use Cloud Haskell to distribute the closure. Instead, the program writes a
new *Main.hs* containing the desired computation, which is copied to the
cluster node together with all the modules that Main depends on, and then
executed remotely with "runhaskell Main.hs [opts]". The async process then
polls the job scheduling system periodically (using *threadDelay*) to check
whether the job is done.
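To make this concrete, here is roughly the shape of processFiles,
simplified to run the action locally with the async package (the real
version submits the generated Main.hs to the scheduler instead of calling
the action directly):

    import Control.Concurrent.Async (async, wait)

    -- Simplified sketch: spawn one asynchronous job per file and
    -- wait for all of them to finish.
    processFiles :: [FilePath] -> (FilePath -> IO ()) -> IO ()
    processFiles files action = do
      jobs <- mapM (async . action) files
      mapM_ wait jobs

And the polling step looks more or less like this (a sketch assuming the
job was submitted with sbatch and its job id is at hand; the squeue
invocation is illustrative, not my actual code):

    import Control.Concurrent (threadDelay)
    import System.Exit (ExitCode (..))
    import System.Process (readProcessWithExitCode)

    -- Poll the scheduler every 30 seconds; "squeue -h -j <id>" prints
    -- nothing (or fails) once the job has left the queue.
    waitForJob :: String -> IO ()
    waitForJob jobId = do
      (code, out, _) <- readProcessWithExitCode "squeue" ["-h", "-j", jobId] ""
      if code /= ExitSuccess || null out
        then return ()  -- job finished or no longer known to the queue
        else threadDelay (30 * 1000000) >> waitForJob jobId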
Is there a way to avoid creating a new Main? Can I serialize the IO action
and execute it somehow on the node?

Best,
Felipe