
On Sun, Feb 26, 2012 at 10:36 AM, Yves Parès wrote:
Hello,

When I was calling C code from Python, the overhead Python adds to each call into C was significant. To simplify, say I have two procedures f and g on the C side and I need to call them in a row from Python. I am better off factorizing: adding a wrapper procedure h on the C side that calls them both, and then calling h from Python, because then I pay half as much overhead:
Instead of SwitchToC -> call f -> SwitchToPython -> SwitchToC -> call g -> SwitchToPython, the factorization leads to SwitchToC -> call f -> call g -> SwitchToPython, which gives the same result but performs better, because each switch has a cost.
This is painful, because if I later need to call f together with j (another function), I have to write yet another wrapper.
Now, in the Haskell world, suppose my functions have been imported as unsafe foreign calls:
foreign import ccall unsafe "f" f :: Thing -> Foo -> IO ()
foreign import ccall unsafe "g" g :: Stuff -> Bar -> IO ()
foreign import ccall unsafe "h" h :: Thing -> Foo -> Stuff -> Bar -> IO ()
Are doStuff = f x y >> g z w and doStuff = h x y z w equivalent, or is there an overhead (e.g. due to the IO monad, or to the way the FFI performs the calls) when compiled with GHC (with optimizations if need be)?
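
For concreteness, here is a minimal sketch of the two variants. The types Thing, Foo, Stuff and Bar are not specified above, so they are assumed here to be plain CInt synonyms, and the C side is assumed to provide f, g and h:

{-# LANGUAGE ForeignFunctionInterface #-}
module Wrapped where

import Foreign.C.Types (CInt)

-- Assumed placeholders: the real Thing/Foo/Stuff/Bar are not given above.
type Thing = CInt
type Foo   = CInt
type Stuff = CInt
type Bar   = CInt

foreign import ccall unsafe "f" f :: Thing -> Foo -> IO ()
foreign import ccall unsafe "g" g :: Stuff -> Bar -> IO ()
foreign import ccall unsafe "h" h :: Thing -> Foo -> Stuff -> Bar -> IO ()

-- Variant 1: two separate unsafe foreign calls.
doStuffSeparate :: Thing -> Foo -> Stuff -> Bar -> IO ()
doStuffSeparate x y z w = f x y >> g z w

-- Variant 2: a single unsafe call to the C-side wrapper h.
doStuffWrapped :: Thing -> Foo -> Stuff -> Bar -> IO ()
doStuffWrapped x y z w = h x y z w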
I would expect the second doStuff to be more efficient, but you shouldn't care what I expect! To check what is going on, you can use a tool like ghc-core (available on Hackage) to see what code GHC generates. To see the difference in performance, you can use a micro-benchmarking tool like criterion (also on Hackage) to quantify it; a minimal skeleton for that is sketched below. Once you have a conclusion based on some good evidence, please report back here :)

I hope that helps,
Jason
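
The skeleton below is only a sketch: it assumes the Wrapped module from the question above, a linked C object file that actually provides f, g and h, and arbitrary example arguments; compile with -O2 so GHC can optimize both variants.

module Main (main) where

import Criterion.Main (bench, defaultMain, whnfIO)

import Wrapped (doStuffSeparate, doStuffWrapped)

main :: IO ()
main = defaultMain
  [ bench "two unsafe calls (f then g)" $ whnfIO (doStuffSeparate 1 2 3 4)
  , bench "one wrapped call (h)"        $ whnfIO (doStuffWrapped 1 2 3 4)
  ]

whnfIO runs the IO action on every iteration, so the timings mostly reflect the per-call FFI overhead rather than any pure computation.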