Hey Jesper:
Hrm... have you tried other compilation or FFI choices that can influence
the function call?
For example, using the "safe" rather than the "unsafe" modifier?
http://www.haskell.org/haskellwiki/GHC/Using_the_FFI#Introduction
(The "safe" modifier doesn't seem like it should matter here, but it's a
simple experiment to check.)
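For instance, something along these lines (a minimal sketch; I'm guessing
at how your imports are declared, so adjust to match your actual code):

    {-# LANGUAGE ForeignFunctionInterface #-}

    import Foreign.C.Types

    -- "safe" (the default) permits callbacks into Haskell and interacts
    -- more carefully with the RTS; "unsafe" skips that bookkeeping.
    -- Flipping between the two is a cheap experiment.
    foreign import ccall safe "glfwInit"
      c_glfwInit :: IO CInt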
Likewise, another experiment would be to write your own C wrapper around
that OpenGL call, and call that wrapper from Haskell.
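Something like this, say (the names here are made up; the point is just to
put one trivial C call between Haskell and the library):

    import Foreign.C.String (CString)
    import Foreign.C.Types

    -- Hypothetical wrapper: a tiny C file, say wrapper.c, compiled and
    -- linked alongside the Haskell code, containing nothing but:
    --
    --   const GLubyte *my_glGetString(GLenum name)
    --   { return glGetString(name); }
    --
    -- and the corresponding import on the Haskell side:
    foreign import ccall safe "my_glGetString"
      c_glGetString :: CUInt -> IO CString

If the behavior changes when the call goes through the wrapper, that points
at how GHC makes the call rather than at the library itself.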
Oh, and your code snippet isn't specifying *where* the GLFW code is linked
from! Might it be linking against the *wrong* variant of the library, since
you're not saying where the code is? You might want to set up a little
Cabal-configured project that specifies all of that.
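Roughly along these lines, perhaps (a sketch only; the library and
framework names come from your command lines, and the paths are guesses):

    name:               glfw-test
    version:            0.1
    build-type:         Simple
    cabal-version:      >=1.10

    executable glfw-test
      main-is:          glfw_test.hs
      build-depends:    base
      -- these make the link step explicit, instead of relying on
      -- whatever -lglfw happens to resolve to:
      extra-libraries:  glfw
      extra-lib-dirs:   /usr/local/lib
      frameworks:       OpenGL
      default-language: Haskell2010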
Additionally, have you tried your code on GHC 7.6.2? (Do this last, I
suppose.)
I don't know if any of these ideas are actually helpful, but if they are,
please share what you've learned.
cheers
-Carter
On Sat, Mar 16, 2013 at 9:53 PM, Jesper Särnesjö wrote:

On Thu, Mar 14, 2013 at 12:51 AM, Jesper Särnesjö wrote:

In short, I have two programs, one written in Haskell [1] and one written in C [2], that consist of calls to the same functions, in the same order, to the same C library, but which do not exhibit the same behavior. Further, the Haskell program behaves differently when compiled using GHC and when run in GHCi. I would like to know why this is, and how to fix it.
To be clear, I think this isn't really an OpenGL problem, but rather one related to FFI or event handling. If anyone could explain to me, in general, how and why a call to a foreign function returning IO () might cause different behavior in Haskell than in C, that might help me track down the problem.
I've updated my test programs to use glGetString [3] to check which renderer is active. On my machine, it should return "NVIDIA GeForce GT 330M OpenGL Engine" if rendering happens on the discrete GPU, and "Intel HD Graphics OpenGL Engine" or "Apple Software Renderer" otherwise. These are the results of running the C and Haskell programs in various ways:
$ gcc -lglfw -framework OpenGL glfw_test.c && ./a.out
NVIDIA GeForce GT 330M OpenGL Engine

$ ghc -lglfw -framework OpenGL -fforce-recomp glfw_test.hs && ./glfw_test
[...]
Apple Software Renderer

$ runhaskell -lglfw glfw_test.hs
NVIDIA GeForce GT 330M OpenGL Engine

$ ghci -lglfw glfw_test.hs
[...]
Prelude Main> main
NVIDIA GeForce GT 330M OpenGL Engine
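The relevant part of the Haskell program has roughly this shape (a
simplified sketch, not the exact code from [1]):

    {-# LANGUAGE ForeignFunctionInterface #-}

    import Foreign.C.String (CString, peekCString)
    import Foreign.C.Types

    foreign import ccall safe "glGetString"
      glGetString :: CUInt -> IO CString

    -- Query and print the active renderer string; 0x1F01 is GL_RENDERER.
    -- (Assumes a current OpenGL context, e.g. after glfwOpenWindow.)
    printRenderer :: IO ()
    printRenderer = putStrLn =<< peekCString =<< glGetString 0x1F01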
The C program behaves as expected, as does the Haskell one when run using runhaskell or GHCi. Only the Haskell program compiled using GHC behaves incorrectly. Again, the OS event that signifies that a GPU switch has occurred fires either way, but for the compiled Haskell program, it fires with roughly a second's delay. Why would that be?
--
Jesper Särnesjö
http://jesper.sarnesjo.org/
[1] https://gist.github.com/sarnesjo/5151894#file-glfw_test-hs
[2] https://gist.github.com/sarnesjo/5151894#file-glfw_test-c
[3] http://www.opengl.org/sdk/docs/man3/xhtml/glGetString.xml