Hi everybody,

This started out on haskell-beginners as a question about poor performance in a Haskell program using OpenGL. Thanks to a few good suggestions there, I've managed to figure out more or less what the underlying problem is, but not its cause. In short, I have two programs, one written in Haskell [1] and one written in C [2], that consist of calls to the same functions, in the same order, to the same C library, but which do not exhibit the same behavior. Further, the Haskell program behaves differently when compiled with GHC and when run in GHCi. I would like to know why that is, and how to fix it.

A bit of background. My original program used the GLFW library to create an OpenGL rendering context, and rendered a 3D model using OpenGL. I noticed that this program had very high CPU usage, when I expected it to do most of its work on the GPU [3]. The reason for the high CPU usage turned out to be that the program was in fact using a software implementation of OpenGL [4]. My machine, a MacBook Pro running Mac OS X 10.8.2, has two GPUs: a discrete one (more powerful) and an integrated one (more energy-efficient). The latter implements a larger part of OpenGL in software, and the OS is supposed to switch transparently between the two. In particular, it is supposed to switch to the discrete GPU when a program tries to use OpenGL features not supported by the integrated GPU [5].

I discovered that while this all worked as intended for the C version of my program, it did not work quite so well for the Haskell program [6], which would get stuck with a rendering context on the integrated GPU; in practice, that meant a software implementation and poor performance.

I have reduced both programs to fairly minimal test cases. Each program now simply configures and creates a rendering context, checks whether it is hardware-accelerated (for the configuration used, on my machine, this implies that the system has switched to the discrete GPU), and then terminates. All of this is done through calls to the GLFW C library; a sketch of the Haskell version appears at the end of this message. The C program succeeds. The Haskell program fails if compiled (with GHC 7.4.2) and run, but succeeds if run in GHCi. Further, by monitoring which GPU is active, using gfxCardStatus [7] and the system console, I've established that the switch happens immediately after the execution of glfwOpenWindow for the C program, and for the Haskell program when run in GHCi. For the compiled Haskell program, the switch is delayed by roughly a second, and this delay appears to be what causes the program to get stuck on the integrated GPU.

Now, there are a lot of moving parts involved, and many places where things could go wrong, which makes it tricky to even say where the problem is. Still, a Haskell program consisting entirely of foreign function calls in the IO monad should surely behave the same as a C program consisting of the same calls? Is this caused by lazy evaluation, and if so, of what? Why does it work correctly in GHCi? I've bashed my head against this for some time now, and have run out of good ideas. I would really appreciate any input that lets me solve this and get back to the fun parts of 3D programming. ;)

To reproduce this, you'll need a Mac with two GPUs (most of them do, these days). I run Mac OS X 10.8.2, and another user has reproduced the problem on 10.7.5 [8]. You'll also need the GLFW library [9], which can be built from source or installed using Homebrew. Please let me know if I can provide any more information.
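For reference, here is roughly what the reduced Haskell test case looks like. This is a sketch, not the exact code in [1]: the window size and bit depths are placeholder values I picked here, and the GLFW token values are transcribed from GLFW 2.7's glfw.h, so check them against your own header.

    {-# LANGUAGE ForeignFunctionInterface #-}

    import Control.Monad (when)
    import Foreign.C.Types (CInt(..))

    -- Token values transcribed from GLFW 2.7's glfw.h; verify them
    -- against your own header before relying on them.
    glfwWindow, glfwOpenGLVersionMajor, glfwOpenGLVersionMinor,
      glfwOpenGLProfile, glfwOpenGLCoreProfile, glfwAccelerated :: CInt
    glfwWindow             = 0x00010001
    glfwOpenGLVersionMajor = 0x00020014
    glfwOpenGLVersionMinor = 0x00020015
    glfwOpenGLProfile      = 0x00020018
    glfwOpenGLCoreProfile  = 0x00050001
    glfwAccelerated        = 0x00020004

    -- Plain bindings to the GLFW 2.x C API, marked "safe" since
    -- glfwOpenWindow does real work in the windowing system.
    foreign import ccall safe "glfwInit"
      glfwInit :: IO CInt
    foreign import ccall safe "glfwOpenWindowHint"
      glfwOpenWindowHint :: CInt -> CInt -> IO ()
    foreign import ccall safe "glfwOpenWindow"
      glfwOpenWindow :: CInt -> CInt -> CInt -> CInt -> CInt -> CInt
                     -> CInt -> CInt -> CInt -> IO CInt
    foreign import ccall safe "glfwGetWindowParam"
      glfwGetWindowParam :: CInt -> IO CInt
    foreign import ccall safe "glfwTerminate"
      glfwTerminate :: IO ()

    main :: IO ()
    main = do
      ok <- glfwInit
      when (ok /= 1) $ error "glfwInit failed"
      -- Ask for an OpenGL 3.2 core profile context. The integrated
      -- GPU does not provide one in hardware, so the OS should switch
      -- to the discrete GPU when the window is opened.
      glfwOpenWindowHint glfwOpenGLVersionMajor 3
      glfwOpenWindowHint glfwOpenGLVersionMinor 2
      glfwOpenWindowHint glfwOpenGLProfile glfwOpenGLCoreProfile
      opened <- glfwOpenWindow 640 480 8 8 8 8 24 8 glfwWindow
      when (opened /= 1) $ error "glfwOpenWindow failed"
      -- 1 if the context is hardware-accelerated, 0 if it is not.
      accelerated <- glfwGetWindowParam glfwAccelerated
      putStrLn $ "GLFW_ACCELERATED: " ++ show accelerated
      glfwTerminate

It builds with something like "ghc glfw_test.hs -lglfw" (your GLFW build may need additional frameworks on OS X), and the same module loads into GHCi. The actual test cases are in [1], and the C program in [2] makes the same calls in the same order.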
--
Jesper Särnesjö
http://jesper.sarnesjo.org/

[1] https://gist.github.com/sarnesjo/5151894#file-glfw_test-hs
[2] https://gist.github.com/sarnesjo/5151894#file-glfw_test-c
[3] http://www.haskell.org/pipermail/beginners/2013-March/011557.html
[4] http://www.haskell.org/pipermail/beginners/2013-March/011560.html
[5] http://developer.apple.com/library/mac/#qa/qa1734/_index.html
[6] http://www.haskell.org/pipermail/beginners/2013-March/011601.html
[7] http://gfx.io
[8] http://www.haskell.org/pipermail/beginners/2013-March/011563.html
[9] http://www.glfw.org