
On Fri, Mar 8, 2013 at 11:25 PM, Jesper Särnesjö wrote:
> The program's CPU usage is quite high in general, and increases with the number of pixels redrawn per frame. It uses OpenGL 3.2, does most of its work on the GPU, and should do a constant amount of work per frame on the CPU, so this makes little sense.
>
> I've created a smaller program (still 91 lines, though), which renders a single triangle, but still reproduces this problem. If I make the triangle smaller (by editing vertexData), CPU usage goes down, and if I make it bigger, CPU usage goes up. [1]
>
> I've also created a C program which does the same thing (as far as possible), but which does not exhibit this problem. Its CPU usage is far lower, and stays constant regardless of the number of pixels redrawn per frame. [2]
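
For context, the part of the Haskell version [1] that sets up the window looks roughly like this (a sketch, not the exact gist code; the record field names assume GLFW-b 0.1.x):

    import Control.Monad (unless)
    import Graphics.UI.GLFW

    main :: IO ()
    main = do
      _ <- initialize
      -- These fields request an OpenGL 3.2 Core profile context;
      -- core contexts on OS X also require forward compatibility.
      ok <- openWindow defaultDisplayOptions
              { displayOptions_openGLVersion           = (3, 2)
              , displayOptions_openGLProfile           = CoreProfile
              , displayOptions_openGLForwardCompatible = True
              }
      unless ok (putStrLn "openWindow failed")
      -- ... upload vertexData, run the render loop, swapBuffers ...
      terminate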
I've figured out what the problem is: the Haskell program is using a software implementation of OpenGL, so rendering *does* in fact happen on the CPU. This seems to happen because I request a renderer conforming to the OpenGL 3.2 Core profile while running on Mac OS X 10.8.2 [3]. If I remove those lines from displayOptions, the program receives a hardware-accelerated renderer, but then I can't use features from OpenGL 3.2, so that's hardly a solution.

It would appear that there is in fact some relevant difference between the Haskell [1] and C [2] versions of my program, or possibly some way in which the bindings from the GLFW-b package differ from the C library. I've updated my programs to check for hardware acceleration and to print the GLFW and OpenGL versions [4]; a sketch of such a check appears after the links below.

-- 
Jesper Särnesjö
http://jesper.sarnesjo.org/

[1] https://gist.github.com/sarnesjo/5116084#file-test-hs
[2] https://gist.github.com/sarnesjo/5116084#file-test-c
[3] http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Concep...
[4] https://gist.github.com/sarnesjo/5116084/revisions
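
As for the acceleration check: once a context is current, the string queries from the OpenGL package can report the renderer and version, and matching on Apple's software renderer string is one rough way to detect the fallback. This is a sketch under those assumptions, not necessarily the exact check used in [4]:

    import Control.Monad (when)
    import Data.List (isInfixOf)
    import Graphics.Rendering.OpenGL (get, glVersion, renderer)

    -- Must run while an OpenGL context is current, i.e. after openWindow.
    reportRenderer :: IO ()
    reportRenderer = do
      r <- get renderer   -- e.g. "Apple Software Renderer"
      v <- get glVersion  -- e.g. "3.2 NVIDIA-8.10.44"
      putStrLn ("GL_RENDERER: " ++ r)
      putStrLn ("GL_VERSION: " ++ v)
      when ("Software" `isInfixOf` r)
        (putStrLn "Warning: software renderer, no hardware acceleration")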