
Hi Tobias,

(I'm completely new to GPU programming, so my question may be completely stupid or unrelated. Please be patient :-).)

Some time ago I needed to perform some large-scale computations (searching for first-order logic models), and a friend told me that GPUs can be used to perform many simple computations in parallel. Could GPipe be used for such a task? That is, to program some non-graphical, parallelized algorithm that could be run on a GPU cluster?

Thanks for your answer,
Petr

On Sun, Oct 04, 2009 at 08:32:56PM +0200, Tobias Bexelius wrote:
I'm proud to announce the first release of GPipe-1.0.0: A functional graphics API for programmable GPUs.
GPipe models the entire graphics pipeline in a purely functional, immutable and type-safe way. It is built on top of the programmable pipeline (i.e. non-fixed-function) of OpenGL 2.1 and uses features such as vertex buffer objects (VBOs), texture objects and GLSL shader code synthesis to create fast graphics programs. Buffers, textures and shaders are cached internally to keep frame rates high, and GPipe is also capable of managing multiple windows and contexts. By creating your own instances of GPipe's classes, it's possible to use additional data types on the GPU.
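As a taste of the API, here is roughly what a minimal red-triangle program looks like. Take it as a sketch in the style of the package documentation's examples rather than tested code: names such as toGPUStream, rasterizeFront, paintColor, newFrameBufferColor and newWindow follow the 1.0.0 docs, and GPipe relies on GLUT for window management and the Vec package for its vector types.

    module Main where

    import Graphics.GPipe
    import qualified Data.Vec as Vec
    import Graphics.UI.GLUT (mainLoop, getArgsAndInitialize)

    -- A one-triangle primitive stream, described on the CPU and
    -- uploaded to the GPU. Each vertex is a homogeneous position.
    triangle :: PrimitiveStream Triangle (Vec4 (Vertex Float))
    triangle = toGPUStream TriangleList
        [ (-1):.(-1):.0:.1:.()
        ,   1 :.(-1):.0:.1:.()
        ,   0 :.  1 :.0:.1:.() ]

    -- Rasterize the triangle's front side and give every fragment a
    -- constant red color; this part is what is synthesized into GLSL.
    coloredFragments :: FragmentStream (Color RGBFormat (Fragment Float))
    coloredFragments =
        fmap (const (RGB (1:.0:.0:.()))) (rasterizeFront triangle)

    -- Paint the fragments, without blending, onto a black framebuffer.
    renderFrame :: Vec2 Int -> IO (FrameBuffer RGBFormat () ())
    renderFrame _size = return $
        paintColor NoBlending (RGB $ Vec.vec True)
                   coloredFragments
                   (newFrameBufferColor (RGB 0))

    main :: IO ()
    main = do
        getArgsAndInitialize
        newWindow "Red triangle" (100:.100:.()) (800:.600:.())
                  renderFrame
                  (\_win -> return ())
        mainLoop

Note how a whole frame is just a pure value of type FrameBuffer RGBFormat () (); the internal caching mentioned above means the generated shaders and buffer objects are reused across frames.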
To use GPipe, you'll need full OpenGL 2.1 support, including GLSL 1.20. Thanks to OpenGLRaw, you may still build GPipe programs on machines lacking this support.
The package, including full documentation, can be found at: http://hackage.haskell.org/package/GPipe-1.0.0
Of course, you may also install it with: cabal install gpipe
Cheers!
Tobias Bexelius